## arXiv:2309.11216

**Title:** Simple model for the gap in the surface states of the antiferromagnetic topological insulator MnBi$_2$Te$_4$

**Abstract:** We study the influence of the antiferromagnetic order on the surface states of topological insulators. We derive an effective Hamiltonian for these states, taking into account the spatial structure of the antiferromagnetic order. We obtain a typical (gapless) Dirac Hamiltonian for the surface states when the surface of the sample is not perturbed. The gapless spectrum is protected by the combination of time-reversal and half-translation symmetries. However, a shift in the chemical potential of the surface layer opens a gap in the spectrum away from the Fermi energy. Such a gap occurs only in systems with finite antiferromagnetic order. We observe that the system topology remains unchanged even for large values of the disorder. We calculate the spectrum using the tight-binding model with different boundary conditions. In this case we get a gap in the spectrum of the surface states. This discrepancy arises due to the violation of the combined time-reversal symmetry. We compare our results with experiments and density functional theory calculations.

**Authors:** R. S. Akzyanov, A. L. Rakhmanov

**Published:** 2023-09-20T11:15:37Z

**Link:** http://arxiv.org/abs/2309.11216v2

# Origin of the gap in the surface states of the antiferromagnetic topological insulator
###### Abstract
We study the influence of the antiferromagnetic order on the surface states of topological insulators. We derive an effective Hamiltonian for these states, taking into account the spatial structure of the antiferromagnetic ordering. We obtain a typical (gapless) Dirac Hamiltonian for the surface states if the surface of the sample is not perturbed. However, a shift in the chemical potential of the surface layer opens a gap in the spectrum away from the Fermi energy. Such a gap arises only in systems with a finite antiferromagnetic order. We observe that the gap is robust against the surface disorder. The obtained results are consistent with recent experiments and density functional theory calculations.
## I Introduction
Magnetic topological insulators (MTIs) are narrow-gap semiconductors that exhibit a nontrivial band structure along with magnetic order. A prominent feature of topological insulators (TIs) is the presence of surface states that are robust against disorder. The exchange interaction in the MTIs breaks the time-reversal symmetry of the system and can open a band gap in the spectrum of the surface electron states [1; 2]. This significantly distinguishes MTIs from non-magnetic TIs and makes it possible to observe the quantum anomalous Hall effect and chiral Majorana states [3; 4].
The magnetic order in the TIs can be introduced either by doping a non-magnetic TI with magnetic atoms or by synthesis of a stoichiometric TI with magnetic ions in its crystal structure. The latter approach looks more promising since it allows one to obtain homogeneous samples. The first synthesized intrinsic MTI was MnBi\({}_{2}\)Te\({}_{4}\)[5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18], which is currently being intensively studied. This material has a layered van der Waals structure. Each seven-atom block or layer of MnBi\({}_{2}\)Te\({}_{4}\) can be schematically written as Te-Bi-Te-Mn-Te-Bi-Te (see Fig. 1). Magnetic ions of Mn are ferromagnetically ordered within the layer, and the layers are ordered antiferromagnetically (AFM). The Néel temperature of MnBi\({}_{2}\)Te\({}_{4}\) is 25 K [5; 8], which is the largest among existing MTIs.
In the case of AFM MTIs, it is an open question whether the spectrum of the surface states has a gap or not. ARPES measurements give widely different values of the gap, from 0 [12] to 100 meV [19]. Such scatter in the results can be attributed to sample quality. Thus, the effect of the material parameters and the influence of defects on the electronic gap in the MTIs are important problems [20]. To address these issues, _ab initio_ calculations of the energy spectrum of MnBi\({}_{2}\)Te\({}_{4}\)[21] were performed, as well as an analytical analysis based on the effective Hamiltonian of the system in the \(\mathbf{k}\cdot\mathbf{p}\) approximation [22; 23]. The authors of Ref. [22] obtained an analytical dependence of the surface energy gap on the bulk properties of the MTI. They conclude that the scatter of the experimental data could be explained if the intralayer ferromagnetic order is much weaker and more localized in real materials. The density functional theory (DFT) calculations predict that charged impurities at the surface increase the surface band gap in MnBi\({}_{2}\)Te\({}_{4}\)[21; 24].
In this work, we study analytically the surface states of the AFM MTI. We start with the MTI Hamiltonian, in which we explicitly take into account the spatial variation of the magnetization transversely across the layers of the material. Then, we calculate an effective Hamiltonian for the surface states. The obtained Hamiltonian differs from the Hamiltonian derived in Ref. [23] since it has an extended basis that includes the indices of the AFM-ordered layers. The effective Hamiltonian for the surface states has a typical Dirac form, and the spectrum is gapless. This gapless spectrum is protected by the extended time-reversal symmetry [23]. However, if this symmetry is broken by a perturbation of the surface, a gap arises at the Dirac point. We find that a shift of the chemical potential of the surface layer opens a gap in the spectrum at the Dirac point due to the AFM ordering.
Figure 1: Schematic structure of the antiferromagnetic topological insulator MnBi\({}_{2}\)Te\({}_{4}\).
This gap is robust against disorder. We discuss the consistency of our results with the experiments and DFT calculations.
## II Model
We are interested in the electron spectrum of the AFM MTI of the type MnBi\({}_{2}\)Te\({}_{4}\) near the \(\Gamma\)-point. Following Ref. [23], we start with a low-energy Hamiltonian in the form
\[H_{0}=-\mu+m\sigma_{z}+v(k_{x}s_{x}+k_{y}s_{y})\sigma_{x}+v_{z}k_{z}s_{z}\sigma _{x}, \tag{1}\]
where we neglect the terms of the order of \({\bf k}^{2}\) and higher. Here \(k_{i}\) are the components of momentum \({\bf k}\), \(v\) and \(v_{z}\) are the in-plane and transverse components of the Fermi velocity, respectively. The Pauli matrices \(s_{i}\) act in the spin space \((\uparrow,\downarrow)\) and the Pauli matrices \(\sigma_{i}\) act in the space of the low-energy orbitals \((|P1^{+}_{z}\rangle,|P2^{-}_{z}\rangle)\), where the superscripts \(\pm\) stand for the parity of the corresponding states [23]. In the absence of the spin-orbit coupling, the states \(|P1^{+}_{z}\uparrow(\downarrow)\rangle\) are associated with two Bi orbitals, while \(|P2^{-}_{z}\uparrow(\downarrow)\rangle\) are associated with two Te ones. Further, we will use more descriptive notations, Bi\({}_{\uparrow(\downarrow)}\) and Te\({}_{\uparrow(\downarrow)}\), for the considered low-lying states. The spectrum of Hamiltonian (1) is \(E_{0}=-\mu\pm\sqrt{m^{2}+v^{2}(k_{x}^{2}+k_{y}^{2})+v_{z}^{2}k_{z}^{2}}\), and \(m\) determines the gap in the bulk spectrum. The Hamiltonian (1) is equivalent to the Hamiltonian for a usual TI [1] up to a rotation of the basis.
To describe the magnetic ordering, we should add to \(H_{0}\) the corresponding magnetic terms. To take into account the spatial structure of the AFM ordering explicitly, one has to consider two seven-atom blocks with Mn atoms having opposite directions of the magnetic moment, which requires an extension of the Hilbert space of Hamiltonian (1) from 4D to 8D. However, the authors of Refs. [22; 23] made a projection of the space that neglected this feature. Such an approach allowed them to reduce the space of the Hamiltonian from 8D to 4D.
In order to restore information on the spatial structure of the AFM state, we introduce an additional Hilbert space \(t\) that takes into account pairs of the magnetic layers with opposite polarization of Mn atoms. First, we transform the kinetic energy term in the Hamiltonian \(H_{0}\). This transformation is \(v_{z}k_{z}s_{z}\sigma_{x}\to v_{z}k_{z}s_{z}\sigma_{x}t_{x}\), where \(t_{x}\) is the Pauli matrix that acts in the \(t\) space, since only the nearest-neighbour hopping between the Bi and Te orbitals is allowed. In general, the AFM ordering results, first, in a finite magnetization \(M_{z}\) of each seven-atom layer and, second, in a spin imbalance \(A_{z}\) between the Bi and Te orbitals. The corresponding term in the Hamiltonian reads \(H_{m}=M_{z}s_{z}t_{z}+A_{z}s_{z}\sigma_{z}t_{z}\). Thus, the Hamiltonian of the AFM MTI in the extended space is
\[H= - \mu+m\sigma_{z}+v(k_{x}s_{x}+k_{y}s_{y})\sigma_{x} \tag{2}\] \[+ v_{z}k_{z}s_{z}\sigma_{x}t_{x}+M_{z}s_{z}t_{z}+A_{z}s_{z}\sigma_ {z}t_{z}.\]
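As a quick numerical check, the Hamiltonian (2) can be assembled from Kronecker products of Pauli matrices. The sketch below is illustrative only: the basis ordering (spin \(\otimes\) orbital \(\otimes\) layer) and all parameter values are assumptions, and it merely verifies that for \(M_{z}=A_{z}=0\) the spectrum reduces to that of Hamiltonian (1).

```python
import numpy as np

# Pauli matrices; the basis is assumed ordered as spin (s) x orbital (sigma) x layer (t)
I2 = np.eye(2, dtype=complex)
px = np.array([[0, 1], [1, 0]], dtype=complex)
py = np.array([[0, -1j], [1j, 0]])
pz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

def H_afm(kx, ky, kz, mu, m, v, vz, Mz, Az):
    """Eight-band Hamiltonian of Eq. (2)."""
    return (-mu * kron3(I2, I2, I2)
            + m * kron3(I2, pz, I2)
            + v * (kx * kron3(px, px, I2) + ky * kron3(py, px, I2))
            + vz * kz * kron3(pz, px, px)
            + Mz * kron3(pz, I2, pz)
            + Az * kron3(pz, pz, pz))

# Sanity check: for Mz = Az = 0 the eigenvalues must reduce to those of Eq. (1)
kx, ky, kz, m, v, vz = 0.2, 0.1, 0.3, 1.0, 1.0, 1.0
evals = np.linalg.eigvalsh(H_afm(kx, ky, kz, 0.0, m, v, vz, 0.0, 0.0))
print(np.round(np.sort(evals), 4))
print("expected:", np.sqrt(m**2 + v**2 * (kx**2 + ky**2) + vz**2 * kz**2))
```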
## III Surface States
We assume that the sample occupies the half-space \(z>0\) and the surface lies in the \((x,y)\) plane at \(z=0\). We replace \(k_{z}\) by the operator \(-i\partial_{z}\) in the Hamiltonian (\(\hbar=1\)) and set \(k_{x}=k_{y}=0\) in the leading approximation in \({\bf k}\). As a result, the surface states \(\Psi(z)\) obey the equation
\[\left[m\sigma_{z}+s_{z}(M_{z}t_{z}+A_{z}\sigma_{z}t_{z}-iv_{z}\sigma_{x}t_{x} \partial_{z})-\mu\right]\Psi=E\Psi, \tag{3}\]
where \(E\) is the energy. We choose \(E=0\) since we are interested in the states near the Dirac point. We seek the solution to the problem as an eight-component spinor
\[\Psi=(\text{Bi}_{\uparrow 1},\text{Bi}_{\downarrow 1},\text{Te}_{\uparrow 1}, \text{Te}_{\downarrow 1},\text{Bi}_{\uparrow 2},\text{Bi}_{\downarrow 2},\text{Te}_{ \uparrow 2},\text{Te}_{\downarrow 2}). \tag{4}\]
Each component of the spinor is characterized by three quantum numbers: orbital index \(\sigma=\text{Bi},\text{Te}\), spin projection \(s=\uparrow,\downarrow\) and magnetic layer number \(t=1,2\). In addition, the normalization condition \(\int_{0}^{+\infty}|\Psi(z)|^{2}dz=1\) should hold.
Now we need boundary conditions. A uniform boundary condition \(\Psi(0)=0\) for TIs was suggested in Ref. [1]. However, in the linear approximation in \(\partial_{z}\) such a problem has only a trivial solution. In Refs. [25; 26] the authors noted that the van der Waals system Bi\({}_{2}\)Se\({}_{3}\) is naturally cleaved in between two five-layer unit cells. This allows one to formulate an appropriate boundary condition. Evidently, a similar situation is realized for MnBi\({}_{2}\)Te\({}_{4}\), but now we have a seven-layer unit and two AFM-ordered blocks. As a result, in our matrix notation the boundary conditions can be presented in the form
\[(1+\sigma_{x})(1+t_{z})\Psi(0)=0,\quad\Psi(+\infty)=0. \tag{5}\]
Physically, this condition implies that only one magnetic layer and, in a proper basis, only one orbital reach the surface. The linear equations (3), together with the boundary conditions (5) and the normalization condition, form a complete system of equations that allows us to calculate the spinor \(\Psi\) for the surface states. We solve this linear problem and obtain two linearly independent solutions that form a 2D space. It is convenient to introduce an orthonormal basis \((\Psi_{1},\Psi_{2})\) in this space, and any solution \(\Psi\) is a linear combination of \(\Psi_{1}\) and \(\Psi_{2}\). We choose \(\Psi_{1}\) and \(\Psi_{2}\) in the form
\[\Psi_{1} = {\rm Bi}^{(0)}_{\uparrow 1}\left(e^{\lambda_{1}z},0,-e^{\lambda_{2}z},0,-i\sqrt{\frac{M_{z}-m-A_{z}-\mu}{M_{z}-m+A_{z}+\mu}}e^{\lambda_{2}z},0,i \sqrt{\frac{M_{z}+m+A_{z}-\mu}{M_{z}+m-A_{z}+\mu}}e^{\lambda_{1}z},0\right), \tag{6}\] \[\Psi_{2} = {\rm Bi}^{(0)}_{\downarrow 1}\left(0,e^{\lambda_{2}z},0,-e^{ \lambda_{1}z},0,-i\sqrt{\frac{M_{z}+m-A_{z}+\mu}{M_{z}+m+A_{z}-\mu}}e^{\lambda _{1}z},0,i\sqrt{\frac{M_{z}-m+A_{z}+\mu}{M_{z}-m-A_{z}-\mu}}e^{\lambda_{2}z} \right),\]
where
\[{\rm Bi}^{(0)}_{\uparrow 1} = \left[\frac{M_{z}+m}{|\lambda_{1}|(M_{z}\!+\!m\!-\!A_{z}\!+\!\mu) }\!+\!\frac{M_{z}-m}{|\lambda_{2}|(M_{z}\!-\!m\!+\!A_{z}\!+\!\mu)}\right]^{- \frac{1}{2}},\] \[{\rm Bi}^{(0)}_{\downarrow 1} = \left[\frac{M_{z}+m}{|\lambda_{1}|(M_{z}\!+\!m\!+\!A_{z}\!-\!\mu) }\!+\!\frac{M_{z}-m}{|\lambda_{2}|(M_{z}\!-\!m\!-\!A_{z}\!-\!\mu)}\right]^{- \frac{1}{2}},\] \[\lambda_{1,2} = -\frac{1}{v_{z}}\sqrt{(M_{z}\pm m)^{2}-(A_{z}\mp\mu)^{2}}. \tag{7}\]
The surface states exist if \({\rm Re}\lambda_{i}<0\) for both \(i=1,2\). This restriction imposes conditions on the values of the parameters under which the surface states can exist.
## IV Effective Hamiltonian of the surface states
To derive an effective surface Hamiltonian \(H_{\rm s}\), we project the Hamiltonian (2) onto the basis vectors of Eqs. (6), \(H_{\rm s}=\langle\Psi_{i}|H|\Psi_{j}\rangle\). After integrating over \(z\) and treating \(v(k_{x}s_{x}+k_{y}s_{y})\sigma_{x}\) as a perturbation, we obtain a typical Dirac-like Hamiltonian for the surface states
\[H_{\rm s} = \tilde{v}(k_{x}\hat{s}_{x}+k_{y}\hat{s}_{y}),\] \[\tilde{v} = v\frac{{\rm Bi}^{(0)2}_{\uparrow 1}{\rm Bi}^{(0)2}_{\downarrow 1}(| \lambda_{1}|+|\lambda_{2}|)}{|\lambda_{1}\lambda_{2}|}, \tag{8}\]
where \(\hat{s}\) are the Pauli matrices in the space of the vectors \(\Psi_{1}\) and \(\Psi_{2}\). The Hamiltonian has a linear gapless spectrum \(E=\pm\tilde{v}\sqrt{k_{x}^{2}+k_{y}^{2}}\). If we assume that the bulk gap is large, \(m\gg|M_{z}|,|A_{z}|\), we derive \(\tilde{v}=v(1-\mu^{2}/m^{2})+O(M_{z}^{2})+O(A_{z}^{2})\).
The wave function \(\Psi_{1}\) corresponds to the orbitals with the real spin projection \(s_{z}=\uparrow\), while \(\Psi_{2}\) corresponds to \(s_{z}=\downarrow\), see Eqs. (6). In addition, in our orthonormalized basis \(\hat{s}_{\alpha}\propto\langle\Psi_{i}|s_{\alpha}|\Psi_{j}\rangle\). Therefore, we can consider the Pauli matrices in the space of the surface states \(\hat{s}\) as real-spin operators. Note that a similar result for the surface states was obtained in Ref. [1].
Up to this point, we see that taking the spatial AFM ordering into account affects the surface states only quantitatively as compared with the case \(M_{z}=A_{z}=0\), which ignores this AFM structure [22; 23]. This result is not surprising: in both cases, the system has the same symmetries except for the time-reversal symmetry. However, the AFM TI has an emergent time-reversal-like symmetry in the extended space. Therefore, we need to break the symmetry between layers 1 and 2 with different polarizations of Mn atoms to observe a qualitatively new result. The simplest way to break this symmetry is to introduce a difference of \(-\mu_{\rm s}\) in the chemical potential between layers 1 and 2 due to surface doping. For simplicity, we assume that the chemical potentials in the bulk and in the second seven-atom block are the same. We introduce the operator of the surface chemical potential
\[\hat{\mu}=-\mu_{\rm s}\frac{\hat{1}+t_{z}}{2}, \tag{9}\]
where factor \((1+t_{z})/2\) selects layer 1 as the surface termination. On the basis of the surface states, we have
\[\hat{\mu} = -\tilde{\mu}_{\rm s}(1+\delta\hat{s}_{z}), \tag{10}\] \[\tilde{\mu}_{\rm s} = \mu_{\rm s}\frac{\left({\rm Bi}^{(0)2}_{\uparrow 1}+{\rm Bi}^{(0)2}_{ \downarrow 1}\right)\left(|\lambda_{1}|\!+\!|\lambda_{2}|\right)}{2\sqrt{2}| \lambda_{1}\lambda_{2}|},\] \[\delta = \frac{{\rm Bi}^{(0)2}_{\uparrow 1}-{\rm Bi}^{(0)2}_{\downarrow 1}}{{\rm Bi }^{(0)2}_{\uparrow 1}+{\rm Bi}^{(0)2}_{\downarrow 1}}.\]
When \(m\gg|M_{z}|\), \(|A_{z}|\) we get \(\delta=\frac{M_{z}\mu}{m^{2}}\left(1+\frac{\mu^{2}}{2m^{2}}\right)+\frac{A_{z}}{m}\left(\frac{1}{2}+\frac{\mu^{2}}{m^{2}}\right)\). We can see that the shift of the surface chemical potential introduces a term \(\propto\hat{s}_{z}\) that opens a gap in the spectrum of the surface states of the topological insulator. The effective Hamiltonian (8) with the surface perturbation now reads
\[H=\tilde{v}(k_{x}\hat{s}_{x}+k_{y}\hat{s}_{y})-\tilde{\mu}_{\rm s}(1+\delta\hat{ s}_{z}). \tag{11}\]
The spectrum of this Hamiltonian is
\[E_{\pm}=-\tilde{\mu}_{\rm s}\pm\sqrt{\tilde{v}^{2}k^{2}+\tilde{\mu}_{\rm s}^{2} \delta^{2}}. \tag{12}\]
We plot in Fig. 2 (right panel) the parameter \(\delta\) that controls the gap in the surface spectrum as a function of the AFM magnetization \(M_{z}\) for different values of the bulk chemical potential \(\mu\) and \(A_{z}\). We see that \(\delta\propto M_{z}\), \(\mu\) controls the slope of this line, and \(A_{z}\) shifts \(\delta(M_{z})\) from the origin. The larger \(\mu\) and \(M_{z}\), the larger the surface electron gap. The spectrum \(E({\bf k})\), Eq. (12), is shown in Fig. 2 (left panel) for different values of \(\tilde{\mu}_{\rm s}\). When we ignore the AFM order, \(M_{z}=A_{z}=0\), the gap vanishes since \(\delta=0\) in this case.
## V Effects of the disorder
An important question is the stability of the surface gap against disorder. We apply the Hamiltonian given by Eq. (11) to address it. Note that in the case of a magnetization-induced gap in the surface states of a non-magnetic TI, strong disorder suppresses the gap [27].
We consider a short-range disorder near the sample surface of the AFM MTI produced by randomly distributed charged point defects. We denote the 2D density of the point defects as \(n\) and the local impurity potential at the position \(\mathbf{r}=\mathbf{R}_{j}\) as \(u_{j}\). By analogy with Eq. (10), the operator of the disorder potential in the basis of the surface states has the form \(\hat{U}=\sum_{j}\hat{U}_{j}\), where \(\hat{U}_{j}(\mathbf{r})=\big(\hat{1}+\delta t_{z}\big)u_{j}\delta(\mathbf{r}-\mathbf{R}_{j})/2\) and \(\delta(\mathbf{r})\) is the delta function. We assume that the disorder is Gaussian, that is, \(\langle\hat{U}_{j}(\mathbf{r})\rangle=0\) and \(\langle\hat{U}_{i}(\mathbf{r}_{1})\hat{U}_{j}(\mathbf{r}_{2})\rangle=nu_{0}^{2}\delta(\mathbf{r}_{1}-\mathbf{r}_{2})\delta_{ij}\), where \(\langle...\rangle\) denotes the spatial average, \(u_{0}^{2}=\langle u_{i}^{2}\rangle\), and \(\delta_{ij}\) is the Kronecker symbol.
We assume that the disorder is weak, that is, \(j=nu_{0}^{2}/(2\pi\tilde{v}^{2})<1\), and, following a standard procedure, we calculate the self-energy in the Born approximation. As a result, we obtain at the \(n\)-th order the recursive Born series
\[\begin{cases}\hat{\Sigma}^{(n+1)}=\sum\limits_{k,i}\langle\hat{U}_{i}G(\hat{ \Sigma}^{(n)})\hat{U}_{i}\rangle,\\ G^{-1}(\hat{\Sigma}^{(n)})=-H-\hat{\Sigma}^{(n)}.\end{cases} \tag{13}\]
If this procedure converges and \(\hat{\Sigma}^{(n)}\rightarrow\hat{\Sigma}\) for \(n\rightarrow+\infty\), then the sum of the series can be represented as a self-consistent Born approximation (SCBA) solution: \(\hat{\Sigma}=\sum_{k,i}\langle\hat{U}_{i}G(\hat{\Sigma})\hat{U}_{i}\rangle\).
We find that in the AFM MTI the self-energy of the disorder has a non-trivial spin structure \(\hat{\Sigma}=\Sigma_{0}+\Sigma_{z}s_{z}\). We perform the transformation \(\Sigma_{0}=g_{0}+\delta g_{z},\ \Sigma_{z}=\delta g_{0}+g_{z}\) and, after integration over momentum and summation over \(i\), derive from Eqs. (13)
\[g_{0}^{(n+1)} = j(1-\delta^{2})\frac{\tilde{\mu}_{\text{s}}-g_{0}^{(n)}}{2} \Xi^{(n)}, \tag{14}\] \[g_{z}^{(n+1)} = \frac{1}{2}\ j(1-\delta^{2})g_{z}^{(n)}\Xi^{(n)},\] \[\Xi^{(n)} = \ln\ \frac{\tilde{v}^{2}k_{c}^{2}}{(\delta^{2}-1)\left[\left( \tilde{\mu}_{\text{s}}-g_{0}^{(n)}\right)^{2}-g_{z}^{2(n)}\right]}, \tag{15}\]
where \(k_{c}\) is a cutoff momentum. The obtained result is equivalent to the self-energy equations for the Dirac Hamiltonian without magnetization [27]. Following Ref. [27], we take \(g_{0}^{(0)}=-i0\) and \(g_{z}^{(0)}=0\), which implies that \(g_{z}^{(n)}=0\) and \(\Sigma_{z}^{(n)}=\delta\Sigma_{0}^{(n)}\). We plot the self-energy components in Fig. 3. We see that the disorder generates a real part of the self-energy.
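A minimal sketch of the fixed-point iteration of Eqs. (14)-(15) is given below. The parameter values are assumptions chosen for illustration, energies are measured in units of \(\tilde{v}k_{c}\), and the starting point \(g_{0}^{(0)}=-i0\), \(g_{z}^{(0)}=0\) follows Ref. [27].

```python
import numpy as np

# Assumed inputs (illustrative only): disorder strength j, gap parameter delta,
# surface chemical potential mu_s, velocity v, cutoff momentum kc
j, delta, mu_s, v, kc = 0.3, 0.05, 0.1, 1.0, 1.0

g0, gz = -1e-9j, 0.0     # g0^(0) = -i0, gz^(0) = 0
for n in range(200):
    # Eq. (15); the +0j keeps the logarithm on the complex branch
    Xi = np.log(v**2 * kc**2 /
                ((delta**2 - 1) * ((mu_s - g0)**2 - gz**2) + 0j))
    g0_new = j * (1 - delta**2) * (mu_s - g0) / 2 * Xi   # Eq. (14), first line
    gz_new = 0.5 * j * (1 - delta**2) * gz * Xi          # Eq. (14), second line
    if abs(g0_new - g0) < 1e-12:                         # SCBA fixed point reached
        break
    g0, gz = g0_new, gz_new

Sigma0, Sigmaz = g0 + delta * gz, delta * g0 + gz        # back-transform
print(f"Re Sigma0 = {Sigma0.real:.4f}, Im Sigma0 = {Sigma0.imag:.4f}")
```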
The main effect of the disorder is the increase of the surface chemical potential \(\tilde{\mu}_{\text{s}}\rightarrow\tilde{\mu}_{\text{s}}-\operatorname{Re} \Sigma_{0}\), which gives rise to the increase of the gap \(\delta\tilde{\mu}_{\text{s}}\).
## VI Discussion
In this work, we investigate the effects of the antiferromagnetic ordering on the properties of the surface states in the topological insulator. We find that a change of the chemical potential in the surface layer opens a gap in the spectrum away from the Fermi energy due to the finite AFM order. Such a gap is controlled by the bulk chemical potential and the value of the surface doping. This gap is robust against disorder. We should note that in real samples the effect of the surface defects can be more significant than simple density disorder and can lead to a quite complex picture [28].
In the presence of the AFM order, the time-reversal symmetry is broken. Instead, a combined time-reversal-like symmetry is present that includes a translation between spin-up and spin-down AFM layers [23].
Figure 3: Upper figure: the value of the renormalized gap \(\delta_{\text{eff}}=\delta(\mu_{\text{s}}-\operatorname{Re}\Sigma_{0})/\delta\mu_{\text{s}}\) as a function of the disorder for different values of the surface chemical potential \(\mu_{\text{s}}\). Lower figure: the self-energy components as a function of the disorder strength \(j\) for \(\mu_{\text{s}}=0.1vk_{c}\). We take \(n=5000\), \(\mu=0.5m\), \(M_{z}=0.1m\), \(A_{z}=0\) for both figures.
Figure 2: Left panel: energy spectrum as a function of the momentum \(k_{x}\) at \(k_{y}=0\). We take \(\mu=0.5m\), \(M_{z}=0.1m\), \(A_{z}=0\). The spectrum has a gap if \(\mu_{\text{s}}\neq 0\). Right panel: the parameter \(\delta\) as a function of the AFM magnetization \(M_{z}\).
Such a symmetry protects the gapless surface states. Unlike the real time-reversal symmetry, this combined symmetry can be broken by a non-magnetic perturbation. If the perturbation breaks the symmetry between the spin-up and spin-down regions, then a gap arises in the spectrum of the surface states, see Fig. 2.
Recent experiments with MnBi\({}_{2}\)Te\({}_{4}\) reveal that the doping of the first surface layer by Ge increases the gap in the surface states [29]. Also, DFT calculations performed in Ref. [24] show that the surface potential opens a gap in the surface states in the AFM MTI. Our results are consistent with these data. Surface doping is an effective tool that can allow us to achieve a large surface gap in the AFM MTIs.
###### Acknowledgements.
This work is supported by Russian Science Foundation (project No. 22-72-10074).
---
## arXiv:2309.16579

**Title:** A Physics Informed Machine Learning Method for Power System Model Parameter Optimization

**Abstract:** This paper proposes a gradient descent based optimization method that relies on automatic differentiation for the computation of gradients. The method uses tools and techniques originally developed in the field of artificial neural networks and applies them to power system simulations. It can be used as a one-shot physics informed machine learning approach for the identification of uncertain power system simulation parameters. Additionally, it can optimize parameters with respect to a desired system behavior. The paper focuses on presenting the theoretical background and showing exemplary use-cases for both parameter identification and optimization using a single machine infinite busbar system. The results imply a generic applicability for a wide range of problems.

**Authors:** Georg Kordowich, Johann Jaeger

**Published:** 2023-09-28T16:34:34Z

**Link:** http://arxiv.org/abs/2309.16579v1

# A Physics Informed Machine Learning Method for Power System Model Parameter Optimization
###### Abstract
This paper proposes a gradient descent based optimization method that relies on automatic differentiation for the computation of gradients. The method uses tools and techniques originally developed in the field of artificial neural networks and applies them to power system simulations. It can be used as a one-shot physics informed machine learning approach for the identification of uncertain power system simulation parameters. Additionally, it can optimize parameters with respect to a desired system behavior. The paper focuses on presenting the theoretical background and showing exemplary use-cases for both parameter identification and optimization using a single machine infinite busbar system. The results imply a generic applicability for a wide range of problems.
Automatic Differentiation, Backpropagation, Gradient Descent Optimization, Physics Informed Neural Networks, Power System Parameter Optimization
## I Introduction
### _Background_
The growth in distributed generation increases the complexity of the grid. More variability in generation and load flow demands reinforcement of the power grid. As the construction of new power lines and grid components can be prohibitively expensive, many recent approaches focus on maximizing the utilization of existing infrastructure. Consequently, a trend towards a smarter grid is notable, characterized by increased automation of grid operations and the installation of sensors and actuators to enhance grid observability and controllability.
Accurate models of power systems and components are an essential foundation of the trend towards a smarter grid. The utilization of concepts like digital twins is becoming increasingly prevalent for grid control and supervision [1]. For concepts like model predictive control, an accurate mathematical description is a key prerequisite [2]. Additionally, a lot of recent research focuses on employing different optimization or machine learning techniques in order to improve grid stability and security. Examples are the optimization of protection schemes, energy management, demand response or operational control [3, 4].
### _Challenge_
While good models are a fundamental requirement for all aforementioned advancements, it is often time-consuming and difficult to create accurate models. Even though the general structure of a model or digital twin may be known, the task of finding precise parameters that match real-world behavior is often particularly difficult.
Therefore, parameter identification and optimization can be identified as a core challenge for enabling secure operation in future power systems. While often described as two different tasks, it is important to note that parameter identification and optimization are essentially the same problem from a mathematical point of view: Parameter identification is in fact an optimization problem, aiming to minimize the error between model and real-world behavior.
### _Automatic Differentiation in Power Systems_
To address the challenge of parameter optimization and identification in power system simulations we propose a novel method that incorporates an automatic differentiation (AD) tool into a dynamic power system simulation. AD tools can be used to compute gradients of mathematical functions. In the context of our method, those gradients are utilized to optimize simulation parameters by minimizing a loss or error function using gradient descent.
The idea of using AD tools in the context of power systems was previously employed for a range of applications. In the steady state domain, they were used to calculate the Jacobian matrix for power flow calculations [5], to model power electronics [6], or for state estimation [7]. In the dynamic domain, AD tools have been used to create an induction machine model in the resistive companion form [8] and for trajectory sensitivity analysis [9].
Another method in the dynamic domain that utilizes AD tools and is related to our approach is Physics Informed Neural Networks (PINNs) [10]. The core idea behind PINNs is the embedding of analytical models into neural networks, which benefits training speed and reduces the maximum training error bounds [11]. PINNs have previously been used for a wide range of tasks [12]. Similarly to our approach, PINNs are also capable of system parameter identification [13].
A drawback of PINNs, however, is that physical knowledge covers only part of the learning process. Other parts that are either neglected or unknown must be learned by a neural network that is essentially a black-box model. Consequently,
a training process that requires data is necessary. Additionally, the use of neural networks can lead to unstable or unforeseeable behavior outside of the training data range [10].
Our approach utilizes the same idea of incorporating physics knowledge into the optimization problem, but completely eliminates the neural networks and relies solely on the physical knowledge. Compared to neural networks (black-box models) and PINNs (grey-box models), our approach ensures maximum interpretability and security by relying only on equations with physical meaning. Therefore, the novel approach can be considered a white-box model. This comes with a computational cost as shown in Fig. 1. On the other hand, its advantages lie in not relying on training data and its capacity to identify parameters from a singular time series, rendering the approach capable of one-shot, physics informed machine learning.
Another physics informed machine learning approach that does not incorporate neural networks and can be used for parameter estimation is the sparse identification of nonlinear dynamics (SINDy) algorithm [14]. It uses sparsity-promoting algorithms to select equations from a set of candidate functions that fit the dynamic behavior of the system [15]. SINDy requires not only time series data of the state variables but also their derivatives. Another drawback is that a reformulation of the power system description into matrix form is necessary, which limits the usability and generalizability of the approach.
Professional power system simulation software packages often contain similar gradient descent based parameter optimization functions. To the best of our knowledge, all gradient estimation techniques used so far are based on the difference quotient in (1), which makes small variations of the parameters to be optimized necessary.
\[\frac{\partial f(x)}{\partial x}\approx\frac{f(x+h)-f(x)}{h} \tag{1}\]
This means that the number of simulations required per optimization step is equal to the number of parameters to be optimized. Using an AD tool, only one simulation is necessary for any number of parameters, making the presented approach computationally cheaper.
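The difference can be made concrete with a toy example. The sketch below uses PyTorch, and the `simulate` function is a hypothetical stand-in for a full power system simulation, not the actual implementation of [16].

```python
import torch

def simulate(params):
    """Hypothetical stand-in for a simulation returning a scalar loss."""
    a, b, c = params
    return (a * b - torch.sin(c)) ** 2

params = torch.tensor([1.0, 2.0, 0.5], requires_grad=True)

# Automatic differentiation: ONE simulation yields all three gradients
simulate(params).backward()
print("AD gradients:", params.grad)

# Difference quotient of Eq. (1): one extra simulation per parameter
h = 1e-5
base = simulate(params.detach())
fd = torch.stack([(simulate(params.detach() + h * torch.eye(3)[i]) - base) / h
                  for i in range(3)])
print("FD gradients:", fd)
```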
The core contributions of this paper lie in:
* introducing a physics informed optimization method for parameters of dynamic simulations;
* showing, that the optimization process is generically usable for a wide range of problems;
* and the publication of corresponding code for easy reproducibility [16].
## II Methodology
In the following chapter, the optimization method is presented and explained in detail. For this purpose, first the general process behind power system simulations is explained in subsection II-A. Afterwards, a brief introduction to optimization via gradient descent is given. Then, we show how simulation parameters can be optimized via gradient descent theoretically, and afterwards we show how AD tools make the theoretical approach practically viable.
The methodology is shown using the example of the single machine infinite busbar (SMIB) system by Kundur [17]. For the example, we assume the inertia constant \(H\) of the generator is unknown. The goal of the process is the parameter identification of \(H\), so that the dynamic response of the simulated generator matches the response of the original Kundur SMIB system. In practice, the same approach can be used to identify parameters of real world systems.
### _Dynamic Power System Simulation_
An overview of power system simulations is given in the following subsection as a basis for the proposed method. For a more detailed description, interested readers are referred to [18] or [19]. Generally, dynamic power system models can be described by a set of differential algebraic equations (DAE) shown in (2):
\[\begin{split}\dot{\mathbf{x}}&=f(\mathbf{x},\mathbf{ y})\\ 0&=g(\mathbf{x},\mathbf{y})\end{split} \tag{2}\]
Here, \(\mathbf{x}\) represents the vector of state variables while \(\mathbf{y}\) contains the algebraic variables. The equation above can further be simplified by describing the algebraic variables \(y\) as a function \(h\) of differential variables \(x\):
\[\dot{\mathbf{x}}=f(\mathbf{x},h(\mathbf{x}))=f_{new}(\mathbf{x}) \tag{3}\]
In the case of the SMIB system, the differential equations represent the 6th order model of a generator where the state variables can be found on the left hand side [19]:
\[\begin{split}\Delta\dot{\omega}&=\frac{1}{2H}(T_{m} -T_{e})-D\Delta\omega\\ \dot{\delta}&=\Delta\omega\\ \dot{{E_{q}}^{\prime}}&=\frac{1}{T_{d0}^{\prime}}(E_ {f}-E_{q}^{\prime}-(X_{d}-X_{d}^{\prime})I_{d})\\ \dot{{E_{d}}^{\prime}}&=\frac{1}{T_{q0}^{\prime}}(-E_ {d}^{\prime}+(X_{q}-X_{q}^{\prime})I_{q})\\ \dot{{E_{q}}^{\prime\prime}}&=\frac{1}{T_{d0}^{\prime \prime}}(E_{q}^{\prime}-E_{q}^{\prime\prime}-(X_{d}^{\prime}-X_{d}^{\prime \prime})I_{d}^{\prime})\\ \dot{{E_{d}}^{\prime\prime}}&=\frac{1}{T_{q0}^{\prime \prime}}(E_{d}^{\prime}-E_{d}^{\prime\prime}+(X_{q}^{\prime}-X_{q}^{\prime \prime})I_{q}^{\prime})\end{split} \tag{4}\]
Fig. 1: Categorization of different Machine Learning Methods for Power System Simulation
The algebraic variables correspond to the bus-voltages, which can be calculated using the admittance matrix and the current injections. As the current injections are a function of the state variables, the algebraic variables can be represented as a function \(h(x)\) of the state variables:
\[\mathbf{y}=\mathbf{V}=\mathbf{Y}^{-1}\mathbf{I}(\mathbf{x})=h(\mathbf{x}) \tag{5}\]
Together, the equations above fully describe the power system model. During a simulation, (3) can be integrated by using suitable integration methods like Runge-Kutta or Euler's method. When using Euler's method, the trajectories of the state variables can be determined by applying the equation below during every timestep:
\[\mathbf{x_{t+1}}=\mathbf{x_{t}}+\mathbf{\dot{x}_{t}}\Delta t=\mathbf{x_{t}}+f( \mathbf{x_{t}})\Delta t \tag{6}\]
Therefore, a power system simulation consists of nothing but a long chain of very basic and, most importantly, locally differentiable operations. This fact is key for the applicability of the presented approach.
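A minimal sketch of this idea is given below: a toy swing-type system (an illustrative stand-in, not the full model (4)) is integrated with Euler's method while PyTorch records every operation, so the gradient of the result with respect to the inertia constant \(H\) is available afterwards.

```python
import torch

def f(x, H):
    """Toy right-hand side of Eq. (3): simplified swing dynamics (illustrative)."""
    domega, delta = x[0], x[1]
    return torch.stack([(0.1 - torch.sin(delta)) / (2 * H), domega])

H = torch.tensor(3.5, requires_grad=True)   # optimizable simulation parameter
x = torch.zeros(2)
dt = 0.01
for _ in range(500):
    x = x + f(x, H) * dt   # Eq. (6); every operation is locally differentiable

x[0].backward()            # chain rule through all 500 Euler steps
print(H.grad)              # d(final frequency deviation)/dH
```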
### _Gradient Descent Optimization_
The gradient descent optimization method is a very common and well-researched one. Its purpose is to optimize parameters so that a _loss_ or an _error_ function, often a difference between desired and actual output, is minimized. This is achieved by adapting the parameters in the opposite direction of the gradient of the loss function with respect to the parameters. Intuitively, this can be described as "going the loss function downhill" towards a local minimum. Formally, the _loss_ or _error_ is often described by a function \(L(\theta)\) that depends on the optimizable parameters \(\theta\). The parameter adaption can formally be described by (7):
\[\theta_{new}=\theta_{old}-\eta\cdot\nabla_{\theta}L(\theta) \tag{7}\]
The parameter \(\eta\) is often referred to as the learning rate and describes how fast the optimizable parameters are changed. It can be adapted to balance the optimization process between speed of convergence and stability.
In the example of the optimization of the SMIB system, an adequate loss function could be the mean squared error (MSE) between simulated rotor frequency and real rotor frequency, depending on the inertia constant \(H_{sim}\):
\[L(H)=\frac{1}{N}\sum_{t=1}^{N}(\Delta\omega_{real,t}-\Delta\omega_{sim,t}(H_{ sim}))^{2} \tag{8}\]
Here, \(N\) is the number of discretely measured or simulated timesteps for the real-world and simulated trajectory of \(\Delta\omega\), respectively. The gradient descent can then be executed by calculating the gradient of the loss function \(L\) with respect to \(H\). The loss can be minimized by iteratively adapting \(H\) in the opposite direction of the gradients on \(H\):
\[H_{new}=H_{old}-\eta\frac{\partial L}{\partial H} \tag{9}\]
The same process works for multiple parameters at the same time by calculating the gradient \(\nabla_{\theta}L(\theta)\).
The prerequisite of the gradient descent algorithm is, of course, that the gradients with respect to the parameters to be optimized are known. To determine those gradients, the proposed method makes use of AD tools originally developed for neural networks. Those tools calculate the gradients of the loss function with respect to the weights of a neural network by iteratively applying the chain rule of differentiation. This process is known as backpropagation. The following section II-C shows that the same process can be applied to power system simulations as well.
### _Backpropagation for Power Systems_
Obtaining the gradient of the loss function with respect to \(H\) or other parameters is a challenge. (8) is a long and convoluted equation, as it depends on \(\Delta\omega_{gen,sim}\), which depends on \(H\) in a non-trivial way. On the other hand, the same is true for neural networks, where the loss function typically depends on thousands of parameters which influence the loss function in different ways. However, both neural networks and power system simulations consist of a long chain of basic, and most importantly locally differentiable, operations \((+,-,*,/,ln(x)\), \(e^{x}...)\) as shown in section II-A.
This fact allows the application of the chain rule of differentiation. The gradient of the loss function with respect to the parameters does not have to be computed at once with an explicit expression; instead, it can be split up into small steps of gradients of intermediate basic operations:
\[\frac{\partial L}{\partial H}=\frac{\partial L}{\partial o_{1}}\frac{\partial o _{1}}{\partial o_{2}}\frac{\partial o_{2}}{\partial o_{3}}\ \...\ \ \frac{\partial o_{n-1}}{\partial o_{n}}\frac{ \partial o_{n}}{\partial H} \tag{10}\]
For this to be feasible, it is essential that every operation performed during the simulation is recorded, enabling subsequent calculation of the local gradient. For this purpose, AD tools like PyTorch or TensorFlow, originally developed for the training of neural networks, can be used [20]. When training neural networks, during the forward pass (i.e. the evaluation phase of the neural network), those tools build a computational graph by recording every operation. To compute the gradients, this graph can be traversed backwards, applying the chain rule of differentiation in every step. The same process can be applied to arbitrary functions and, as we show here, to power system simulations.
As an example, suppose parameter \(\theta\) shall be optimized so that the (nonsensical) loss function given in (11) is minimized:
\[L(\theta)=\left(a-\frac{b}{c\cdot\exp(\theta)}\right)-0 \tag{11}\]
Then, during the evaluation of the function, the AD tool would build the graph shown in Fig. 2, with the corresponding values for the (similarly nonsensical) exemplary inputs given in Table I. Afterwards, the "local gradient" of every operation can be calculated in the backward direction, as the derivatives of the basic operations are known. After the local gradients are calculated, the gradient \(\partial L/\partial\theta\) can directly be computed by applying (10):
\[\begin{split}\frac{\partial L}{\partial\theta}=&\frac {\partial L}{\partial\omega_{3}}\frac{\partial\omega_{3}}{\partial\omega_{2} }\frac{\partial\omega_{2}}{\partial\omega_{1}}\frac{\partial\omega_{1}}{ \partial\theta}=\\ & 1.65*0.30*-5.41*-1=2.63\end{split} \tag{12}\]
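The same toy computation can be reproduced with an AD tool in a few lines. The input values below are assumptions, since Table I is not reproduced here, so the printed gradient need not equal the number in (12); the analytic derivative \(\partial L/\partial\theta=b/(c\,e^{\theta})\) serves as the check.

```python
import torch

# Assumed inputs (Table I with the paper's exemplary numbers is not reproduced here)
a, b, c = torch.tensor(2.0), torch.tensor(1.5), torch.tensor(0.8)
theta = torch.tensor(0.3, requires_grad=True)

L = (a - b / (c * torch.exp(theta))) - 0   # Eq. (11); graph of Fig. 2 is built
L.backward()                               # traverse the graph backwards

# Analytic check: dL/dtheta = b / (c * exp(theta))
print(theta.grad.item(), (b / (c * torch.exp(theta.detach()))).item())
```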
Exactly the same process can be applied to power system simulations as shown in Fig. 3. More concretely, the process consists of the following steps:
1. Run the simulation. During the simulation, the AD tool automatically builds a computational graph.
2. Use the output of the simulation to calculate a loss, e.g. the MSE loss between output of the simulation and desired output.
3. Use the AD tool to calculate the gradients of the loss function with respect to the simulation parameters by traversing the computational graph in the backward direction.
4. Use the gradients to calculate new values of the simulation parameters by applying the gradient descent method described in (7).
5. Repeat steps 1) - 4) until the loss is below a predetermined threshold.
In the example of the SMIB system, \(H\) can be adapted, so that the time series data of \(\Delta\omega_{gen,sim}\) matches the real data. This process can also be applied to multiple parameters simultaneously. Additionally, it is not only applicable for time series matching, but for many use cases. The prerequisite is, that the desired behavior can be described by a loss function. This is true for many examples such as determining controller parameters for optimal damping, converting blackbox models to interpretable models, optimal power flow problems and more.
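A compact end-to-end sketch of steps 1) - 5) is shown below for the inertia identification example. The simplified swing dynamics, all numerical values, and the choice of the Adam optimizer are illustrative assumptions rather than the exact setup used in Sec. IV.

```python
import torch

def simulate(H, n_steps=500, dt=0.01):
    """Euler simulation of a toy swing equation (a stand-in for the SMIB model)."""
    domega, delta = torch.tensor(0.0), torch.tensor(0.5)
    traj = []
    for _ in range(n_steps):
        domega = domega + dt * ((0.8 - torch.sin(delta)) / (2 * H) - 0.05 * domega)
        delta = delta + dt * domega
        traj.append(domega)
    return torch.stack(traj)

# "Real" data, generated here with the true but pretend-unknown inertia constant
target = simulate(torch.tensor(3.5)).detach()

H = torch.tensor(8.0, requires_grad=True)           # initial guess
optimizer = torch.optim.Adam([H], lr=0.1)
for step in range(300):
    optimizer.zero_grad()
    loss = torch.mean((simulate(H) - target) ** 2)  # steps 1-2: simulate, MSE (8)
    loss.backward()                                 # step 3: gradients via AD
    optimizer.step()                                # step 4: parameter update (7)
print(f"identified H = {H.item():.3f}")  # approaches 3.5 in this toy setting
```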
## III Implementation Details
To enable a proof of concept for the new method, we implemented a power system simulation in Python. The implementation is based on the works of Haugdal and Uhlen [18], but a re-implementation and simplification was necessary to allow the gradient tracking. All optimizable parameters are implemented as PyTorch tensors. This enables the PyTorch backend to serve as an AD tool and automatically generate the computational graphs. Two challenges that arise with the approach can be solved in the implementation. The first challenge is known from deep neural networks as the vanishing or exploding gradient problem. The other issue is that gradient descent can get stuck in local optima and is therefore highly dependent on the choice of initial parameters.
### _Vanishing and Exploding Gradients_
The long chain of multiplications when applying the chain rule of differentiation can in theory cause the total gradient to vanish or explode. As an example, assume the chain in (10) consists of \(1000\) factors, and every local gradient has a value of \(0.9\). In this case, the total gradient would vanish to \(1.7\times 10^{-46}\). Similarly, the gradient can explode for values greater than one. Fortunately, normalizing the output and using the per-unit system reduces the problem. We found that the gradients of the loss function with respect to certain parameters are remarkably stable during an optimization process. When adapting the learning rate accordingly, we did not find any issue with exploding or vanishing gradients when the simulation itself was stable.
The gradients do tend to explode, though, in cases where the simulation itself is unstable. While the gradients mostly point in the right direction, and therefore lead the simulation towards a more stable operation, an instability in the simulation can lead to an unstable optimization process, because the large gradients adapt the parameters too fast. To solve this issue,
Fig. 3: Comparison of the optimization process for the function, ANNs and the power system simulation
Fig. 2: Computational graph for equation (11)
we employ a custom optimizer with a predefined maximum step size that limits the parameter adaption per optimization step. This stabilizes the optimization process, which means that initial values can be used for parameters that lead to unstable simulations.
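A minimal sketch of such a step-size-limited update is given below; the learning rate and the cap are assumed values, and the actual optimizer in [16] may differ.

```python
import torch

def limited_step(params, lr=0.1, max_step=0.05):
    """Gradient descent step with a hard cap on the per-step parameter change.
    Sketch of the custom optimizer described above; values are assumptions."""
    with torch.no_grad():
        for p in params:
            step = (lr * p.grad).clamp(-max_step, max_step)  # limit adaption
            p -= step
            p.grad.zero_()
```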
### _Local Optima_
The second challenge is the existence of local optima. While gradient descent optimizers exist that use a momentum function in order to overcome those, local optima pose an inherent problem to the presented method. Fortunately, in the experiments we ran so far, local optima were an uncommon occurrence. This is shown in Fig. 4. In order to find local optima, we varied the parameters of the SMIB system between \(50\%\) and \(200\%\) of their original value and calculated the MSE loss between the original and new trajectory of \(\Delta\omega\). The corresponding normalized gradients of the loss function with respect to the parameters are shown in Fig. 4. For an ideally convex optimization, the gradients would appear as blue, positive values if the parameter was reduced, and red, negative values if it was increased. This is true for many parameters, but it can be seen that especially \(X_{d}^{\prime\prime}\) and \(X_{q}^{\prime\prime}\) do have a number of local optima. Therefore, the initial guess of those parameters is quite relevant.
As it can be quite time-consuming to search for good initial parameter guesses, we aimed to tackle this issue in the implementation. By using PyTorch tensors for the power system simulation, the implementation is inherently vectorized. Therefore, it is computationally inexpensive to run the simulation process with multiple values for each parameter in parallel. Due to this implementation, a vector of initial guesses can be used for all optimizable parameters, as shown in the sketch below. This significantly decreases the probability that the trajectories from all starting points get stuck in local optima, and therefore increases the probability of finding a global optimum, making the search for good initial guesses obsolete.
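The sketch below illustrates the idea: the inertia constant is a single PyTorch tensor holding several initial guesses, one batched simulation produces one loss per guess, and a single backward pass yields all gradients. The toy dynamics and all values are illustrative assumptions.

```python
import torch

def simulate(H, n_steps=200, dt=0.01):
    """Batched Euler integration: H may hold any number of initial guesses."""
    domega = torch.zeros_like(H)
    delta = torch.full_like(H, 0.5)
    traj = []
    for _ in range(n_steps):
        domega = domega + dt * ((0.8 - torch.sin(delta)) / (2 * H) - 0.05 * domega)
        delta = delta + dt * domega
        traj.append(domega)
    return torch.stack(traj, dim=-1)       # shape: (n_guesses, n_steps)

target = simulate(torch.tensor([3.5])).detach()

H = torch.tensor([2.0, 4.0, 6.0, 8.0], requires_grad=True)  # vector of guesses
loss = ((simulate(H) - target) ** 2).mean(dim=-1)           # one loss per guess
loss.sum().backward()       # a single backward pass fills all four gradients
print(H.grad)
```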
## IV Results
To explore the effectiveness of the proposed approach, we show two use-cases. The first one is the already mentioned parameter estimation for the SMIB system, the second experiment aims to tune the parameters of a power system stabilizer (PSS) in order to facilitate maximum damping. The spirit behind the experiments is to maintain simplicity, in order to show the results in a straightforward manner. The code to replicate those experiments is available on GitHub [16].
### _Parameter Identification of a SMIB system_
#### IV-A1 Model
For the first experiment, we pretend the inertia constant \(H_{gen}\) of the SMIB system is unknown. The SMIB system consists of two 6th order generator models, of which one is very large and can therefore be considered an infinite busbar. The parameters are taken from Kundur [17] and are equivalent to the model used in [18]. No governor, exciter or power system stabilizer are used for the first experiment.
#### IV-A2 Optimization Process
To apply the parameter identification process, "real data" with which the simulation can be aligned is necessary. To create it, we ran a simulation of the SMIB system with an inertia constant \(H_{gen,real}\) of \(3.5\,\mathrm{s}\) in DIgSILENT's PowerFactory. During the simulation, a short circuit is applied at \(t=1\,\mathrm{s}\) until \(t=1.05\,\mathrm{s}\). This induces an oscillation in the trajectory of \(\Delta\omega\), which is then exported. Afterwards, the process described in subsection II-C can be used to fit the simulation parameter \(H_{gen,sim}\) to the "real world" parameter \(H_{gen,real}\). For this purpose, we simulate the same SMIB system disturbed by a short circuit using the power system simulation implemented in Python in order to track the gradients. The initial guess for the parameter \(H_{gen,sim}\) is \(8\,\mathrm{s}\). As a loss function we use the MSE between the trajectories of \(\Delta\omega_{gen,real}\) and \(\Delta\omega_{gen,sim}\) given in (8).
The optimization process iteratively adapts the parameter, which decreases the normalized loss as shown in Fig. 5. Fig. 6 illustrates how the trajectory of \(\Delta\omega_{gen,sim}\) progressively converges toward the trajectory of the real-world scenario, eventually achieving a perfect match. The optimization stops when \(H_{gen,sim}\) changes less than \(10^{-6}\) within one optimization step. The final value is \(H_{gen,sim}=3.5001\), which is equivalent to an error of \(0.003\,\%\).
#### IV-A3 Influence of Noise
In order to test the influence of noise on the optimization, we added Gaussian noise to the signal, as shown in Fig. 6 as a thin blue line. For different noise levels, the error slightly increased to up to \(0.2\,\%\), as shown in Tab. II. Even though the error slightly increases, the optimization is
Fig. 4: Gradients of the loss function with respect to SMIB parameters
extremely robust to noise, as the absolute error is still very low. The reason for this is that the loss function takes the whole trajectory into account, and therefore inherently averages out noise.
### _Parameter Optimization of a Power System Stabilizer_
The goal of the second experiment is to tune the parameters of a PSS in order to facilitate maximum damping. This is a classic parameter optimization problem. For this purpose, we extended the previously mentioned model with an automatic voltage regulation system (AVR) and a power system stabilizer. In the spirit of simplicity for the experiments, we chose the simple exciter (SEXS) model as an AVR and the STAB1 model shown in Fig. 7 as a PSS [21]. The initial guesses of the relevant parameters of the PSS are listed in Tab. III. All those parameters are tuned simultaneously by calculating all the respective gradients at once using the AD tool. For the optimization process, a suitable loss function must be selected. As the maximum peak is mostly determined by the short circuit and cannot be influenced by the PSS, we chose the mean absolute error as a loss function, as it is more robust to outliers than the MSE:
\[L(K,T_{w},T_{1},T_{2},T_{3},T_{4})=\frac{1}{N}\sum_{t=1\,\mathrm{s}}^{t=10\,\mathrm{s}}|\Delta\omega_{sim,t}| \tag{13}\]
The optimization goal is therefore to reduce the oscillation of \(\Delta\omega_{sim,t}\) to zero as fast as possible. In Fig. 8 it is clear that the oscillation decays only very slowly when using the suboptimal values of the initial guess. During the optimization, once again the loss decreases, and the oscillation therefore decays faster and faster. This is achieved by optimizing the parameters towards the final values given in Tab. III. Note that the parameter \(H_{lim}\) for the voltage limiting is not optimized. As no optimal parameters are known for this problem, no error can be given for this experiment. It can be seen, though, that the parameter optimization is successful, as the oscillation is damped significantly better than with the initial guess.
## V Discussion
The experiments in the previous section show promising results. They imply that the tool has a generic applicability for a wide range of problems. An advantage is the simplicity of the approach and the easy implementation of the optimization
Fig. 5: Loss and inertia constant during the optimization process
Fig. 8: \(\Delta\omega\) trajectories of the simulated generator during the optimization of PSS parameters
Fig. 6: \(\Delta\omega\) trajectories of real-world and simulated generator during the optimization process
Fig. 7: The power system stabilizer model STAB1
itself. However, it is important to acknowledge that there are several uncertainties that have yet to be investigated.
Even though we have not faced such a problem so far, the process is not suited for strongly multimodal optimization problems that have many local optima. Even though the challenge of local optima can be mitigated by using a vector of initial guesses, the probability of getting stuck in a local optimum increases with their quantity. As of now, it is unclear how many real-world power system optimization problems are too multimodal for the optimization.
Additionally, while it is easy to implement the optimization itself, it is quite time-consuming to implement and test the power system simulation itself in Python. An implementation using a modular concept, as demonstrated in [18], is feasible. Nevertheless, creating a generic power system simulation library is a substantial undertaking.
While the proposed optimization has some benefits over applying PINNs instead, it must be noted that the process is computationally expensive in comparison. Python is inherently relatively slow, and the tracking and backpropagation of the gradients do take a certain amount of time. This has not been an issue for the simple experiments so far, but the scalability of the approach must be investigated.
The results show that the optimization process can correctly identify multiple parameters simultaneously from a single trajectory, and is therefore essentially a one-shot learning approach. It must be noted, though, that the optimization only finds "a", not necessarily "the" optimal solution. For larger systems, it is conceivable that multiple sets of parameters minimize the loss function equally well. In this case, the one-shot learning property is lost, and additional information is necessary.
## VI Conclusion
This paper proposes a gradient descent based optimization approach for power system simulations, that relies on tracking the gradients during a simulation using an AD tool, namely PyTorch. The paper shows the theoretical foundation, practical implementation and first results of use-cases. In the experiments, we show that the optimization process can identify uncertain power system parameters from a single trajectory of a dynamic process, and optimize controller parameters with respect to the goal of power swing damping. The results imply that the optimization process is generically applicable and has the potential to help solve a wide range of optimization problems. Additionally, it significantly reduces the number of simulations necessary for the optimization of multiple parameters. Future works will demonstrate more use cases and examine the scalability of the approach.
---
## arXiv:2308.16650

**Title:** Optimal confidence interval for the difference of proportions

**Abstract:** Estimating the probability of the binomial distribution is a basic problem, which appears in almost all introductory statistics courses and is performed frequently in various studies. In some cases, the parameter of interest is a difference between two probabilities, and the current work studies the construction of confidence intervals for this parameter when the sample size is small. Our goal is to find the shortest confidence intervals under the constraint of coverage probability being at least as large as a predetermined level. For the two-sample case, there is no known algorithm that achieves this goal, but different heuristic procedures have been suggested, and the present work aims at finding optimal confidence intervals. In the one-sample case, there is a known algorithm that finds optimal confidence intervals, presented by Blyth and Still (1983). It is based on solving small and local optimization problems and then using an inversion step to find the global optimum solution. We show that this approach fails in the two-sample case and therefore, in order to find optimal confidence intervals, one needs to solve a global optimization problem, rather than small and local ones, which is computationally much harder. We present and discuss the suitable global optimization problem. Using the Gurobi package we find near-optimal solutions when the sample sizes are smaller than 15, and we compare these solutions to some existing methods, both approximate and exact. We find that the improvement in terms of lengths with respect to the best competitor varies between 1.5% and 5% for different parameters of the problem. Therefore, we recommend the use of the new confidence intervals when both sample sizes are smaller than 15. Tables of the confidence intervals are given in the Excel file in this link.

**Authors:** Almog Peer, David Azriel

**Published:** 2023-08-31T11:49:36Z

**Link:** http://arxiv.org/abs/2308.16650v3

# Optimal confidence interval for the difference of proportions
###### Abstract
Estimating the probability of the binomial distribution is a basic problem, which appears in almost all introductory statistics courses and is performed frequently in various studies. In some cases, the parameter of interest is a difference between two probabilities, and the current work studies the construction of confidence intervals for this parameter when the sample size is small. Our goal is to find the shortest confidence intervals under the constraint of coverage probability being at least as large as a predetermined level. For the two-sample case, there is no known algorithm that achieves this goal, but different heuristic procedures have been suggested, and the present work aims at finding optimal confidence intervals. In the one-sample case, there is a known algorithm that finds optimal confidence intervals, presented by Blyth and Still (1983). It is based on solving small and local optimization problems and then using an inversion step to find the global optimum solution. We show that this approach fails in the two-sample case and therefore, in order to find optimal confidence intervals, one needs to solve a global optimization problem, rather than small and local ones, which is computationally much harder. We present and discuss the suitable global optimization problem. Using the Gurobi package we find near-optimal solutions when the sample sizes are smaller than 15, and we compare these solutions to some existing methods, both approximate and exact. We find that the improvement in terms of lengths with respect to the best competitor varies between 1.5% and 5% for different parameters of the problem. Therefore, we recommend the use of the new confidence intervals when both sample sizes are smaller than 15. Tables of the confidence intervals are given in the Excel file in this link.
## 1 Introduction
The task of constructing confidence intervals for the proportion of the binomial distribution is a basic problem in statistics, which appears in almost all introductory statistics courses and is performed frequently in many studies. In some cases, the parameter of interest is the difference between two proportions, and the present work studies the construction of confidence intervals for this parameter. Specifically, if \(p_{1}\) and \(p_{2}\) are two proportions, the parameter of interest is \(\Delta=p_{1}-p_{2}\). Other functions, such as the ratio \(p_{1}/p_{2}\) or the log odds ratio \(\log\left(\frac{p_{1}/(1-p_{1})}{p_{2}/(1-p_{2})}\right)\) will not be discussed here, but we believe that our methodology can
be extended to these functions.
First, one needs to distinguish between an exact confidence interval (henceforth, CI) and an approximate CI. An exact CI has a guarantee that the confidence level is above some predetermined level of \(1-\alpha\) for all the parameter space, while an approximate CI achieves this level only asymptotically, and might have a smaller confidence level for some values of the parameter. An exact CI has the advantage of guaranteeing the desired level for every sample size and for every value of the parameter. However, it might come at the cost of larger intervals. On the other hand, an approximate CI has the right coverage level for large sample sizes but may not be appropriate for small sample sizes. This work focuses on exact CIs and small sample sizes.
We now review some widely-used methods for the one-sample case. The most popular one is the Wald CI, which is based on the normal approximation of the binomial variable. Specifically, let \(X\sim Binomial(n,p_{1})\), and let \(\hat{p}_{1}=X/n\). The Wald CI is \(\hat{p}_{1}\pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{\hat{p}_{1}(1-\hat{p}_{1})}{ n}}\), where \(z_{1-\frac{\alpha}{2}}\) is the \(1-\frac{\alpha}{2}\) quantile of the standard normal distribution. The Wald CI is symmetric around the observed proportion \(\hat{p}_{1}=X/n\), and its width depends on the variance estimator and the level of confidence \(1-\alpha\). Among the approximate CIs, the Wilson score (Wilson (1927)) gained some popularity. Similar to the Wald CI, the Wilson CI is based on the normal approximation, but with a different variance estimator. Agresti and Coull (1998) showed that the performance of the Wald CI is much inferior to the Wilson CI in terms of confidence level. Agresti and Coull also suggested another CI, which they call an adjusted Wald CI. The idea is to simply take \(X^{*}=X+2,n^{*}=n+4\) and compute the Wald CI with \(X^{*}\) and \(n^{*}\).
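As a concrete illustration, here is a minimal R sketch of these two intervals (R is also the software used later in this work); the function names are ours and not from any package:

```r
# One-sample Wald interval for p1 given x successes out of n trials
wald_ci <- function(x, n, alpha = 0.05) {
  p_hat <- x / n
  z <- qnorm(1 - alpha / 2)
  p_hat + c(-1, 1) * z * sqrt(p_hat * (1 - p_hat) / n)
}

# Agresti-Coull adjusted Wald: add two successes and two failures
agresti_coull_ci <- function(x, n, alpha = 0.05) {
  wald_ci(x + 2, n + 4, alpha)
}

wald_ci(3, 10)           # e.g. 3 successes out of 10 trials
agresti_coull_ci(3, 10)
```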
Brown et al. (2001) provided a comprehensive review of different methods to construct CIs. They compared performance in terms of minimum coverage level, average coverage level, and average deviation from \(1-\alpha\). Based on the above criteria, they recommended the Wilson score CI or the Jeffreys CI for \(n<40\). The Jeffreys CI is obtained by using a prior \(BETA(\frac{1}{2},\frac{1}{2})\), known as the Jeffreys prior, and taking the middle \(1-\alpha\) area under the posterior distribution. For \(n\geq 40\), Brown et al. suggested using either the Wilson or the Jeffreys CIs or the Agresti–Coull method that was mentioned above.
The first exact CI for the one-sample case was suggested by Clopper and Pearson (1934), and it is the intersection of two one-sided CIs. The Clopper and Pearson CI is generally too conservative - the intervals are fairly wide. Correspondingly, the confidence level is higher than the desired level, especially for small \(n\). Sterne (1954) developed an exact CI that is shorter than the Clopper and Pearson CI, and is optimal in the sense of being the shortest confidence regions that have the correct confidence level. However, Crow (1956) showed that the Sterne method might lead to confidence regions that are the union of intervals and not a single interval. Crow further modified the Sterne method to return only confidence regions consisting of one interval for any \(x\), preserving the above optimality property for CIs. Blyth and
Still (1983) proposed an algorithm that finds all optimal CIs that are intervals, including the Crow CI. In Section 3.1 the Blyth and Still algorithm is described in detail, as we wish to generalize it to the two-sample case.
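A minimal R sketch of the Clopper and Pearson interval, written here in its standard beta-quantile form (the function name is ours):

```r
# Clopper-Pearson exact interval via beta quantiles
clopper_pearson_ci <- function(x, n, alpha = 0.05) {
  lower <- if (x == 0) 0 else qbeta(alpha / 2, x, n - x + 1)
  upper <- if (x == n) 1 else qbeta(1 - alpha / 2, x + 1, n - x)
  c(lower, upper)
}

clopper_pearson_ci(3, 10)
```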
Now, we will review several CIs for the two-sample case, i.e., for \(\Delta=p_{1}-p_{2}\). The Wald CI can be easily generalized based on the normal approximation of the differences of the averages. Specifically, let \(X\sim Binomial(n,p_{1}),Y\sim Binomial(m,p_{2})\) let \(\hat{p}_{1}=X/n,\hat{p}_{2}=Y/m\). The Wald CI for \(\Delta\) is
\[\hat{p}_{1}-\hat{p}_{2}\pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{\hat{p}_{1}(1- \hat{p}_{1})}{n}+\frac{\hat{p}_{2}(1-\hat{p}_{2})}{m}}.\]
Miettinen and Nurminen (1985) demonstrated the poor coverage of this CI in a few examples and suggested relying on more stabilized estimators of the variance, based on quantiles of the chi-square distribution, which results in an approximate CI.
Newcombe (1998) reviewed 11 methods for creating CIs for \(\Delta\), including the methods that were mentioned above. Newcombe compared the methods by the average coverage, the minimal coverage, and the percentage of non-coverage. Newcombe suggested a method that he calls the 'hybrid score', which performed well under the above criteria; see Section 5.1 for more details. Another recommended method for constructing a CI for \(\Delta\) was proposed by Agresti and Caffo (2000). Generalizing the Agresti and Coull CI, they proposed to add four pseudo-observations, one success and one failure to each group, i.e., define \(X^{*}=X+1,Y^{*}=Y+1,n^{*}=n+2,m^{*}=m+2\) and then calculate the Wald CI for the difference.
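A minimal R sketch of the two-sample Wald interval and the Agresti and Caffo adjustment (function names are ours):

```r
# Two-sample Wald interval for p1 - p2
wald2_ci <- function(x, y, n, m, alpha = 0.05) {
  p1 <- x / n; p2 <- y / m
  z <- qnorm(1 - alpha / 2)
  (p1 - p2) + c(-1, 1) * z * sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / m)
}

# Agresti-Caffo: one pseudo success and one pseudo failure per sample
agresti_caffo_ci <- function(x, y, n, m, alpha = 0.05) {
  wald2_ci(x + 1, y + 1, n + 2, m + 2, alpha)
}
```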
A few exact CIs for \(\Delta\) were also developed. Santner and Snell (1980) proposed three different methods to construct exact CIs. One of them, called the tail method, has gained popularity due to its simplicity and ease of calculation. This method can be thought of as a two-dimensional analog of the Clopper and Pearson CI for one proportion, where the CIs are an intersection of two one-sided intervals. This method typically leads to too conservative intervals, as shown by Chan and Zhang (1999). The latter paper suggests a different method for constructing exact CIs. Agresti and Min (2001) studied exact CIs for the two-sample case. They reviewed the Chan and Zhang CI and suggested a modification that results in significant improvement in performance. The method is described in detail in Section 5.1. Fagerland et al. (2015) compared several methods including the ones mentioned above and also others both approximate and exact. Their main criterion for comparison was the closeness to the nominal level \(1-\alpha\). They recommended using the Agresti and Min CI.
To sum up, for the one-sample case there exists an algorithm that minimizes the length of the CI under the constraint of obtaining a certain coverage level. For the two-sample case, such an algorithm does not exist, but rather different heuristics were suggested. This work aims at filling this gap, namely, we have constructed an algorithm that computes the optimal CI for small sample sizes in the two-sample
case and compared it to existing methods.
The rest of the work is organized as follows: in Section 2 the optimization problem is stated and the basic notation is introduced. The algorithm suggested in Blyth and Still (1983) finds the optimal solutions for the one-sample case. It is based on solving small and local optimization problems and then using an inversion step to find the global optimum solution. Section 3 presents the algorithm and discusses extensions to the two-sample case. It is shown that this approach fails in the two-sample case and therefore, in order to find optimal CI, one needs to solve a global optimization problem, rather than small and local ones, which is computationally much harder. The global optimization problem is presented and discussed in Section 4. Using the Gurobi package, we find near-optimal solutions when the sample sizes are smaller than 15, and we compare these solutions to some existing methods, both approximate and exact in Section 5. We find that the improvement in terms of lengths with respect to the best competitor varies between 1.5% and 5% for different parameters of the problem. Section 6 concludes with some recommendations and future research directions.
## 2 Problem statement
Recall that \(X\sim Binomial(n,p_{1})\) and \(Y\sim Binomial(m,p_{2})\). We aim at constructing CIs for \(p_{1}\) (respectively, \(\Delta:=p_{1}-p_{2}\)) for the one- (respectively, two-) sample cases. In the one-sample case, we define \(C_{1}\) to be the collection of all confidence intervals, i.e.,
\[C_{1}:=\{[l_{x},u_{x}]\}_{x\in\{0,1,\ldots,n\}},\]
where \(l_{x},u_{x}\) is the lower and upper limit of the confidence interval when \(X=x\) is observed. Correspondingly, for the two-sample case, we define
\[C_{2}:=\{[l_{x,y},u_{x,y}]\}_{x\in\{0,1,\ldots,n\},y\in\{0,1,\ldots,m\}},\]
and here \([l_{x,y},u_{x,y}]\) is the confidence interval for \(\Delta\) when \((X=x,Y=y)\) is observed.
We aim to find an optimal exact CI, where optimality is with respect to the sum of all interval lengths. In the one-sample case, the length is
\[Length(C_{1})=\sum_{x=0}^{n}(u_{x}-l_{x}),\]
and in the two-sample case, it is
\[Length(C_{2})=\sum_{y=0}^{m}\sum_{x=0}^{n}(u_{(x,y)}-l_{(x,y)}).\]
For computational reasons, we define a grid \(D\) for \(\Delta\) values, and a grid \(P\) for the single-proportion values \(p_{1},p_{2}\), e.g., \(P=\{0,0.01,0.02,\ldots,1\}\) and \(D=\{-1,-0.99,\ldots,0,0.01,0.02,\ldots,1\}\). The grid choices are connected to each other since only \((p_{1},p_{2})\in P\times P\) such that \(p_{1}-p_{2}\in D\) are active in the problem.
The optimization problem we aim to solve for the one-sample case is
\[\min_{C_{1}}Length(C_{1})\text{ subject to }P_{p_{1}}(p_{1}\in[l_{X},u_{X}]) \geq 1-\alpha\ \forall p_{1}\in P, \tag{1}\]
where the sub-index \(p_{1}\) means that the probability is under \(X\sim p_{1}\) (and similar notation is used for the two-sample case). For the two-sample case, the optimization problem is
\[\min_{C_{2}}Length(C_{2})\\ \text{subject to }P_{p_{1},p_{2}}(\Delta\in[l_{(X,Y)},u_{(X,Y)}]) \geq 1-\alpha\text{ for all }(p_{1},p_{2})\in P\times P\text{ such that }p_{1}-p_{2}=\Delta\in D. \tag{2}\]
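The probability appearing in the constraint of (2) can be evaluated directly. A minimal R sketch, where `l` and `u` are \((n+1)\times(m+1)\) matrices of limits indexed by \((x+1,y+1)\) (names ours), is:

```r
# Coverage probability of a CI collection {[l(x,y), u(x,y)]} at (p1, p2)
coverage <- function(l, u, n, m, p1, p2) {
  delta <- p1 - p2
  # joint pmf of (X, Y) over the (n+1) x (m+1) grid of outcomes
  probs <- outer(dbinom(0:n, n, p1), dbinom(0:m, m, p2))
  sum(probs[l <= delta & delta <= u])
}
```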
## 3 Generalization of the Blyth and Still algorithm to the two-sample case
The Blyth and Still algorithm finds all the solutions to the problem (1). In Section 3.1 the algorithm is described in detail. Generalization of the algorithm to the two-sample case is discussed in Section 3.2. It is shown that the generalized algorithm provides confidence regions rather than intervals.
### 3.1 The Blyth and Still algorithm
We consider the one-sample case, that is, Problem (1), and describe the Blyth and Still algorithm. First, a few definitions are given.
**Definition 3.1**.:
* _A subset_ \(S_{1}=\{r,r+1,\ldots,t\}\) _where_ \(0\leq r<t\leq n\) _is a cover group with respect to_ \(p_{1}\) _if_ \(P_{p_{1}}(X\in S_{1})\geq 1-\alpha\)_._
* _A subset_ \(S_{1}\) _is a minimal cover group (henceforth MCG) with respect to_ \(p_{1}\)_, denoted by_ \(MCG(p_{1})\)_, if there is no other cover group with respect to_ \(p_{1}\) _that has fewer elements._
* _Let_ \(S_{1},\tilde{S}_{1}\) _be two MCGs with respect to_ \(p_{1}\) _and_ \(\tilde{p}_{1}\)_, where_ \(p_{1}\leq\tilde{p}_{1}\)_. We say that the pair_ \((S_{1},\tilde{S}_{1})\) _maintains monotonicity if_ \(\min\{S_{1}\}\leq\min\{\tilde{S}_{1}\}\) _and_ \(\max\{S_{1}\}\leq\max\{\tilde{S}_{1}\}\)_._
The algorithm can be described as follows:
The Blyth and Still algorithm
Input: \(P=\{\rho_{1},\ldots,\rho_{|P|}\}\) - a grid of values in \([0,1]\) such that \(\rho_{1}\leq\rho_{2}\leq\cdots\leq\rho_{|P|}\); \(n\) - sample size; \(1-\alpha\) - desired level.
Output: \(C_{1}\) - a collection of \(n+1\) confidence intervals.
1. Find all MCGs. For all \(p_{1}\in P\) calculate all MCGs.
2. Remove MCGs that do not maintain monotonicity. For all \(i=1,\ldots,|P|-1\) and for all \(S_{1}=MCG(\rho_{i})\): if for all \(\tilde{S}_{1}\) that is a MCG of \(\rho_{i+1}\) the pair \((S_{1},\tilde{S}_{1})\) does not maintain monotonicity, then remove \(S_{1}\). Also, for all \(i=2,\ldots,|P|\) and for all \(\tilde{S}_{1}=MCG(\rho_{i})\), if for all \(S_{1}\) that is a MCG of \(\rho_{i-1}\) the pair \((S_{1},\tilde{S}_{1})\) does not maintain monotonicity, then remove \(\tilde{S}_{1}\).
3. Choose linear ordering. For \(i=1\) choose \(MCG^{*}(\rho_{1})\) from all the MCGs of \(\rho_{1}\) that remained after the previous step. For \(i=2,\ldots,|P|\), choose \(MCG^{*}(\rho_{i})\) from all the remaining MCGs of \(\rho_{i}\) such that \((MCG^{*}(\rho_{i-1}),MCG^{*}(\rho_{i}))\) maintains monotonicity.
4. Invert. For all \(x=0,1,2\ldots n\), define \(CR(x):=\{p_{1}\in P:x\in MCG^{*}(p_{1})\}\) and \(l_{x}:=\min\{CR(x)\}\), and \(u_{x}:=\max\{CR(x)\}\).
5. Return \(C_{1}=\{[l_{x},u_{x}]\}_{x\in\{0,1,\ldots,n\}}\).
We now discuss every step of the algorithm in detail.
1. **Find all MCGs**
Finding all MCGs with respect to \(p_{1}\) can be done in the following manner: set \(r=0\) and find the smallest \(t_{0}\) that makes the interval \([0,t_{0}]\) cover \(p_{1}\) with probability of at least \(1-\alpha\), i.e., \(P_{p_{1}}(X\in[0,t_{0}])\geq 1-\alpha\). Then, repeat this procedure for \(r=1,\ldots,n\): for each \(r\), find the smallest integer \(t_{r}\) such that \(P_{p_{1}}(X\in[r,t_{r}])\geq 1-\alpha\). Notice that there exists a critical value \(R\) such that for \(r\geq R\) there is no \(t_{r}\) that provides coverage of \(p_{1}\) with the desired probability, that is, even if we set \(t_{r}=n\), the interval \(S_{1}=[r,n]\) is not a cover group for \(p_{1}\), i.e., \(P_{p_{1}}(X\in[r,n])<1-\alpha\). After calculating \(t_{0},t_{1},...\), the lengths of \([0,t_{0}],[1,t_{1}],...\) are compared and the intervals with minimal length are chosen. Thus, for each \(p_{1}\in P\) there are \(O(n^{2})\) calculations, and the total number of calculations in this step is \(|P|O(n^{2})\). An R sketch of this search appears after this list.
2. **Remove solutions that do not maintain monotonicity**
This step is needed to ensure that \(CR(x)\) in the invert step (# 4) would be an interval rather than a
confidence set. As mentioned in the introduction, the Sterne CI can lead to optimal confidence sets, which are optimal in terms of length, but they are not necessarily intervals. For a concrete example, suppose that for \(p_{1}=0.1\) the only MCG is \(MCG(0.1)=[1,7]\) and for \(p_{1}=0.11\) the MCGs are \(MCG(0.11)=[0,7],[1,8],[2,9]\). Then, the first MCG \([0,7]\) is removed as it violates the monotonicity assumption with respect to the MCG \([1,7]\) of \(p_{1}=0.1\). If for \(p_{1}=0.1\) there was more than one MCG, \([0,7]\) is removed only if it violates the monotonicity assumption with respect to every MCG of \(p_{1}=0.1\).
3. **Choose linear ordering**
There are different ways to choose a linear ordering that will lead to different CIs. However, all of them will be optimal in the sense of the optimization problem in (1). Blyth and Still explored a few options for choosing MCGs that have other desired properties. For example, if one wants to avoid CIs where \(l_{x}=l_{x+1}\) for some \(x\)'s, then certain linear orderings should be avoided.
4. **Invert**
By the monotonicity property, the set \(CR(x)\) is an interval, i.e., there are no holes in \(CR(x)\). By the construction of \(CR(x)\) we have that \(\sum_{x=0}^{n}\#\{CR(x)\}=\sum_{p_{1}\in P}\#\{MCG^{*}(p_{1})\}\), where \(\#A\) is the number of elements in set \(A\). Since the number of elements in each \(MCG^{*}(p_{1})\) is minimal, so is \(\sum_{x=0}^{n}\#\{CR(x)\}\). Minimizing \(\sum_{x=0}^{n}\#\{CR(x)\}\) is equivalent to Problem (1) and hence the output of the algorithm is a solution to Problem (1). Moreover, by choosing different linear orderings in Step 3, all the optimal solutions can be found by this algorithm.
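As an illustration of Steps 1 and 4, the following R sketch finds all MCGs for a given \(p_{1}\) and performs the invert step for a chosen MCG per grid point. The function names are ours, and the invert step assumes that every \(x\) belongs to at least one chosen MCG:

```r
# Step 1: all minimal cover groups [r, t] for a given p1
find_mcgs <- function(p1, n, alpha = 0.05) {
  groups <- list()
  for (r in 0:n) {
    # P(r <= X <= t) for t = r, ..., n
    cover <- pbinom(r:n, n, p1) - pbinom(r - 1, n, p1)
    t_ok <- which(cover >= 1 - alpha)
    if (length(t_ok) > 0) groups[[length(groups) + 1]] <- c(r, r + t_ok[1] - 1)
  }
  sizes <- vapply(groups, function(g) g[2] - g[1] + 1, numeric(1))
  groups[sizes == min(sizes)]
}

# Step 4: CI limits from one chosen MCG per grid point; mcgs is a list
# aligned with P, and every x is assumed to belong to at least one MCG
invert <- function(mcgs, P, n) {
  t(sapply(0:n, function(x) {
    covered <- P[vapply(mcgs, function(g) g[1] <= x && x <= g[2], logical(1))]
    c(l = min(covered), u = max(covered))
  }))
}

find_mcgs(0.3, 10)  # MCGs for p1 = 0.3, n = 10
```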
### 3.2 A Generalization of the Blyth and Still algorithm to the two-sample case
In this section, we consider a generalization of the Blyth and Still algorithm that aims to address Problem (2). While the minimal length and the desired confidence level are still preserved, we will show that the output of this generalized algorithm is not necessarily a confidence interval, but rather a confidence set. We start with a definition that parallels Definition 3.1.
**Definition 3.2**.:
* _A subset_ \(S_{2}\subseteq\{0,1,\ldots,n\}\times\{0,1,\ldots,m\}\) _is a cover group with respect to_ \(\Delta\in D\) _if for all_ \((p_{1},p_{2})\in P\times P\) _such that_ \(p_{1}-p_{2}=\Delta\) _we have that_ \(P_{p_{1}.p_{2}}((X,Y)\in S_{2})\geq 1-\alpha\)_._
* _A subset_ \(S_{2}\) _is a minimal cover group (henceforth MCG) with respect to_ \(\Delta\in D\)_, denoted by_ \(MCG(\Delta)\)_, if there is no other cover group with respect to_ \(\Delta\) _that has fewer elements._
Notice that here we define a cover group to be a subset of \(\{0,1,\ldots,n\}\times\{0,1,\ldots,m\}\), without requiring that there are no holes (e.g., \(S_{2}\) in which \((0,2),(0,4)\in S_{2},(0,3)\notin S_{2}\) is a possible cover group) as in the one-sample definition of a cover group. Later we will demonstrate that even without this restriction, there are cases in which all possible choices of MCGs lead to confidence regions that are not intervals.
The generalized Blyth and Still algorithm
Input: \(P\) - a grid of values in \([0,1]\); \(D\) - a grid of values in \([-1,1]\); \(n,m\) - sample sizes; \(1-\alpha\) - desired level.
Output: \(\tilde{C}_{2}\) - a collection of \((n+1)(m+1)\) confidence sets.
1. Find one MCG for each \(\Delta\in D\). For all \(\Delta\in D\) find one MCG, denoted by \(MCG(\Delta)\).
2. Invert. For all \((x,y)\in\{0,1,\ldots,n\}\times\{0,1,\ldots,m\}\), define \(CR(x,y):=\{\Delta\in D:(x,y)\in MCG(\Delta)\}\).
3. Return \(\tilde{C}_{2}=\{CR(x,y)\}_{x\in\{0,1,\ldots,n\},y\in\{0,1,\ldots,m\}}\).
Notice that in this algorithm the steps of removing MCGs that do not maintain monotonicity and choosing linear ordering are not present. This will be explained below, but first, we describe how to find MCGs in Step 1.
Finding MCGs in the two-sample case is more complicated than the one-sample equivalent task because one needs to ensure \(1-\alpha\) coverage for all \((p_{1},p_{2})\in P\times P\) that satisfy \(p_{1}-p_{2}=\Delta\) and not for just one specific \(p_{1}\). Also, in the one-sample case, the MCGs are intervals but here the MCGs are general sets. We found no simple algorithm to compute MCGs in the two-sample setting and this step is performed by solving Optimization Problem 1, which is given below. The optimal solution was computed by a procedure in the R software that uses the Gurobi package. This optimization problem consists of \((n+1)(m+1)\) binary variables and has at most \(|P|\) constraints for maintaining the confidence level.
**Optimization Problem 1**.: _Problem parameters: \(D\) - a grid of values in \([-1,1]\) for \(\Delta\); \(P\)- a grid of values in \([0,1]\) for \(p_{1}\) and \(p_{2}\); \((n,m)\) - number of trials from each sample; Confidence level \(1-\alpha\)._
_Decision variables: \(r(x,y)\) - a binary variable that equals 1 iff \((x,y)\) belongs to the MCG._
_Objective function: Minimize \(\sum_{y=0}^{m}\sum_{x=0}^{n}r(x,y)\)._
_Constraints:_
_a. Maintain the coverage of \(\Delta\). i.e.,_
\[\sum_{y=0}^{m}\sum_{x=0}^{n}r(x,y)\binom{n}{x}\binom{m}{y}p_{1}{}^{x}(1-p_{1}) ^{n-x}p_{2}{}^{y}(1-p_{2})^{m-y}\geq 1-\alpha, \tag{3}\]
_for all \((p_{1},p_{2})\in P\times P\) such that \(p_{1}-p_{2}=\Delta\)._
_b. The decision variables \(r(x,y)\) are binary, i.e., \(r(x,y)\in\{0,1\}\) for all \((x,y)\in\{0,1,2\ldots,n\}\times\{0,1,2\ldots,m\}\)._
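A minimal sketch of this step using the Gurobi R interface is given below. It assumes \(P\) is equally spaced and aligned with \(\Delta\) (so that \(p_{2}=p_{1}-\Delta\) stays on the grid); the function name is ours and this is not the code actually used in this work:

```r
library(gurobi)

solve_mcg <- function(Delta, n, m, P, alpha = 0.05) {
  # active p1 values: both p1 and p2 = p1 - Delta must lie in [0, 1]
  p1s <- P[P >= max(0, Delta) & P <= min(1, 1 + Delta)]
  # one coverage row (3) per active pair; one binary variable per cell (x, y)
  A <- t(sapply(p1s, function(p1) {
    p2 <- min(1, max(0, p1 - Delta))  # clamp floating-point noise
    as.vector(outer(dbinom(0:n, n, p1), dbinom(0:m, m, p2)))
  }))
  model <- list(A = A,
                obj = rep(1, (n + 1) * (m + 1)),  # minimize the MCG size
                modelsense = "min",
                rhs = rep(1 - alpha, nrow(A)),
                sense = rep(">=", nrow(A)),
                vtype = "B")
  res <- gurobi(model, list(OutputFlag = 0))
  matrix(round(res$x), n + 1, m + 1)  # r(x, y); rows are x = 0..n
}
```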
Furthermore, we used the above program to find all possible solutions for Optimization Problem 1. This allows us to show that there are examples in which no ordering of MCGs will lead to confidence intervals as in the one-dimensional case. For example, when \(n=5,m=5,\alpha=0.1,P=\{0,0.0001,0.0002,\ldots,1\}\) we find that:
a. For \(\Delta=-0.4\) the only MCG contains \((x,y)=(0,5)\).
b. For \(\Delta=-0.37\) there are five MCGs, all of them contain \((x,y)=(0,5)\).
c. For \(\Delta=-0.38\) the only MCG does not contain \((x,y)=(0,5)\).
This means that for any choice of MCGs in this setting, if \((x,y)=(0,5)\) is observed, the confidence set of the invert step will contain \(\Delta=-0.4,-0.37\) but not \(\Delta=-0.38\). That is, the optimal confidence set will be composed of at least two disjoint intervals. Furthermore, by examining the constraint (3) for continuous \(p_{1},p_{2}\) using the Desmos graphing tool, we verified that this phenomenon will still occur even for a finer grid. The full MCGs for this example are presented in the appendix. In a simulation we made, we observed that this phenomenon occurred very often: among the pairs of sample sizes \((n,m)\in\{(10,5),(5,5),(6,4),(9,6),(7,7)\}\) with \(\alpha=0.05\), only \((10,5)\) and \((6,4)\) do not have MCGs with this deficiency.
It follows that one cannot achieve CIs with minimal length using the Blyth and Still method. Rather, this method guarantees confidence sets (not necessarily intervals) that have a minimal number of elements in \(D\) and have the desired coverage level \(1-\alpha\).
In Section 5 we examine the performance of this method, where gaps in the confidence sets are simply filled in order to obtain a confidence interval.
## 4 Performing full optimization
In the previous section we showed that the generalization of the Blyth and Still algorithm to the two-sample case leads to confidence regions that are optimal in their size, but can be composed of several disjoint intervals instead of one interval. The solution of filling the gaps between the disjoint intervals is examined later.
Therefore, a different optimization method should be considered in order to solve Problem (2). The aim is to find a set of confidence regions that are optimal in length, have the right coverage level, and are constrained to be intervals. This can be done by solving the following optimization problem.
**Optimization Problem 2**.: _Problem parameters: \(D\) - a grid of values in \([-1,1]\) for \(\Delta\); \(P\)- a grid of values in \([0,1]\) for \(p_{1}\) and \(p_{2}\); \((n,m)\) - number of trials from each sample; Confidence level \(1-\alpha\)._
_Decision variables:_ \(l_{(x,y)},u_{(x,y)}\) - the lower and upper limits of the CI when \((x,y)\) is observed; \(r(x,y,\Delta)\) - a binary variable that equals 1 iff the CI includes \(\Delta\) when \((x,y)\) is observed.
_Objective function:_ _Minimize_ \(\sum_{y=0}^{m}\sum_{x=0}^{n}(u_{(x,y)}-l_{(x,y)})\)._
_Constraints:_
_a. Maintain the coverage of \(\Delta\). i.e.,_
\[\sum_{y=0}^{m}\sum_{x=0}^{n}r(x,y,\Delta)\binom{n}{x}\binom{m}{y}p_{1} ^{x}(1-p_{1})^{n-x}p_{2}^{y}(1-p_{2})^{m-y}\geq 1-\alpha, \tag{4}\]
_for all \((p_{1},p_{2})\in P\times P\) such that \(p_{1}-p_{2}=\Delta\)._
_b. Connecting the variables \(r(x,y,\Delta)\) and \(l_{(x,y)}\) and \(u_{(x,y)}\):_
\[r(x,y,\Delta)\leq\frac{(\Delta-l_{(x,y)})}{2}+1\text{ and }r(x,y,\Delta)\leq \frac{(u_{(x,y)}-\Delta)}{2}+1 \tag{5}\]
_for all \((x,y,\Delta)\in\{0,1,2..,n\}\times\{0,1,2,...m\}\times D\)._
_c. Connecting further the variables \(r(x,y,\Delta)\) and \(l_{(x,y)}\) and \(u_{(x,y)}\):_
\[\frac{(u_{(x,y)}-l_{(x,y)})}{d_{max}}+1\leq\sum_{\Delta\in D}r(x,y,\Delta)\leq \frac{(u_{(x,y)}-l_{(x,y)})}{d_{min}}+1 \tag{6}\]
_for all \((x,y)\in\{0,1,2..,n\}\times\{0,1,2,...m\}\), where \(d_{min}\) and \(d_{max}\) are the minimal and maximal distances between successive elements in the sorted grid \(D\)._
_d. The variables \(r(x,y,\Delta)\) are binary:_
\[r(x,y,\Delta)\in\{0,1\}\text{ for all }(x,y,\Delta)\in\{0,1,2..,n\}\times\{0,1, 2,...m\}\times D.\]
_e. Interval limits are between \([-1,1]\):_
\[-1\leq l_{(x,y)}\leq 1\text{ and }-1\leq u_{(x,y)}\leq 1\text{ for all }(x,y)\in\{0,1,2..,n\}\times\{0,1,2,...m\}.\]
A solution to Optimization Problem 2 finds the shortest CI that has \(1-\alpha\) coverage for every \(\Delta\in D\), i.e., it solves Problem (2). The optimization problem consists of \(2(n+1)(m+1)\) variables that assume values in \(D\), and \(|D|(n+1)(m+1)\) binary variables.
The constraint in (5) consists of two conditions, which force \(r(x,y,\Delta)\) to be \(0\) if \(\Delta<l(x,y)\) or \(\Delta>u(x,y)\), respectively. This is because
\[\Delta<l(x,y)\Longleftrightarrow\frac{(\Delta-l_{(x,y)})}{2}<0\Longleftrightarrow \frac{(\Delta-l_{(x,y)})}{2}+1<1.\]
Thus, by condition (5), if \(\Delta<l(x,y)\) then \(r(x,y,\Delta)<1\), which implies that \(r(x,y,\Delta)=0\), as it is a binary variable. Similarly, if \(\Delta>u(x,y)\), then \(r(x,y,\Delta)=0\). If neither \(\Delta<l(x,y)\) nor \(\Delta>u(x,y)\) is
satisfied, then Constraint (5) does not restrict \(r(x,y,\Delta)\) to a certain value. This is where Constraint (6) comes into play. In the case where the grid \(D\) is equally-spaced, Constraint (6) simplifies to
\[\sum_{\Delta\in D}r(x,y,\Delta)=\frac{(u_{(x,y)}-l_{(x,y)})}{d}+1, \tag{7}\]
where \(d\) is the constant difference between successive elements in the sorted grid \(D\). In this case, (7) implies that if \(l_{(x,y)}\leq\Delta\leq u_{(x,y)}\), then \(r(x,y,\Delta)=1\). Combining this with (5), we have that the \(r\) variables are fully determined by the \(l\) and \(u\) variables. Constraint (6) does not change the optimal value, but rather drastically decreases the number of feasible solutions and thus reduces the number of computations needed to solve Optimization Problem 2.
Another way of forcing \(r(x,y,\Delta)\) to be \(1\) if \(l_{(x,y)}\leq\Delta\leq u_{(x,y)}\), even when \(D\) is not equally-spaced, is to change the objective function to
\[\text{minimize }\sum_{y=0}^{m}\sum_{x=0}^{n}[u_{(x,y)}-l_{(x,y)}]-\frac{d_{ min}}{2N}\sum_{x=0}^{n}\sum_{y=0}^{m}\sum_{\Delta\in D}r(x,y,\Delta),\]
where \(N=(n+1)(m+1)|D|\) is the number of \(r\) variables and \(d_{min}\) is the minimal distance between consecutive elements in the sorted grid \(D\).
If one wishes to find a solution that maintains the symmetry of the binomial distribution under the transformation \(p\mapsto 1-p\), then one can add the restriction
\[u_{(x,y)}=-l_{(n-x,m-y)}\text{ for all }(x,y)\in\{0,1,2..,n\}\times\{0,1,2..,m\}. \tag{8}\]
In the Generalized Blyth and Still algorithm that was given in Section 3.2, Optimization Problem 1 is solved \(|D|\) times, each time with \((n+1)(m+1)\) binary variables. Here, on the other hand, there are \(|D|(n+1)(m+1)\) binary variables and the optimization problem is solved only once. Since the running time of the optimization problem solver is not linear in the number of binary variables, Optimization Problem 2 is computationally much more difficult.
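For completeness, the following sketch assembles Optimization Problem 2 for the Gurobi R solver. It assumes an equally spaced \(D\) with step \(d\), rewrites (5) as \(2r+l\leq\Delta+2\) and \(2r-u\leq 2-\Delta\), and uses a dense constraint matrix, so it is only meant for tiny instances (a sparse matrix would be needed at realistic sizes); all names are ours:

```r
# Variable order: l (N cells), u (N cells), r (N x |D|), N = (n+1)(m+1)
build_full <- function(n, m, D, P, alpha = 0.05) {
  N <- (n + 1) * (m + 1); K <- length(D); d <- D[2] - D[1]
  nv <- 2 * N + N * K
  rid <- function(cell, k) 2 * N + (k - 1) * N + cell  # index of r(cell, D[k])
  A <- NULL; rhs <- c(); sense <- c()
  row <- function(idx, val) { v <- numeric(nv); v[idx] <- val; v }
  for (k in 1:K) {                  # coverage constraints (4), one per active pair
    ps <- P[P >= max(0, D[k]) - 1e-9 & P <= min(1, 1 + D[k]) + 1e-9]
    for (p1 in ps) {
      p2 <- min(1, max(0, p1 - D[k]))
      pr <- as.vector(outer(dbinom(0:n, n, p1), dbinom(0:m, m, p2)))
      A <- rbind(A, row(rid(1:N, k), pr))
      rhs <- c(rhs, 1 - alpha); sense <- c(sense, ">=")
    }
  }
  for (cell in 1:N) {
    for (k in 1:K) {                # constraint (5), rearranged
      A <- rbind(A, row(c(rid(cell, k), cell), c(2, 1)),
                 row(c(rid(cell, k), N + cell), c(2, -1)))
      rhs <- c(rhs, D[k] + 2, 2 - D[k]); sense <- c(sense, "<=", "<=")
    }
    # constraint (7): d * sum_k r + l - u = d
    A <- rbind(A, row(c(rid(cell, 1:K), cell, N + cell), c(rep(d, K), 1, -1)))
    rhs <- c(rhs, d); sense <- c(sense, "=")
  }
  list(A = A, obj = c(rep(-1, N), rep(1, N), rep(0, N * K)),  # sum of (u - l)
       modelsense = "min", rhs = rhs, sense = sense,
       lb = c(rep(-1, 2 * N), rep(0, N * K)), ub = rep(1, nv),
       vtype = c(rep("C", 2 * N), rep("B", N * K)))
}

# e.g. res <- gurobi::gurobi(build_full(2, 2, seq(-1, 1, 0.1), seq(0, 1, 0.1)))
```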
## 5 Comparisons
In this section, we compare the full optimization algorithm of Section 4 and the generalized Blyth and Still algorithm of Section 3.2 to several existing methods, both approximate and exact.
### 5.1 A list of methods
The existing methods we have compared are listed below.
1. The Wald CI, i.e.,
\[\hat{p}_{1}-\hat{p}_{2}\pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{\hat{p}_{1}(1-\hat{ p}_{1})}{n}+\frac{\hat{p}_{2}(1-\hat{p}_{2})}{m}}.\]
It is included in our comparison due to its widespread use even though it is known to perform poorly.
2. The adjusted Wald CI of Agresti and Caffo (2000) (AC) is given by
\[\bar{p}_{1}-\bar{p}_{2}\pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{\bar{p}_{1}(1- \bar{p}_{1})}{n+2}+\frac{\bar{p}_{2}(1-\bar{p}_{2})}{m+2}},\]
where \(\bar{p}_{1}=(x+1)/(n+2),\bar{p}_{2}=(y+1)/(m+2)\).
3. The hybrid score (HS) of Newcombe (1998).
Newcombe hybrid score (HS)
Input: \(n,m\) - sample sizes; \(1-\alpha\) - confidence level.
Output: \(C_{2}\) - a collection of \((n+1)(m+1)\) confidence intervals.
1. Calculate lower and upper bounds. Let \(\hat{p}_{1}=x/n\) and \(\hat{p}_{2}=y/m\). For each \(x\in\{0,1,\ldots,n\}\), let \(l_{x}(1),u_{x}(1)\) be the two solutions for \(p_{1}\) of \(z_{1-\frac{\alpha}{2}}=\frac{|\hat{p}_{1}-p_{1}|}{\sqrt{\frac{p_{1}(1-p_{1})}{n}}}\), and for each \(y\in\{0,1,\ldots,m\}\), let \(l_{y}(2),u_{y}(2)\) be the two solutions for \(p_{2}\) of \(z_{1-\frac{\alpha}{2}}=\frac{|\hat{p}_{2}-p_{2}|}{\sqrt{\frac{p_{2}(1-p_{2})}{m}}}\).
2. Hybrid score. For all \((x,y)\in\{0,1,\ldots,n\}\times\{0,1,\ldots,m\}\) define \[l(x,y) =\hat{p}_{1}-\hat{p}_{2}-z_{1-\frac{\alpha}{2}}\sqrt{\frac{l_{x}(1)(1-l_{x}(1))}{n}+\frac{u_{y}(2)(1-u_{y}(2))}{m}}\quad\text{ and }\] \[u(x,y) =\hat{p}_{1}-\hat{p}_{2}+z_{1-\frac{\alpha}{2}}\sqrt{\frac{u_{x}(1)(1-u_{x}(1))}{n}+\frac{l_{y}(2)(1-l_{y}(2))}{m}}.\]
3. Return \(C_{2}=\{[l_{(x,y)},u_{(x,y)}]\}_{(x,y)\in\{0,1,\ldots,n\}\times\{0,1,\ldots,m\}}\).
The calculations of the HS CI can be found in the R software in the package DescTools; a stand-alone R sketch appears after this list.
4. The exact method of Agresti and Min (2001) (AM)
The exact method of Agresti and Min (2001) (AM)
Input: \(P\) - a grid of values in \([0,1]\); \(D\) - a grid of values in \([-1,1]\); \(n,m\) - sample sizes; \(1-\alpha\) - confidence level.
Output: \(C_{2}\) - a collection of \((n+1)(m+1)\) confidence intervals.
1. Calculate scores. For any triplet \((x,y,\Delta)\in\{0,1,\ldots,n\}\times\{0,1,\ldots,m\}\times D\) define \[Z(x,y,\Delta)=\frac{\left(\frac{x}{n}-\frac{y}{m}-\Delta\right)^{2}}{\frac{ \tilde{p}_{1}(1-\tilde{p}_{1})}{n}+\frac{\tilde{p}_{2}(1-\tilde{p}_{2})}{m}},\] where \(\tilde{p}_{1},\tilde{p}_{2}\) are the MLE for \(p_{1},p_{2}\) under \(p_{1}-p_{2}=\Delta\), i.e., they maximize the likelihood \({p_{1}}^{x}(1-p_{1})^{n-x}{p_{2}}^{y}(1-p_{2})^{m-y}\), under the constraint \(p_{1}-p_{2}=\Delta\).
2. Calculate \(\lambda\) values. For any triplet \((x,y,\Delta)\in\{0,1,\ldots,n\}\times\{0,1,\ldots,m\}\times D\) define \[\lambda(x,y,\Delta)=\max\left\{P_{p_{1},p_{2}}\Big{(}Z(X,Y,\Delta)\geq Z(x,y, \Delta)\Big{)}:(p_{1},p_{2})\in P\times P\text{ s.t }p_{1}-p_{2}=\Delta\right\}.\]
3. Invert. For all \((x,y)\in\{0,1,\ldots,n\}\times\{0,1,\ldots,m\}\) define \(CR(x,y):=\{\Delta\in D:\lambda(x,y,\Delta)>\alpha\}\) and \(l_{(x,y)}:=\min\{CR(x,y)\},u_{(x,y)}:=\max\{CR(x,y)\}\).
4. Return \(C_{2}=\{[l_{(x,y)},u_{(x,y)}]\}_{(x,y)\in\{0,1,\ldots,n\}\times\{0,1,\ldots,m\}}\).
Notice that similar to the generalized Blyth and Still algorithm, \(CR(x,y)\) is not necessarily an interval. Therefore, the confidence interval is defined by the minimum and maximum value of \(CR(x,y)\).
We could not find a code in R that implements the AM algorithm, and therefore we wrote our own code. For calculating the MLEs \(\tilde{p}_{1},\tilde{p}_{2}\) in Step 1 we used the function 'z2stat' in the package 'PropCIs'; an explicit expression for the MLE is given in Miettinen and Nurminen (1985). A sketch that computes the score statistic numerically appears after this list.
We ran the AM algorithm under two modes, which we denote by AM1 and AM2. The first mode is with the grids \(D=\{-1,-0.99,-0.98,...1\}\) and \(P=\{0,0.01,0.02,...1\}\), and the second mode is with the grids \(D=\{-1,-0.999,-0.998,...1\}\) and \(P=\{0,0.001,0.002,...1\}\). The reason for considering the coarser grid of the first mode is to attain a better comparison to the full optimization method, in which the finer grid is computationally infeasible. The AM algorithm is sub-optimal but runs much faster than full optimization and therefore can be computed with a finer grid.
5. The generalized Blyth and Still algorithm that is given in Section 3.2 (BSG).
We ran the algorithm with gaps in the confidence sets filled whenever they are not intervals. As in the AM method, we considered two possible modes, denoted by BSG1 and BSG2. In the first mode we used the grids
\(D=\{-1,-0.99,-0.98,...1\}\) and \(P=\{0,0.02,0.04,...1\}\). In the second mode we used the grid \(D=\{-1,-0.999,-0.998,...1\}\) and a different grid \(P\) for every \(\Delta\in D\), a choice that improves the performance of the algorithm. Namely, for \(\Delta\geq 0\) we define
\[P_{\Delta}=\{\Delta,\ldots,1\}\text{ with equal jumps of }\frac{(1-\Delta)}{100}\]
and for \(\Delta<0\) we define
\[P_{\Delta}=\{0,\ldots,1+\Delta\}\text{ with equal jumps of }\frac{(1+\Delta)}{100}.\]
The coverage condition of the algorithm in (3) is satisfied for any pair \((p_{1},p_{1}-\Delta)\) where \(p_{1}\in P_{\Delta}\).
6. The full optimization algorithm presented in Section 4 (FULL).
We ran the algorithm of Section 4 with the grid \(D=\{-1,-0.99,-0.98,...1\},P=\{0,0.01,0.02,...1\}\) and denote it by FULL1. Here we only considered the coarse grid since the computational complexity of the optimization problem is much greater. We ran the problem with the symmetry condition (8) and found that this restriction does not change the length of the CIs in the optimal solution.
The Gurobi software was given a time limit of two minutes. If the time limit is reached, the best solution found is reported, as is the gap between this solution and the current lower bound, in terms of percentage. The starting point of the algorithm is based on the output of the AM method.
Since the grid is relatively coarse, there is a non-negligible number of differences \(p_{1}-p_{2}\) for which \(1-\alpha\) coverage is not preserved. We examined two ways to overcome this problem, where the updated limits are denoted by \(l^{*}_{(x,y)},u^{*}_{(x,y)}\).
1. Extending the CIs in each direction by adding or reducing \(0.01\) (which is the gap size in the grid we used) (FULL2), i.e., \[l^{*}_{(x,y)}=l_{(x,y)}-0.01\text{ and }u^{*}_{(x,y)}=u_{(x,y)}+0.01.\]
2. Extending the CIs in each direction by adding or reducing \(0.01/2\) (FULL3), i.e., \[l^{*}_{(x,y)}=l_{(x,y)}-0.01/2\text{ and }u^{*}_{(x,y)}=u_{(x,y)}+0.01/2.\]
In these extensions, the new limits are truncated if they exceed the interval \([-1,1]\).
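To make items 3 and 4 concrete, the following R sketch implements the Wilson limits together with the hybrid score interval, and the score statistic \(Z(x,y,\Delta)\) of the AM method with the constrained MLE found numerically rather than via the closed form of Miettinen and Nurminen (1985). The function names are ours; DescTools and 'z2stat' are the implementations actually referenced in this work:

```r
# Wilson score limits: roots of (p_hat - p)^2 = z^2 p (1 - p) / n
wilson_limits <- function(x, n, alpha = 0.05) {
  z <- qnorm(1 - alpha / 2); p <- x / n
  center <- (p + z^2 / (2 * n)) / (1 + z^2 / n)
  half <- z * sqrt(p * (1 - p) / n + z^2 / (4 * n^2)) / (1 + z^2 / n)
  c(center - half, center + half)
}

# Newcombe hybrid score interval for p1 - p2
newcombe_hs_ci <- function(x, y, n, m, alpha = 0.05) {
  z <- qnorm(1 - alpha / 2)
  w1 <- wilson_limits(x, n, alpha); w2 <- wilson_limits(y, m, alpha)
  d <- x / n - y / m
  c(d - z * sqrt(w1[1] * (1 - w1[1]) / n + w2[2] * (1 - w2[2]) / m),
    d + z * sqrt(w1[2] * (1 - w1[2]) / n + w2[1] * (1 - w2[1]) / m))
}

# AM score statistic, with the constrained MLE found by 1-d optimization
z_stat <- function(x, y, n, m, Delta) {
  nll <- function(p1)
    -dbinom(x, n, p1, log = TRUE) - dbinom(y, m, p1 - Delta, log = TRUE)
  lo <- max(0, Delta) + 1e-9; hi <- min(1, 1 + Delta) - 1e-9
  p1t <- optimize(nll, c(lo, hi))$minimum
  p2t <- p1t - Delta
  (x / n - y / m - Delta)^2 / (p1t * (1 - p1t) / n + p2t * (1 - p2t) / m)
}
```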
### 5.2 Criteria of performance
We compare the methods listed in Section 5.1 according to the following six criteria.
AVG length. The average length is defined by
\[\frac{\sum_{y=0}^{m}\sum_{x=0}^{n}(u_{(x,y)}-l_{(x,y)})}{(n+1)(m+1)}.\]
Percentage of non-exact. Define the confidence level function \(CL(p_{1},p_{2}):=P_{p_{1},p_{2}}(\Delta\in[l_{(X,Y)},u_{(X,Y)}])\). The Percentage of non-exact is
\[100\times\int_{0}^{1}\int_{0}^{1}I(CL(p_{1},p_{2})<1-\alpha)dp_{1}dp_{2}.\]
Percentage of non-appropriate cover. This is defined by
\[100\times\int_{0}^{1}\int_{0}^{1}I(CL(p_{1},p_{2})<1-\alpha-0.01)dp_{1}dp_{2}.\]
Average deviation from below. This is defined by
\[10,000\times\int_{0}^{1}\int_{0}^{1}[1-\alpha-CL(p_{1},p_{2})]I(CL(p_{1},p_{2}) <1-\alpha)dp_{1}dp_{2}.\]
This expression is the loss for an average pair \((p_{1},p_{2})\) (assuming a uniform distribution), where the loss for each pair is defined by the difference between the desired level \(1-\alpha\) and the actual coverage level \(CL(p_{1},p_{2})\) when \(CL(p_{1},p_{2})\) is below \(1-\alpha\) and zero otherwise. The factor 10,000 is used since this loss is relatively small in most of the methods we used.
Min CL. The minimum confidence level is defined by \(\min_{(p_{1},p_{2})\in[0,1]\times[0,1]}CL(p_{1},p_{2})\).
AVG CL. The average confidence level is \(\int_{0}^{1}\int_{0}^{1}CL(p_{1},p_{2})dp_{1}dp_{2}\).
For calculating the above criteria (besides AVG length), we sampled 40,000 pairs \((p_{1},p_{2})\) from a uniform distribution on \([0,1]\times[0,1]\). This defines a grid \(\mathcal{P}\) in \([0,1]\times[0,1]\). Then the above criteria are computed using this grid. For example, the percentage of non-exact is evaluated by
\[100\times\frac{1}{|\mathcal{P}|}\sum_{(p_{1},p_{2})\in\mathcal{P}}I(CL(p_{1},p _{2})<1-\alpha).\]
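The following R sketch evaluates these criteria by Monte Carlo, reusing the coverage() helper sketched in Section 2; the names are ours:

```r
# Estimate the Section 5.2 coverage criteria on a random grid of pairs
eval_criteria <- function(l, u, n, m, alpha = 0.05, reps = 40000) {
  cl <- replicate(reps, {
    p <- runif(2)                       # one pair (p1, p2) uniform on [0,1]^2
    coverage(l, u, n, m, p[1], p[2])
  })
  c(pct_non_exact  = 100 * mean(cl < 1 - alpha),
    pct_non_approp = 100 * mean(cl < 1 - alpha - 0.01),
    avg_deviation  = 10000 * mean(pmax(1 - alpha - cl, 0)),
    min_cl = min(cl),
    avg_cl = mean(cl))
}
```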
### 5.3 Results
We calculated the resulting CIs of the methods listed in Section 5.1 for three cases of \((n,m)\), namely \((n,m)\in\{(9,6),(14,7),(10,10)\}\). For each of them, three different confidence levels are considered, \(\alpha\in\{0.01,0.05,0.1\}\). For each set of parameters and a CI method, we computed the six criteria of Section 5.2.
The results for \((n,m)=(9,6),(14,7),(10,10)\) are given in Tables 1, 2, 3, respectively. A few observations and conclusions are now given.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline & Method & WALD & AC & HS & AM1 & AM2 & BSG1 & BSG2 & Full1 & Full2 & Full3 \\ \hline \hline \multirow{8}{*}{\(\alpha=0.01\)} & AVG length & 0.934 & 1.008 & 0.921 & 1.017 & 1.026 & 1.009 & 1.028 & 1.004 & 1.024 & 1.014 \\ \cline{2-13} & \% non exact & 100.00\% & 27.11\% & 57.73\% & 4.13\% & 0.23\% & 5.11\% & 0.22\% & 6.25\% & 0.00\% & 0.76\% \\ \cline{2-13} & \% non-appropriate & 100.00\% & 0.79\% & 19.89\% & 0.01\% & 0.00\% & 0.01\% & 0.00\% & 0.01\% & 0.00\% & 0.01\% \\ \cline{2-13} & AVG deviation & 810.8 & 7.3 & 58.1 & 0.4 & 0.0 & 0.4 & 0.0 & 0.6 & 0.0 & 0.0 \\ \cline{2-13} & Min CL & 0.029 & 0.959 & 0.9 & 0.925 & 0.987 & 0.925 & 0.988 & 0.925 & 0.99 & 0.969 \\ \cline{2-13} & AVG CL & 0.909 & 0.993 & 0.987 & 0.994 & 0.995 & 0.994 & 0.995 & 0.994 & 0.994 & 0.994 \\ \hline \hline \multirow{8}{*}{\(\alpha=0.05\)} & AVG length & 0.728 & 0.776 & 0.745 & 0.797 & 0.807 & 0.789 & 0.802 & 0.779 & 0.799 & 0.789 \\ \cline{2-13} & \% non exact & 100.00\% & 16.44\% & 49.91\% & 5.69\% & 0.31\% & 6.51\% & 0.38\% & 9.98\% & 0.00\% & 0.92\% \\ \cline{2-13} & \% non-appropriate & 100.00\% & 3.69\% & 21.68\% & 0.53\% & 0.00\% & 0.70\% & 0.09\% & 1.95\% & 0.00\% & 0.00\% \\ \cline{2-13} & AVG deviation & 910.2 & 11.6 & 54.5 & 2.3 & 0.1 & 2.8 & 0.2 & 5.4 & 0.0 & 0.1 \\ \cline{2-13} & Min CL & 0.029 & 0.896 & 0.847 & 0.925 & 0.94 & 0.925 & 0.93 & 0.925 & 0.95 & 0.947 \\ \cline{2-13} & AVG CL & 0.859 & 0.963 & 0.954 & 0.965 & 0.968 & 0.964 & 0.967 & 0.961 & 0.966 & 0.964 \\ \hline \hline \multirow{8}{*}{\(\alpha=0.1\)} & AVG length & 0.616 & 0.653 & 0.639 & 0.691 & 0.7 & 0.67 & 0.683 & 0.662 & 0.682 & 0.672 \\ \cline{2-13} & \% non exact & 99.26\% & 17.06\% & 43.56\% & 4.45\% & 0.32\% & 6.72\% & 0.80\% & 10.16\% & 0.00\% & 0.63\% \\ \cline{2-13} & \% non-appropriate & 98.46\% & 8.90\% & 28.22\% & 0.58\% & 0.06\% & 1.15\% & 0.22\% & 1.94\% & 0.00\% & 0.00\% \\ \cline{2-13} & AVG deviation & 904.1 & 31.7 & 71.3 & 2.2 & 0.2 & 3.9 & 0.6 & 6.1 & 0.0 & 0.1 \\ \cline{2-13} & Min CL & 0.029 & 0.731 & 0.808 & 0.851 & 0.885 & 0.851 & 0.874 & 0.851 & 0.9 & 0.894 \\ \cline{2-13} & AVG CL & 0.81 & 0.92 & 0.909 & 0.928 & 0.932 & 0.924 & 0.93 & 0.92 & 0.929 & 0.925 \\ \hline \end{tabular}
\end{table}
Table 1: The performance measures of Section 5.2 for the different methods when \(\alpha\in\{0.01,0.05,0.1\}\) and \((n,m)=(9,6)\).
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline & Method & WALD & AC & HS & AM1 & AM2 & BSG1 & BSG2 & Full1 & Full2 & Full3 \\ \hline \hline \multirow{8}{*}{\(\alpha=0.01\)} & AVG length & 0.851 & 0.9 & 0.832 & 0.892 & 0.903 & 0.888 & 0.905 & 0.88 & 0.9 & 0.89 \\ \cline{2-13} & \% non exact & 100.00\% & 33.33\% & 58.65\% & 8.31\% & 0.39\% & 8.81\% & 0.42\% & 14.68\% & 0.00\% & 1.67\% \\ \cline{2-13} & \% non-appropriate & 98.58\% & 0.95\% & 15.50\% & 0.02\% & 0.00\% & 0.02\% & 0.00\% & 0.02\% & 0.00\% & 0.01\% \\ \cline{2-13} & AVG deviation & 632.2 & 11.3 & 46.2 & 0.7 & 0.0 & 0.8 & 0.0 & 1.4 & 0.0 & 0.1 \\ \cline{2-13} & Min CL & 0.038 & 0.956 & 0.895 & 0.894 & 0.988 & 0.894 & 0.986 & 0.894 & 0.99 & 0.958 \\ \cline{2-13} & AVG CL & 0.927 & 0.992 & 0.987 & 0.993 & 0.993 & 0.994 & 0.992 & 0.993 & 0.993 \\ \hline \hline \multirow{8}{*}{\(\alpha=0.05\)} & AVG length & 0.658 & 0.69 & 0.666 & 0.704 & 0.716 & 0.693 & 0.708 & 0.682 & 0.702 & 0.692 \\ \cline{2-13} & \% non exact & 99.60\% & 18.51\% & 47.75\% & 6.16\% & 0.24\% & 7.14\% & 0.63\% & 14.90\% & 0.00\% & 1.47\% \\ \cline{2-13} & \% non-appropriate & 97.75\% & 3.34\% & 13.88\% & 0.40\% & 0.00\% & 0.42\% & 0.01\% & 1.38\% & 0.00\% & 0.00\% \\ \cline{2-13} & AVG deviation & 738.9 & 11.6 & 39.3 & 2.4 & 0.1 & 2.5 & 0.2 & 6.3 & 0.0 & 0.1 \\ \cline{2-13} & Min CL & 0.038 & 0.907 & 0.863 & 0.894 & 0.941 & 0.894 & 0.939 & 0.894 & 0.95 & 0.946 \\ \cline{2-13} & AVG CL & 0.876 & 0.961 & 0.954 & 0.963 & 0.966 & 0.961 & 0.965 & 0.958 & 0.963 & 0.961 \\ \hline \end{tabular}
\end{table}
Table 2: The performance measures of Section 5.2 for the different methods when \(\alpha\in\{0.01,0.05\}\) and \((n,m)=(14,7)\).
* The WALD CI performs poorly. For almost all pairs, the confidence level is below the desired level \(1-\alpha\) and even below \(1-\alpha-0.01\). Also, the average coverage is well below the desired level. This finding is not surprising, as the WALD CI relies on an asymptotic approximation, which is not valid for small sample sizes.
* We considered three non-exact methods: WALD, HS and AC. Comparing these methods in terms of average length, the order is \(\text{WALD}<\text{HS}<\text{AC}\), but the same order holds under the non-exact and non-appropriate criteria. This means that narrower CIs come with the price of under-coverage.
* The FULL1 method produces CIs with optimal length, or close to optimal; see the discussion below. As we expected, it has the shortest average length among all exact CIs. Compared to the approximate CIs, it is longer by \(2\%-10\%\) than HS and WALD, and it has a similar length as AC.
* The FULL1 method does not guarantee exact coverage for any \((p_{1},p_{2})\), just for the pairs in the grid. For \((n,m)=(9,6)\), the percentage of non-exact pairs ranges from \(6\%\) for \(\alpha=0.01\) to \(10\%\) for \(\alpha=0.1\). For the two other sample sizes, it ranges from \(14\%\) to \(18\%\). Examining FULL1 by the criterion of percentage of non-appropriate, we can see that it has good performance, especially for small \(\alpha\). Yet, for \((n,m,\alpha)=(10,10,0.1)\) the percentage of non-appropriate reaches \(8\%\), which might be too high. Still, FULL1 has a smaller percentage of non-exact and non-appropriate compared to the approximate CIs, including AC.
* The exact methods FULL1, BSG1 and AM1 ran with the same grid for \(\Delta\). Among these methods, the order of the average length is FULL1 \(<\) BSG1 \(<\) AM1. The length improvement of FULL1 compared to AM1 is about \(2\%-5\%\). On the other hand, AM1 has better coverage than BSG1 and FULL1.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline & Method & WALD & AC & HS & AM1 & AM2 & BSG1 & BSG2 & Full1 & Full2 & Full3 \\ \hline \hline \multirow{4}{*}{alpha=0.01} & AVG length & 0.84 & 0.878 & 0.823 & 0.888 & 0.898 & 0.88 & 0.896 & 0.872 & 0.892 & 0.882 \\ \cline{2-13} & \% non exact & 100.00\% & 33.22\% & 55.63\% & 7.34\% & 0.43\% & 7.42\% & 0.61\% & 14.01\% & 0.00\% & 1.52\% \\ \cline{2-13} & \% non-appropriate & 99.26\% & 0.60\% & 15.60\% & 0.01\% & 0.00\% & 0.01\% & 0.00\% & 0.01\% & 0.00\% \\ \cline{2-13} & AVG deviation & 499.7 & 8.2 & 45.8 & 0.7 & 0.0 & 0.6 & 0.0 & 1.1 & 0.0 & 0.1 \\ \cline{2-13} & Min CL & 0.043 & 0.969 & 0.892 & 0.906 & 0.987 & 0.906 & 0.989 & 0.906 & 0.99 & 0.957 \\ \cline{2-13} & AVG CL & 0.94 & 0.992 & 0.988 & 0.994 & 0.994 & 0.993 & 0.994 & 0.992 & 0.994 & 0.993 \\ \hline \hline \multirow{4}{*}{\(\alpha=0.05\)} & AVG length & 0.649 & 0.673 & 0.654 & 0.69 & 0.7 & 0.68 & 0.698 & 0.673 & 0.692 & 0.682 \\ \cline{2-13} & \% non exact & 100.00\% & 21.56\% & 42.91\% & 9.98\% & 0.76\% & 11.27\% & 1.24\% & 14.73\% & 0.00\% & 1.24\% \\ \cline{2-13} & \% non-appropriate & 99.21\% & 5.31\% & 20.74\% & 2.27\% & 0.03\% & 0.31\% & 0.08\% & 1.27\% & 0.00\% & 0.00\% \\ \cline{2-13} & AVG deviation & 584.2 & 14.9 & 51.0 & 5.6 & 0.3 & 3.8 & 0.4 & 5.5 & 0.0 & 0.1 \\ \cline{2-13} & Min CL & 0.043 & 0.907 & 0.835 & 0.906 & 0.936 & 0.906 & 0.939 & 0.906 & 0.95 & 0.946 \\ \cline{2-13} & AVG CL & 0.892 & 0.96 & 0.954 & 0.962 & 0.965 & 0.961 & 0.965 & 0.959 & 0.964 & 0.961 \\ \hline \hline \multirow{4}{*}{\(\alpha=0.1\)} & AVG length & 0.547 & 0.566 & 0.556 & 0.588 & 0.598 & 0.575 & 0.594 & 0.566 & 0.586 & 0.576 \\ \cline{2-13} & \% non exact & 98.15\% & 23.24\% & 44.90\% & 9.31\% & 0.74\% & 14.35\% & 1.53\% & 18.32\% & 0.00\% & 1.18\% \\ \cline{1-1} \cline{2-13} & \% non-appropriate & 92.87\% & 11.73\% & 26.32\% & 2.72\% & 0.13\% & 4.84\% & 0.47\% & 7.94\% & 0.00\% & 0.01\% \\ \cline{1-1} \cline{2-13} & AVG deviation & 588.1 & 32.5 & 62.6 & 6.3 & 0.4 & 11.9 & 1.3 & 18.5 & 0.0 & 0.2 \\ \cline{1-1} \cline{2-13} & Min CL & 0.043 & 0.743 & 0.801 & 0.835 & 0.883 & 0.835 & 0.872 & 0.835 & 0.9 & 0.865 \\ \cline{1-1} \cline{2-13} & AVG CL & 0.841 & 0.916 & 0.907 & 0.919 & 0.924 & 0.917 & 0.927 & 0.912 & 0.923 & 0.918 \\ \hline \end{tabular}
\end{table}
Table 3: The performance measures of Section 5.2 for the different methods when \(\alpha\in\{0.01,0.05,0.1\}\) and \((n,m)=(10,10)\).
* The modification of FULL2 produces a CI that is exact for all pairs, but it comes at the cost of intervals that are longer by 0.02. FULL3 does not guarantee exact coverage, but the percentage of non-exact is decreased by about 90% compared to FULL1, and the intervals are extended by half the amount compared to FULL2.
* The BSG2 method achieves significant improvement in the coverage criteria compared to BSG1, at the cost of average length that is greater by about 2%. Similarly, AM2 improves AM1 in terms of coverage, but the average length increases slightly.
* From all the exact methods examined, only BSG2, AM2, FULL2 and FULL3 have satisfactory performance for the coverage criteria. The confidence level of FULL2 is always larger than \(1-\alpha\) in the above parameters. Comparing BSG2, AM2 and FULL3 we can observe that FULL3 has the largest percentage of non-exact for most of the nine combinations of \(n,m,\alpha\) we considered, while AM2 has the lowest. On the other hand, FULL3 has the smallest percentage of non-appropriate, smaller than 0.01% for all 9 cases. AM2 and BSG2 have slightly higher numbers, yet still very low, ranging from 0.13% to 0.47%. Considering the criterion of AVG deviation, all three methods have low scores, in comparison to the other methods. BSG2 has a slightly higher score than AM2 and FULL3, which are mostly comparable.
To examine further the under-coverage of the different methods, we plotted in Figure 1 all pairs \((p_{1},p_{2})\in\mathcal{P}\) for which \(CL(p_{1},p_{2})\) is below \(1-\alpha\) when \((n,m,\alpha)=(9,6,0.05)\) for all methods excluding WALD.
We observe that AM2, BSG2 and Full3 have similar low under-coverage, but the pattern is a bit different. For AM2 the under-coverage is mostly for pairs \((p_{1},p_{2})\) that are close to \((\frac{1}{2},\frac{1}{2})\), while for FULL3 it is mostly for large \(\Delta=|p_{1}-p_{2}|\). The graph of FULL2 is empty as there is no under-coverage for this method.
Additionally, Figure 2 plots the confidence level as a function of \(p_{2}\) when \(p_{1}=0.5\). We can see that all the seven exact methods (AM1, AM2, BSG1, BSG2, FULL1, FULL2, FULL3) exhibit a similar pattern, and the confidence level is above \(1-\alpha\) for almost all \(p_{2}\). Notice that the graph of FULL2 has a few short lines (looking like points) in the high-confidence area, which do not exist in the FULL3 graph. This is due to the extension of the limits by \(0.01/2\) in FULL3 compared to the extension of \(0.01\) in FULL2.
Figure 1: Plotting all pairs \((p_{1},p_{2})\in\mathcal{P}\) for which \(CL(p_{1},p_{2})\) is below \(1-\alpha\) when \((n,m,\alpha)=(9,6,0.05)\) for all methods listed in Section 5.1 besides WALD.
Table 4 reports the gap, in terms of percentage of length, between the average length of the solutions found by the FULL1 method and the best lower bound that was computed. It is demonstrated that the gap between the best lower bound and the solution that was found is quite small. Even if the time limit of the algorithm is extended, we believe that it generally would not result in better performance. By observing the outputs of the optimization algorithm throughout the run, it seems that the solution found is optimal or very close to optimal, and more running time would mostly improve the computation of the lower bound, and not the solution itself. For example, for \((n,m,\alpha)=(10,10,0.05)\) the solution after 180 seconds was the same one that was found after 30 seconds. The changes were only in the computation of the gap: from 1.67% to 0.87%.
Figure 2: Plotting the confidence level as a function of \(p_{2}\) when \(p_{1}=0.5\) and \((n,m,\alpha)=(9,6,0.05)\) for all methods listed in Section 5.1 excluding WALD. The vertical line represents the \(1-\alpha=0.95\) confidence level.
Considering both coverage and length, it seems that FULL3 is the best method among the ones we suggested, namely, FULL1, FULL2, FULL3, BSG1 and BSG2. Among the other methods, AM2 has the best performance. Comparing FULL3 and AM2, they perform similarly in the coverage criteria but FULL3 has a smaller average length.
To examine further the decrease of length of FULL3 compared to AM2, we considered 21 pairs of \((n,m)\), where \(5\leq m\leq n\leq 10\). For each such pair and for \(\alpha\in\{0.01,0.05,0.1\}\) we computed the relative improvement, which is defined by
\[100\times\frac{\text{AVG length(AM2)-AVG length(FULL3)}}{\text{AVG length (AM2)}}. \tag{9}\]
The results are plotted in Figure 3. We observe that for all 21 pairs FULL3 produced shorter intervals, and the relative improvement varies from 0.5% to 5%. The larger the \(\alpha\), the larger the relative improvement. For \(\alpha=0.01\), the relative improvement is about 1%, and for \(\alpha=0.05,0.1\), it is about 2.5% and 4%, respectively. It also seems that the relative improvement tends to increase with \(n\). In all runs of FULL3, the gap between the solution obtained and the lower bound is rather small; the largest gap is 1.35%. Figure 4, which appears in the appendix, extends Figure 3 to sample sizes \((n,m)\) where \(3\leq m\leq n\leq 15\).
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline \(\alpha\backslash(n,m)\) & \((n=9,m=6)\) & \((n=14,m=7)\) & \((n=10,m=10)\) \\ \hline
0.01 & 0.1\% & 0.957\% & 0.922\% \\ \hline
0.05 & 0.33\% & 0.94\% & 1.08\% \\ \hline
0.1 & 0.77\% & 1.31\% & 1.33\% \\ \hline \end{tabular}
\end{table}
Table 4: A bound of the gap, in terms of percentage of length, between the optimal solution and the one found by FULL1 as computed by the Gurobi package.
### 5.4 Summary of the findings
The FULL algorithm was shown to be computationally feasible for small \(n,m\) using the rather coarse grid of \(D=\{-1,-0.99,..0,0.01,..1\}\). While the resulting CIs do not have the right coverage probability for \(p_{1}\) and \(p_{2}\) that are not in the grid, simple adjustments can be made to improve the coverage at a small cost in the average length. The adjusted method, FULL3, is comparable, in terms of coverage, to AM2 and BSG2, which are computed under a finer grid, but has shorter CIs.
## 6 Discussion
For small \(n,m\) (\(n,m\leq 15\)) we recommend the use of the FULL3 method, as it has good coverage and a small average length. Tables for various \((n,m,\alpha)\) are presented in the following link. The second-best method is the AM2 method, and it can be used when FULL3 is not available.
We also tried several examples with sample sizes larger than 15. When both sample sizes were 25, the algorithm could not find feasible solutions.
Figure 3: Plotting the relative improvement as defined in (9) for every pair \((n,m)\) where \(n\in\{5,\ldots,10\}\) and \(m\in\{5,\ldots,n\}\) and for \(\alpha\in\{0.01,0.05,0.1\}\). The \(x\)-axis in the graphs is \(n\) and the number near each point is the corresponding \(m\).
For smaller sample sizes (around 20) the results were similar to what was reported in Section 5.3. However, a more thorough study is required for larger sample sizes, and we leave this for future research.
Extensions of this work can go in several directions. One can consider extending the FULL algorithm to other frequently used discrete distributions, like the Poisson or the hypergeometric. This amounts to changing the coverage criterion (4) according to the distribution used. One can also consider other related optimization problems, for example, finding the shortest CIs that have an average confidence level of \(1-\alpha\) and whose minimal confidence level is above \(1-\beta\) for some \(\beta>\alpha\). The availability of powerful optimization algorithms and software allows one to investigate such problems.
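For instance, for two Poisson counts the analogue of the coverage evaluation only swaps the binomial pmf for the Poisson pmf and truncates the infinite support. A rough R sketch, with the truncation points xmax and ymax chosen by the user (names and truncation are our assumptions):

```r
# Coverage at (lam1, lam2) of a CI collection for lam1 - lam2; the Poisson
# support is truncated at xmax, ymax, which is an approximation
coverage_pois <- function(l, u, xmax, ymax, lam1, lam2) {
  delta <- lam1 - lam2
  probs <- outer(dpois(0:xmax, lam1), dpois(0:ymax, lam2))
  sum(probs[l <= delta & delta <= u])
}
```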
2309.07474 | Changyi Lei, Quanmin Zhu | 2023-09-14T07:23:15Z | http://arxiv.org/abs/2309.07474v1

# A Fuzzy Cascaded Proportional-Derivative Controller for Under-actuated Flexible Joint Manipulators Using Bayesian Optimization
###### Abstract
This paper proposes a novel fuzzy cascaded Proportional-Derivative (PD) controller for under-actuated single-link flexible joint manipulators. The original flexible joint system is considered as two coupled \(2^{nd}\)-order sub-systems. The proposed controller is composed of two cascaded PD controllers and two fuzzy logic regulators (FLRs). The first (virtual) PD controller is used to generate the desired control input that stabilizes the first \(2^{nd}\)-order sub-system. By treating the coupling terms as design variables and solving the resulting equation, the reference signal is generated for the second sub-system. Then, through a simple compensation design, together with the second PD controller, the cascaded PD controller is derived. In order to further improve the performance, two FLRs are implemented that adaptively tune the parameters of the PD controllers. Under natural assumptions, the cascaded fuzzy PD controller is proved to possess local asymptotic stability. All the offline tuning processes are completed data-efficiently by Bayesian Optimization. Simulation results illustrate the stability and validity of our proposed method. Besides, the idea of the cascaded PD controller presented here may be extended as a novel control method for other under-actuated systems, and the stability analysis offers a new perspective towards the stability proof of other fuzzy-enhanced PID controllers.
fuzzy logic, Bayesian optimization, flexible joint manipulator, under-actuated system, cascaded PD controller, nonlinear control
## 1 Introduction
Flexible-joint manipulators (FJMs) represent a class of manipulators whose joints are made of flexible material. Compared with rigid-body manipulators, FJMs require small actuation, low energy consumption and a low rate of damage [1, 2]. Nonetheless, the introduction of flexibility also increases the complexity of control. In practice, the FJM is a highly nonlinear, strongly coupled and time-varying system [3]. Besides, the degrees-of-freedom (DoF) of an FJM exceed its number of control inputs. Consequently, research into control methods for FJMs is valuable to the industry.
Due to their importance in industrial applications, flexible joint manipulators have been an active area of research. Over the decades, many control methods have been proposed. Yan et al. proposed a robust controller based on equivalent-input-disturbance (EID) [4, 5]. An EID estimator, a full-order state observer and a state-feedback control law were realized to ensure the stability of the system. In [6], an adaptive backstepping control method was presented, using an Interval Type-2 Fuzzy Neural Network to estimate the unknown dynamics of the system. The errors of the system were proved to be bounded in the Lyapunov sense. Yang et al. designed a cascaded control scheme composed of three modules, namely an adaptive controller, a torque-tracking controller, and a motor controller [7]. A Kalman observer was used to estimate the state variables, the torque, as well as their higher-order derivatives. All controllers are designed based on the Lyapunov stability theorem. Although the above-mentioned methods have satisfactory performance, they require too much human craftsmanship and increased computational cost.
Among all the other existing controllers, the Proportional-Integral-Derivative (PID) controller has been one of the most widely used in industry. The PID controller, though simple in calculation, has proved to be effective for a wide range of nonlinear systems in practice. Therefore, for flexible joint manipulators, many PID-based controllers have been proposed, trying to solve the problem in a simple but effective way. A neural network based PID controller was proposed to improve the performance of the conventional PID [8]. A 3-layer feedforward neural network outputs the parameters of the PID controller online and is trained using a steepest-descent algorithm to decrease the tracking error. Results show that the neural network based PID controller has lower tracking error and faster convergence speed. In [9] and [10], the PID controller is enhanced by a fuzzy logic system (FLS). The FLS takes the error vector as input and outputs adjustments to the parameters of the PID controller. In terms of performance, the FLS-enhanced PID achieves smooth tracking without overshoot. A multi-PID controller scheme was proposed in [11]. The structure was composed of a joint torque generator, a torque tracker and a motor position controller, all realized by simple PID. Besides, a friction observer is mounted to increase disturbance rejection. Note that most proposed PID-based controllers attach additional complicated modules in order to increase the order of the controller. However, integrating fuzzy logic and neural networks usually makes the system's performance intractable. Besides, these kinds of methods usually lack a systematic design approach, and the stability analysis is hard to derive, especially with nonlinearity and disturbances.
Therefore, in this paper, we propose a novel cascaded PD controller to solve the problem. The original \(4^{th}\)-order system is considered as two coupled \(2^{nd}\)-order dynamics (namely sub-plant1 and sub-plant2). Two PD controllers are used. The first PD controller is integrated with the coupling terms in sub-plant1, which calculates a reference signal for sub-plant2. In this way, the internal dynamics is tractable and can be specified. Then the second PD controller merely maintains the stability of sub-plant2. The total order of the controller in this case is 4, which is equivalent to the order of the system. We believe the proposed method not only preserves the simplicity of the PID controller, but also achieves a certain degree of internal dynamics control. What is more, the stability analysis of the original nonlinear system with disturbance can be derived without any linearisation or simplification, which is challenging in previous PID controller research. Our method can achieve non-oscillating performance without mounting any other modules. However, a fuzzy logic system is still implemented to further improve the performance of the cascaded PD controller, along with a stability analysis using the lower bounds of the FLRs. To complete the task of controller tuning in a data-efficient way, Bayesian Optimization (BO) is chosen.
The contributions of this paper are summarized as follows:
* A novel cascaded PD controller for uncertain under-actuated \(4^{th}\)-order flexible joint systems is proposed.
* Type-1 fuzzy logic regulators are integrated into cascaded PD controllers, which shorten the settling time and further cancels oscillation.
* The asymptotic stability of proposed cascaded fuzzy PD controllers with respect to \(4^{th}\)-order system dynamics with uncertainties is proved under natural assumptions. This renders a new perspective for the stability analysis of other fuzzy-enhanced PID controllers.
* Bayesian Optimization is implemented for joint tuning of cascaded PD controllers, as well as fuzzy logic system tuning.
The rest of the paper is organized as follows. Firstly, some preliminaries are introduced, including the dynamic model to be investigated, the fuzzy PD controller, and the rationale of Bayesian Optimization. Secondly, the controller is meticulously designed and analyzed: the cascaded PD controller is designed step by step, and then the fuzzy logic regulator is integrated. Afterwards, the stability analysis is given in terms of the transfer function and the Jacobian matrix. Sec.4 presents the simulation results. The parameters are specified and BO tuning is executed to determine all the variables. Then the asymptotic stability is verified, with numerical simulation results of tracking tasks. In addition, several runs of ablation experiments are conducted, which reveal the advantages and potential challenges of the proposed method. Sec.5 concludes the paper and points out future research directions.
## 2 Preliminaries
### Dynamic Model Description
Fig.1 shows the conceptual structure of a single-link flexible joint manipulator (SLFJM). It is composed of two solid bodies, namely the motor shaft and the link, whose connection is modeled as a torsion spring. In the figure, \(I_{m}\) and \(I_{l}\) are the inertias of the motor and the link respectively. \(\theta_{m}\) is the rotational angle of the motor, and \(\theta_{l}\) is correspondingly that of the link. \(u\) represents the control torque input. \(k\) is the torsional coefficient of the spring, \(m\) is the mass of the link, \(g\) is the gravity coefficient and \(l\) is the shortest distance from the center of mass (CoM) of the link to the rotational axis. The working principle of the SLFJM is as follows: the input \(u\) causes a discrepancy between \(\theta_{m}\) and \(\theta_{l}\), which generates torsion due to the torsional spring. The induced torque is proportional to \(|\theta_{m}-\theta_{l}|\).
Based on the above discussion and Newton's Law, a simplified nominal dynamic model of SLFJM can be established as [5]
\[\begin{cases}I_{l}\ddot{\theta}_{l}+mglcos\theta_{l}+k(\theta_{l}-\theta_{m})= 0\\ I_{m}\ddot{\theta}_{m}+\mu\dot{\theta}_{m}-k(\theta_{l}-\theta_{m})-u=0,\end{cases} \tag{1}\]
where \(\mu\) is the friction coefficient, \(\dot{\theta}_{l},\dot{\theta}_{m}\) are angular velocities of the link and motor, \(\ddot{\theta}_{l},\ddot{\theta}_{m}\) are angular accelerations of the link and motor, \(u\) is the control input. For convenience and generality of expression, we replace the above variables with
\[\begin{cases}x_{1}=\theta_{l}\\ x_{2}=\dot{\theta}_{l}\\ x_{3}=\theta_{m}\\ x_{4}=\dot{\theta}_{m}\end{cases} \tag{2}\]
Rearranging (1) and (2) into state-space equation, and introducing total disturbance \(d_{1},d_{2}\), we have
\[\begin{cases}\dot{x}_{1}=x_{2}\\ \dot{x}_{2}=-\frac{mgl}{I_{l}}cosx_{1}-\frac{k}{I_{l}}(x_{1}-x_{3})+d_{1}\\ \dot{x}_{3}=x_{4}\\ \dot{x}_{4}=\frac{k}{I_{m}}(x_{1}-x_{3})-\frac{\mu}{I_{m}}x_{4}+\frac{1}{I_{m} }u+d_{2}.\end{cases} \tag{3}\]
Note that system (3) is an under-actuated system with 2 degrees-of-freedom (DoF) but only 1 control input. Besides, it is noticeable that this system has fourth-order dynamics and can be considered as two second-order systems connected by a torsional spring. These two facets determine the difficulty of controlling the system.
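To make the model concrete, the following is a minimal simulation sketch of (3), assuming forward-Euler integration (the integrator used later in the simulation environment, Tab.4) and the physical parameters that will be listed in Tab.3; the function name and step size are illustrative only.

```python
import numpy as np

# Physical parameters from Tab.3 (SI units)
m, g, l, I_l, I_m, k, mu = 1.2756, 9.8, 0.4, 1.0, 0.3, 100.0, 0.1

def slfjm_step(x, u, d1=0.0, d2=0.0, dt=0.005):
    """One forward-Euler step of the state-space model (3)."""
    x1, x2, x3, x4 = x
    dx = np.array([
        x2,
        -(m * g * l / I_l) * np.cos(x1) - (k / I_l) * (x1 - x3) + d1,
        x4,
        (k / I_m) * (x1 - x3) - (mu / I_m) * x4 + u / I_m + d2,
    ])
    return x + dt * dx

x = np.zeros(4)           # initial state [x1, x2, x3, x4]
for _ in range(2000):     # 10 s of open-loop simulation with zero torque
    x = slfjm_step(x, 0.0)
```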
### Fuzzy PD Controller
#### 2.2.1 Conventional PD Controller
Fig.2 is the conceptual structure of a conventional PD controller. \(R\) is the reference signal, \(e(t)\) is the error in time \(t\), \(U\) is the control signal and \(Y\) is the output. The core modules of PD controller are proportional and derivative module, and the mathematical expression is
\[U=k_{p}e(t)+k_{d}\frac{de(t)}{dt}, \tag{4}\]
where \(k_{p},k_{d}\) are the proportional gain and derivative gain. PD controller is a simplified version of Proportional-Integral-Derivative (PID) controller, which is widely adopted in the industry.
Figure 1: Conceptual structure of flexible joint manipulator
#### 2.2.2 Type-1 Fuzzy Logic System
Fig.3 presents the general workflow of type-1 fuzzy logic systems. The crisp inputs are first fuzzified by the fuzzifier, mapping to a value between 0 and 1 using membership functions (MF). Then they are processed by the rule-based inference engine. Lastly, the type-1 output fuzzy set is defuzzified to retrieve a crisp output.
Among them, the inference engine is the core component of fuzzy system. The IF-THEN rules of a multi-input-multi-output (MIMO) fuzzy systems are in the following format:
RULE i: IF \(x_{1}(t)\) is \(M_{1}^{i}\) AND... AND \(x_{n}(t)\) is \(M_{n}^{i}\) THEN \(y_{1}^{i}(t)\) AND... AND \(y_{m}^{i}(t),\) (5)
where \(x_{i},i=1,2,...,n\) are the linguistic input variables, and \(y_{j}^{i}(t),j=1,2,...,m\) is the intermediate value of the output \(y_{j}(x(t))\) under the \(i^{th}\) rule. \(M_{j}^{i}\) is the fuzzy term of the \(j^{th}\) linguistic variable under the \(i^{th}\) rule. In a Sugeno fuzzy inference system, the final output is
\[y_{j}(x(t))=\sum_{i=1}^{\mathbb{N}}\omega_{i}(X(t))y_{j}^{i}(t). \tag{6}\]
\(X(t)=[x_{1}(t),x_{2}(t),...,x_{n}(t)]\), and \(\mathbb{N}\) is the total number of fuzzy rules. \(\omega_{i}(X(t))\) is the normalized firing strength of the \(i^{th}\) rule:
\[\omega_{i}(X(t))=\frac{\prod_{k=1}^{n}\mu_{M_{k}^{i}}(X(t))}{\sum_{j=1}^{\mathbb{N}}\prod_{k=1}^{n}\mu_{M_{k}^{j}}(X(t))}. \tag{7}\]
Also, \(\omega_{i}(X(t))>0,\forall i\), and \(\sum\omega_{i}(X(t))=1\). \(\mu_{M_{k}^{i}}(X(t))\) is the grade of the MF of fuzzy term \(M_{k}^{i}\).
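As an illustration, the sketch below computes the normalized firing strengths of eq. (7) with a product t-norm and the Sugeno output of eq. (6) for a two-input rule base with triangular membership functions; the term parameters and rule outputs are toy placeholders, not the paper's actual rule base.

```python
import numpy as np

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b (assumes a < b < c)."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def sugeno_output(x1, x2, terms1, terms2, y_rules):
    """Eqs. (6)-(7): product t-norm firing strengths, normalized,
    then a weighted sum of the per-rule crisp outputs y_rules."""
    w = np.array([tri_mf(x1, *t1) * tri_mf(x2, *t2)
                  for t1 in terms1 for t2 in terms2])
    w = w / w.sum()                     # normalization of eq. (7); assumes some rule fires
    return float(np.dot(w, y_rules))    # weighted sum of eq. (6)

# Toy example: two terms per input, hence four rules
terms = [(-1.0, -0.5, 0.5), (-0.5, 0.5, 1.0)]
print(sugeno_output(0.1, -0.2, terms, terms, [-1.0, -0.3, 0.3, 1.0]))
```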
#### 2.2.3 Fuzzy PD Controller
Fig.4 is the structure of single fuzzy PD controller. The fuzzy inference system is used to adjust the proportional and derivative gains of conventional PD controller.
### Bayesian Optimization
Bayesian Optimization (BO) is a powerful algorithm in combinatorial optimization realm, especially when the original cost function is expensive to measure. Among all the other optimization algorithms, BO stands out in terms of its ability to search for global optimality and data efficiency. To do that, BO maintains a cheap surrogate model that replaces
Figure 3: General workflow of type-1 fuzzy system
Figure 2: Conceptual structure of conventional PD controller
the real, expensive cost function. The most commonly used surrogate model is Gaussian Process regression, which integrates all sampled data into a global probabilistic model describing the belief about the cost-value distribution. With the global model, BO selects the next location to sample as the one most likely to yield the best result. Compared with other popular optimization algorithms, BO is free from gradient information and can find a relatively satisfying result quickly [12].
The simplest form of BO is Sequential Model-based Optimization [13], of which the pseudo-code is displayed in Alg.1. Firstly, the surrogate model \(\mathcal{M}\), cost function \(f\), acquisition function \(\mathcal{S}\) and parameter domain \(\mathcal{X}\) are initialized. Then we sample randomly from \(\mathcal{X}\) to retrieve an initial database \(\mathcal{D}\). Then, for a fixed number of steps \(T\), \(\mathcal{M}\) is first updated as the posterior distribution \(p(y|x,\mathcal{D})\) to best represent all the data inside \(\mathcal{D}\). Then the next location \(x_{i}\) to be explored is retrieved by maximizing the acquisition function \(\mathcal{S}\), which is some form of evaluation of the probability of obtaining a lower cost. Then the real cost \(f(x_{i})\) is evaluated and added to the database \(\mathcal{D}\). In this paper, the acquisition function is selected as the Upper Confidence Bound (UCB) [13], represented as
\[UCB=\mu(x_{i})+h\sigma(x_{i}). \tag{8}\]
\(\mu(x_{i})\) is the mean value, and \(\sigma(x_{i})\) is the standard deviation of the surrogate prediction. UCB can be simply interpreted as an upper bound of our confidence at a certain location. The coefficient \(h\) adjusts the extent of exploration, and is selected as \(2.576\)[14].
```
1:Input: f,\(\mathcal{X}\),\(\mathcal{S}\),\(\mathcal{M}\)
2:\(\mathcal{D}\leftarrow\) INITSAMPLES(f,\(\mathcal{X}\))
3:for\(i\leftarrow|\mathcal{D}|\) to \(T\)do
4:\(p(y|x,\mathcal{D})\gets FITMODEL(\mathcal{M},\mathcal{D})\)
5:\(x_{i}\gets argmax_{x\in\mathcal{X}}\mathcal{S}(x,p(y|x,\mathcal{D}))\)
6:\(y_{i}\gets f(x_{i})\)
7:\(\mathcal{D}\leftarrow\mathcal{D}\cup(x_{i},y_{i})\)
8:endfor
```
**Algorithm 1** Sequential Model-based Optimization
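A compact sketch of Alg.1 with the UCB acquisition of eq. (8) is given below. It assumes a Matérn Gaussian-process surrogate and a random candidate pool for the inner argmax; the paper does not specify its surrogate kernel or inner optimizer, so these choices are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def bayes_opt(f, bounds, n_init=5, n_iter=30, h=2.576, seed=0):
    """Minimal SMBO loop (Alg. 1), maximizing f over box bounds with GP + UCB."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_init, dim))        # INITSAMPLES
    y = np.array([f(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)                                   # FITMODEL: posterior p(y|x, D)
        cand = rng.uniform(lo, hi, size=(2048, dim))   # random candidate pool
        mu, sd = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + h * sd)]          # UCB of eq. (8)
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()
```

Here `f` would be, for instance, the negative tracking cost used later in (52).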
In this paper, BO is implemented in all tuning processes, including those for the PD controllers and the fuzzy logic regulators (FLRs). The consideration behind this is that testing real systems with unverified sets of parameters is unsafe, which may damage the mechanics and endanger human operators. Besides, fewer runs mean less wear and tear of the system [15]. Therefore, BO is suitable to be integrated into the design procedure of the proposed algorithms.
## 3 Controller Design and Analysis
This section articulates the controller design procedure. Firstly, a conceptual graph of the controller is presented, with an introduction to the core idea. Secondly, the cascaded PD controller is designed. After that, the stability analysis of the cascaded PD controller is carried out through the transfer function and the Jacobian matrix. Lastly, the design of the fuzzy logic regulator is specified.
Figure 4: Fuzzy PD structure
### Framework Overview
Fig.5 illustrates the conceptual framework of the controllers. Inside our designed framework, the original fourth-order system (3) is divided into two second-order sub-plants, expressed in (9) and (10). The idea behind the cascaded PD controller is to use one PD controller for each of these two second-order sub-plants. It is expected that if the two sub-plants can be stabilized separately under two separate PD controllers, the original system can be stable. However, the coupling term \(x_{3}\) that appears in (9) should be handled properly, which will be explained further in Sec.3.2.
\[sub-plant1:\begin{cases}\dot{x}_{1}=x_{2}\\ \dot{x}_{2}=-\frac{mgl}{I_{l}}cosx_{1}-\frac{k}{I_{l}}(x_{1}-x_{3})+d_{1}\end{cases} \tag{9}\]
\[sub-plant2:\begin{cases}\dot{x}_{3}=x_{4}\\ \dot{x}_{4}=\frac{k}{I_{m}}(x_{1}-x_{3})-\frac{\mu}{I_{m}}x_{4}+\frac{1}{I_{m }}u+d_{2}\end{cases} \tag{10}\]
The workflow of our controller is specified in the following. Firstly, the reference signal \(X_{1d}=[x_{1d},\dot{x}_{1d}]^{T}\) is input into the first controller PD1, where the desired intermediate torque for the first second-order system is computed. Through combining (9), a referenced signal \(X_{3d}=[x_{3d},\dot{x}_{3d}]\) for sub-plant2 is derived. This will be input into the second PD controller (namely PD2) to stabilize the sub-plant2. Upon all that, BO is implemented during the tuning process to render appropriate parameters \(k_{p1},k_{d1}\) and \(k_{p2},k_{d2}\) for PD1 and PD2 respectively. \(k_{p1},k_{p2}\) represent the proportional gains, and \(k_{d1},k_{d2}\) are the derivative gains. After that, two fuzzy logic regulators (FLR1 and FLR2) will determine online the regulation values \(\Delta k_{p1},\Delta k_{d1},\Delta k_{p2},\Delta k_{d2}\) for the parameters of the PD controllers.
### Cascaded PD Controller Design
In this section, the design process of the cascaded PD controllers is elaborated. Firstly, the PD1 output is transferred to the reference signal \(x_{3d}\). Secondly, \(x_{3d}\) is utilized to design the second PD controller. Lastly, certain compensation and simplification are made to transform sub-plant2 into a standard second-order serial integrator with disturbance.
Consider a serial integrator with disturbance \(d_{1}\), being controlled by a PD controller with proper parameters in (11), where \(x_{1d},\dot{x}_{1d}\) are the given reference signal.
\[\begin{cases}\dot{x}_{1}=x_{2}\\ \dot{x}_{2}=u_{PD1}+d_{1}\\ u_{PD1}=k_{p1}(x_{1d}-x_{1})+k_{d1}(\dot{x}_{1d}-\dot{x}_{1})\end{cases} \tag{11}\]
Associating with (9), it can be expected that if \(-\frac{mgl}{I_{l}}cosx_{1}-\frac{k}{I_{l}}(x_{1}-x_{3})=u_{PD1}\), then (9) can be stabilized. While \(x_{1},x_{2}\) are the state variables of (9), \(x_{3}\) is an external variable which can be utilized freely for controller design. Therefore, we assign the desired \(x_{3}\) to be \(x_{3d}\) that satisfies
\[-\frac{mgl}{I_{l}}cosx_{1}-\frac{k}{I_{l}}(x_{1}-x_{3d})=u_{PD1}. \tag{12}\]
Figure 5: Framework of controller design
Therefore, the reference signal for the motor is derived as
\[x_{3d}=\frac{u_{PD1}I_{l}}{k}+x_{1}+\frac{mglcosx_{1}}{k}. \tag{13}\]
Consequently, the direct PD controller for (10) becomes apparent by assigning \(u=u_{PD2}\), which can be expressed as
\[\begin{cases}\dot{x}_{3}=x_{4}\\ \dot{x}_{4}=\frac{k}{I_{m}}(x_{1}-x_{3})-\frac{\mu}{I_{m}}x_{4}+\frac{1}{I_{m} }u_{PD2}+d_{2}\\ u_{PD2}=k_{p2}(x_{3d}-x_{3})+k_{d2}(\dot{x}_{3d}-\dot{x}_{3})\end{cases} \tag{14}\]
Further, to compensate for the term \(\frac{k}{I_{m}}(x_{1}-x_{3})\) that illustrates coupling with sub-plant1, \(u\) is modified as
\[u_{2}=u_{PD2}+u_{PD1}I_{l}+mglcosx_{1}. \tag{15}\]
Integrating (15) into (10) and (12), the resulting dynamics is shown in (16).
\[\begin{cases}\dot{x}_{3}=x_{4}\\ \dot{x}_{4}=u_{2}+d_{2}^{\prime}\\ u_{2}=\frac{k_{p2}+k}{I_{m}}(x_{3d}-x_{3})+\frac{k_{d2}}{I_{m}}(\dot{x}_{3d}-\dot{x}_{3})\\ d_{2}^{\prime}=-\frac{\mu}{I_{m}}x_{4}+d_{2}\end{cases} \tag{16}\]
However, the derivative reference signal \(\dot{x}_{3d}\) is not given directly and should be calculated as
\[\dot{x}_{3d}=\frac{d(\frac{u_{PD1}I_{l}}{k}+x_{1}+\frac{mglcosx_{1}}{k})}{dt}= \frac{d(\frac{u_{PD1}I_{l}}{k})}{dt}+\dot{x}_{1}-\frac{mglsinx_{1}}{k}\dot{x} _{1}. \tag{17}\]
This calculation is possible but very complicated, especially when it involves the derivative of PD1 controller. For simplicity in this paper, we assign
\[\dot{x}_{3d}=0. \tag{18}\]
**Observation 1**: If sub-plant1 and sub-plant2 can be stabilized separately, we would expect the whole system to be stable. Indeed, it can be proved that systems (11) and (16) can be stabilized separately [16]. However, a joint analysis is still required to ensure stability of the fourth-order system, which will be detailed in Sec.3.5.
**Observation 2**: The cascaded PD controller in this paper is different from the conventional one. Conventional cascaded PD controllers work on adjacent orders of the system; for example, one PD controller assigns the desired velocity, and the other PD controller controls the acceleration [17]. In contrast, the PD1 controller in this paper serves as the acceleration controller for sub-plant1, as well as the calculator of the reference signal for sub-plant2, and the PD2 controller is the acceleration controller for sub-plant2.
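A minimal sketch of the resulting control law, combining (11), (13), (15) and the simplification (18), could read as follows; it reuses the physical constants `m, g, l, I_l, k` from the earlier simulation sketch, and the gain values would come from the BO tuning of Sec.4 (Tab.5).

```python
import numpy as np

def cascaded_pd(x, x1d, dx1d, kp1, kd1, kp2, kd2):
    """Cascaded PD control law of eqs. (12), (13), (15) with dx3d = 0 (18)."""
    x1, x2, x3, x4 = x
    u_pd1 = kp1 * (x1d - x1) + kd1 * (dx1d - x2)              # virtual PD1, eq. (11)
    x3d = u_pd1 * I_l / k + x1 + m * g * l * np.cos(x1) / k   # reference, eq. (13)
    dx3d = 0.0                                                # simplification (18)
    u_pd2 = kp2 * (x3d - x3) + kd2 * (dx3d - x4)              # PD2, eq. (14)
    return u_pd2 + u_pd1 * I_l + m * g * l * np.cos(x1)       # compensation, eq. (15)
```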
### Type-1 Fuzzy Logic Regulator Design
The fuzzy logic regulator (FLR) is used to adjust the parameters of the PD controllers adaptively. Two FLRs are required, each dealing with one PD controller. For the first FLR, the inputs are \(e_{1},e_{2}\), and the outputs are \(\Delta k_{p1},\Delta k_{d1}\). For the second FLR, the inputs are \(e_{3}\), \(e_{4}\), and the outputs are \(\Delta k_{p2},\Delta k_{d2}\). The inputs pass through a fuzzification module and are processed by the fuzzy inference module using predefined fuzzy rules; at last, a crisp value is output by the defuzzification module. All the inputs and outputs are described by 5 linguistic variables, namely Negative Big (NB), Negative Small (NS), Zero (ZE), Positive Small (PS) and Positive Big (PB). The membership functions are selected as triangular membership functions and are spaced evenly across the domains of the variables. The input domains of the FLRs are manually set as
\[e_{1},e_{3}\in[-\pi,\pi]rad;e_{2},e_{4}\in[-5,5]rad/s. \tag{19}\]
The membership function of the inputs are depicted in Fig.6 and Fig.7. Similarly, the domain of the outputs are defined using unknown parameters below, which are to be tuned by BO.
\[\Delta k_{p1}\in[\Delta k_{p1}^{l},\Delta k_{p1}^{u}] \tag{20}\]
\[\Delta k_{d1}\in[\Delta k_{d1}^{l},\Delta k_{d1}^{u}] \tag{21}\]
\[\Delta k_{p2}\in[\Delta k_{p2}^{l},\Delta k_{p2}^{u}] \tag{22}\]
\[\Delta k_{d2}\in[\Delta k_{d2}^{l},\Delta k_{d2}^{u}] \tag{23}\]
The fuzzy rules are the key element of the fuzzy inference module. The fuzzy rules for our PD controllers are defined in Tab.1 and Tab.2, in which \(e\) represents the first input, namely the angular error in our experiments, and \(de\) is the second input, the angular-velocity error. The overall notion of the fuzzy rule design is that when the errors are big, the proportional gain should be increased to compensate for them, while the derivative gain should be decreased; when the errors are small, the proportional gain should be decreased and the derivative gain should be increased to prevent overshoot.
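For illustration, the sketch below evaluates the rule base of Tab.2 for \(\Delta k_{d}\) with a Sugeno-style weighted average. Mapping the five output labels to evenly spaced singletons on \([\Delta k^{l},\Delta k^{u}]\) is an assumption here, since the paper does not state its defuzzification singletons.

```python
import numpy as np

LABELS = ["NB", "NS", "ZE", "PS", "PB"]
# Rule table for Δk_d as given in Tab.2 (rows indexed by e, columns by de)
RULES_KD = [["PB", "PB", "PS", "PS", "ZE"],
            ["PB", "PS", "PS", "ZE", "NS"],
            ["PS", "PS", "ZE", "NS", "NS"],
            ["PS", "ZE", "NS", "NS", "NB"],
            ["ZE", "NS", "NS", "NB", "NB"]]

def delta_kd(w_e, w_de, lo, hi):
    """Crisp Δk_d from memberships w_e[i], w_de[j] of e and de in the five
    terms NB..PB (e.g. computed with triangular MFs); the five outputs are
    mapped to evenly spaced singletons on [lo, hi] — an illustrative choice."""
    singletons = dict(zip(LABELS, np.linspace(lo, hi, 5)))
    num = sum(w_e[i] * w_de[j] * singletons[RULES_KD[i][j]]
              for i in range(5) for j in range(5))
    den = sum(w_e[i] * w_de[j] for i in range(5) for j in range(5))
    return num / den
```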
### Simplified Linear System Transfer Function Analysis
In this section, the system performance without FLRs is analyzed using the transfer function. By calculating the poles of the characteristic function of the resulting system, the stability analysis can be carried out. To do that, a nominal model with \(g=0\) is used in this section, which means the model is a simplified linear version of (3) without disturbance. That said, it can still render a good estimate of the original system dynamics, and even becomes an exact analysis if all nonlinear terms and unknown disturbances are properly compensated.
Combining (3), (12), (15) and (18), and setting \(d_{1}=0,d_{2}=0,g=0\), the whole system dynamics is
\[\begin{cases}\dot{x}_{1}=x_{2}\\ \dot{x}_{2}=-\frac{k}{I_{l}}(x_{1}-x_{3})\\ \dot{x}_{3}=x_{4}\\ \dot{x}_{4}=\frac{k_{p1}(k_{p2}+k)I_{l}}{kI_{m}}(x_{1d}-x_{1})+\frac{k_{d1}k_{p1}I_{l}}{kI_{m}}(\dot{x}_{1d}-x_{2})+\frac{(k_{p2}+k)(x_{1}-x_{3})}{I_{m}}-\frac{k_{d2}x_{4}}{I_{m}}-\frac{\mu x_{4}}{I_{m}}\end{cases} \tag{24}\]
\begin{table}
\begin{tabular}{c c c c c c} \hline e/ de & NB & NS & ZE & PS & PB \\ \hline NB & NB & NB & NS & NS & ZE \\ NS & NB & NS & ZE & PS & PS \\ PS & NS & ZE & PS & PS & PB \\ PB & ZE & PS & PS & PB & PB \\ \hline \end{tabular}
\end{table}
Table 1: Rule table for \(\Delta k_{p}\)
\begin{table}
\begin{tabular}{c c c c c c} \hline e/ de & NB & NS & ZE & PS & PB \\ \hline NB & PB & PB & PS & PS & ZE \\ NS & PB & PS & PS & ZE & NS \\ ZE & PS & PS & ZE & NS & NS \\ PS & PS & ZE & NS & NS & NB \\ PB & ZE & NS & NS & NB & NB \\ \hline \end{tabular}
\end{table}
Table 2: Rule table for \(\Delta k_{d}\)
Figure 6: Membership function of error
Figure 7: Membership function of velocity error
Assuming the initial values of all state variables are zero and taking the Laplace transformation of \((\dot{x}_{1},\dot{x}_{2})\) from (24):
\[s^{2}X_{1}(s)=-\frac{k}{I_{l}}(X_{1}(s)-X_{3}(s)), \tag{25}\]
where \(s\) is the Laplace variable, and \(X_{i}(s),i=1,2,3,4\) are the Laplace transforms of the corresponding variables \(x_{i}\). For simplicity, the argument \(s\) will be omitted, and \(X_{i}(s)\) will be written as \(X_{i}\). From (25), the following transfer function is derived:
\[\frac{X_{1}}{X_{3}}=\frac{k}{I_{l}s^{2}+k}. \tag{26}\]
Similarly, taking the Laplace transform of \((\dot{x}_{3},\dot{x}_{4})\) from (24):
\[s^{2}X_{3}=\frac{k_{p1}(k_{p2}+k)I_{l}}{kI_{m}}(X_{1d}-X_{1})+\frac{k_{d1}k_{p 1}I_{l}}{kI_{m}}s(X_{1d}-X_{1})+\frac{(k_{p2}+k)(X_{1}-X_{3})}{I_{m}}-\frac{k_{ d2}sX_{3}}{I_{m}}-\frac{\mu sX_{3}}{I_{m}} \tag{27}\]
Merging similar terms and moving all \(X_{3}\) terms to the left:
\[X_{3}=\frac{\frac{k_{p1}(k_{p2}+k+sk_{d1})I_{l}}{k}(X_{1d}-X_{1})+(k_{p2}+k)X_{1}}{I_{m}s^{2}+(k_{d2}+\mu)s+(k_{p2}+k)} \tag{28}\]
Multiplying (26) by (28):
\[X_{1}=\frac{k_{p1}(k_{p2}+k+sk_{d1})I_{l}(X_{1d}-X_{1})+k(k_{p2}+k)X_{1}}{[I_{m}s^{2}+(k_{d2}+\mu)s+(k_{p2}+k)](I_{l}s^{2}+k)} \tag{29}\]
Rearranging (29) yields the transfer function
\[\frac{X_{1}}{X_{1d}}=\frac{k_{p1}(k_{p2}+k+sk_{d1})}{I_{m}I_{l}s^{4}+(I_{l}k_{d2}+I_{l}\mu)s^{3}+[I_{m}k+I_{l}(k_{p2}+k)]s^{2}+(k\mu+kk_{d2}+k_{p1}k_{d1})s+k_{p1}(k_{p2}+k)} \tag{30}\]
Eq. (30) depicts the response of \(x_{1}\) to the reference signal \(x_{1d}\). The denominator is of \(4^{th}\) order, and the stability is determined by the poles of the transfer function (30). Since the poles depend on all the parameters of the system, they will be calculated numerically in Sec.4.3.1.
### Stability Analysis Using Jacobian Matrix
In Sec.3.4, transfer function analysis was implemented on a simplified linear model. Although that system turns out to be stable, the analysis neglects the nonlinearity and disturbance. Therefore, this section proves that our proposed controller is asymptotically stable even with nonlinearity and disturbance, given that the disturbances satisfy certain conditions. Zhao et al. have proved that global asymptotic stability of a general uncertain \(2^{nd}\)-order dynamic system can be achieved under certain assumptions [16]. Our analysis here is an extension of theirs from \(2^{nd}\)-order to \(4^{th}\)-order dynamics.
Firstly, the error equations are defined:
\[\begin{cases}e_{1}=x_{1d}-x_{1}\\ e_{2}=\dot{x}_{1d}-\dot{x}_{1}=\dot{x}_{1d}-x_{2}\\ e_{3}=x_{3d}-x_{3}=\frac{u_{PD1}I_{l}}{k}+x_{1}+\frac{mglcosx_{1}}{k}-x_{3}\\ e_{4}=\dot{x}_{3d}-\dot{x}_{3}=\dot{x}_{3d}-x_{4}\end{cases} \tag{31}\]
Integrating (12) and (18) into (31):
\[\begin{cases}e_{1}=x_{1d}-x_{1}\\ e_{2}=\dot{x}_{1d}-x_{2}\\ e_{3}=\frac{(k_{p1}e_{1}+k_{d1}e_{2})I_{l}}{k}+x_{1}+\frac{mglcosx_{1}}{k}-x_{3} \\ e_{4}=-x_{4}\end{cases} \tag{32}\]
**Definition 1**: For a general second-order dynamic system with disturbance
\[\begin{cases}\dot{x}_{1}=x_{2}\\ \dot{x}_{2}=f(x_{1},x_{2},t)+u,\end{cases} \tag{33}\]
a special functional space is defined as follow:
\[\mathcal{F}_{L_{1},L_{2}}(x_{1},x_{2})=\Big{\{}f\in C^{1}(\mathbb{R}^{2}\times\mathbb{R}^{+})\,\Big{|}\,\big{|}\frac{\partial f}{\partial x_{1}}\big{|}\leq L_{1},\big{|}\frac{\partial f}{\partial x_{2}}\big{|}\leq L_{2},\forall x_{1},x_{2}\in\mathbb{R},\forall t\in\mathbb{R}^{+}\Big{\}}, \tag{34}\]
where \(L_{1},L_{2}\) are positive constants, and \(C^{1}(\mathbb{R}^{2}\times\mathbb{R}^{+})\) denotes the space of functions mapping \(\mathbb{R}^{2}\times\mathbb{R}^{+}\) to \(\mathbb{R}\) that are locally Lipschitz continuous in \((x_{1},x_{2})\) uniformly in \(t\), and piecewise continuous in \(t\).
**Assumption 1**: There exist positive constants \(L_{11},L_{12},L_{21},L_{22}\) such that the following hold:
\[d_{1}\in\mathcal{F}_{L_{11},L_{12}}(x_{1},x_{2}) \tag{35}\] \[d_{2}\in\mathcal{F}_{L_{21},L_{22}}(x_{3},x_{4}) \tag{36}\]
**Theorem 1**: For the class of \(4^{th}\)-order dynamic systems (3), using the controller presented in (15) and (18), the system achieves local asymptotic stability under Assumption 1 if the following conditions are satisfied:
\[k_{d1}>L_{12},k_{p1}>L_{11},\frac{\mu+k_{d2}}{I_{m}}>L_{22},\frac{k+k_{p2}}{I _{m}}>L_{21} \tag{37}\]
**Proof**. Taking the derivative of (32) and integrating into (3):
\[\begin{cases}\dot{e}_{1}=e_{2}\\ \dot{e}_{2}=\ddot{x}_{1d}+\frac{mgl}{I_{l}}cosx_{1}+\frac{k}{I_{l}}(x_{1}-x_{ 3})-d_{1}\\ \dot{e}_{3}=e_{4}\\ \dot{e}_{4}=-\dot{x}_{4}=\frac{\mu}{I_{m}}x_{4}-\frac{(k_{p2}+k)e_{3}+k_{d2}e_{ 4}}{I_{m}}-d_{2}\end{cases} \tag{38}\]
To remove all state variables \(\{x_{i},i=1,2,3,4\}\) in (38), combine it with (32), and we obtain the dynamics of the error vector.
\[\begin{cases}\dot{e}_{1}=e_{2}\\ \dot{e}_{2}=\ddot{x}_{1d}-k_{p1}e_{1}-k_{d1}e_{2}+\frac{k}{I_{l}}e_{3}-d_{1}\\ \dot{e}_{3}=e_{4}\\ \dot{e}_{4}=-\frac{\mu+k_{d2}}{I_{m}}e_{4}-\frac{k+k_{p2}}{I_{m}}e_{3}-d_{2} \end{cases} \tag{39}\]
Denote the vector field of (39) as \(F(e_{1},e_{2},e_{3},e_{4})\), i.e.
\[F(e_{1},e_{2},e_{3},e_{4})=\left[\begin{array}{cc}e_{2}\\ \ddot{x}_{1d}-k_{p1}e_{1}-k_{d1}e_{2}+\frac{k}{I_{l}}e_{3}-d_{1}\\ e_{4}\\ -\frac{\mu+k_{d2}}{I_{m}}e_{4}-\frac{k+k_{p2}}{I_{m}}e_{3}-d_{2}\end{array} \right] \tag{40}\]
Then the Jacobian matrix of \(F(e_{1},e_{2},e_{3},e_{4})\) is
\[DF(e_{1},e_{2},e_{3},e_{4})=\left[\begin{array}{cccc}0&1&0&0\\ \frac{\partial\ddot{x}_{1d}}{\partial e_{1}}-\frac{\partial d_{1}}{\partial e_{1}}-k_{p1}&\frac{\partial\ddot{x}_{1d}}{\partial e_{2}}-\frac{\partial d_{1}}{\partial e_{2}}-k_{d1}&\frac{\partial\ddot{x}_{1d}}{\partial e_{3}}+\frac{k}{I_{l}}&\frac{\partial\ddot{x}_{1d}}{\partial e_{4}}\\ 0&0&0&1\\ 0&0&-\frac{k+k_{p2}}{I_{m}}-\frac{\partial d_{2}}{\partial e_{3}}&-\frac{\mu+k_{d2}}{I_{m}}-\frac{\partial d_{2}}{\partial e_{4}}\end{array}\right] \tag{41}\]
Usually, the reference signal is not dependent on the state variables, but only on time \(t\). Therefore, (41) can be simplified to
\[DF(e_{1},e_{2},e_{3},e_{4})=\left[\begin{array}{cccc}0&1&0&0\\ -\frac{\partial d_{1}}{\partial e_{1}}-k_{p1}&-\frac{\partial d_{1}}{\partial e _{2}}-k_{d1}&\frac{k}{I_{l}}&0\\ 0&0&0&1\\ 0&0&-\frac{k+k_{p2}}{I_{m}}-\frac{\partial d_{2}}{\partial e_{3}}&-\frac{\mu+ k_{d2}}{I_{m}}-\frac{\partial d_{2}}{\partial e_{4}}\end{array}\right] \tag{42}\]
The eigenvalues of (42) have closed-form solutions:
\[\lambda_{1}=-\frac{1}{2}\frac{\partial d_{1}}{\partial e_{2}}-\frac{1}{2}k_{d1 }-\frac{1}{2}\sqrt{\Big{[}(\frac{\partial d_{1}}{\partial e_{2}}+k_{d1})^{2}-4( \frac{\partial d_{1}}{\partial e_{1}}+k_{p1})\Big{]}} \tag{43}\]
\[\lambda_{2}=-\frac{1}{2}\frac{\partial d_{1}}{\partial e_{2}}-\frac{1}{2}k_{d1} +\frac{1}{2}\sqrt{\left[(\frac{\partial d_{1}}{\partial e_{2}}+k_{d1})^{2}-4( \frac{\partial d_{1}}{\partial e_{1}}+k_{p1})\right]} \tag{44}\]
\[\lambda_{3}=-\frac{1}{2}\frac{\mu+k_{d2}}{I_{m}}-\frac{1}{2}\frac{\partial d_{2 }}{\partial e_{4}}-\frac{1}{2}\sqrt{\left[(\frac{\mu+k_{d2}}{I_{m}}+\frac{ \partial d_{2}}{\partial e_{4}})^{2}-4(\frac{k+k_{p2}}{I_{m}}+\frac{\partial d _{2}}{\partial e_{3}})\right]} \tag{45}\]
\[\lambda_{4}=-\frac{1}{2}\frac{\mu+k_{d2}}{I_{m}}-\frac{1}{2}\frac{\partial d_{ 2}}{\partial e_{4}}+\frac{1}{2}\sqrt{\left[(\frac{\mu+k_{d2}}{I_{m}}+\frac{ \partial d_{2}}{\partial e_{4}})^{2}-4(\frac{k+k_{p2}}{I_{m}}+\frac{\partial d _{2}}{\partial e_{3}})\right]} \tag{46}\]
If (37) and Assumption 1 are satisfied, all four eigenvalues have negative real parts. Note that \([e_{1},e_{2},e_{3},e_{4}]^{T}=[0,0,0,0]^{T}\) is obviously the set point of (39). Therefore, the system is asymptotically stable [18], converging to \([e_{1},e_{2},e_{3},e_{4}]^{T}=[0,0,0,0]^{T}\). In other words, all orbits starting close enough to the set point tend asymptotically to it.
**Observation 3**: It is tempting to extend the conclusion to global asymptotic stability according to the Markus-Yamabe theorem [19]. Nevertheless, the Markus-Yamabe theorem currently only holds for \(2^{nd}\)-order systems, and counterexamples have been found in higher-order systems [20]. The extent of the set of initial points that ensures asymptotic stability can be determined through numerical experiments. Nevertheless, one interesting fact about (42) is that the existence of the coupling term \(\frac{k}{I_{l}}\) does not affect the eigenvalues. Namely, the stability condition of this coupled system is the same as if the coupling between sub-plant1 and sub-plant2 disappeared and they were totally decoupled.
Further, the controller with the FLRs integrated can be analyzed under the same assumptions and conditions. Due to the rationale of fuzzy logic systems, the outputs of an FLR are limited by the lower and upper bounds shown in (20)-(23). Therefore, the stability conditions become
\[k^{\prime}_{d1}+\Delta k^{l}_{d1}>L_{12} \tag{47}\]
\[k^{\prime}_{p1}+\Delta k^{l}_{p1}>L_{11} \tag{48}\]
\[\frac{\mu+k^{\prime}_{d2}+\Delta k^{l}_{d2}}{I_{m}}>L_{22} \tag{49}\]
\[\frac{k+k^{\prime}_{p2}+\Delta k^{l}_{p2}}{I_{m}}>L_{21} \tag{50}\]
where \(k^{\prime}_{p1},k^{\prime}_{d1},k^{\prime}_{p2},k^{\prime}_{d2}\) are the static parameters of the cascaded PD controller without FLRs.
**Observation 4**: The stability conditions for the FLR-enhanced PD control in (47)-(50) constitute a Membership-Function-Independent (MFI) method [21, 22] using Membership Function Boundary (MFB) techniques [23, 24]. Although these conditions are nearly "free" to derive, they come with a great extent of conservativeness [25]. By considering the internal dynamics of the fuzzy logic systems, the stability conditions can be relaxed by introducing slack matrices.
## 4 Simulation
This section introduces the implementation and results in simulation. Firstly, the necessary parameters for simulation are specified. Secondly, the BO tuning process is detailed, which renders the parameters of the controllers. Next, the numerical results as well as the stability analysis are presented. Further, ablation experiments are implemented to illustrate the contribution of each component of the controller.
### Parameters Specification
Tab.3 lists the parameters of the dynamic model. The values of these parameters are taken from a real physical machine [26]. Tab.4 gives the basic settings of the simulation environment. The initial values are \(\{x_{i}=0|i=1,2,3,4\}\). Two reference signals are implemented. The first one is a square-wave signal, defined as
\[x_{1d}(t)=\begin{cases}1,\text{if t < 10 sec}\\ 0,\text{otherwise}\end{cases},\dot{x}_{1d}(t)=0. \tag{51}\]
The other one is the sine-wave target \([x_{1d}(t),\dot{x}_{1d}(t)]^{T}=[sin(t),cos(t)]^{T}\). The total disturbances \(d_{1},d_{2}\) are set as random values in \([-10,10]rad/s^{2}\).
### BO Tuning Procedure and Results
BO is implemented in this paper to achieve data-efficient tuning. The advantage of BO is that it can find a sub-optimal solution quickly. In this paper, BO is first utilized to tune the parameters of the two PD controllers jointly, without the fuzzy logic regulators. The cost function is the negative sum of absolute angular errors, and the square-wave signal (51) is used as reference. Therefore, the task of BO can be formalized as:
\[\max_{k_{p1},k_{d1},k_{p2},k_{d2}}-\sum_{timestep=0}^{200}|e_{1}| \tag{52}\]
The searching ranges are limited to be
\[k_{p1},k_{p2}\in[0,150],k_{d1},k_{d2}\in[0,30]. \tag{53}\]
BO is run for 150 episodes, and Fig.8 records the highest cost encountered up to each episode. Upon retrieving the "best" set of parameters for the PD controllers, we use that set of parameters as a baseline and tune the upper/lower bounds of the fuzzy logic regulators. The FLRs are responsible for adjusting 4 parameters, each with one upper bound and one lower bound; therefore, BO tuning for the FLRs involves eight parameters. Fig.9 records the highest cost encountered up to each episode of FLR tuning. At last, the resulting parameters are summarized in Tab.5. It should be noticed that in practice BO can quickly converge to satisfying performance within a few iterations. This is valuable in practice, since it means a satisfying set of parameters is easily accessible with little damage to the devices.
### Results and Evaluation
#### 4.3.1 Stability Analysis of Cascaded PD Controller
With the parameters given in Tab.5 and Tab.3, the stability analysis can be carried out both in terms of the transfer function and the Jacobian matrix. In this paper, only the cascaded PD controller without fuzzy logic regulators is analyzed here, since the stability analysis of fuzzy logic systems themselves is still an active research topic. The fuzzy cascaded PD controller will be investigated numerically in the following sections.
Substituting all the parameters into (30), we have the final transfer function of \(x_{1}\) w.r.t. \(x_{1d}\):
\[\frac{X_{1}}{X_{1d}}=\frac{531.2942s+12760.455}{0.3s^{4}+18.636s^{3}+274.5s^{2} +1624.5846s+12676.455}, \tag{54}\]
of which the poles are solved as
\[p_{1}=-43.3921, \tag{55}\]
\begin{table}
\begin{tabular}{c c} \hline Description & Values \\ \hline Simulation timestep & \(0.005sec\) \\ ODE solver & Forward Euler \\ Control timestep & \(0.05sec\) \\ Episode & \(10sec\) \\ Env & OpenAI Gym [27] \\ \hline \end{tabular}
\end{table}
Table 4: Parameter of Simulation Environment
\begin{table}
\begin{tabular}{c c c} \hline Parameters & Description & Values \\ \hline \(g\) & Gravity acceleration & \(9.8m/sec^{2}\) \\ \(m\) & Mass of the link & \(1.2756kg\) \\ \(l\) & Length of the link & \(0.4m\) \\ \(I_{l}\) & Inertia of link & \(1kg\ m^{2}\) \\ \(I_{m}\) & Inertia of motor & \(0.3kg\ m^{2}\) \\ \(k\) & Elastic stiffness of the flexible link & \(100Nm\) \\ \(\mu\) & Viscosity & \(0.1kg\ m^{2}/sec\) \\ \hline \end{tabular}
\end{table}
Table 3: Parameter of Dynamic Model
\[p_{2}=-16.1253, \tag{56}\] \[p_{3}=-1.3013+7.6613i,\] (57) \[p_{4}=-1.3013-7.6613i. \tag{58}\]
Evidently, because all poles lie to the left of the imaginary axis, the system is stable. Further, two of the poles lie on the real axis.
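The pole locations in (55)-(58) are easy to verify numerically; the short check below, using the denominator coefficients of (54), is a sketch of that verification.

```python
import numpy as np

# Denominator of the transfer function (54), highest power of s first
den = [0.3, 18.636, 274.5, 1624.5846, 12676.455]
poles = np.roots(den)
print(poles)                    # compare with (55)-(58)
print(np.all(poles.real < 0))   # True -> all poles in the left half-plane
```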
As for the Jacobian matrix analysis, substitute all the parameters into (41). Note that the reference signal depends only on time \(t\), so \(\frac{\partial\ddot{x}_{1d}}{\partial e_{i}}=0,i=1,2,3,4\). Similarly, the disturbances are random values in this paper, therefore \(\frac{\partial d_{i}}{\partial e_{j}}=0,i=1,2,\,j=1,2,3,4\). Finally, the Jacobian matrix becomes
\[DF(e_{1},e_{2},e_{3},e_{4})=\left[\begin{array}{cccc}0&1&0&0\\ -52.19&-10&100&0\\ 0&0&0&1\\ 0&0&-815&-29.12\end{array}\right], \tag{59}\]
and the eigenvalues are
\[\lambda_{1}=-5.0000+5.2144i, \tag{60}\] \[\lambda_{2}=-5.0000-5.2144i,\] (61) \[\lambda_{3}=-14.5600+24.5562i, \tag{62}\]
\begin{table}
\begin{tabular}{c c c} \hline Parameters & Description & Values \\ \hline \(k_{p1}\) & Proportional gain for PD1 & \(52.19\) \\ \(k_{d1}\) & Derivative gain for PD1 & \(10.18\) \\ \(k_{p2}\) & Proportional gain for PD2 & \(144.5\) \\ \(k_{d2}\) & Derivative gain for PD2 & \(8.636\) \\ \(\Delta k_{p1}^{u}\) & Upper bound of FLR on \(k_{p1}\) & \(15.27\) \\ \(\Delta k_{p1}^{l}\) & Lower bound of FLR on \(k_{p1}\) & \(-11.61\) \\ \(\Delta k_{d1}^{u}\) & Upper bound of FLR on \(k_{d1}\) & \(0.1\) \\ \(\Delta k_{d1}^{l}\) & Lower bound of FLR on \(k_{d1}\) & \(-3.228\) \\ \(\Delta k_{p2}^{u}\) & Upper bound of FLR on \(k_{p2}\) & \(2.997\) \\ \(\Delta k_{p2}^{l}\) & Lower bound of FLR on \(k_{p2}\) & \(-16.94\) \\ \(\Delta k_{d2}^{u}\) & Upper bound of FLR on \(k_{d2}\) & \(0.9537\) \\ \(\Delta k_{d2}^{l}\) & Lower bound of FLR on \(k_{d2}\) & \(-0.1\) \\ \hline \end{tabular}
\end{table}
Table 5: Tuning Result of BO
Figure 8: Joint tuning of PD1 and PD2
Figure 9: Tuning of FLR
\[\lambda_{4}=-14.5600-24.5562i. \tag{63}\]
Similarly, all the eigenvalues of \(DF\) have negative real parts, which ensures that the system is asymptotically stable. After introducing the FLRs, in the worst-case scenario, the Jacobian matrix becomes
\[DF(e_{1},e_{2},e_{3},e_{4})=\left[\begin{array}{cccc}0&1&0&0\\ -40.58&-6.772&100&0\\ 0&0&0&1\\ 0&0&-758.53&-28.79\end{array}\right], \tag{64}\]
of which the eigenvalues are
\[\lambda_{1}=-3.3860+5.3958i, \tag{65}\]
\[\lambda_{2}=-3.3860-5.3958i, \tag{66}\]
\[\lambda_{3}=-14.3950+23.4801i, \tag{67}\]
\[\lambda_{4}=-14.3950-23.4801i. \tag{68}\]
all of which have negative real parts. This shows that the stability conditions (47)-(50) are satisfied.
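Both eigenvalue computations, (60)-(63) and (65)-(68), can be reproduced with a few lines of numerical code, for example:

```python
import numpy as np

# Jacobians (59) (cascaded PD) and (64) (worst-case FLR bounds)
DF_pd = np.array([[0.0, 1, 0, 0],
                  [-52.19, -10, 100, 0],
                  [0, 0, 0, 1],
                  [0, 0, -815, -29.12]])
DF_flr = np.array([[0.0, 1, 0, 0],
                   [-40.58, -6.772, 100, 0],
                   [0, 0, 0, 1],
                   [0, 0, -758.53, -28.79]])
for DF in (DF_pd, DF_flr):
    eig = np.linalg.eigvals(DF)
    print(eig, np.all(eig.real < 0))   # negative real parts -> asymptotically stable
```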
#### 4.3.2 Square-wave Signal Tracking
The main results of square-wave signal tracking are presented in Fig.10 to Fig.15. Fig.10 and Fig.11 show the \(x_{1}\) output and the error profile respectively. In the legend, "fuzzyPD" means the fuzzy cascaded PD controller proposed in this paper, and "PD" represents the conventional cascaded PD without fuzzy logic regulators. The main difference is that "fuzzyPD" has a shorter settling time, at the cost of a \(3.68\%\) overshoot. Besides, it is noticeable that the conventional cascaded PD controller can already achieve smooth motion without overshoot. Fig.12 and Fig.13 show the \(x_{3}\) outputs and references. Both controllers show oscillation during the reaching phase, but the trajectory of "fuzzyPD" is smoother in comparison. Besides, near the equilibrium, the \(x_{3}\) reference of "fuzzyPD" is larger than that of "PD", which greatly reduces the equilibrium error from \(3.9\times 10^{-5}\) to \(-7.15\times 10^{-7}\). Fig.14 and Fig.15 show the torque and FLR outputs. The profile of the torque follows the same trend as the \(x_{3}\) outputs: similar oscillations are witnessed during the reaching phase, and a higher torque with "fuzzyPD" near the equilibrium. In Fig.15, one interesting fact is that \(\Delta K_{p}\) and \(\Delta K_{d}\) have opposite rates of change: when \(\Delta K_{p}\) is increasing, \(\Delta K_{d}\) is decreasing. This coincides with the design of the fuzzy logic regulators. Also, the FLR for PD1 tries to increase the response speed by increasing \(\Delta K_{p1}\) and decreasing \(\Delta K_{d1}\), while it is just the opposite for PD2.
#### 4.3.3 Sine-wave Signal Tracking
The main results of sine-wave signal tracking are presented in Fig.16 to Fig.21. Fig.16 and Fig.17 show the \(x_{1}\) output and the error profile respectively. The tracking performance is satisfying for both controllers, but the error profile illustrates that "fuzzyPD" has an overall smaller error. Fig.18 and Fig.19 show the \(x_{3}\) outputs and references. As expected, \(x_{3}\) tracks its desired path relatively well. Fig.20 and Fig.21 show the torque and FLR outputs. While the torque of "PD" follows the trend of the sine wave, the torque of "fuzzyPD" shows a more complicated pattern. We believe this modulation helps "fuzzyPD" maintain low errors. As for the FLR outputs, because the tracking errors are not changing rapidly, the outputs of the FLRs are almost constant. Besides, their behavior is similar to that in the square-wave tracking task.
Figure 14: Torque input with square-wave reference
Figure 15: FLR output with square-wave reference
### Ablation experiment
In this section, some ablation experiments are implemented to investigate how each component of the proposed controller contributes to the final results. For simplicity, only the \(x_{1}\) output with the square-wave reference is illustrated.
Firstly, as a baseline, a single PD controller is tuned by BO. The resulting parameters are \([k_{p},k_{d}]=[117.0,29.99]\). The \(x_{1}\) output is shown in Fig.22. Obviously, the single PD controller behaves poorly here, with nearly constant-magnitude oscillation. This is understandable, since the single PD controller is in essence just a reduced-order controller.
Further, we implemented 4 ablation experiments, of which the outputs are depicted together in Fig.23, and the cost values are summarized in Tab.6. The cost is calculated following (52). "fuzzy+fuzzy" means two fuzzy PD controllers are used; "PD+PD" means two conventional PD controllers; "fuzzy+PD" represents fuzzy PD for sub-plant1 and |
2308.16888 | Evidence of fractal structures in hadrons | This study focuses on the presence of (multi)fractal structures in confined
hadronic matter through the momentum distributions of mesons produced in
proton-proton collisions between 23 GeV and 63 GeV. The analysis demonstrates
that the $q$-exponential behaviour of the particle momentum distributions is
consistent with fractal characteristics, exhibiting fractal structures in
confined hadronic matter with features similar to those observed in the
deconfined quark-gluon plasma (QGP) regime. Furthermore, the systematic
analysis of meson production in hadronic collisions at energies below 1 TeV
suggests that specific fractal parameters are universal, independently of
confinement or deconfinement, while others may be influenced by the quark
content of the produced meson. These results pave the way for further research
exploring the implications of fractal structures on various physical
distributions and offer insights into the nature of the phase transition
between confined and deconfined regimes. | Rafael P. Baptista, Lucas Q. Rocha, D. P. Menezes, Luis A. Trevisan, Constantino Tsallis, Airton Deppman | 2023-08-31T17:49:43Z | http://arxiv.org/abs/2308.16888v1 | # Evidence of fractal structures in hadrons
###### Abstract
This study focuses on the presence of (multi)fractal structures in confined hadronic matter through the momentum distributions of mesons produced in proton-proton collisions between 23 GeV and 63 GeV. The analysis demonstrates that the \(q\)-exponential behaviour of the particle momentum distributions is consistent with fractal characteristics, exhibiting fractal structures in confined hadronic matter with features similar to those observed in the deconfined quark-gluon plasma (QGP) regime. Furthermore, the systematic analysis of meson production in hadronic collisions at energies below 1 TeV suggests that specific fractal parameters are universal, independently of confinement or deconfinement, while others may be influenced by the quark content of the produced meson. These results pave the way
for further research exploring the implications of fractal structures on various physical distributions and offer insights into the nature of the phase transition between confined and deconfined regimes.
## 1 Introduction
The investigation of strongly interacting particles has always faced the challenge of dealing with regimes where perturbative QCD (pQCD) cannot provide accurate results. The structure of hadrons and the existence of quark-gluon plasma (QGP) are notorious examples of systems where non-perturbative QCD (npQCD) must be considered.
The large amount of data provided by high-energy colliders in the TeV energy region has revealed several aspects of strong interactions in npQCD. One of the most ubiquitous features is the q-exponential behaviour observed in particle momentum distributions [1, 2, 3, 4], suggesting that nonextensive statistical mechanics, based on the nonadditive entropy S\({}_{q}\) [5, 6], is the appropriate framework to study the complex system formed during collisions. The emergence of non-extensive statistics (or \(q\)-statistics for short) in Yang-Mills theories has been demonstrated in [7], which uses self-energy interactions and the renormalization group equation to show how thermofractal structures can emerge in the interacting fields. These thermofractal structures give rise to non-extensive statistics and \(q\)-exponential distributions [8].
Assuming that the statistical mechanics grounded on S\({}_{q}\) is the correct framework to study the thermodynamics of QCD systems, it is natural to extend it to describe high-energy processes as well. Hagedorn's Self-Consistent Thermodynamics [9] was generalized by incorporating non-extensive statistics [10], resulting in a critical temperature, similar to the original theory, but with a new formula for the hadron mass spectrum. The new formula provides a better description than the original Hagedorn formula, accurately describing the mass spectrum from the heaviest known hadrons down to the pion mass [11]. This result suggests the existence of fractal structures in hadrons beyond those already investigated in the deconfined regime. Could the confined quark matter inside hadrons show fractal aspects? The present work offers a systematic analysis of meson production in hadronic collisions at energies below 1 TeV to address this question. It will be shown that the momentum distributions of mesons produced in collisions in the range of 23 GeV to 63 GeV exhibit many features expected of thermofractals.
A by-product of this work is the reconciliation of QCD with the early theoretical approaches proposed in the 1960s by Chew, Frautschi [12] and Hagedorn, among others. The self-consistent calculation, also known as the bootstrap approach, for describing hadrons was an important line of research whose prominence diminished with the emphasis given to quarks and QCD. The understanding of fractal structures in Yang-Mills theories unifies the two approaches, clarifying and simplifying several aspects of Quantum Chromodynamics.
The results obtained in the present study unveil new possibilities for investigating the effects of fractal structures in hadronic processes. Experimental data on the hadron structure are produced at HERA, JLab, and NICA, among others, while the forthcoming Electron-Ion Colliders (EIC) offer promising prospects for experimentally probing the intricate nature of the hadronic structure [13, 14]. Consequently, the present work holds substantial relevance for the advancement of these colliders. Moreover, given the presence of fractal structures in the Quark-Gluon Plasma (QGP), the identification of similar structures in the hadronic phase may impose constraints on the confinement-deconfinement phase transition. Thus, the present findings should contribute significantly to the understanding of fundamental characteristics associated with hadronic matter.
Non-extensive statistics has already been used in hadronic gas models [15, 16, 17]. The present approach differs from previous works in its use of the running coupling and a microscopic calculation of the subprocess involved in hadron collisions.
## 2 Methods
For the investigation of the thermofractal structures in hadrons, this work analyses the momentum distributions of mesons produced in \(pp\)
Figure 1: Example of the simplest subprocess for meson production in \(pp\) collision.
collisions. The data are collected in the centre-of-mass frame at approximately zero rapidity, where the colliding protons have momentum \(P\) and total energy \(E=\sqrt{s}\)[18]. In the relevant subprocess, two quarks interact and form a new meson, while the other quarks are considered spectators. It is assumed that almost all the energy of the process is consumed in the production of the meson, so the momentum of the remaining system is negligible. This assumption limits, to some extent, the power of the present analysis, but it does not interfere with the pursued objectives. In this configuration, if \(\varepsilon\) is the produced meson energy and \(\varepsilon_{1}\), \(\varepsilon_{2}\) are the energies of the interacting quarks, then \(\varepsilon_{1}=\varepsilon_{2}=\varepsilon/2\).
Fig. 1 shows an example of the meson production process to assist the discussion of some relevant aspects that will affect the analysis of the experimental data. Since we are in the confined regime of the quark matter, the meson production involves at least three vertices. Using the results of the thermofractal theory for QCD, we have that each vertex involves a running coupling given by
\[g(\varepsilon)=G_{o}e_{q}(\varepsilon_{1},\lambda_{1};q)e_{q}(\varepsilon_{2}, \lambda_{2};q)\,, \tag{1}\]
where \(G_{o}\) is a constant associated with the maximum value of the coupling, the \(q\)-exponential distribution is given by
\[e_{q}(\varepsilon,\lambda;q)=\left[1+(q-1)\frac{\varepsilon}{\lambda}\right] ^{\frac{-1}{q-1}}\,, \tag{2}\]
with \(\varepsilon=\varepsilon_{1}+\varepsilon_{2}\), where \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are the energies carried by the interacting partons in each proton, represented by the indexes 1 and 2, \(\lambda\) is a scale parameter (effective temperature) and \(q\) is the entropic index associated with S\({}_{q}\).
For fractal structures, one expects to observe the characteristic \(q\)-exponential distributions due to the form of the running coupling. Besides the form of the distribution, it is expected that [7]
\[\frac{1}{q-1}=\frac{11}{3}N_{c}-\frac{4}{3}\frac{N_{f}}{2}\,, \tag{3}\]
where \(N_{c}\) and \(N_{f}\) are, respectively, the numbers of colours and flavours. With \(N_{c}=3\) and \(N_{f}=6\), the formula above gives \(q=8/7\simeq 1.14\), in agreement with high-energy experimental data analyses.
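A small numerical sketch of Eqs. (2) and (3) may be written as follows (function names are illustrative):

```python
import numpy as np

def e_q(eps, lam, q):
    """q-exponential distribution of Eq. (2)."""
    return (1.0 + (q - 1.0) * eps / lam) ** (-1.0 / (q - 1.0))

# Entropic index predicted by Eq. (3) for N_c = 3 colours and N_f = 6 flavours
N_c, N_f = 3, 6
q = 1.0 + 1.0 / (11.0 / 3.0 * N_c - 4.0 / 3.0 * N_f / 2.0)
print(q)   # 8/7 ≈ 1.1429
```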
This work performs three systematic analysis scenarios, as discussed below, in which the experimental data are fitted by three different models of increasing physical significance (a fitting sketch is given after the list). The momentum distribution is given by:
1. A single \(q\)-exponential function, with \[\frac{d^{3}\sigma}{dp^{3}}=\sigma_{o}e_{q}(\varepsilon,\lambda;q)\,,\] (4) where \(q\), \(\sigma_{0}\), and \(\lambda\) are adjustable parameters.
2. The product of three \(q\)-exponential functions with \[\frac{d^{3}\sigma}{dp^{3}}=\sigma_{o}e_{q}(\varepsilon_{1},\Lambda;\bar{ \bar{q}})e_{q}(\varepsilon_{2},\Lambda;\bar{\bar{q}})e_{q}(\varepsilon,\lambda ;q)\,,\] (5) where \(\varepsilon_{1}=\varepsilon_{2}=\varepsilon/2\), \((\bar{q}-1)^{-1}=[(q-1)/q]^{-1}\), and the parameters \(\sigma_{o}\), \(\lambda\) and \(\Lambda\) are adjusted. The parameter \(\bar{q}\) takes into account that the parton momentum distributions in the hadron emerge in a complex interaction of an equilibrated system [19]. The use of \(\bar{q}\) can be controversial. A thorough discussion about this point can be found in Refs. [1, 3, 4, 20], but it can be advanced that results very similar to those shown here can be obtained by using \(\bar{q}=q\).
3. The product of three \(q\)-exponential functions, as in Scenario 2, but where \(\lambda\) is fixed at a value that will be found in Scenario 2 and the parameters \(\sigma_{o}\) and \(\Lambda\) are adjusted;
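A sketch of the Scenario 1 fit, reusing the `e_q` function defined above, could use a standard least-squares routine; the data arrays below are synthetic placeholders standing in for the measured \(pp\) spectra analysed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def scenario1(eps, sigma0, lam, q):
    """Invariant cross-section of Eq. (4): a single q-exponential."""
    return sigma0 * e_q(eps, lam, q)

# Placeholder data: meson energies and d^3σ/dp^3 values (synthetic here)
eps_data = np.linspace(0.2, 3.0, 30)
xsec_data = scenario1(eps_data, 1.0, 0.12, 1.07)

popt, pcov = curve_fit(scenario1, eps_data, xsec_data, p0=[1.0, 0.12, 1.07])
sigma0, lam, q = popt       # pcov encodes the parameter correlations
```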
The accumulated experience of fitting \(q\)-exponential distributions to high-energy data in the deconfined regime has underlined the importance of addressing parameter correlations to accurately extract pertinent physical information [11, 21, 22]. This study incorporates the methodologies established in those analyses.
## 3 Results
This section presents the results for each of the scenarios previously delineated, with a limited, preliminary discussion for each scenario. A global discussion will be presented in the next section.
### Scenario 1
In the first scenario of the analysis, the best fits to the set of experimental data are displayed in Fig. 2. The single \(q\)-exponential function fits the experimental data with the parameters presented in Table 1. The overall agreement is good, with reduced chi-squares in the range of 1 to 4. The parameters \(q\) and \(\lambda\) reported in that table are plotted as a function of the collision energy in Fig. 3. The worst fit of the model to the experimental data happens for \(K^{+}\) at 63 GeV, and the same relative result will be observed in the other scenarios of the analysis. By simple visual inspection, it is possible to observe that the data set for this case departs from the general shape observed in all the other cases. This suggests that the experimental analysis and data reduction could be revisited.
The values of both \(\bar{q}\) and \(\lambda\) appear in the ranges \(1.04<\bar{q}<1.10\) and \(0.10<\lambda<0.15\), except for one set of data on kaon production. The mean values are \(\bar{q}=1.069\pm 0.004\) and \(\lambda=0.121\pm 0.003\). These results show that fractal structures may be present in the hadron structure, but the values of both parameters are slightly different from those found in QGP multiparticle production, where \(q=1.14\) and \(\lambda\) is close to the pion mass, \(m_{\pi}=140\) MeV. Additional information on other aspects of the analysis can be found in the Supplementary Material.
The main difference between the process analyzed here and multiparticle production at higher energies is the deconfined regime of quarks and gluons, which determines the minimum number of vertices in the process. While in the QGP the partonic production involves a single vertex, in hadronic collisions the confinement of quarks and gluons imposes a minimum of two vertices for any interaction in order to keep quarks and gluons confined. In the meson production studied in the present work, a minimum of three vertices is necessary, as depicted in Fig. 1: one related to the meson production, where the partons produced in each of the interacting protons merge to form the new particle, and the other two at the independent interactions in each of the colliding protons, where the partons originate.
The presence of the independent vertices makes all the difference in the process. Roughly speaking, each vertex contributes a q-exponential, so the expected exponent in the function is twice that for a single vertex; that is, for hadronic interactions, we expect a value \(\bar{q}\) such that
\[\frac{1}{\bar{q}-1}=\frac{2}{q-1}\,, \tag{6}\]
and using the expected value \(q=1.14\), it results that \(\bar{q}=1.07\), in agreement with the result of the analysis in Scenario 1 reported in Table 1.
The fitting of the q-exponential function faces some challenges associated with correlations in the parameter space. As an example, Fig. 4 presents the ellipses corresponding to a constant chi-square value. Most of the ellipses have axes that are not parallel to the parameter axes, reflecting the existence of the correlations. For comparison, the value \(q=1.07\) is indicated by a vertical line. Observe that the ellipses fall around the expected theoretical value, except in the case of the data set for \(K^{+}\) at 63 GeV.
The considerations made above still need to be clarified in many respects, but they indicate that thermofractal structures manifest themselves in confined quark matter. The main shortcoming of the present analysis is the use of a single q-exponential: if the explanation for the different value of \(q\) is the presence of independent vertices, why should both vertices depend on \(\varepsilon\) and not on \(\varepsilon_{1}\) or \(\varepsilon_{2}\)? Aside from this, it is not evident that the same data that were adjusted by a single q-exponential function can be fitted equally well by a product of q-exponential functions. This will be investigated in the following scenarios.
### Scenario 2
To describe the subprocess depicted in Fig. 1 using the thermofractal approach, we need to use the running coupling for each of the vertices in the process. This can be done by considering that in proton \(P_{1}\) one parton-antiparton pair is created, contributing a term \(e_{q}(\varepsilon_{1},\lambda_{1},\bar{q})\cdot e_{q}(\bar{\varepsilon}_{1},\lambda_{1},q)\) to the cross-section, and the same happens for proton \(P_{2}\), contributing \(e_{q}(\varepsilon_{2},\lambda_{2},\bar{q})\cdot e_{q}(\bar{\varepsilon}_{2},\lambda_{2},q)\). In the third subprocess, two partons merge to form a meson, reducing the number of q-exponentials since \(e_{q}(\bar{\varepsilon}_{1},\lambda_{1},q)e_{q}(\bar{\varepsilon}_{2},\lambda_{2},q)=e_{q}(\varepsilon,\lambda,q)\). The parameter \(\lambda\) is associated with the energy scale involved in the meson production, while the parameter \(\Lambda\) is associated with the energy scale of the colliding hadrons. Due to energy-momentum conservation, the constraint \(\bar{\varepsilon}_{1(2)}=\varepsilon_{1(2)}\) must be satisfied. The use of the index \(\bar{q}\) for the vertices in the colliding protons is due to the fact that these vertices occur in the complex environment of the hadron structure, which is supposed to be in equilibrium [19].
With the considerations made above, the differential cross-section will be proportional to the product of three q-exponential functions, i.e.,
\[\frac{d^{3}\sigma}{dp^{3}}=\sigma_{o}e_{q}(\varepsilon_{1},\lambda_{1},\bar{q })e_{q}(\varepsilon_{2},\lambda_{2},\bar{q})e_{q}(\varepsilon,\lambda,q)\,. \tag{7}\]
The parameters \(\lambda_{1}\) and \(\lambda_{2}\) are related to the internal characteristics of the colliding protons. Due to the symmetry between the two protons involved in the process, \(\lambda_{1}=\lambda_{2}=\Lambda\), and the expression above reduces to the one proposed for Scenario 2 of the analysis, given in Eq. 5.
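A minimal sketch of this model, reusing `eq_exp` from the snippet above, is given next. How \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are obtained from the measured meson energy is fixed by the kinematics of the analysis; the symmetric choice \(\varepsilon_{1}=\varepsilon_{2}=\varepsilon/2\) below is an assumption of this illustration only.

```python
def scenario2(eps, sigma0, Lam, lam, q=1.14, qbar=1.07):
    # Eq. (7) with lambda_1 = lambda_2 = Lam; q is held at the QGP value and
    # qbar at the value dictated by Eq. (6).
    # eps_1 = eps_2 = eps/2 is an illustrative assumption of this sketch.
    e1 = e2 = eps / 2.0
    return (sigma0 * eq_exp(e1, Lam, qbar) * eq_exp(e2, Lam, qbar)
            * eq_exp(eps, lam, q))

def scenario3(eps, sigma0, Lam):
    # Scenario 3: the same model with lambda frozen at the pion mass (GeV).
    return scenario2(eps, sigma0, Lam, lam=0.140)
```

Fitting `scenario2` with \(\sigma_{0}\), \(\Lambda\) and \(\lambda\) free reproduces the setting of this subsection (since `curve_fit` leaves the defaulted `q` and `qbar` untouched when `p0` has three entries), while `scenario3` anticipates the two-parameter fit of the next one.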
The same sets of data were adjusted by the cross-section in Eq. 7 with \(q=1.14\) and the parameters \(\sigma_{0}\), \(\Lambda\) and \(\lambda\) adjustable. The results of the analysis for Scenario 2 are displayed in Fig. 5. The formula fits the data well, resulting in a fairly low chi-square in most cases, as reported in Table 2. This is in itself a promising result, since the physical role of each term in the cross-section can be clearly understood. Reinforcing the physical significance of the results, the values of \(\Lambda\) and \(\lambda\) can be compared to those found in higher energy processes. The best-fit values for the scale parameters are displayed in Fig. 7.
For \(\lambda\) and \(\Lambda\) there is no evident dependence on the produced meson mass. The values of \(\lambda\) fall in the range between \(0.05~{}GeV\) and \(0.25~{}GeV\), while the values of \(\Lambda\) vary over a broader range, from \(0.2~{}GeV\) to \(1.4~{}GeV\). Both parameters present some dependence on the collision energy, which turns out to be stronger for \(\Lambda\) than for \(\lambda\).
It is important to notice that, as happens in high-energy studies, the fitting process for the hadronic regime performed here needs to address the difficulty of correlations among the adjustable parameters [11, 21, 22]. The correlations are illustrated in the \(\lambda\) vs \(\Lambda\) projection of the parameter space shown in Fig. 6, where the typical ellipses already observed in analyses of multiparticle production can be seen. The average value \(\langle\lambda\rangle=0.122\pm 0.007~{}GeV\) is reasonable for almost all sets analyzed and is close to the average values obtained in Scenario 1, whilst the values of \(\Lambda\) are spread over a larger range around the average \(\langle\Lambda\rangle=0.55\pm 0.06~{}GeV\). Similar correlations exist between other pairs of parameters, leading to a small variation of the chi-square for the best fit when some of the parameters are fixed at reasonable values, as observed in Tables 1 to 3. Additional information on the correlations in the parameter space can be found in the Supplementary Material. Therefore, to simplify the analysis, it is reasonable to assume that \(\lambda\) is constant. This modification corresponds to Scenario 3 of the analysis.
The average \(\langle\lambda\rangle\) is close to the pion mass, so the parameter \(\lambda\) will be fixed to the pion mass, \(m_{\pi}=0.140~{}GeV\). With this choice the analysis departs from a purely statistical fit, but gains in the physical interpretation of the result.
At this point, it is important to stress that other possibilities of analysis exist; these are left for future work.
### Scenario 3
The third and last scenario of the analysis represents the most reliable description of the process presented in this work. While the number of adjustable parameters was three in the two previous scenarios, the third scenario has only two free parameters in the fitting. The other parameters were fixed to values corresponding to well-known physical quantities, namely the pion mass for \(\lambda\) and the thermofractal QCD value for \(q\), which was found to be in agreement with the value resulting from the experimental analysis. With fewer adjustable parameters, the model is subjected to a stronger test when compared with the experimental data, and the best-fit values of the parameters tend to carry more significant physical content.
The model accurately describes all sets of experimental data, as shown in the plots in Fig. 8. The best-fit values for \(\Lambda\) are given in Table 3 and in the plots presented in Fig. 9. The parameter \(\Lambda\) shows dependence on the produced meson and on the collision energy. In the range analysed, the behaviour of the scale \(\Lambda\) is approximately linear for all mesons studied.
The study of correlations in the parameter space brings evidence of strong correlations involving the adjustable parameters, which can cause some difficulties in obtaining the real values of the parameters adopted in each scenario of the analysis. In particular, Fig. 10 shows that the correlations between \(\sigma_{o}\) and \(\Lambda\), the only free parameters in this scenario of the analysis, roughly distribute along a power-law curve. This form is due to the known relationship between the multiplicative factor of the q-exponential function and the multiplicative factor in its argument, as demonstrated in Ref. [6], page 35.
## 4 Discussion and conclusion
The results obtained here provide at least some preliminary answers to the questions posed in the Introduction. The findings show that fractal structures may be present in the hadron structure, as the momentum distributions of the produced mesons in \(pp\) collisions can be accurately described by q-exponential functions. The analysis progressively unveils the role of the fractal structure in the subprocesses involved in the hadronic interactions.
In the first scenario of the analysis, a naive model of the fractal structure was applied, and it was already possible to observe a good fit of the data with values for the parameters \(q\) and \(\lambda\) that seemed to be consistent with those observed in high-energy multiparticle production. Even with such a naive approach, it was possible to find evidence of fractal structures with characteristics similar to those found in the deconfined regime. Interpreting the results from Scenario 1, it became evident that at least two vertices must be involved in the confined regime to ensure the confinement of quarks and gluons.
In Scenario 2, the subprocess involving partons from both colliding protons was described more accurately than in the first scenario, introducing a new parameter, \(\Lambda\), while fixing \(q=1.14\). The parameter \(\Lambda\) represents the energy scale of the colliding protons, while the
parameter \(\lambda\) represents the energy scale of the produced meson. The results obtained in this scenario showed that the model for the momentum distribution can describe the data, and the parameter \(\lambda\) could be considered independent of the collision energy and the produced meson species, although a weak dependence on those variables cannot be completely ruled out. On the other hand, variations in the parameter \(\Lambda\) with energy and particle mass were more evident.
In Scenario 3, a more realistic model was adopted, with only two adjustable parameters, \(\sigma_{o}\) and \(\Lambda\), while keeping \(q\) fixed to the same value as before, i.e. \(q=1.14\), and fixing \(\lambda\) to the pion mass. Even with these constraints, it was possible to describe the data, and the behaviour of \(\Lambda\) with the particle species and the collision energy was analyzed.
The main aspects that can be inferred from the more realistic scenario are as follows: the scale \(\Lambda\) depends on the particle species and on the collision energy. While the energy dependence, for the range considered in the present work, can be considered approximately linear for all particles analyzed, the rate of increase of \(\Lambda\) with the collision energy depends on the quark composition of the created meson. Specifically, for the pion and the rho, which are formed by \((u,\bar{d})\) valence quarks, \(\Lambda\) increases at approximately the same rate, whereas for the kaon, formed by \((u,\bar{s})\) valence quarks, the parameter increases at a faster rate. The different rates of increase of \(\Lambda\) cannot be attributed solely to the differences in meson masses, since the pion and the rho have similar slopes despite their different masses; it may therefore be associated with the different quantum numbers of the mesons.
Being more realistic, the third scenario allows the investigation, within the hadron structure, of a surprising and not yet understood effect observed in high-energy collisions: the presence of log-periodic oscillations in the transverse momentum distributions. The oscillations appear in the normalized residues after the fittings of the experimental data with the S\({}_{q}\) distributions. This work investigates the presence of the same kind of oscillations in the hadronic case. Fig. 11 reports typical results of the analysis, and the complete report of the residues can be found in the Supplementary Material. Although the shorter range of the momentum distribution limits the analysis, the results obtained do not allow one to conclude that the same oscillations appear in the confined regime. The absence of the log-periodic oscillations in the hadronic regime may be indicative of a connection between the oscillations and the collective behaviour of the QGP, possibly associated with the flow.
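The residue analysis itself is straightforward; a sketch, continuing with the synthetic arrays and model functions defined above, is:

```python
# Normalized residues after a Scenario 3 fit; log-periodic oscillations
# would appear as a regular pattern of the residues versus log(eps).
popt3, _ = curve_fit(scenario3, eps, y, sigma=yerr,
                     absolute_sigma=True, p0=(1.0, 0.5))
residues = (y - scenario3(eps, *popt3)) / yerr

def log_periodic(x, a, omega, phi):
    # Trial oscillation in the logarithm of the energy variable.
    return a * np.cos(omega * np.log(x) + phi)

posc, _ = curve_fit(log_periodic, eps, residues, p0=(1.0, 5.0, 0.0))
print("oscillation amplitude =", posc[0])
```

An oscillation amplitude compatible with zero corresponds to the absence of the effect in the confined regime.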
The conclusions one can infer from the study presented in this work are the following. It is likely that the fractal structure is present in confined hadronic matter with characteristics similar to those observed in higher energy collisions, where the deconfinement of quarks and gluons is expected. The significance of this finding lies in demonstrating the persistence of the fractal structure through the confined/deconfined phase transition.

The energy scale associated with the produced meson is the pion mass, and it is similar to the QGP freeze-out temperature. This is further evidence that the same fractal structure is present in hadrons as well as in the QGP.
The scale \(\Lambda\) depends on the meson mass, on the meson quark content and on the collision energy. The rate of increase of the scale with energy seems to be independent of the meson mass, at least considering the limited experimental data analyzed here.
This work presents a systematic analysis of meson production in proton-proton (\(pp\)) collisions within a specific energy range. The main objective is to demonstrate that the thermofractal structures observed in multiparticle production at higher energies are also present in the confined region. By investigating fractal structures in hadrons, the research contributes significantly to understanding the presence of fractal effects in the hadronic processes, illuminating the physical meaning of the fractal parameters. It is observed that certain fractal parameters, specifically \(q\) and \(\lambda\), are universal and independent of whether the hadronic matter is in the confined or deconfined regime, while \(\sigma_{o}\) and \(\Lambda\) may be influenced by the quark structure of the produced meson.
The study opens up opportunities for further investigation in terms of collision energy and particle species. Fractal characteristics, such as scale invariance and self-similarity, may have significant effects on observables like the Parton Distribution Function (PDF) [23], the Transverse Momentum Distribution (TMD) [24, 25], the Generalized Parton Distribution (GPD) [26, 27] and other distributions that can be measured experimentally. These effects will be explored more thoroughly in future experiments using the Electron-Ion Collider (EIC) [28, 29]. In Astrophysics, hadronic composition and processes are relevant in the study of massive objects [30, 31, 32] and can benefit from the present results.
The z-scaling is a phenomenological approach to meson production that evidences the scaling properties of the process [33, 34, 35]. The comparison between the thermofractal approach and the z-scaling can further clarify the underlying physics of the hadronic collisions.
Interestingly, the fact that the phase transition between confined and deconfined regimes does not disrupt the fractal structure may offer constraints on the nature of the phase transition itself.
In conclusion, the results presented in this work lay the groundwork
for future research aimed at understanding the intricate fractal nature of hadrons and its implications for various physical distributions critical in high-energy physics. The findings may lead to advancements in our comprehension of the underlying structure of hadrons and aid in the interpretation of experimental data from high-energy collisions.
## 5 Acknowledgements
The authors thank Prof. Otaviano A. M. Helene for an insightful discussion on the correlations among parameters in the fitting process. A.D. is supported by the Project INCT-FNA (Instituto Nacional de Ciência e Tecnologia - Física Nuclear Aplicada) Proc. No. 464898/2014-5, and by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq-Brazil), grant 306093/2022-7. C.T. is partially supported by CNPq and Faperj (Brazilian agencies).
|
2306.17784 | The anticyclotomic main conjectures for elliptic curves | The goal of this article is to obtain a proof of the Main conjectures of
Iwasawa theory for rational elliptic curves over anticyclotomic extensions of
imaginary quadratic fields, under mild arithmetic assumptions, both in the case
where the rational prime $p$ is good ordinary or supersingular. | Massimo Bertolini, Matteo Longo, Rodolfo Venerucci | 2023-06-30T16:37:45Z | http://arxiv.org/abs/2306.17784v1 | # The Anticyclotomic main conjectures for elliptic curves
###### Abstract.
Let \(E/\mathbf{Q}\) be a modular elliptic curve of conductor \(N\) and let \(f\) be the cuspidal eigenform on \(\Gamma_{0}(N)\) associated to \(E\) by the modularity theorem. Denote by \(K_{\infty}\) the anticyclotomic \(\mathbf{Z}_{p}\)-extension of an imaginary quadratic field \(K\). The goal of this article is to obtain a proof of the Main conjectures of Iwasawa theory for \(E\) over \(K_{\infty}\), both in the case where the rational prime \(p\) is _good ordinary_ or _supersingular_ for \(E\).
###### Contents
* 1 Introduction
* 2 Special points on Shimura curves and Gross curves
* 3 Admissible primes and raising the level
* 4 \(p\)-adic \(L\)-functions and special values formulae
* 5 Selmer groups
* 6 Ramified classes and reciprocity laws
* 7 \(\varepsilon\)-BSD formulae in the definite case
* 8 \(\varepsilon\)-BSD formulas in the indefinite case
* 9 Proof of Theorems B and C
* 10 Statements and declarations
## 1. Introduction
Let \(E/\mathbf{Q}\) be a modular elliptic curve of conductor \(N\) and let \(f\) be the cuspidal eigenform on \(\Gamma_{0}(N)\) associated to \(E\) by the modularity theorem. Denote by \(K_{\infty}\) the anticyclotomic \(\mathbf{Z}_{p}\)-extension of an imaginary quadratic field \(K\). The goal of this article is to obtain a proof of the Main conjectures of Iwasawa theory for \(E\) over \(K_{\infty}\), both in the case where the rational prime \(p\) is _good ordinary_ or _supersingular_ for \(E\).
The anticyclotomic setting displays a well-known dichotomy, depending on whether the generic sign of the functional equation of the complex \(L\)-function of \(E/K\) twisted by finite order characters of the Galois group of \(K_{\infty}/K\) is \(+1\) or \(-1\). For reasons which will be explained later we call the former case _definite_ and the latter case _indefinite_.
Assume first that \(p\) is a _good ordinary_ prime for \(E\). In the _indefinite_ case, a norm-compatible sequence of Heegner points arising from a Shimura curve parametrisation is defined over the finite layers of \(K_{\infty}/K\). Its position in the compact \(p\)-adic Selmer group of \(E/K_{\infty}\) is encoded by an element \(L_{p}(f)\) of the anticyclotomic Iwasawa algebra \(\Lambda\), called the _indefinite anticyclotomic \(p\)-adic \(L\)-function_. (The notation \(L_{p}(f)\) instead of \(L_{p}(E)\) is adopted throughout, in order to achieve notational uniformity in the modular arguments of this article.) The _indefinite anticyclotomic Iwasawa Main conjecture (IAMC)_, formulated by Perrin-Riou [24], states that \(L_{p}(f)\) generates the square-root of the characteristic ideal of the \(\Lambda\)-torsion part of the Pontrjagin dual of the \(p\)-primary Selmer group of \(E/K_{\infty}\). The proof of this conjecture is one of the main results of this paper. We remark that one divisibility of characteristic ideals - notably the fact that \(L_{p}(f)\) is divisible by the characteristic ideal of the relevant Selmer group - is obtained in Howard's paper [17], as a direct application of the theory of Euler systems.
We now turn to the _definite_ (good ordinary) case, in which the _definite anticyclotomic \(p\)-adic \(L\)-function_ \(L_{p}(f)\) interpolates central critical values of twists of the complex \(L\)-function of \(E/K\), described in Section 2 in terms of special points on the Gross curve. Raising the level of the modular form \(f\) at certain admissible primes yields congruent eigenforms modulo arbitrary powers of \(p\). These eigenforms belong to the indefinite setting and therefore the Heegner construction becomes available on the Shimura curves supporting them. (See Section 3 for the precise definitions.) This basic observation is the opening gambit of the article [3] by Bertolini-Darmon, which builds on it by establishing a _first explicit reciprocity law_ relating the resulting Heegner cohomology classes to \(L_{p}(f)\). Moreover, with the help of a _second explicit reciprocity law_, this article sets up an inductive procedure (which may be viewed as an analogue
of Kolyvagin's induction) proving that \(L_{p}(f)\) is divisible by the characteristic ideal of the Pontrjagin dual of the \(p\)-primary Selmer group of \(E/K_{\infty}\). This shows one divisibility in the _definite anticyclotomic Iwasawa Main conjecture (DAMC)_. (The above explicit reciprocity laws are reviewed in Section 6.2.) This procedure has been formalised in Howard's paper [16], leading to the concept of _bipartite Euler system_. The full DAMC proved in this paper is based on a refinement of the above induction. It requires showing the non-vanishing modulo \(p\) of values of the definite \(p\)-adic \(L\)-function attached to an eigenform congruent to \(f\), obtained by raising the level at sufficiently many admissible primes. This maximality property ultimately rests on a fundamental \(p\)_-converse_ theorem of Skinner-Urban [31], as explained in Step 4 of Section 7.1. It should be stressed that both the DAMC and the IAMC are obtained in this article from the same unified approach based on the above mentioned inductive process. The article [6] by Burungale-Castella-Kim also uses the techniques of bipartite Euler systems to obtain a proof of the IAMC (that is, Perrin-Riou's Heegner point main conjecture).
Assume now that \(p\) is a _supersingular_ prime for \(E\). As customary in the supersingular theory, two cases indexed by a sign \(\varepsilon=\pm\) need to be distinguished. Depending on the choice of \(\varepsilon\), one is led to introduce the concepts of \(\varepsilon\)-points, \(\varepsilon\)-Selmer groups and \(\varepsilon\)-\(p\)-adic \(L\)-functions \(L_{p}^{\varepsilon}(f)\). In terms of these objects, this article formulates and proves the analogues of the AMCs outlined above. In the definite setting, the analogous inclusion of [3] was obtained by Darmon-Iovita [10] when \(p\) is split in \(K\) and \(a_{p}(E)=0\), and extended by Burungale-Buyukboduk-Lei [5] without assuming \(a_{p}(E)=0\) and covering also the case \(p\) inert.
The following two specific aspects of the supersingular setting are worth noting.
On the one hand, our study of the structure of the \(\varepsilon\)-Selmer groups rests in a fundamental way on the _control_ result stated in Proposition 5.3. The proof of this result is based on Theorem 5.1, which was known for \(p\) split in \(K\) thanks to the work of Iovita-Pollack [18]. For \(p\) inert in \(K\), Theorem 5.1 is a consequence of the recent proof of Rubin's conjecture on local points in \(p\)-adic towers due to Burungale-Kobayashi-Ota [7]. In a previous version of this article, the control statement of Proposition 5.3 was a running assumption in the inert case.
When \(p\) is inert in \(K\), the supersingular setting displays a subcase for \(\varepsilon=+\), called _exceptional_ in Definition 1.2 of this Introduction. In the exceptional case, the \(+p\)-adic \(L\)-function acquires an extra-zero of local nature and our approach only allows us to show one divisibility in the AMCs. It would be interesting to further investigate the nature of this exceptional zero and the possibility of establishing the full AMC in the exceptional case.
We now formulate our main results more precisely. In order to obtain unified statements, we adopt the convention that \(\varepsilon=\emptyset\) in the ordinary case, so that the concepts of \(\varepsilon\)-point, \(\varepsilon\)-Selmer group and \(\varepsilon\)-\(p\)-adic \(L\)-function simply stand for point, Selmer group and \(p\)-adic \(L\)-function (in particular \(L_{p}(f)=L_{p}^{\varepsilon}(f)\) in this case).
Fix throughout the paper algebraic closures \(\bar{\mathbf{Q}}\) of \(\mathbf{Q}\) and \(\bar{\mathbf{Q}}_{p}\) of \(\mathbf{Q}_{p}\), as well as embeddings \(\bar{\mathbf{Q}}\hookrightarrow\bar{\mathbf{Q}}_{p}\) and \(\bar{\mathbf{Q}}\hookrightarrow\mathbf{C}\).
Our main results are proved under the following assumptions. Let \(N\) be as above the conductor of \(E\), assumed to be coprime with the discriminant of \(K\). Factor \(N\) as \(N=N^{+}N^{-}\), where \(N^{+}\) resp. \(N^{-}\) is divisible only by primes which are split, resp. inert in \(K\).
**Hypothesis 1.1**.:
1. The rational prime \(p\) is \(\geqslant 5\) and divides neither \(N\) nor the class number \(h_{K}\) of \(K\).
2. The representation \(\bar{\varrho}_{E,p}:G_{\mathbf{Q}}\to\operatorname{GL}_{2}(\mathbf{F}_{p})\) arising from the \(p\)-torsion \(E(\bar{\mathbf{Q}})_{p}\) of \(E\) is irreducible.
3. \(N^{-}\) is squarefree.
4. If \(E/\mathbf{Q}_{p}\) has good ordinary reduction, then \(a_{p}(E)\not\equiv\,\pm 1\pmod{p}\).
5. If \(q\) is a prime dividing \(N^{+}\), then \(H^{0}(I_{\mathbf{Q}_{q}},E_{p})=0\).
6. If \(q\|N^{-}\) and \(q\equiv\,\pm 1\bmod{p}\), then \(\bar{\varrho}_{E,p}\) is ramified at \(q\).
**Definition 1.2**.: We say that \((E,K,p,\varepsilon)\) is _exceptional_ if \(E\) has supersingular reduction at \(p\), \(p\) is inert in \(K\) and \(\varepsilon=+\).
For \(\varepsilon=\pm,\emptyset\), let \(L_{p}^{\varepsilon}(f)\) be the anticyclotomic \(p\)-adic \(L\)-function introduced in Chapter 4 in the definite case and in Definition 8.3 at the end of Section 8 in the indefinite case. Moreover, let \(\operatorname{Char}_{p}^{\varepsilon}(f)\) be the "algebraic anticyclotomic \(p\)-adic \(L\)-function" defined to be the characteristic ideal of a certain Selmer
module in Definition 7.3, resp. Definition 8.4 in the definite, resp. indefinite case. Note that in the indefinite case, \(L^{\varepsilon}_{p}(f)\) describes the position of a Heegner class in a compact Selmer group and \(\operatorname{Char}^{\varepsilon}_{p}(f)\) refers to the torsion part of a rank \(1\) Iwasawa module.
The next theorem contains our results on the DAMC and IAMC. Although we have strived for a maximal notational uniformity, the reader should keep in mind that the nature of the result in the two cases is rather different!
**Theorem A** (DAMC & IAMC).: \((L^{\varepsilon}_{p}(f))\subseteq(\operatorname{Char}^{\varepsilon}_{p}(f))\) _with equality in the non-exceptional case._
The proof of Theorem A is obtained by compiling information from the finite layers of the anticyclotomic tower, via a standard method which will not be recalled in detail in this paper. Specifically, it follows immediately from the \(\varepsilon\)-Birch and Swinnerton-Dyer (BSD) formulas of Theorem 7.1, resp. of Theorem 8.2 in the definite, resp. indefinite case, by making use of an argument due to Mazur-Rubin [20, Section 5.2] and Howard [17, Section 2.2].
The Birch and Swinnerton-Dyer conjecture leads one to expect BSD formulas for the usual Selmer groups over the finite anticyclotomic layers. These BSD formulas are obtained in Chapter 9 as a consequence of the \(\varepsilon\)-BSD formulas mentioned above, via a comparison between the \(\varepsilon\)-Selmer groups and the standard Selmer groups. We refer the reader to Chapter 7 for the definition of the Selmer group \(\operatorname{Sel}(K,A_{f}(\chi))\) as well as of the Shafarevich-Tate group \(\Sha(K,A_{f}(\chi))\), and to Sections 9.2 and 9.3 for an explanation of the constants \(C\) (related to certain archimedean periods) and of the regulator \(\operatorname{Reg}_{\chi}(E/K)\), which appear in the statements below.
**Theorem B** (definite BSD formulas).: _Let \(\chi\) be a finite order character of conductor \(p^{n}\) of the Galois group of \(K_{\infty}/K\). Then \(\operatorname{Sel}(K,A_{f}(\chi))\) is finite if and only if \(L(E/K,\chi,1)\neq 0\). In this case one has_
\[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}(K,A_{f}(\chi))\right)\leqslant\operatorname{ord}_{\chi}\left(\frac{L(E/K,\chi,1)}{C}\right)\]
_with equality in the non-exceptional case._
**Theorem C** (indefinite BSD formulas).: _Let \(\chi\) be a finite order character of conductor \(p^{n}\) of the Galois group of \(K_{\infty}/K\). Then \(\operatorname{Sel}(K,A_{f}(\chi))\) has corank equal to \(1\) if and only if \(L^{\prime}(E/K,\chi,1)\neq 0\). In this case one has_
\[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\Sha(K,A_{f}(\chi))\right) \leqslant\operatorname{ord}_{\chi}\left(\frac{L^{\prime}(E/K,\chi,1)}{C \cdot\operatorname{Reg}_{\chi}(E/K)}\right)\]
_with equality in the non-exceptional case._
_Remark 1.3_.: The case \(n=0\) can be obtained more directly by applying [31] in the setting of Theorem B and the techniques of [1] in the setting of Theorem C. In the non-exceptional case it follows as well from the AMCs proved in this paper. The presence of a local zero in the exceptional case prevents us from treating the trivial character on the same footing as the other characters.
**Conventions.** The following conventions are adopted to lighten notations (as recalled also in the appropriate parts of the paper).
* Besides denoting with \(\varepsilon\) one of the signs \(+\) or \(-\), we also sometimes write \(\varepsilon=+1\) when \(\varepsilon=+\) and \(\varepsilon=-1\) when \(\varepsilon=-\). With this convention, the equation \((-1)^{n}=\varepsilon\) for an integer \(n\) implies that \(n\) is even if \(\varepsilon=+\) and \(n\) is odd if \(\varepsilon=-\).
* Given a principal ideal \(I=(x)\) of a commutative ring with unity \(R\) and an \(R\)-module \(M\), we sometimes write \(M/x\) to denote \(M/(x)=M/xM\).
## 2. Special points on Shimura curves and Gross curves
### Shimura curves and Gross curves
Fix a positive integer \(N\) and a factorisation \(N=N^{+}N^{-}\) into coprime integers, with \(N^{-}\) squarefree. Let \(\mathscr{B}\) be the quaternion algebra over \(\mathbf{Q}\) whose discriminant has finite part equal to \(N^{-}\). The algebra \(\mathscr{B}\) (which is unique up to isomorphism) is said to be _indefinite_, (resp., _definite_) if it is split (resp., non-split) at infinity. So \(\mathscr{B}\) is indefinite if and only if \(N^{-}\) is divisible by an _even_ number of primes.
For every abelian group \(Z\), let \(\hat{Z}\) denote \(Z\otimes_{\mathbf{Z}}\hat{\mathbf{Z}}\), where \(\hat{\mathbf{Z}}=\prod_{\ell\text{ prime}}\mathbf{Z}_{\ell}\) is the profinite completion of \(\mathbf{Z}\). Let \(\operatorname{Hom}(\mathbf{C},\mathscr{B}_{\infty})\) be the set of \(\mathbf{R}\)-algebra morphisms of \(\mathbf{C}\) in \(\mathscr{B}_{\infty}=\mathscr{B}\otimes_{\mathbf{Q}}\mathbf{R}\). The group \(\mathscr{B}^{*}\) acts
via the diagonal embedding on \(\hat{\mathscr{B}}^{*}\), and via conjugation on \(\operatorname{Hom}(\mathbf{C},\mathscr{B}_{\infty})\). Fix a maximal order of \(\mathscr{B}\), and an Eichler order \(\mathscr{R}\) of level \(N^{+}\) contained in it. Define the set

\[Y_{N^{+},N^{-}}(\mathbf{C}):=\hat{\mathscr{R}}^{*}\backslash\hat{\mathscr{B}}^{*}\times\operatorname{Hom}(\mathbf{C},\mathscr{B}_{\infty})\big{/}\mathscr{B}^{*}. \tag{2.1}\]
As the notation suggests, \(Y_{N^{+},N^{-}}(\mathbf{C})\) is a Riemann surface, arising as the set of complex points of a smooth curve. This curve can be defined over \(\mathbf{Q}\), and its description, which we will recall in the next paragraphs, depends markedly on whether \(\mathscr{B}\) is definite or indefinite.
In the indefinite case, let \(\Gamma_{N^{+},N^{-}}\subset\operatorname{SL}_{2}(\mathbf{R})\) be the discrete subgroup of \(\iota_{\infty}\left(\mathscr{R}^{*}\right)\) consisting of elements of determinant \(1\); here \(\iota_{\infty}:\mathscr{B}_{\infty}\cong\mathrm{M}_{2}(\mathbf{R})\) is a fixed isomorphism. Then the strong approximation theorem shows that
\[Y_{N^{+},N^{-}}(\mathbf{C})\cong\Gamma_{N^{+},N^{-}}\backslash\mathcal{H},\]
where \(\mathcal{H}:=\{z\in\mathbf{C}:\Im(z)>0\}\), and the left action of \(\Gamma_{N^{+},N^{-}}\) on \(\mathcal{H}\) is by fractional linear transformations. If \(N^{-}\neq 1\), we set \(X_{N^{+},N^{-}}(\mathbf{C})=Y_{N^{+},N^{-}}(\mathbf{C})\), while if \(N^{-}=1\) then \(Y_{N^{+},N^{-}}(\mathbf{C})\) is the usual modular curve of level \(\Gamma_{0}(N)\), and we let \(X_{N^{+},N^{-}}(\mathbf{C})\) denote its standard compactification obtained by adding a finite set of cusps. The Riemann surface \(X_{N^{+},N^{-}}(\mathbf{C})\) has a model \(X_{N^{+},N^{-}}\) defined over \(\mathbf{Q}\), which is called the _Shimura curve_ of discriminant \(N^{-}\) and level \(N^{+}\) (up to isomorphism, it is independent of the choices made).
In the definite case, the double coset space \(\hat{\mathscr{R}}^{*}\backslash\hat{\mathscr{B}}^{*}/\mathscr{B}^{*}\) is a finite set, in bijection with the set \(\{\mathscr{R}_{1},\dots,\mathscr{R}_{h}\}\) of conjugacy classes of (oriented) Eichler orders of level \(N^{+}\) in \(\mathscr{B}\). For every \(j=1,\dots,h\), set \(\Gamma_{j}:=\mathscr{R}_{j}^{*}/\mathbf{Z}^{*}\); each \(\Gamma_{j}\) is a finite group. Then, again by the strong approximation theorem,
\[Y_{N^{+},N^{-}}(\mathbf{C})\cong\coprod_{j=1}^{h}\Gamma_{j}\backslash \operatorname{Hom}(\mathbf{C},\mathscr{B}_{\infty}).\]
Attach a conic \(\mathscr{C}/\mathbf{Q}\) to \(\mathscr{B}\), by the rule
\[\mathscr{C}(A):=\big{\{}x\in\mathscr{B}\otimes_{\mathbf{Q}}A:x\neq 0, \operatorname{Nr}(x)=\operatorname{Tr}(x)=0\big{\}}/A^{*},\]
where \(\operatorname{Nr}\) and \(\operatorname{Tr}\) denote reduced norm and trace, respectively. There is a natural bijection between \(\operatorname{Hom}(\mathbf{C},\mathscr{B}_{\infty})\) and \(\mathscr{C}(\mathbf{C})\), from which it follows that \(Y_{N^{+},N^{-}}(\mathbf{C})\) is identified with the set of complex points of the disjoint union \(X_{N^{+},N^{-}}:=\coprod_{j=1}^{h}\mathscr{C}_{j}\) of the genus zero curves \(\mathscr{C}_{j}:=\Gamma_{j}\backslash\mathscr{C}\) defined over \(\mathbf{Q}\). The curve \(X_{N^{+},N^{-}}\) is called the _Gross curve_ of discriminant \(N^{-}\) and level \(N^{+}\).
### Hecke operators
Since \(\operatorname{Pic}(\mathbf{Z})\cong\hat{\mathbf{Q}}^{*}/\mathbf{Q}^{*}\hat{ \mathbf{Z}}^{*}\) is trivial, one has a bijection
\[Y_{N^{+},N^{-}}(\mathbf{C})\cong\Big{(}\hat{\mathscr{R}}^{*}\backslash\hat{ \mathscr{B}}^{*}/\hat{\mathbf{Q}}^{*}\times\operatorname{Hom}(\mathbf{C}, \mathscr{B}_{\infty})\Big{)}\Big{/}\mathscr{B}^{*}.\]
The double coset space \(\hat{\mathscr{R}}^{*}\backslash\hat{\mathscr{B}}^{*}/\hat{\mathbf{Q}}^{*}\) is equal to the product over all prime numbers \(\ell\) of the local double coset spaces \(\mathcal{T}_{\ell}=\mathscr{R}_{\ell}^{*}\backslash\mathscr{B}_{\ell}^{*}/\mathbf{Q}_{\ell}^{*}\), where \(\mathscr{R}_{\ell}=\mathscr{R}\otimes_{\mathbf{Z}}\mathbf{Z}_{\ell}\) and \(\mathscr{B}_{\ell}=\mathscr{B}\otimes_{\mathbf{Q}}\mathbf{Q}_{\ell}\). If \(\ell\nmid N\), then \(\mathcal{T}_{\ell}\) is identified with the set of vertices of the Bruhat-Tits tree \(\operatorname{PGL}_{2}(\mathbf{Z}_{\ell})\backslash\operatorname{PGL}_{2}(\mathbf{Q}_{\ell})\) of \(\operatorname{PGL}_{2}(\mathbf{Q}_{\ell})\). This decomposition gives rise to an action of Hecke operators \(T_{\ell}\), for primes \(\ell\nmid N\), and \(U_{\ell}\) for \(\ell\mid N\), by \(\mathbf{Q}\)-rational correspondences on \(X_{N^{+},N^{-}}\). By covariant functoriality, they induce endomorphisms of the Picard group
\[J_{N^{+},N^{-}}=\operatorname{Pic}(X_{N^{+},N^{-}}/\mathbf{Q})\]
of the curve \(X_{N^{+},N^{-}}/\mathbf{Q}\), denoted in the same way. Define \(\mathbf{T}_{N^{+},N^{-}}\) to be the \(\mathbf{Z}\)-subalgebra of the ring \(\operatorname{End}_{\mathbf{Q}}(J_{N^{+},N^{-}})\) generated over \(\mathbf{Z}\) by the operators \(T_{\ell}\) and \(U_{\ell}\). Note that in the definite case \(J_{N^{+},N^{-}}\) is a free \(\mathbf{Z}\)-module of rank equal to the number of connected components of the Gross curve \(X_{N^{+},N^{-}}\).
### The Jacquet-Langlands correspondence
Let \(\mathbb{T}_{N^{+},N^{-}}\) be the Hecke algebra acting faithfully on the \(\mathbf{C}\)-vector space \(S_{2}(\Gamma_{0}(N))^{N^{-}\text{new}}\) of weight-two cusp forms of level \(N\) which are new at \(N^{-}\), generated over \(\mathbf{Z}\) by Hecke operators \(T_{\ell}\) for primes \(\ell\nmid N\) and \(U_{\ell}\) for primes \(\ell|N\). The Jacquet-Langlands correspondence states the existence of a canonical isomorphism \(\mathbb{T}_{N^{+},N^{-}}\cong\mathbf{T}_{N^{+},N^{-}}\) identifying Hecke operators indexed by the same prime numbers. It follows that \(\mathbb{T}_{N^{+},N^{-}}\) acts as a group of \(\mathbf{Q}\)-rational endomorphisms of \(J_{N^{+},N^{-}}\). See Section 1.6 of [2] for details.
### Special points
Let \(p>3\) be a prime number such that \(p\nmid N\), and \(K/\mathbf{Q}\) be an imaginary quadratic field of discriminant \(D_{K}\) coprime with \(Np\). Assume in this subsection that the factorization \(N=N^{+}N^{-}\) satisfies the following _generalized Heegner hypothesis_: a prime divisor \(q\) of \(N\) divides \(N^{+}\) if and only if it is split in \(K\).
The inclusion \(\operatorname{Hom}(K,\mathscr{B})\subset\operatorname{Hom}(\mathbf{C}, \mathscr{B}_{\infty})\) arising from extension of scalars induces a map from the set
\[\mathscr{S}_{N^{+},N^{-}}(K):=\hat{\mathscr{R}}^{*}\backslash\hat{\mathscr{B}} ^{*}\times\operatorname{Hom}(K,\mathscr{B})\big{/}\mathscr{B}^{*}\]
to \(Y_{N^{+},N^{-}}(\mathbf{C})\). A _special point_ of \(X_{N^{+},N^{-}}\) associated with \(K\) is any point in the image of this map. When \(\mathscr{B}\) is indefinite (resp., definite), so that \(X_{N^{+},N^{-}}\) is a Shimura curve (resp., a Gross curve), we say that the points in \(\mathscr{S}_{N^{+},N^{-}}(K)\) are _Heegner points_ (resp., _Gross points_) associated with \(K\).
Let \(P\in\mathscr{S}_{N^{+},N^{-}}(K)\) be represented by \(g\times f\in\hat{\mathscr{B}}^{*}\times\operatorname{Hom}(K,\mathscr{B})\). Then \(P\) is said to be of _conductor_ \(p^{n}\) if
\[f(K)\cap g^{-1}\hat{\mathscr{R}}^{*}g=f(\mathcal{O}_{p^{n}}),\]
where \(\mathcal{O}_{p^{n}}:=\mathbf{Z}+p^{n}\mathcal{O}_{K}\) (\(n\geqslant 0\)) is the order of \(K\) of conductor \(p^{n}\). Write \(\mathscr{S}_{N^{+},N^{-}}(\mathcal{O}_{p^{n}})\) for the set of special points of conductor \(p^{n}\) in \(X_{N^{+},N^{-}}(\mathbf{C})\). The theory of local embeddings guarantees that, under the condition recalled at the beginning of this subsection, the set \(\mathscr{S}_{N^{+},N^{-}}(\mathcal{O}_{p^{n}})\) is not empty for all \(n\geqslant 0\) (see [2], Section 2.2).
The set of special points \(\mathscr{S}_{N^{+},N^{-}}(K)\) is equipped with an algebraic Galois action of the group \(\operatorname{Gal}(K^{\mathrm{ab}}/K)\), where \(K^{\mathrm{ab}}\) is the maximal abelian extension of \(K\). Let \(P\in\mathscr{S}_{N^{+},N^{-}}(K)\) be represented by a pair \(g\times f\in\hat{\mathscr{B}}^{*}\times\operatorname{Hom}(K,\mathscr{B})\) and let \(\sigma\) be represented under the inverse of the Artin map by the class of an element \(\mathfrak{a}\in\hat{K}^{*}\). Then \(\sigma(P)\) is the special point in \(\mathscr{S}_{N^{+},N^{-}}(K)\) represented by the pair \(g\hat{f}(\mathfrak{a})\times f\), where \(\hat{f}\) is the adelization of \(f\).
Let \(\operatorname{Pic}(\mathcal{O}_{p^{n}})=K^{*}\backslash\hat{K}^{*}/\hat{ \mathcal{O}}_{p^{n}}\) be the Picard group of \(\mathcal{O}_{p^{n}}\). By class field theory there exists an abelian extension \(\tilde{K}_{n}/K\), the ring class field of conductor \(p^{n}\), such that the Galois group \(\tilde{G}_{n}=\operatorname{Gal}(\tilde{K}_{n}/K)\) is isomorphic to \(\operatorname{Pic}(\mathcal{O}_{p^{n}})\) via the inverse of the Artin map. Recall that the Galois group \(\operatorname{Gal}(K/\mathbf{Q})\) acts on \(\tilde{G}_{n}\) as inversion.
If \(X_{N^{+},N^{-}}\) is a Shimura curve, then the theory of complex multiplication shows that \(\mathscr{S}_{N^{+},N^{-}}(\mathcal{O}_{p^{n}})\) is contained in \(X_{N^{+},N^{-}}(\tilde{K}_{n})\), for all \(n\geqslant 0\), and Shimura's reciprocity law states that the algebraic Galois action on the set of special points \(\mathscr{S}_{N^{+},N^{-}}(K)\) described above coincides with the usual geometric action of \(\operatorname{Gal}(\bar{\mathbf{Q}}/\mathbf{Q})\) on \(X_{N^{+},N^{-}}(\bar{\mathbf{Q}})\). In this case, for any extension \(H/\mathbf{Q}\) in \(\bar{\mathbf{Q}}\), denote as usual \(J_{N^{+},N^{-}}(H)\) the subgroup of \(H\)-rational divisors of \(J_{N^{+},N^{-}}(\bar{\mathbf{Q}})\), i.e. those fixed by \(\operatorname{Gal}(\bar{\mathbf{Q}}/H)\).
If \(X_{N^{+},N^{-}}\) is a Gross curve, then the algebraic action of \(\operatorname{Gal}(K^{\mathrm{ab}}/K)\) on \(\mathscr{S}_{N^{+},N^{-}}(K)\) described above does not correspond to any geometric Galois action, since all special points are already defined over \(K\). However, one can check that each element in \(\mathscr{S}_{N^{+},N^{-}}(\mathcal{O}_{p^{n}})\) is fixed by the algebraic action of \(\operatorname{Gal}(K^{\mathrm{ab}}/\tilde{K}_{n})\), for all \(n\geqslant 0\). Extend canonically the action of \(\operatorname{Gal}(K^{\mathrm{ab}}/K)\) defined above to the subgroup of \(J_{N^{+},N^{-}}\) generated by the image of \(\mathscr{S}_{N^{+},N^{-}}(K)\). Given an abelian extension \(H\) of \(K\) and an element \(D\) of \(J_{N^{+},N^{-}}\) supported on \(\mathscr{S}_{N^{+},N^{-}}(K)\), write with an abuse of notation \(D\in J_{N^{+},N^{-}}(H)\) to mean that \(D\) is fixed by the action of \(\operatorname{Gal}(K^{\mathrm{ab}}/H)\).
### Compatible sequences of special points
Let \(K\), \(N=N^{+}N^{-}\) and \(p\nmid N\) be fixed as in the previous subsection, and assume that \(p\nmid h_{K}\), the class number of \(K\). Recall from the Introduction the anticyclotomic \(\mathbf{Z}_{p}\)-extension \(K_{\infty}/K\), and for any integer \(n\geqslant 0\) let \(K_{n}\) be the subfield of \(K_{\infty}\) such that \(G_{n}=\operatorname{Gal}(K_{n}/K)\cong\mathbf{Z}/p^{n}\mathbf{Z}\). Since \(p\nmid h_{K}\), we have \(\tilde{K}_{n+1}=K_{n}\cdot\tilde{K}_{1}\), and \(K_{n}\cap\tilde{K}_{1}=K\). In particular \(\tilde{G}_{n+1}=\Delta\times G_{n}\), with \(\Delta=\tilde{G}_{1}\).
Let \(L\geqslant 1\) be a squarefree integer, prime to \(Np\); when \(L>1\), we suppose that \(L\) is the product of primes which are inert in \(K\). The set \(\mathscr{S}_{N^{+},LN^{-}}(\mathcal{O}_{p^{n}})\) of special points of conductor \(p^{n}\) in \(X_{N^{+},LN^{-}}(\mathbf{C})\) is then non-empty for every \(n\geqslant 0\) (note that \(X_{N^{+},LN^{-}}\) might be a Gross curve or a Shimura curve, accordingly with the parity of the number of prime divisors of \(N^{-}L\)). As in Section 2.4 of [2] fix a _compatible sequence_\(\tilde{P}_{\infty}(L)=(\tilde{P}_{n}(L))_{n\geqslant 0}\) of special points of \(p\)-power conductor, where \(\tilde{P}_{n}(L)\in\mathscr{S}_{N^{+},LN^{-}}(\mathcal{O}_{p^{n}})\). For every integer \(n\geqslant\ -1\) define
\[P_{n}(L)=\sum_{\sigma\in\Delta}\sigma\big{(}\tilde{P}_{n+1}(L)\big{)}\in J_{N^{+}, LN^{-}}(K_{n}),\]
\[P_{K}(L)=\sum_{\sigma\in\operatorname{Pic}(\mathcal{O}_{K})}\sigma\big{(}\tilde{P}_{0}(L)\big{)}\in J_{N^{+},LN^{-}}(K).\]
Let \(\epsilon_{K}\) be the quadratic character associated with \(K\), and \(u_{K}\) be one half of the order of the unit group \(\mathcal{O}_{K}^{*}\). Define
\[u_{p}=(p-\epsilon_{K}(p))/u_{K}. \tag{2.2}\]
Then these points satisfy the following relations:
\[P_{-1}(L)=u_{p}\cdot P_{K}(L), \tag{2.3}\]
\[u_{K}\cdot P_{0}(L)=\begin{cases}T_{p}P_{K}(L),\text{ if }\epsilon_{K}(p)=-1\\ T_{p}P_{K}(L)-2P_{K}(L),\text{ if }\epsilon_{K}(p)=+1,\end{cases} \tag{2.4}\]
\[\operatorname{Trace}_{K_{n+1}/K_{n}}(P_{n+1}(L))=T_{p}P_{n}(L)-P_{n-1}(L), \text{ for every }n\geqslant 0, \tag{2.5}\]
where we write \(\operatorname{Trace}_{K_{n+1}/K_{n}}\) for the trace map \(\sum_{\sigma\in\operatorname{Gal}(K_{n+1}/K_{n})}\sigma\). We simply write \(P_{n}\) and \(P_{K}\) if \(L=1\).
## 3. Admissible primes and raising the level
Fix a squarefree positive integer \(N\), a factorisation \(N=N^{+}N^{-}\) into coprime squarefree positive integers, and a rational prime \(p\geqslant 5\) coprime to \(N\).
### Eigenforms of level \((N^{+},N^{-})\)
Recall the Hecke algebra \(\mathbb{T}_{N^{+},N^{-}}\) defined in Section 2.3 and let \(R\) be a complete local Noetherian ring with finite residue field \(k_{R}\) of characteristic \(p\). An \(R\)_-valued (weight two) eigenform of level \((N^{+},N^{-})\)_ is a _surjective_ morphism \(f:\mathbb{T}_{N^{+},N^{-}}\to R\). Denote by \(S_{2}(N^{+},N^{-};R)\) the set of such eigenforms. To every eigenform \(f\in S_{2}(N^{+},N^{-};R)\) is associated a Galois representation
\[\bar{\rho}_{f}:G_{\mathbf{Q}}\longrightarrow\operatorname{GL}_{2}(k_{R}),\]
unramified at every prime \(q\nmid Np\), and such that an arithmetic Frobenius \(\operatorname{Frob}_{q}\in G_{\mathbf{Q}}\) at \(q\) has characteristic polynomial \(\operatorname{char}\left(\bar{\rho}_{f}(\operatorname{Frob}_{q})\right)=X^{2 }-\bar{f}(T_{q})X+q\in k_{R}[X]\), where \(\bar{f}(T_{q})\) denotes the reduction of \(f(T_{q})\) modulo the maximal ideal of \(R\). The semi-simplification of \(\bar{\rho}_{f}\) is characterised by these properties. Moreover, as proved by Carayol [8], if \(\bar{\rho}_{f}\) is _irreducible_ (hence absolutely irreducible since \(p\) is odd), it can be lifted _uniquely_ to a Galois representation
\[\rho_{f}:G_{\mathbf{Q}}\longrightarrow\operatorname{GL}_{2}(R),\]
unramified at \(q\nmid Np\), and such that \(\operatorname{trace}\left(\rho_{f}(\operatorname{Frob}_{q})\right)=f(T_{q})\) and \(\det\left(\rho_{f}(\operatorname{Frob}_{q})\right)=q\) for such a \(q\). Assuming that \(\bar{\rho}_{f}\) is irreducible, write \(T_{f}\) for an \(R\)-module giving rise to the representation \(\rho_{f}\), and, for \(R\) a quotient of \(\mathbf{Z}_{p}\), define \(A_{f}=\operatorname{Hom}_{\mathbf{Z}_{p}}(T_{f},\mu_{p^{\infty}})\), where \(\mu_{p^{\infty}}\) is the group of \(p\)-power roots of unity.
Let \(n\in\mathbf{N}\cup\{\infty\}\) and define \(R=\mathbf{Z}_{p}\) if \(n=\infty\) and \(R=\mathbf{Z}/p^{n}\mathbf{Z}\) if \(n<\infty\). Let \(f\in S_{2}(N^{+},N^{-};R)\). If \(k\in\mathbf{N}\cup\{\infty\}\) and \(1\leqslant k\leqslant n\), let \(f_{k}=f\pmod{p^{k}}\) denote the reduction of \(f\) modulo \(p^{k}\), with the convention that \(f_{\infty}=f\) if \(n=k=\infty\). Let \(T_{f,k}\) and \(A_{f,k}\) be the modules introduced above for \(f_{k}\) (i.e. \(T_{f,k}=T_{f_{k}}\) and \(A_{f,k}=A_{f_{k}}\)). In particular, if \(n=\infty\) then \(T_{f,\infty}=T_{f}\) and \(A_{f,\infty}=A_{f}\). Finally, for any \(\mathbf{Z}_{p}\)-algebra \(\mathscr{O}\), we will write \(T_{f,k,\mathscr{O}}=T_{f,k}\otimes_{\mathbf{Z}_{p}}\mathscr{O}\) and \(A_{f,k,\mathscr{O}}=A_{f,k}\otimes_{\mathbf{Z}_{p}}\mathscr{O}\).
### Admissible primes
Let \(R\) denote a complete, local Noetherian ring with finite residue field \(k_{R}\) of characteristic \(p\) and let \(f\in S_{2}(N^{+},N^{-};R)\) be an \(R\)-valued eigenform of level \((N^{+},N^{-})\). Fix a quadratic imaginary field \(K/\mathbf{Q}\) of discriminant \(D_{K}\) coprime with \(Np\). Following [3], we say that a rational prime \(\ell\) is an _admissible prime relative to \((f,K)\)_ if the following conditions are satisfied:
1. \(\ell\) does not divide \(Np\).
2. \(p\) does not divide \(\ell^{2}-1\).
3. \(f(T_{\ell})^{2}=(\ell+1)^{2}\in R\).
4. \(\ell\) is inert in \(K/\mathbf{Q}\).
Write \(\mathscr{S}(f,K)\) for the set of squarefree products of admissible primes for \((f,K)\).
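Since conditions (1)-(4) are finite arithmetic checks, admissibility of a given odd prime \(\ell\) is easy to test numerically. The following sketch is illustrative only: `a_ell` stands for the integer \(f(T_{\ell})\) (for the eigenform attached to an elliptic curve, \(a_{\ell}=\ell+1-\#E(\mathbf{F}_{\ell})\)), and `D_K` for the discriminant of \(K\); both names are placeholders introduced here.

```python
def legendre(a, ell):
    # Legendre symbol (a/ell) for an odd prime ell, via Euler's criterion.
    a %= ell
    if a == 0:
        return 0
    return 1 if pow(a, (ell - 1) // 2, ell) == 1 else -1

def is_admissible(ell, a_ell, N, p, k, D_K):
    # Conditions (1)-(4) for ell to be admissible relative to (f_k, K).
    if N % ell == 0 or ell == p:                     # (1) ell does not divide Np
        return False
    if (ell * ell - 1) % p == 0:                     # (2) p does not divide ell^2 - 1
        return False
    if (a_ell**2 - (ell + 1)**2) % p**k != 0:        # (3) f(T_ell)^2 = (ell+1)^2 in Z/p^k
        return False
    return legendre(D_K, ell) == -1                  # (4) ell inert in K
```

Note that if \(\ell\) divides \(D_{K}\) it ramifies rather than being inert, and the sketch correctly rejects it, since the symbol is then \(0\).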
Let \(n\in\mathbf{N}\cup\{\infty\}\), and put \(R=\mathbf{Z}_{p}\) if \(n=\infty\) and \(R=\mathbf{Z}/p^{n}\mathbf{Z}\) if \(n<\infty\). For \(f\in S_{2}(N^{+},N^{-};R)\), and \(k\in\mathbf{N}\cup\{\infty\}\) with \(1\leqslant k\leqslant n\), call \(k\)_-admissible prime_ any admissible prime relative to \((f_{k},K)\), where recall that \(f_{k}\) is the reduction of \(f\) modulo \(p^{k}\), with the convention that \(f_{\infty}=f\). With an abuse of notation, if no confusion may arise, we write \(\mathscr{S}_{k}\) for \(\mathscr{S}(f_{k},K)\). We say that \(L\in\mathscr{S}_{k}\) is _definite_ if \(\epsilon_{K}(LN^{-})=-1\) and _indefinite_ if \(\epsilon_{K}(LN^{-})=+1\), and write \(\mathscr{S}_{k}^{\text{def}}\) and \(\mathscr{S}_{k}^{\text{ind}}\) for the subsets of \(\mathscr{S}_{k}\) consisting of definite and indefinite integers, respectively; clearly \(\mathscr{S}_{k}=\mathscr{S}_{k}^{\text{def}}\cup\mathscr{S}_{k}^{\text{ind}}\).
The following lemma is proved by the same argument appearing in the proof of Lemma 2.6 of [3].
**Lemma 3.1**.: _Let \(\ell\) be an admissible prime relative to \((f,K)\), and let \(K_{\ell}/\mathbf{Q}_{\ell}\) be the completion of \(K\) at the unique prime dividing \(\ell\)\((\)so that \(K_{\ell}=\mathbf{Q}_{\ell^{2}}\) is the quadratic unramified extension of \(\mathbf{Q}_{\ell}\)\()\). There is a decomposition of \(R[G_{K_{\ell}}]\)-modules \(T_{f}=R(\varepsilon)\oplus R\), where \(R(\varepsilon)\) (resp., \(R\)) denotes a copy of \(R\) on which \(G_{K_{\ell}}\) acts via the \(p\)-adic cyclotomic character \(\varepsilon\) (resp., acts trivially)._
### Level raising
Let \(n\) be a positive integer, and let \(f\in S_{2}(N^{+},N^{-};\mathbf{Z}/p^{n}\mathbf{Z})\).
**Hypothesis 3.2**.: The data \((\bar{\rho}_{f},N^{+},N^{-},p)\) with \(N=N^{+}N^{-}\) satisfy the following conditions:
1. \(N^{-}\) is squarefree, \(p\geqslant 5\) and \(p\nmid N\);
2. \(\bar{\rho}_{f}:G_{\mathbf{Q}}\to\operatorname{GL}_{2}(\mathbf{F}_{p})\) is surjective.
3. If \(q\|N^{-}\) and \(q\equiv\,\pm 1\) mod \(p\), then \(\bar{\rho}_{f}\) is ramified at \(q\).
**Theorem 3.3**.: _Assume that \(f\in S_{2}(N^{+},N^{-};\mathbf{Z}/p^{n}\mathbf{Z})\) satisfies Hypothesis 3.2. Let \(S\in\mathscr{S}_{k}\) for some integer \(1\leqslant k\leqslant n\). Then there exists an eigenform_
\[f_{S}:\mathbb{T}_{N^{+},N^{-}S}\longrightarrow\mathbf{Z}/p^{k}\mathbf{Z}\]
_of level \((N^{+},N^{-}S)\) such that \(f_{S}(T_{q})=f_{k}(T_{q})\) for \(q\nmid NS\) and \(f_{S}(U_{q})=f(U_{q})\) for \(q\mid N\), where recall that \(f_{k}=f\pmod{p^{k}}\). Moreover, \(f_{S}\) is unique up to multiplication by a unit in \(\left(\mathbf{Z}/p^{k}\mathbf{Z}\right)^{*}\)._
Proof.: Assume that \(N^{-}>1\) and that \(N^{-}\) has an odd (resp., even) number of prime divisors. In this case Theorem 3.3 is proved in Section 5 (resp., Section 9) of [3] under slightly more restrictive assumptions on \((\overline{\rho}_{f},N^{+},N^{-},p)\), subsequently removed in [26]. The method of [3] builds on work of Ribet (who considered the case \(n=k=1\)), and makes essential use of the generalisation of Ihara's Lemma to Shimura curves obtained by Diamond-Taylor [11]. We refer to [3] for more details and references.
Assume now that \(N^{-}=1\). If \(n=k=1\), the theorem was proved by Ribet. If \(n>1\), it can be proved by following the arguments of Section 9 of [3] (see in particular Proposition 9.2 and Theorem 9.3), using Ihara's Lemma (rather than its generalisation by Diamond-Taylor) in the proof of Proposition 9.2.
In the situation of Theorem 3.3, we say that \(f_{S}\) is the _level raising of \(f_{k}=f\pmod{p^{k}}\) at \(S\)_; it is defined up to units in \(\mathbf{Z}/p^{k}\mathbf{Z}\).
## 4. \(p\)-adic \(L\)-functions and special values formulae
Let \(E/\mathbf{Q}\) be the elliptic curve fixed in the introduction, and let \(f\) be the cuspform associated to \(E\) by modularity. Let \(k\in\mathbf{Z}_{\geqslant 1}\cup\{\infty\}\) be an integer or the symbol \(\infty\). If \(k\) is an integer let \(L\in\mathscr{S}_{k}^{\text{def}}\) be a _definite_\(k\)-admissible integer (i.e. \(\epsilon_{K}(LN^{-})=-1\)) and denote by \(g=f_{L}\in S_{2}(N^{+},LN^{-};\mathbf{Z}/p^{k}\mathbf{Z})\) the \(L\)-level raising of \(f\) modulo \(p^{k}\). If \(k=\infty\) assume that \(f\) is _definite_ (i.e. \(\epsilon_{K}(N^{-})=-1\)), and set \(L=1\) and \(g=f\); under the Jacquet-Langlands isomorphism \(\mathbb{T}_{N^{+},LN^{-}}\cong\mathbf{T}_{N^{+},LN^{-}}\) recalled in Section 2, if \(k=\infty\) the form \(g\) induces a \(\mathbf{Z}_{p}\)-valued ring homomorphism \(\mathbf{T}_{N^{+},LN^{-}}\to\mathbf{Z}_{p}\), denoted by the same symbol \(g\). In both cases \(X_{N^{+},LN^{-}}\) is a _Gross curve_. Define \(R=\mathbf{Z}_{p}\) if \(k=\infty\) and \(R=\mathbf{Z}/p^{k}\mathbf{Z}\) if \(k\) is an integer. We may in both cases view \(g\) as a morphism
\[g:\mathbf{T}_{N^{+},LN^{-}}\longrightarrow R.\]
Fix a topological generator \(\gamma\) of \(G_{\infty}\). Let \(\omega_{n}=\gamma^{p^{n}}-1\), and denote by \(\Phi_{p^{n+1}}(T)=\sum_{j=0}^{p-1}T^{j\cdot p^{n}}\in\mathbf{Z}[T]\) the \(p^{n+1}\)-th cyclotomic polynomial. Set \(\nu_{p}=0\) (resp., \(\nu_{p}=1\)) if \(p\) is inert (resp., splits) in \(K\). Define
* \(\omega_{0}^{+}=\omega_{1}^{+}=(\gamma-1)^{\nu_{p}}\),
* \(\omega_{0}^{-}=(\gamma-1)\),
* For each integer \(n\geqslant 2\), \[\omega_{n}^{+}=(\gamma-1)^{\nu_{p}}\prod_{1\leqslant j\leqslant\lfloor\frac{n}{2}\rfloor}\Phi_{p^{2j}}(\gamma),\]
* For each integer \(n\geqslant 1\), \[\omega_{n}^{-}=(\gamma-1)\prod_{1\leqslant j\leqslant\lfloor\frac{n+1}{2}\rfloor}\Phi_{p^{2j-1}}(\gamma).\]
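For orientation, in the inert case (\(\nu_{p}=0\)) the first few values are

\[\omega_{2}^{+}=\Phi_{p^{2}}(\gamma),\qquad\omega_{2}^{-}=(\gamma-1)\Phi_{p}(\gamma),\qquad\omega_{3}^{+}=\Phi_{p^{2}}(\gamma),\qquad\omega_{3}^{-}=(\gamma-1)\Phi_{p}(\gamma)\Phi_{p^{3}}(\gamma),\]

and one checks easily that in general \(\omega_{n}^{+}\omega_{n}^{-}=(\gamma-1)^{\nu_{p}}\,\omega_{n}\), since \(\omega_{n}=\gamma^{p^{n}}-1=(\gamma-1)\prod_{m=1}^{n}\Phi_{p^{m}}(\gamma)\).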
### Modular parametrisations
Let \(\mathfrak{m}_{L}\subset\mathbf{T}_{N^{+},LN^{-}}\) be the kernel of the reduction
\[\bar{g}:\mathbf{T}_{N^{+},LN^{-}}\longrightarrow\mathbf{F}_{p}\]
of \(g\) modulo \(p\), and let \(J_{\mathfrak{m}_{L}}\) and \(\mathbf{T}_{\mathfrak{m}_{L}}\) denote the completions of \(J_{N^{+},LN^{-}}\) and \(\mathbf{T}_{N^{+},LN^{-}}\) at \(\mathfrak{m}_{L}\), respectively. Thanks to Ribet's level lowering theorem, Hypothesis 1.1 implies that _Hypothesis CR_ in [26] holds true, so according to Theorem 6.2 and Proposition 6.5 of [26] (relaxing one of the assumptions of [3, Lemma 2.2]) it follows that \(J_{\mathfrak{m}_{L}}\) is a free \(\mathbf{T}_{\mathfrak{m}_{L}}\)-module of rank one. As a consequence \(g\) induces a surjective morphism
\[\psi_{g}:J_{N^{+},LN^{-}}\otimes_{\mathbf{Z}}\mathbf{Z}_{p}\longrightarrow R\]
satisfying \(\psi_{g}(t\cdot x)=g(t)\cdot\psi_{g}(x)\) for every \(t\in\mathbf{T}_{N^{+},LN^{-}}\) and every \(x\in J_{N^{+},LN^{-}}\), which is uniquely determined by \(g\) up to multiplication by a \(p\)-adic unit. Let finally \(\Lambda_{R}=\Lambda\otimes_{\mathbf{Z}_{p}}R=R[\![G_{\infty}]\!]\) (so that \(\Lambda_{R}=\Lambda\) if \(k=\infty\) and \(\Lambda_{R}=\Lambda/p^{k}\Lambda\) if \(k<\infty\)) and put \(\Lambda_{n,k}=R[G_{n}]\).
### The ordinary case
In the ordinary case, the construction of the \(p\)-adic \(L\)-function, which we will recall below, has been obtained in [2]. Assume in this section that \(E/\mathbf{Q}_{p}\) has _good ordinary_ reduction at \(p\), i.e. \(p\nmid N\) and \(g(T_{p})=a_{p}(E)\pmod{p^{k}}\) is a \(p\)-adic unit. The Hecke polynomial \(X^{2}-g(T_{p})X+p\) has a unique root \(\alpha_{p}(g)\) in \(R\) which is congruent to \(g(T_{p})\) modulo \(p\) and hence is a \(p\)-adic unit. Recall the compatible sequence \(P_{\infty}(L)=(P_{n}(L))_{n\geqslant-1}\) of Gross points fixed in Section 2.5. For every \(n\geqslant 1\) define
\[\mathcal{L}_{g,n}=\frac{1}{\alpha_{p}(g)^{n}}\sum_{\sigma\in G_{n}}\Big{(} \psi_{g}\big{(}\sigma(P_{n-1}(L))\big{)}-\alpha_{p}(g)\cdot\psi_{g}\big{(} \sigma(P_{n}(L))\big{)}\Big{)}\cdot\sigma\in\Lambda_{n,k}.\]
Since \(\psi_{g}(T_{p}x)=a_{p}(E)\cdot\psi_{g}(x)\) for every \(x\in J_{N^{+},LN^{-}}\), a direct computation based on Equation (2.5) shows that the elements \((\mathcal{L}_{g,n})_{n\geqslant 1}\) are compatible under the natural projection maps \(\Lambda_{n+1,k}\twoheadrightarrow\Lambda_{n,k}\). Define the _anticyclotomic square root \(p\)-adic \(L\)-function_
\[\mathcal{L}_{g}=\lim_{n\to\infty}\mathcal{L}_{g,n}\in\Lambda_{R}\]
as the inverse limit of the compatible sequence \((\mathcal{L}_{g,n})_{n\geqslant 1}\) in \(\varprojlim\Lambda_{n,k}=\Lambda_{R}\).
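For the reader's convenience, here is a sketch of the compatibility computation invoked above. Write \(\alpha=\alpha_{p}(g)\) and let \(\pi:\Lambda_{n+1,k}\twoheadrightarrow\Lambda_{n,k}\) be the natural projection. The \(p\) elements of \(G_{n+1}\) above a given \(\sigma\in G_{n}\) act on \(P_{n}(L)\in J_{N^{+},LN^{-}}(K_{n})\) through \(\sigma\), while on \(P_{n+1}(L)\) they assemble the trace from \(K_{n+1}\) to \(K_{n}\); hence, by Equation (2.5),

\[\pi(\mathcal{L}_{g,n+1})=\frac{1}{\alpha^{n+1}}\sum_{\sigma\in G_{n}}\Big(p\,\psi_{g}\big(\sigma(P_{n}(L))\big)-\alpha\,\psi_{g}\big(\sigma(T_{p}P_{n}(L)-P_{n-1}(L))\big)\Big)\cdot\sigma=\frac{1}{\alpha^{n}}\sum_{\sigma\in G_{n}}\Big(\psi_{g}\big(\sigma(P_{n-1}(L))\big)-\alpha\,\psi_{g}\big(\sigma(P_{n}(L))\big)\Big)\cdot\sigma=\mathcal{L}_{g,n},\]

where the middle equality uses \(\psi_{g}(T_{p}x)=a_{p}(E)\psi_{g}(x)\) and \(p-\alpha a_{p}(E)=-\alpha^{2}\), the latter following from \(\alpha^{2}-a_{p}(E)\alpha+p=0\).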
For any \(x\in\Lambda\) and any ring homomorphism \(\chi:\Lambda\to\mathscr{O}\), define as usual \(x(\chi)=\chi(x)\). Denote by \(\mathbf{1}\) the trivial character. One has (cf. Equations (2.3)-(2.5))
\[\mathcal{L}_{g}(\mathbf{1})=\left\{\begin{array}{lcl}\frac{1}{u_{K}}\big{(}1 -\alpha_{p}(g)^{2}\big{)}\cdot\psi_{g}(P_{K}(L))&\text{ if }&\epsilon_{K}(p)=-1,\\ \frac{-1}{u_{K}}\big{(}1-\alpha_{p}(g)\big{)}^{2}\cdot\psi_{g}(P_{K}(L))& \text{ if }&\epsilon_{K}(p)=+1.\end{array}\right. \tag{4.1}\]
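For instance, in the inert case the first formula can be checked by evaluating \(\mathcal{L}_{g,1}\) at the trivial character (the answer is independent of \(n\) by the compatibility above): using (2.3) and (2.5), together with \(u_{K}P_{0}(L)=T_{p}P_{K}(L)\) from (2.4), a short computation gives

\[\mathcal{L}_{g}(\mathbf{1})=\frac{p\,\psi_{g}(P_{0}(L))-\alpha_{p}(g)\big(a_{p}(E)\psi_{g}(P_{0}(L))-u_{p}\psi_{g}(P_{K}(L))\big)}{\alpha_{p}(g)}=-\alpha_{p}(g)\,\psi_{g}(P_{0}(L))+u_{p}\,\psi_{g}(P_{K}(L))=\frac{1-\alpha_{p}(g)^{2}}{u_{K}}\,\psi_{g}(P_{K}(L)),\]

where the last equality uses \(\psi_{g}(P_{0}(L))=u_{K}^{-1}a_{p}(E)\psi_{g}(P_{K}(L))\), \(u_{p}=(p+1)/u_{K}\) and \(\alpha_{p}(g)a_{p}(E)=\alpha_{p}(g)^{2}+p\).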
**Lemma 4.1**.: _The equality \(\mathcal{L}_{g}(\mathbf{1})=\psi_{g}\big{(}P_{K}(L)\big{)}\) holds in \(R\) up to multiplication by an element in \(R^{*}\)._
Proof.: This follows from the above formulas and Hypothesis 5.2(1).
The definition of \(\mathcal{L}_{g}\) depends on the choice of the compatible system of Gross points \(P_{\infty}(L)\). If \(Q_{\infty}(L)\) is another compatible system, then there exists \(\sigma\in G_{\infty}\) such that \(\sigma\big{(}P_{n}(L)\big{)}=Q_{n}(L)\) for every \(n\geqslant 0\) (cf. Section 2 of [2]). As a consequence the square root \(p\)-adic \(L\)-function \(\mathcal{L}_{g}\) is well defined up to multiplication by an element of \(G_{\infty}\). Define the _anticyclotomic \(p\)-adic \(L\)-function_
\[L_{p}(g)=\mathcal{L}_{g}\cdot\mathcal{L}_{g}^{\iota}\in\Lambda_{R},\]
where \(\iota\) is Iwasawa's main involution. Note that \(L_{p}(g)\) is independent of the choice of \(P_{\infty}(L)\).
### The supersingular case
In the supersingular case, the construction of the \(p\)-adic \(L\)-function has been obtained in [10] when \(p\) is split, and we extend the construction to the inert case. Assume that \(E/\mathbf{Q}_{p}\) has good _supersingular_ reduction. As \(p>3\) the Hasse bound gives \(a_{p}(E)=0\). Let \(\Lambda_{n,k}=\Lambda/(\omega_{n},p^{k})=R[G_{n}]\), and define \(\Lambda_{n,k}^{\pm}=\Lambda/(\omega_{n}^{\pm},p^{k})\). Set
* \(\tilde{\omega}_{0}^{+}=\tilde{\omega}_{1}^{+}=1\),
* \(\tilde{\omega}_{0}^{-}=1\),
* For each integer \(n\geqslant 2\), \[\tilde{\omega}_{n}^{+}=\prod_{1\leqslant j\leqslant\lfloor\frac{n}{2}\rfloor}\Phi_{p^{2j}}(\gamma),\]
* For each integer \(n\geqslant 1\), \[\tilde{\omega}_{n}^{-}=\prod_{1\leqslant j\leqslant\lfloor\frac{n+1}{2} \rfloor}\Phi_{p^{2j-1}}(\gamma)\]
where as before \(\gamma\) is a topological generator of \(G_{\infty}\), so that \(\omega_{n}=(\gamma-1)\cdot\tilde{\omega}_{n}^{+}\cdot\tilde{\omega}_{n}^{-}\) (and \(\tilde{\omega}_{n}^{+}=\omega_{n}^{+}\) in the inert case) for each \(n\geqslant 0\). For every \(n\geqslant 0\) define
\[\mathcal{L}_{g,n}=\sum_{j=0}^{p^{n}-1}\psi_{g}\big{(}\gamma^{j}(P_{n}(L)) \big{)}\cdot\gamma^{j}\in\Lambda_{R}.\]
**Lemma 4.2**.: _Let \(\varepsilon=(-1)^{n}\). Then \(\omega_{n}^{\varepsilon}\cdot\mathcal{L}_{g,n}\in\omega_{n}\cdot\Lambda_{R}\) for all integers \(n\geqslant 0\)._
Proof.: The case when \(p\) is split is [10, Proposition 2.8(1)], and we only need to check the inert case. The proof is by induction.
Let \(m=0\). We have, using (2.4) for the second equality,
\[u_{K}\cdot\mathcal{L}_{g,0}=\psi_{g}\big{(}P_{0}(L)\big{)}=a_{p}(E)\psi_{g}(P _{K}(L))=0\]
where the last equality follows from \(a_{p}(E)=0\), so clearly \(\mathcal{L}_{g,0}=0\) and therefore \(\omega_{0}^{+}\mathcal{L}_{g,0}\) belongs to \(\omega_{0}\Lambda_{R}\) (recall that \(\omega_{0}^{+}=\tilde{\omega}_{0}^{+}=1\)).
Let \(m=1\). We have (the congruences are modulo \(\omega_{1}\))
\[\mathcal{L}_{g,1}=\sum_{j=0}^{p-1}\psi_{g}\big{(}\gamma^{j}(P_{1}(L))\big{)} \cdot\gamma^{j}\equiv\sum_{j=0}^{p-1}\psi_{g}\big{(}\gamma^{j}(P_{1}(L))\big{)} =\psi_{g}\big{(}\mathrm{Trace}_{K_{1}/K_{0}}(P_{1}(L))\big{)}=-u_{p}\psi_{g}( P_{K}(L))\]
where the first congruence follows because \(\gamma\equiv 1\pmod{\omega_{1}}\), and the last equality follows from (2.3) and (2.5). Now \(\omega_{1}^{-}=\omega_{1}\), and therefore \(\omega_{1}^{-}\cdot\mathcal{L}_{g,1}\in\omega_{1}\cdot\Lambda_{R}\).
Let \(m\geqslant 1\). We have (the congruences are modulo \(\omega_{m}\))

(4.2) \[\begin{split}\mathcal{L}_{g,m+1}&=\sum_{j=0}^{p^{m}-1}\left(\sum_{i=0}^{p-1}\psi_{g}\big{(}\gamma^{j+ip^{m}}(P_{m+1}(L))\big{)}\cdot\gamma^{ip^{m}}\right)\cdot\gamma^{j}\\ &\equiv\sum_{j=0}^{p^{m}-1}\psi_{g}\big{(}\gamma^{j}\big{(}\text{Trace}_{K_{m+1}/K_{m}}(P_{m+1}(L))\big{)}\big{)}\cdot\gamma^{j}\quad(\text{because }\gamma^{ip^{m}}-1\equiv 0\pmod{\omega_{m}})\\ &\equiv\sum_{j=0}^{p^{m}-1}\psi_{g}\big{(}T_{p}\gamma^{j}(P_{m}(L))-\gamma^{j}(P_{m-1}(L))\big{)}\cdot\gamma^{j}\quad(\text{by Equation (2.5)})\\ &\equiv-\Phi_{p^{m}}(\gamma)\cdot\mathcal{L}_{g,m-1}\quad(\text{because }\psi_{g}(T_{p}x)=a_{p}(E)\cdot\psi_{g}(x)=0),\end{split}\]

where the last congruence also uses that \(P_{m-1}(L)\) is fixed by \(\gamma^{p^{m-1}}\), so that \(\sum_{j=0}^{p^{m}-1}\psi_{g}(\gamma^{j}(P_{m-1}(L)))\cdot\gamma^{j}=\Phi_{p^{m}}(\gamma)\cdot\mathcal{L}_{g,m-1}\). Set \(\varepsilon=(-1)^{m+1}\). Since \(\omega_{m+1}^{\varepsilon}=\Phi_{p^{m+1}}(\gamma)\cdot\omega_{m-1}^{\varepsilon}\) and \(\omega_{m}\cdot\omega_{m+1}^{\varepsilon}\in\omega_{m+1}\cdot\Lambda_{R}\), the inductive hypothesis \(\omega_{m-1}^{\varepsilon}\cdot\mathcal{L}_{g,m-1}\in\omega_{m-1}\cdot\Lambda_{R}\) yields \(\omega_{m+1}^{\varepsilon}\cdot\mathcal{L}_{g,m+1}\in\omega_{m+1}\cdot\Lambda_{R}\), which completes the induction. 

The following elementary lemma is used to decompose the elements \(\mathcal{L}_{g,n}\).

**Lemma 4.3**.: _Let \(R\) be an integral domain, let \(x,y,\pi\in R\) with \(y\neq 0\) be such that \(y\) and \(\pi\) have no common factors, and set \(z=xy\). For \(c\in R\), denote by \([c]\) the class of \(c\) in \(R/(z,\pi)\). Then the \(y\)-torsion submodule of \(R/(z,\pi)\) is equal to \(x\cdot R/(z,\pi)\)._

Proof.: One inclusion is clear, since \(y\cdot x[d]=[zd]=0\) for every \(d\in R\).
Fix \([c]\) such that \(y[c]=[cy]=0\). Then \(cy=d\alpha\) for some \(d\in R\) and \(\alpha\in(z,\pi)\), so \(y(c-ex)=f\pi\) for some \(e\) and \(f\), and again, since \(y\) and \(\pi\) do not have common factors, we see that \(y\mid f\), so we can write \(y(c-ex-g\pi)=0\) for some \(g\), and since \(y\neq 0\) and \(R\) is a domain, we have \(c=ex+g\pi\). So \(x[e]=[xe]=[c]\).
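In the applications below one takes \(R=\Lambda\), \(\pi=p^{k}\) and \(z=\omega_{n}\), factored as \(z=xy\) with \(y=\omega_{n}^{\pm}\) and with \(x\) the complementary factor (\(x=\tilde{\omega}_{n}^{\mp}\) in the split case, \(x=\omega_{n}^{\mp}\) in the inert case): the lemma then identifies the \(\omega_{n}^{\pm}\)-torsion submodule of \(\Lambda_{n,k}=\Lambda/(\omega_{n},p^{k})\) with the image of multiplication by \(x\), a quotient of \(\Lambda_{n,k}^{\pm}=\Lambda/(\omega_{n}^{\pm},p^{k})\).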
By Lemma 4.3, in the split case multiplication by \(\tilde{\omega}_{n}^{\mp}\) gives an isomorphism between \(\Lambda_{n,k}^{\pm}\) and the \(\omega_{n}^{\pm}\)-torsion submodule of \(\Lambda_{n,k}\) (_cf._ [18, Section 4]). In the inert case, again by Lemma 4.3, multiplication by \(\omega_{n}^{+}=\tilde{\omega}_{n}^{+}\) gives an isomorphism between \(\Lambda_{n,k}^{-}\) and the \(\omega_{n}^{-}\)-torsion submodule of \(\Lambda_{n,k}\), and multiplication by \(\omega_{n}^{-}=(\gamma-1)\tilde{\omega}_{n}^{-}\) gives an isomorphism between \(\Lambda_{n,k}^{+}\) and the \(\omega_{n}^{+}=\tilde{\omega}_{n}^{+}\)-torsion submodule of \(\Lambda_{n,k}\). Lemma 4.2 then implies that if \(\varepsilon=(-1)^{n}\), there exist elements \(\mathcal{L}_{g,n}^{\varepsilon}\in\Lambda_{n,k}^{\varepsilon}\) such that
* If \(p\) is split in \(K\) or \(p\) is inert in \(K\) and \(\varepsilon=-1\) (the non-exceptional case): \[\mathcal{L}_{g,n}=\begin{cases}(-1)^{n/2}\tilde{\omega}_{n}^{-}\mathcal{L}_{ g,n}^{+},\text{ if $n$ is even;}\\ (-1)^{(n-1)/2}\tilde{\omega}_{n}^{+}\mathcal{L}_{g,n}^{-},\text{ if $n$ is odd;}\end{cases}\]
* If \(p\) is inert in \(K\) and \(\varepsilon=+1\): \[\mathcal{L}_{g,n}=(-1)^{n/2}\omega_{n}^{-}\mathcal{L}_{g,n}^{+}\]
Denote by \(\pi_{2m+2}^{+}:\Lambda_{2m+2,k}^{+}\to\Lambda_{2m,k}^{+}\) and \(\pi_{2m+3}^{-}:\Lambda_{2m+3,k}^{-}\to\Lambda_{2m+1,k}^{-}\) the natural projections.
**Lemma 4.4**.: \(\pi_{2m+2}^{+}(\mathcal{L}_{g,2m+2}^{+})=\mathcal{L}_{g,2m}^{+}\) _and \(\pi_{2m+3}^{-}(\mathcal{L}_{g,2m+3}^{-})=\mathcal{L}_{g,2m+1}^{-}\) for every \(m\geqslant 0\)._
Proof.: In the split case, this is Lemma 2.9 of [10], so we only need to check the inert case. Equation (4.2) shows that
\[\mathcal{L}_{g,2m+2}=-\Phi_{p^{2m+1}}(\gamma)\cdot\mathcal{L}_{g,2m}+\omega_{2 m+1}\cdot z \tag{4.3}\]
for some \(z\in\Lambda_{R}\), for each \(m\geqslant 0\).
We first prove the statement for \(\mathcal{L}_{g,m}^{+}\). From (4.3)
\[(-1)^{m+1}\cdot\omega_{2m+2}^{-}\cdot\mathcal{L}_{g,2m+2}^{+}=(-1)^{m+1}\Phi_ {p^{2m+1}}(\gamma)\cdot\omega_{2m}^{-}\cdot\mathcal{L}_{g,2m}^{+}+\omega_{2m +1}\cdot z.\]
Both sides of the previous equation are divisible by \(\omega_{2m+2}^{-}\). Since \(\Lambda_{2m+2,k}^{+}\) has no nontrivial \(\omega_{2m+2}^{-}\)-torsion, dividing by \(\omega_{2m+2}^{-}\), we get the result.
We now prove the statement for \(\mathcal{L}_{g,m}^{-}\). From (4.3)
\[(-1)^{m+1}\cdot\tilde{\omega}_{2m+3}^{+}\cdot\mathcal{L}_{g,2m+3}^{-}=(-1)^{m+1}\cdot\Phi_{p^{2m+2}}(\gamma)\cdot\tilde{\omega}_{2m+1}^{+}\cdot\mathcal{L}_{g,2m+1}^{-}+\omega_{2m+2}\cdot z.\]
Both sides of the previous equation are divisible by \(\tilde{\omega}_{2m+3}^{+}\). Since \(\Lambda_{2m+3,k}^{-}\) has no nontrivial \(\tilde{\omega}_{2m+3}^{+}\)-torsion, dividing by \(\tilde{\omega}_{2m+3}^{+}\) we get the result.
Since \(\varprojlim\Lambda_{2m,k}^{+}\cong\Lambda_{R}\cong\varprojlim\Lambda_{2m+1,k}^ {-}\) (_cf._ Section 4 of [18]) the previous lemma allows us to define
\[\mathcal{L}_{g}^{\varepsilon}=\lim_{n\in\mathbf{N}^{\varepsilon}}\mathcal{L}_{g,n}^{\varepsilon}\in\Lambda_{R},\]
where \(\mathbf{N}^{\varepsilon}\) is the set of natural numbers \(n\) satisfying \((-1)^{n}=\varepsilon\). Every continuous character \(\chi:G_{\infty}\to\bar{\mathbf{Q}}_{p}^{*}\) extends uniquely to a morphism \(\chi:\Lambda_{R}\to\mathscr{O}_{\chi}/p^{k}\mathscr{O}_{\chi}\) of \(\mathbf{Z}_{p}\)-algebras, where \(\mathscr{O}_{\chi}=\mathbf{Z}_{p}[\chi(G_{\infty})]\). As before, denote by \(\mathcal{L}_{g}^{\pm}(\chi)=\chi\big{(}\mathcal{L}_{g}^{\pm}\big{)}\) the value of \(\chi\) at \(\mathcal{L}_{g}^{\pm}\) and by \(\mathbf{1}\) the trivial character of \(G_{\infty}\).
**Lemma 4.5**.: _If \((f,K,p,\varepsilon)\) is non-exceptional, then the equality \(\mathcal{L}_{g}^{\varepsilon}(\mathbf{1})=\psi_{g}(P_{K}(L))\) holds in \(R\) up to multiplication by an element in \(R^{*}\)._
Proof.: This follows from (2.5), after noticing that \(P_{-1}(L)=u_{p}\cdot P_{K}(L)\) by (2.3) in the non-exceptional case.
_Remark 4.6_.: In the exceptional case, a result analogous to the equality in Lemma 4.5 is not currently available, to the best of the authors' knowledge. Indeed, \(\mathcal{L}_{g,0}=u_{K}^{-1}a_{p}(E)\psi_{g}(P_{K}(L))=0\) by Lemma 4.2 (recall that \(a_{p}(E)=0\) under our assumptions). As a consequence, on the one hand \(\mathcal{L}_{g,0}\) does not have a direct relation with \(\psi_{g}(P_{K}(L))\), which is instead directly related to the special value of the \(L\)-series of \(E\) over \(K\). On the other hand, the equality \(\mathcal{L}_{g,0}=0\), which can be interpreted as an exceptional-zero phenomenon, makes it possible to divide by \(\gamma-1\) in order to define the anticyclotomic \(p\)-adic \(L\)-function \(\mathcal{L}_{g}^{+}\); however, the \(p\)-adic \(L\)-function thus obtained does not seem to have a clear relation with \(\psi_{g}(P_{K}(L))\) either. It might therefore be interesting to investigate an analogue of Lemma 4.5 in the exceptional case, which seems to require new ideas and a different approach than in the non-exceptional case.
As in the ordinary case, define
\[L_{p}^{\varepsilon}(g)=\mathcal{L}_{g}^{\varepsilon}\cdot(\mathcal{L}_{g}^{ \varepsilon})^{\iota}\in\Lambda_{R},\]
which is independent of the choice of \(P_{\infty}(L)\).
## 5. Selmer groups
Recall the notation introduced in Section 2.5: for every integer \(n\geqslant 0\), \(K_{n}/K\) is the cyclic subextension of \(K_{\infty}/K\) of degree \(p^{n}\), and \(G_{n}=\operatorname{Gal}(K_{n}/K)\). Let \(G_{\infty}=\operatorname{Gal}(K_{\infty}/K)\), \(\Lambda_{n}=\mathbf{Z}_{p}[G_{n}]\) and \(\Lambda=\mathbf{Z}_{p}[\![G_{\infty}]\!]\). In this section we also fix a finite flat extension \(\mathscr{O}/\mathbf{Z}_{p}\), and define \(\Lambda_{\mathscr{O},n}=\mathscr{O}[G_{n}]\) and \(\Lambda_{\mathscr{O}}=\mathscr{O}[\![G_{\infty}]\!]\). For each prime ideal \(w\) of \(K\), denote \(K_{w}\) the completion of \(K\) at \(w\); fix an algebraic closure \(\bar{K}_{w}\) of \(K_{w}\), define \(G_{K_{w}}=\operatorname{Gal}(\bar{K}_{w}/K_{w})\), and let \(I_{K_{w}}\subseteq G_{K_{w}}\) be its inertia subgroup.
Let \(E/\mathbf{Q}\) be an elliptic curve of conductor \(N\), and let \(p\nmid N\), \(p\geqslant 5\) be a prime number. Let
\[f\in S_{2}(\Gamma_{0}(N))\]
be the newform attached to \(E\) by modularity, which we identify, with a slight abuse of notation, with a modular form \(f\in S_{2}(N^{+},N^{-};\mathbf{Z}_{p})\) by the Jacquet-Langlands correspondence (cf. Section 2.3). The representations \(A_{f}\) and \(T_{f}\) associated with \(f\) as in Section 3.1 are then the \(p\)-divisible group and the \(p\)-adic Tate module of \(E\), respectively.
### \(\varepsilon\)-rational points
In this subsection we assume that \(E\) has supersingular reduction at \(p\). Let \(\mathfrak{p}\) be a prime of \(K\) dividing \(p\). For every \(n\in\mathbf{N}\cup\{\infty\}\) denote by \(\Phi_{n}=K_{n,\mathfrak{p}}\) the completion of \(K_{n}\) at the unique prime dividing \(\mathfrak{p}\), by \(O_{n}\) the ring of integers of \(\Phi_{n}\) and by \(\mathfrak{m}_{n}\) its maximal ideal. Set \(\Phi=\Phi_{0}\), \(O=O_{0}\) and \(\mathfrak{m}=\mathfrak{m}_{0}\). Then \(\Phi_{\infty}\) is a totally ramified \(\mathbf{Z}_{p}\)-extension of \(\Phi\), whose Galois group can be identified with \(G_{\infty}\) (via \(i_{p}:\mathbf{Q}\hookrightarrow\bar{\mathbf{Q}}_{p}\)). Let \(\mathbf{E}/O\) be the formal group of \(E/\Phi\), which gives the kernel of the reduction modulo \(p\) on \(E\), and let \(\log_{\mathbf{E}}:\mathbf{E}\to\mathbf{G}_{a}\) be the formal group logarithm. The formal group \(\mathbf{E}/O\) is a Lubin-Tate group for the uniformiser \(-p\in O\) ([15]). Since \(E_{p^{m}}\cong\mathbf{E}_{p^{m}}\) for every \(m\geqslant 1\) as \(G_{\mathfrak{p}}\)-modules, the \(p\)-adic Tate module \(T_{f}\) of \(E\) is isomorphic to the \(p\)-adic Tate module of \(\mathbf{E}\), hence has a natural structure of \(O[\operatorname{Gal}(\bar{\Phi}/\Phi)]\)-module, free of rank one over \(O\).
Denote by \(\Xi\) the set of \(\bar{\mathbf{Q}}_{p}^{*}\)-valued finite order characters on \(G_{\infty}\). For every \(\chi\in\Xi\) let \(n_{\chi}\) be the smallest nonnegative integer such that \(\chi\) factors through \(G_{n_{\chi}}\), and let
\[\Xi^{\pm}=\{\chi\in\Xi\ |\ n_{\chi}\geqslant 1,\ (-1)^{n_{\chi}}=\pm 1\}.\]
If \(p\) splits in \(K/\mathbf{Q}\), set \(\Xi_{p}^{\pm}=\Xi^{\pm}\); if \(p\) is inert in \(K/\mathbf{Q}\) set \(\Xi_{p}^{+}=\Xi^{+}\) and \(\Xi_{p}^{-}=\Xi^{-}\cup\{\mathbf{1}\}\), where \(\mathbf{1}\in\Xi\) is the trivial character on \(G_{\infty}\). Let \(\log_{\chi}:\mathbf{E}(\mathfrak{m}_{\infty})\to\bar{\mathbf{Q}}_{p}\) be the morphism which on \(y\in\mathbf{E}(\mathfrak{m}_{\infty})\) takes the value
\[\log_{\chi}(y)=p^{-m}\sum_{\sigma\in G_{m}}\chi(\sigma)^{-1}\log_{\mathbf{E}} (y^{\sigma}),\]
where \(m=m(\chi,y)\) is any positive integer large enough so that \(m\geqslant n_{\chi}\) and \(y\) belongs to \(\mathbf{E}(\mathfrak{m}_{m})\). Following [27] set
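Note that \(\log_{\chi}(y)\) does not depend on the choice of \(m\): if \(m^{\prime}\geqslant m\) also satisfies the above conditions then, since \(\log_{\mathbf{E}}(y^{\sigma})\) and \(\chi(\sigma)\) only depend on the image of \(\sigma\) in \(G_{m}\), each fibre of \(G_{m^{\prime}}\twoheadrightarrow G_{m}\) contributes \(p^{m^{\prime}-m}\) equal summands, and

\[p^{-m^{\prime}}\sum_{\sigma\in G_{m^{\prime}}}\chi(\sigma)^{-1}\log_{\mathbf{E}}(y^{\sigma})=p^{-m^{\prime}}\cdot p^{m^{\prime}-m}\sum_{\tau\in G_{m}}\chi(\tau)^{-1}\log_{\mathbf{E}}(y^{\tau})=\log_{\chi}(y).\]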
\[\mathbf{E}(\mathfrak{m}_{\infty})_{\pm}=\big{\{}y\in\mathbf{E}(\mathfrak{m}_{ \infty})\ \big{|}\ \log_{\chi}(y)=0\ \text{for every}\ \chi\in\Xi_{p}^{\mp}\big{\}},\]
and for every \(1\leqslant k\leqslant\infty\) define
\[H^{1}_{\operatorname{fin},\pm}(\Phi_{\infty},A_{f,k})=\mathbf{E}(\mathfrak{m}_ {\infty})_{\pm}\otimes_{\mathbf{Z}}(\mathbf{Q}_{p}/\mathbf{Z}_{p})_{p^{k}}\]
viewed as submodules of \(H^{1}(\Phi_{\infty},A_{f,k})\) under the local Kummer map.
Let \(\varepsilon\) denote one of the signs \(+\) or \(-\); we sometimes, with an abuse of notation, write \(\varepsilon=+1\) when \(\varepsilon=+\) and \(\varepsilon=-1\) when \(\varepsilon=-\): adopting this abuse of notation, for any integer \(n\) the equation \((-1)^{n}=\varepsilon\) is meaningful and states that \(n\) is even if \(\varepsilon=+\) and \(n\) is odd if \(\varepsilon=-\). For every integer \(n\geqslant 0\) and every prime \(\mathfrak{p}\) of \(K\) dividing \(p\) set
\[\mathbf{E}(\mathfrak{m}_{n})_{\varepsilon}=\mathbf{E}(\mathfrak{m}_{n})\cap \mathbf{E}(\mathfrak{m}_{\infty})_{\varepsilon}.\]
It follows from the definitions that \(\omega_{n}^{\varepsilon}\cdot\mathbf{E}(\mathfrak{m}_{n})_{\varepsilon}=0\) for every \(n\geqslant 0\) and \(\mathfrak{p}|p\); in particular, if \(p\) is inert in \(K\) then \(\mathbf{E}(\mathfrak{m})_{+}=0\) and \(\mathbf{E}(\mathfrak{m})_{-}=\mathbf{E}(\mathfrak{m})\), while if \(p\) is split in \(K\), we have \(\mathbf{E}(\mathfrak{m})_{+}=\mathbf{E}(\mathfrak{m})_{-}=\mathbf{E}(\mathfrak{m})\).
Denote by \(\Lambda_{O}\) the tensor product of \(\Lambda\) with \(O\). For every Galois extension \(\Psi/\Phi\) the group \(\mathbf{E}(\Psi)\) is a module over \(O[\operatorname{Gal}(\Psi/\Phi)]\). The next theorem, which elucidates the structure of the \(\Lambda_{O}\)-modules \(\mathbf{E}(\Phi_{n})\) and \(\mathbf{E}(\Phi_{n})_{\varepsilon}\), has been obtained by Iovita-Pollack [18] in the split case and by Burungale-Kobayashi-Ota [7] in the inert case. It shows that the \(\varepsilon\)-local points enjoy trace relations analogous to those satisfied by the families of Heegner points and Gross points intervening in the definition of the \(\varepsilon\)-\(p\)-adic \(L\)-functions.
**Theorem 5.1**.: _There exist an element \(\boldsymbol{d}_{\mathfrak{p},0}\in\mathbf{E}(\Phi)\), and elements \(\boldsymbol{d}_{\mathfrak{p},n}^{\varepsilon}\in\mathbf{E}(\Phi_{n})_{\varepsilon}\) for each \(n\geqslant 0\) and each \(\varepsilon\), satisfying the following properties._
1. _The element_ \(\boldsymbol{d}_{\mathfrak{p},0}\) _is a generator of the free_ \(O\)_-module_ \(\mathbf{E}(\Phi)\)_. We define_ \(\boldsymbol{d}_{\mathfrak{p},0}^{+}=\boldsymbol{d}_{\mathfrak{p},0}^{-}= \boldsymbol{d}_{\mathfrak{p},0}\) _if_ \(p\) _is split in_ \(K\) _and_ \(\boldsymbol{d}_{\mathfrak{p},0}^{+}=0\)_,_ \(\boldsymbol{d}_{\mathfrak{p},0}^{-}=\boldsymbol{d}_{\mathfrak{p},0}\) _if_ \(p\) _is inert in_ \(K\)_._
2. _Suppose that_ \(\varepsilon=(-1)^{n}\)_. The_ \(\Lambda_{O}\)_-module_ \(\mathbf{E}(\Phi_{n})_{\varepsilon}\) _is free of rank one over_ \(\Lambda_{O}/(\omega_{n}^{\varepsilon})\)_, generated by elements_ \(\boldsymbol{d}_{\mathfrak{p},n}^{\varepsilon}\) _for all_ \(n\geqslant 1\)_, for which the following trace relations hold:_ * \(\operatorname{Trace}_{n+2/n+1}(\boldsymbol{d}_{\mathfrak{p},n+2}^{\varepsilon} )=-\boldsymbol{d}_{\mathfrak{p},n}^{\varepsilon}\)_, for all_ \(n\geqslant 0\)_;_ * \(\operatorname{Trace}_{1/0}(\boldsymbol{d}_{\mathfrak{p},1}^{-})=u\cdot \boldsymbol{d}_{\mathfrak{p},0}\)_, for some unit_ \(u\in\mathbf{Z}_{p}^{\times}\)_._
3. _Suppose that_ \(\varepsilon=-(-1)^{n}\)_. Then_ \(\boldsymbol{d}_{\mathfrak{p},n}^{-\varepsilon}=\boldsymbol{d}_{\mathfrak{p},n -1}^{-\varepsilon}\in E(\Phi_{n-1})_{-\varepsilon}\) _for all_ \(n\geqslant 1\)_. Moreover, the_ \(\Lambda_{O}\)_-module_ \(\mathbf{E}(\Phi_{n})\) _is generated by_ \(\boldsymbol{d}_{\mathfrak{p},n}^{\varepsilon}\) _and_ \(\boldsymbol{d}_{\mathfrak{p},n}^{-\varepsilon}\) _for all_ \(n\geqslant 1\)_._
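The trace relations in Theorem 5.1(2) mirror the relations satisfied by the Gross points of Section 2.5: granting as in Section 4 that Equation (2.5) reads \(\mathrm{Trace}_{K_{n+2}/K_{n+1}}(P_{n+2}(L))=T_{p}(P_{n+1}(L))-P_{n}(L)\), applying \(\psi_{g}\) and using \(a_{p}(E)=0\) gives

\[\psi_{g}\big{(}\mathrm{Trace}_{K_{n+2}/K_{n+1}}(P_{n+2}(L))\big{)}=a_{p}(E)\cdot\psi_{g}\big{(}P_{n+1}(L)\big{)}-\psi_{g}\big{(}P_{n}(L)\big{)}=-\psi_{g}\big{(}P_{n}(L)\big{)},\]

the exact analogue of \(\operatorname{Trace}_{n+2/n+1}(\boldsymbol{d}_{\mathfrak{p},n+2}^{\varepsilon})=-\boldsymbol{d}_{\mathfrak{p},n}^{\varepsilon}\).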
### Selmer groups
Let \(k\in\mathbf{N}\cup\{\infty\}\). If \(k\in\mathbf{N}\) and \(L\in\mathscr{S}_{k}\), let \(g:\mathbb{T}_{N^{+},N^{-}L}\to\mathbf{Z}/p^{k}\mathbf{Z}\) be the level raising of \(f_{k}=f\pmod{p^{k}}\) at \(L\) (cf. Section 3.3). If \(k=\infty\), set \(L=1\) and \(g=f\). Fix an isomorphism of \(G_{K}\)-modules between \(T_{f,k}\) and the \(p\)-adic representation \(T_{g}\) associated with \(g\), which also fixes an isomorphism between \(A_{f,k}\) and \(A_{g}=\operatorname{Hom}_{\mathbf{Z}_{p}}(T_{g},\mu_{p^{\infty}})\). We often identify \(A_{f,k}\) with \(T_{f}\otimes_{\mathbf{Z}_{p}}(\mathbf{Q}_{p}/\mathbf{Z}_{p})_{p^{k}}\), hence \(A_{g}\) with \(T_{g}\otimes_{\mathbf{Z}_{p}}(\mathbf{Q}_{p}/\mathbf{Z}_{p})_{p^{k}}\), using the Weil pairing on \(E\) (with the convention \(p^{\infty}=0\)). Let \(\iota:\Lambda\to\Lambda\) be Iwasawa's main involution and let
\[\mathbf{T}_{g}=T_{g}\otimes_{\mathbf{Z}_{p}}\Lambda(\epsilon_{\infty}^{-1}),\]
\[\mathbf{A}_{g}=\operatorname{Hom}_{\operatorname{cont}}(\mathbf{T}_{g}^{ \iota},\mu_{p^{\infty}}),\]
where \(\epsilon_{\infty}:G_{K}\to\Lambda^{*}\) is the tautological representation and one writes \(M^{\iota}=M\otimes_{\Lambda,\iota}\Lambda\) for every \(\Lambda\)-module \(M\). We also define the scalar extensions \(\mathbf{T}_{g,\mathscr{O}}=\mathbf{T}_{g}\otimes_{\mathbf{Z}_{p}}\mathscr{O}\) and \(\mathbf{A}_{g,\mathscr{O}}=\mathbf{A}_{g}\otimes_{\mathbf{Z}_{p}}\mathscr{O}\). For every ideal \(\mathfrak{P}\) of \(\Lambda_{\mathscr{O}}\) set \(\mathscr{O}_{\mathfrak{P}}=\Lambda_{\mathscr{O}}/\mathfrak{P}\), \(T_{g,\mathscr{O}}(\mathfrak{P})=\mathbf{T}_{g,\mathscr{O}}/\mathfrak{P}\) and \(A_{g,\mathscr{O}}(\mathfrak{P})=\mathbf{A}_{g,\mathscr{O}}[\mathfrak{P}]\). Write \(T_{g}(\mathfrak{P})\) for \(T_{g,\mathbf{Z}_{p}}(\mathfrak{P})\) and \(A_{g}(\mathfrak{P})\) for \(A_{g,\mathbf{Z}_{p}}(\mathfrak{P})\). Then \(A_{g,\mathscr{O}}(\mathfrak{P})\) is isomorphic as a \(\Lambda_{\mathscr{O}}[G_{K}]\)-module to the Kummer dual \(\operatorname{Hom}_{\mathscr{O}}(T_{g,\mathscr{O}}(\mathfrak{P}^{\iota})^{\iota},\mu_{p^{\infty}}\otimes_{\mathbf{Z}_{p}}\mathscr{O})\) of \(T_{g,\mathscr{O}}(\mathfrak{P}^{\iota})^{\iota}\), where \(\mathfrak{P}^{\iota}=\iota(\mathfrak{P})\). For every finite prime \(w\) of \(K\), local Tate duality gives then a perfect \(\mathscr{O}\)-bilinear pairing
\[\langle-,-\rangle_{\mathfrak{P},w}:H^{1}(K_{w},T_{g,\mathscr{O}}(\mathfrak{P}) )\times H^{1}(K_{w},A_{g,\mathscr{O}}(\mathfrak{P}^{\iota}))\longrightarrow \mathscr{K}/\mathscr{O}\]
where \(\mathscr{K}=\operatorname{Frac}(\mathscr{O})\) is the fraction field of \(\mathscr{O}\), such that \(\langle\lambda\cdot x,y\rangle_{\mathfrak{P},w}=\langle x,\iota(\lambda)\cdot y \rangle_{\mathfrak{P},w}\) for every \(\lambda\in\Lambda_{\mathscr{O}}\), and every \(x\in H^{1}(K_{w},T_{g,\mathscr{O}}(\mathfrak{P}))\) and \(y\in H^{1}(K_{w},A_{g,\mathscr{O}}(\mathfrak{P}^{\iota}))\).
#### 5.2.1. Primes dividing \(p\)
Let \(\mathfrak{p}\) be a prime of \(K\) dividing \(p\) and let \(\varepsilon\in\{\emptyset,\pm\}\). For \(\varepsilon=\pm\), recall the groups \(H^{1}_{\operatorname{fin},\varepsilon}(K_{\infty,\mathfrak{p}},A_{f,k})\) defined in Section 5.1, and for \(\varepsilon=\emptyset\) define
\[H^{1}_{\operatorname{fin}}(K_{\infty,\mathfrak{p}},A_{f,k})=E(K_{\infty, \mathfrak{p}})\otimes_{\mathbf{Z}}(\mathbf{Q}_{p}/\mathbf{Z}_{p})_{p^{k}},\]
viewed as submodules of \(H^{1}(K_{\infty,\mathfrak{p}},A_{f,k})\) under the local Kummer map. Shapiro's Lemma yields a natural isomorphism of \(\Lambda\)-modules \(H^{1}(K_{\mathfrak{p}},\mathbf{A}_{g})\cong H^{1}(K_{\infty,\mathfrak{p}},A_{g})\), which one considers as an equality, and define \(H^{1}_{\operatorname{fin},\varepsilon}(K_{\mathfrak{p}},\mathbf{A}_{g})\) as the subgroup of \(H^{1}(K_{\mathfrak{p}},\mathbf{A}_{g})\) corresponding to \(H^{1}_{\operatorname{fin},\varepsilon}(K_{\infty,\mathfrak{p}},A_{f,k})\) via this equality. We then define \(H^{1}_{\operatorname{fin},\varepsilon}(K_{\mathfrak{p}},\mathbf{A}_{g, \mathscr{O}})\) as the image of \(H^{1}_{\operatorname{fin},\varepsilon}(K_{\mathfrak{p}},\mathbf{A}_{g})\otimes_{ \mathbf{Z}_{p}}\mathscr{O}\) via the canonical map
\[H^{1}(K_{\mathfrak{p}},\mathbf{A}_{g})\otimes_{\mathbf{Z}_{p}}\mathscr{O} \longrightarrow H^{1}(K_{\mathfrak{p}},\mathbf{A}_{g,\mathscr{O}})\]
(note that the last map is injective by the flatness of \(\mathscr{O}/\mathbf{Z}_{p}\)). For every ideal \(\mathfrak{P}\) of \(\Lambda\) define
\[H^{1}_{\operatorname{fin},\varepsilon}(K_{\mathfrak{p}},T_{g,\mathscr{O}}(\mathfrak{P}))\subset H^{1}(K_{\mathfrak{p}},T_{g,\mathscr{O}}(\mathfrak{P}))\]
as the orthogonal complement of \(H^{1}_{\text{fin},\varepsilon}(K_{\infty,\mathfrak{p}},A_{g,\mathscr{O}}(\mathfrak{P}^{\iota}))\) under the local Tate pairing \(\langle-,-\rangle_{\mathfrak{P},\mathfrak{p}}\). If \(\mathtt{M}_{g,\mathscr{O}}\) denotes either \(T_{g,\mathscr{O}}\) or \(A_{g,\mathscr{O}}\), let \(H^{1}_{\text{sing},\varepsilon}(K_{\mathfrak{p}},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\) be the quotient of \(H^{1}(K_{\mathfrak{p}},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\) by the subgroup \(H^{1}_{\text{fin},\varepsilon}(K_{\mathfrak{p}},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\) of finite classes, so that we have a canonical exact sequence
\[0\longrightarrow H^{1}_{\text{fin},\varepsilon}(K_{\mathfrak{p}},\mathtt{M}_{ g,\mathscr{O}}(\mathfrak{P}))\longrightarrow H^{1}(K_{\mathfrak{p}}, \mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\longrightarrow H^{1}_{\text{sing}, \varepsilon}(K_{\mathfrak{p}},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P})) \longrightarrow 0.\]
A global class \(x\in H^{1}(K,\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\) is said to be \(\varepsilon\)_-finite at \(\mathfrak{p}\)_ if \(\text{res}_{\mathfrak{p}}(x)\in H^{1}_{\text{fin},\varepsilon}(K_{\mathfrak{p} },\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\). For any element \(s\in H^{1}(K,\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\), denote \(\partial_{\mathfrak{p}}(s)\) the projection of the restriction of \(s\) at \(\mathfrak{p}\) to the singular quotient of \(H^{1}(K_{\mathfrak{p}},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\); we call \(\partial_{\mathfrak{p}}\) the _residue map_ at \(\mathfrak{p}\); also, write \(\partial_{p}=\oplus_{\mathfrak{p}|p}\partial_{\mathfrak{p}}\) and call \(\partial_{p}\) the _residue map at \(p\)_.
#### 5.2.2. Primes dividing \(N^{-}\)
Let \(\mathfrak{P}\) be an ideal of \(\Lambda_{\mathscr{O}}\) and let \(\ell\) be a rational prime dividing \(N^{-}\). Then \(\ell\) is inert in \(K/\mathbf{Q}\) and \(\ell\cdot\mathcal{O}_{K}\) splits completely in \(K_{\infty}/K\). As a consequence the \(G_{K_{\ell}}\)-representation \(T_{g,\mathscr{O}}(\mathfrak{P})\) is isomorphic to the base change \(T_{g,\mathscr{O}}\otimes_{\mathscr{O}}\mathscr{O}_{\mathfrak{P}}\) (with \(G_{K_{\ell}}\) acting trivially on \(\mathscr{O}_{\mathfrak{P}}\)). The elliptic curve \(E/K_{\ell}\) is a Tate curve, i.e. is isomorphic as a rigid analytic variety to the quotient of the multiplicative group \(\mathbf{G}_{m}/K_{\ell}\) by the lattice \(q_{\ell}^{\mathbf{Z}}\) generated by the Tate period \(q_{\ell}\in\ell\cdot\mathbf{Z}_{\ell}\) ([30, Chapter 5]). This gives a short exact sequence of \(G_{K_{\ell}}\)-modules
\[0\longrightarrow T^{(\ell)}_{f,k}\longrightarrow T_{f,k}\longrightarrow T^{[ \ell]}_{f,k}\longrightarrow 0,\]
where \(T^{(\ell)}_{f,k}\cong\mathbf{Z}_{p}/p^{k}(1)\) and \(T^{[\ell]}_{f,k}\cong\mathbf{Z}_{p}/p^{k}\), which in turn induces a short exact sequence of \(\mathscr{O}_{\mathfrak{P}}[G_{K_{\ell}}]\)-modules
\[0\longrightarrow T^{(\ell)}_{g,\mathscr{O}}(\mathfrak{P})\longrightarrow T _{g,\mathscr{O}}(\mathfrak{P})\longrightarrow T^{[\ell]}_{g,\mathscr{O}}( \mathfrak{P})\longrightarrow 0,\]
with \(T^{(\ell)}_{g,\mathscr{O}}(\mathfrak{P})\cong\mathscr{O}_{\mathfrak{P}}(1)\otimes_{\mathbf{Z}_{p}}\mathbf{Z}_{p}/p^{k}\) and \(T^{[\ell]}_{g,\mathscr{O}}(\mathfrak{P})\cong\mathscr{O}_{\mathfrak{P}}\otimes_{\mathbf{Z}_{p}}\mathbf{Z}_{p}/p^{k}\). Define \(A^{(\ell)}_{g,\mathscr{O}}(\mathfrak{P})\) and \(A^{[\ell]}_{g,\mathscr{O}}(\mathfrak{P})\) to be the Kummer duals of \(T^{[\ell]}_{g,\mathscr{O}}(\mathfrak{P}^{\iota})^{\iota}\) and \(T^{(\ell)}_{g,\mathscr{O}}(\mathfrak{P}^{\iota})^{\iota}\) respectively, so that one has an exact sequence of \(\mathscr{O}_{\mathfrak{P}}[G_{K_{\ell}}]\)-modules
\[0\longrightarrow A^{(\ell)}_{g,\mathscr{O}}(\mathfrak{P})\longrightarrow A _{g,\mathscr{O}}(\mathfrak{P})\longrightarrow A^{[\ell]}_{g,\mathscr{O}}( \mathfrak{P})\longrightarrow 0.\]
If \(\mathtt{M}_{g,\mathscr{O}}\) denotes either \(T_{g,\mathscr{O}}\) or \(A_{g,\mathscr{O}}\), define the _ordinary subspace_ of \(H^{1}(K_{\ell},\mathtt{M}_{g,\mathscr{O}})\) by
\[H^{1}_{\text{ord}}(K_{\ell},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))=\text{ Im}\left(H^{1}(K_{\ell},\mathtt{M}^{(\ell)}_{g,\mathscr{O}}(\mathfrak{P}))\longrightarrow H ^{1}(K_{\ell},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\right).\]
As is easily proved, \(H^{1}_{\text{ord}}(K_{\ell},T_{g,\mathscr{O}}(\mathfrak{P}))\) is the orthogonal complement of \(H^{1}_{\text{ord}}(K_{\ell},A_{g,\mathscr{O}}(\mathfrak{P}^{\iota}))\) under \(\langle-,-\rangle_{\mathfrak{P},\ell}\). A global class in \(H^{1}(K,\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\) is said to be _ordinary at \(\ell\)_ if its restriction at \(\ell\) belongs to the ordinary subspace \(H^{1}_{\text{ord}}(K_{\ell},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\).
#### 5.2.3. Primes dividing \(L\)
Let \(\mathfrak{P}\) be an ideal of \(\Lambda_{\mathscr{O}}\), and let \(\ell\) be a prime divisor of \(L\). As above \(\ell\cdot\mathcal{O}_{K}\) splits completely in \(K_{\infty}/K\) and \(T_{g,\mathscr{O}}(\mathfrak{P})=T_{g,\mathscr{O}}\otimes_{\mathscr{O}}\mathscr{O}_{\mathfrak{P}}\) as \(G_{K_{\ell}}\)-modules, with \(G_{K_{\ell}}\) acting trivially on the second factor. Lemma 3.1 then implies that the \(\mathscr{O}_{\mathfrak{P}}[G_{K_{\ell}}]\)-module \(T_{g,\mathscr{O}}(\mathfrak{P})\) is isomorphic to the direct sum of \(T^{(\ell)}_{g,\mathscr{O}}(\mathfrak{P})=\mathscr{O}_{\mathfrak{P}}(1)\otimes_{\mathbf{Z}_{p}}\mathbf{Z}_{p}/p^{k}\) and \(T^{[\ell]}_{g,\mathscr{O}}(\mathfrak{P})=\mathscr{O}_{\mathfrak{P}}\otimes_{\mathbf{Z}_{p}}\mathbf{Z}_{p}/p^{k}\) (where by definition \(\mathscr{O}_{\mathfrak{P}}(1)=\mathbf{Z}_{p}(1)\otimes_{\mathbf{Z}_{p}}\mathscr{O}_{\mathfrak{P}}\) as Galois modules). Let as above \(A^{(\ell)}_{g,\mathscr{O}}(\mathfrak{P})\) and \(A^{[\ell]}_{g,\mathscr{O}}(\mathfrak{P})\) be the Kummer duals of \(T^{[\ell]}_{g,\mathscr{O}}(\mathfrak{P}^{\iota})^{\iota}\) and \(T^{(\ell)}_{g,\mathscr{O}}(\mathfrak{P}^{\iota})^{\iota}\) respectively. If \(\mathtt{M}_{g,\mathscr{O}}\) denotes either \(T_{g,\mathscr{O}}\) or \(A_{g,\mathscr{O}}\), define the _ordinary subspace_ of \(H^{1}(K_{\ell},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\) by the equality
\[H^{1}_{\text{ord}}(K_{\ell},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))=H^{1}(K_{ \ell},\mathtt{M}^{(\ell)}_{g,\mathscr{O}}(\mathfrak{P})).\]
By Lemma 3.1, \(H^{1}_{\text{ord}}(K_{\ell},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))\) is also isomorphic to the _singular quotient_
\[H^{1}_{\text{sing}}(K_{\ell},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))=H^{1}(K_{ \ell},\mathtt{M}_{g,\mathscr{O}}(\mathfrak{P}))/H^{1}_{\text{fin}}(K_{\ell}, \mathtt{M}_{g,\mathscr{O}}(\mathfrak{P})).\]
Moreover, note that \(H^{1}_{\text{ord}}(K_{\ell},T_{g,\mathscr{O}}(\mathfrak{P}))\) is again the orthogonal complement of \(H^{1}_{\text{ord}}(K_{\ell},A_{g,\mathscr{O}}(\mathfrak{P}^{\iota}))\) under \(\langle-,-\rangle_{\mathfrak{P},\ell}\).
#### 5.2.4. Primes outside \(LN^{-}p\)
Let \(w\) be a prime of \(K\) which does not divide \(LN^{-}p\), let \(\mathfrak{P}\) be an ideal of \(\Lambda_{\mathscr{O}}\) and let \(\mathsf{M}_{g,\mathscr{O}}\) denote either \(T_{g,\mathscr{O}}\) or \(A_{g,\mathscr{O}}\). A global class in \(H^{1}(K,\mathsf{M}_{g,\mathscr{O}}(\mathfrak{P}))\) is _finite_ (resp., _trivial_) _at \(w\)_ if its restriction at \(w\) belongs to the finite subspace
\[H^{1}_{\mathrm{fin}}(K_{w},\mathsf{M}_{g,\mathscr{O}}(\mathfrak{P}))=H^{1}(G_{ K_{w}}/I_{K_{w}},\mathsf{M}_{g,\mathscr{O}}(\mathfrak{P})^{I_{K_{w}}})\]
of \(H^{1}(K_{w},\mathsf{M}_{g,\mathscr{O}}(\mathfrak{P}))\) (resp., is zero).
#### 5.2.5. Discrete and compact Selmer groups
Let \(S\) be a positive squarefree integer and let \(\mathfrak{P}\) be an ideal of \(\Lambda_{\mathscr{O}}\). The _discrete Selmer group_
\[\mathrm{Sel}^{S}_{\varepsilon}(K,A_{g,\mathscr{O}}(\mathfrak{P}))\subset H^{1 }(K,A_{g,\mathscr{O}}(\mathfrak{P}))\]
is defined to be the \(\mathscr{O}_{\mathfrak{P}}\)-module of global cohomology classes in \(H^{1}(K,A_{g,\mathscr{O}}(\mathfrak{P}))\) which are
* \(\varepsilon\)-finite at primes dividing \(p\);
* ordinary at primes dividing \(LN^{-}\);
* trivial at primes dividing \(SN^{+}\);
* finite outside \(SLNp\).
The _compact Selmer group_
\[\mathfrak{Sel}^{\varepsilon}_{S}(K,T_{g,\mathscr{O}}(\mathfrak{P}))\subset H^ {1}(K,T_{g,\mathscr{O}}(\mathfrak{P}))\]
is the \(\mathscr{O}_{\mathfrak{P}}\)-module of global cohomology classes in \(H^{1}(K,T_{g,\mathscr{O}}(\mathfrak{P}))\) which are
* \(\varepsilon\)-finite at primes dividing \(p/\mathrm{g.c.d.}(S,p)\);
* ordinary at primes dividing \(LN^{-}/\mathrm{g.c.d.}(S,LN^{-})\);
* finite outside \(SLNp\).
Write \(\mathrm{Sel}_{\varepsilon}(K,A_{g,\mathscr{O}}(\mathfrak{P}))\) and \(\mathfrak{Sel}^{\varepsilon}(K,T_{g,\mathscr{O}}(\mathfrak{P}))\) as shorthands for \(\mathrm{Sel}^{1}_{\varepsilon}(K,A_{g,\mathscr{O}}(\mathfrak{P}))\) and \(\mathfrak{Sel}^{\varepsilon}_{1}(K,T_{g,\mathscr{O}}(\mathfrak{P}))\) respectively. If \(\mathfrak{P}\) is the zero ideal, so that \(T_{g,\mathscr{O}}(\mathfrak{P})=\mathbf{T}_{g,\mathscr{O}}\) and \(A_{g,\mathscr{O}}(\mathfrak{P})=\mathbf{A}_{g,\mathscr{O}}\), set
\[\mathrm{Sel}^{S}_{\varepsilon}(K_{\infty},A_{g,\mathscr{O}})=\mathrm{Sel}^{S}_{\varepsilon}(K,\mathbf{A}_{g,\mathscr{O}}),\]
\[\mathfrak{Sel}^{\varepsilon}_{S}(K_{\infty},T_{g,\mathscr{O}})=\mathfrak{Sel}^{\varepsilon}_{S}(K,\mathbf{T}_{g,\mathscr{O}}).\]
Note that if \(p\mid S\) then \(\mathrm{Sel}^{S}_{\varepsilon}(K,\mathbf{A}_{g,\mathscr{O}})=\mathrm{Sel}^{S}(K,\mathbf{A}_{g,\mathscr{O}})\) and \(\mathfrak{Sel}^{\varepsilon}_{S}(K,\mathbf{T}_{g,\mathscr{O}})=\mathfrak{Sel}_{S}(K,\mathbf{T}_{g,\mathscr{O}})\).
### Local properties
Assume throughout the paper that the following hypothesis holds.
**Hypothesis 5.2**.:
1. If \(E/\mathbf{Q}_{p}\) has good ordinary reduction, then \(a_{p}(E)\not\equiv\,\pm 1\pmod{p}\).
2. If \(q\) is a prime dividing \(N^{+}\), then \(H^{0}(I_{\mathbf{Q}_{q}},E_{p})=0\), where \(I_{\mathbf{Q}_{q}}\) denotes, as above, the inertia subgroup of \(\operatorname{Gal}(\bar{\mathbf{Q}}_{q}/\mathbf{Q}_{q})\).
If \(E\) has (good) ordinary reduction at \(p\), set \(\varepsilon=\emptyset\) and \(H^{1}_{\mathrm{fin},\varepsilon}=H^{1}_{\mathrm{fin}}\). If \(E\) has (good) supersingular reduction at \(p\), let \(\varepsilon\) denote either \(+\) or \(-\).
#### 5.3.1. Primes dividing \(p\)
Fix a prime \(\mathfrak{p}\) of \(K\) dividing \(p\). We first investigate local properties of points, and then we consider finite and singular subgroups.
**Proposition 5.3**.: _For every nonnegative integer \(n\), the restriction map induces an isomorphism_
\[E(K_{n,\mathfrak{p}})_{\varepsilon}\otimes_{\mathbf{Z}}\mathbf{Q}_{p}/\mathbf{ Z}_{p}\cong\big{(}E(K_{\infty,\mathfrak{p}})_{\varepsilon}\otimes_{\mathbf{Z}} \mathbf{Q}_{p}/\mathbf{Z}_{p}\big{)}[\omega_{n}^{\varepsilon}].\]
Proof.: If \(E/\mathbf{Q}_{p}\) has good supersingular reduction, this follows from Theorem 5.1. More precisely, the Pontrjagin dual of the restriction map
\[E(K_{n,\mathfrak{p}})_{\varepsilon}\otimes_{\mathbf{Z}}\mathbf{Q}_{p}/\mathbf{ Z}_{p}{\longrightarrow}\big{(}E(K_{m,\mathfrak{p}})_{\varepsilon}\otimes_{ \mathbf{Z}}\mathbf{Q}_{p}/\mathbf{Z}_{p}\big{)}[\omega_{n}^{\varepsilon}]\]
is a surjective morphism of \(\Lambda_{O}\)-modules, for all integers \(m\geqslant n\). Since Theorem 5.1 implies that its source and target are finite free \(\mathbf{Z}_{p}\)-modules of the same rank (indeed both are isomorphic to \(\Lambda_{O}/\omega_{n}^{\varepsilon}\)), it is an isomorphism.
Assume that \(E/\mathbf{Q}_{p}\) has good ordinary reduction and consider the restriction maps
\[r_{n}:H^{1}(K_{n,\mathfrak{p}},A_{f})\longrightarrow H^{1}(K_{\infty, \mathfrak{p}},A_{f})[\omega_{n}],\] \[r_{n}^{\mathrm{sing}}:\frac{H^{1}(K_{n,\mathfrak{p}},A_{f})}{E(K_{n, \mathfrak{p}})\otimes_{\mathbf{Z}}\mathbf{Q}_{p}/\mathbf{Z}_{p}}\longrightarrow \frac{H^{1}(K_{\infty,\mathfrak{p}},A_{f})}{E(K_{\infty,\mathfrak{p}})\otimes_{ \mathbf{Z}}\mathbf{Q}_{p}/\mathbf{Z}_{p}}.\]
Lemma 3.4 of [12] proves that the kernel of \(r_{n}^{\text{sing}}\) has cardinality \(|E(\mathbf{F}_{p})|^{2}\). (Loc. cit. considers the cyclotomic \(\mathbf{Z}_{p}\)-extension \(F_{\infty}/F\) of a finite extension \(F/\mathbf{Q}_{p}\), but the argument works for every \(\mathbf{Z}_{p}\)-extension \(F_{\infty}/F\) such that the inertia subgroup of \(\operatorname{Gal}(F_{\infty}/F)\) has finite index, cf. [12, Proposition 2.4].) The inflation-restriction sequence shows that \(r_{n}\) is surjective and that its kernel is isomorphic to a quotient of \(H^{0}(K_{\infty,\mathfrak{p}},A_{f})=E(K_{\infty,\mathfrak{p}})_{p^{\infty}}\). Hypothesis 5.2(1) then implies that \(r_{n}\) is an isomorphism and that \(r_{n}^{\text{sing}}\) is injective. The statement follows.
Fix an ideal \(\mathfrak{P}\) of \(\Lambda\) generated by a regular sequence.
**Proposition 5.4**.:
1. \(H^{1}_{\operatorname{fin},\varepsilon}(K_{\mathfrak{p}},T_{f,\mathscr{O}}( \mathfrak{P}))\) _and_ \(H^{1}_{\operatorname{sing},\varepsilon}(K_{\mathfrak{p}},T_{f,\mathscr{O}}( \mathfrak{P}))\) _are free_ \(\Lambda_{\mathscr{O}}/\mathfrak{P}\)_-modules of rank_ \([K_{\mathfrak{p}}:\mathbf{Q}_{p}]\)_._
2. \(H^{1}_{\operatorname{fin},\varepsilon}(K_{\mathfrak{p}},A_{f,\mathscr{O}}( \mathfrak{P}))\) _and_ \(H^{1}_{\operatorname{sing},\varepsilon}(K_{\mathfrak{p}},A_{f,\mathscr{O}}( \mathfrak{P}))\) _are co-free_ \(\Lambda_{\mathscr{O}}/\mathfrak{P}\)_-modules of rank_ \([K_{\mathfrak{p}}:\mathbf{Q}_{p}]\)_._
Proof.: Since \(H^{1}_{\operatorname{fin},\varepsilon}(K_{\mathfrak{p}},T_{f,\mathscr{O}}(\mathfrak{P}))\) and \(H^{1}_{\operatorname{sing},\varepsilon}(K_{\mathfrak{p}},T_{f,\mathscr{O}}(\mathfrak{P}))\) are isomorphic to the Pontryagin duals of \(H^{1}_{\operatorname{sing},\varepsilon}(K_{\mathfrak{p}},A_{f,\mathscr{O}}(\mathfrak{P}^{\iota}))^{\iota}\) and \(H^{1}_{\operatorname{fin},\varepsilon}(K_{\mathfrak{p}},A_{f,\mathscr{O}}(\mathfrak{P}^{\iota}))^{\iota}\) respectively, it is sufficient to prove (2). The proof is divided into three steps.
_Step 1._ If \(\mathscr{D}_{\Lambda}\) denotes the Pontrjagin dual of \(\Lambda\), one has isomorphisms of \(\Lambda\)-modules
\[\begin{split} H^{1}_{\operatorname{fin},\varepsilon}(K_{ \mathfrak{p}},\mathbf{A}_{f})&\cong\mathscr{D}^{[K_{\mathfrak{p} }:\mathbf{Q}_{p}]}_{\Lambda},\\ H^{1}_{\operatorname{sing},\varepsilon}(K_{\mathfrak{p}},\mathbf{A }_{f})&\cong\mathscr{D}^{[K_{\mathfrak{p}}:\mathbf{Q}_{p}]}_{ \Lambda}.\end{split} \tag{5.1}\]
If \(E/\mathbf{Q}_{p}\) has good supersingular reduction and \(p\) splits in \(K/\mathbf{Q}\), (the duals of) Equations (5.1) are proved in Proposition 4.16 of [18], which in turn is a slight generalisation of Theorem 6.2 of [19] (see also [19, Proposition 9.2]). If \(E/\mathbf{Q}_{p}\) has good supersingular reduction and \(p\) is inert in \(K/\mathbf{Q}\), this is a consequence of Rubin's conjecture proved in [7]: if, as in Section 5.1, \(O\) denotes the valuation ring of \(\Phi=K_{\mathfrak{p}}\), it is proved in [27] that \(H^{1}(K_{\mathfrak{p}},\mathbf{A}_{f})\) is a co-free \(\Lambda_{O}\)-module of rank two and, as a consequence of Rubin's conjecture, the Pontryagin dual of \(H^{1}(K_{\mathfrak{p}},\mathbf{A}_{f})\) is the direct sum of the Pontryagin duals of \(H^{1}_{\operatorname{fin},\pm}(K_{\mathfrak{p}},\mathbf{A}_{f})\), each of the latter co-free of co-rank one over \(\Lambda_{O}\).
If \(E/\mathbf{Q}_{p}\) has good ordinary reduction, the representation \(T_{f}\) is ordinary at \(p\), _i.e._ there exists a short exact sequence of \(\mathbf{Z}_{p}[G_{\mathbf{Q}_{p}}]\)-modules
\[0\longrightarrow T_{f}^{\bullet}\longrightarrow T_{f}\longrightarrow T_{f}^{ \circ}\longrightarrow 0,\]
arising from the reduction modulo \(p\) on \(E(\bar{\mathbf{Q}}_{p})\). More precisely let \(\alpha,\beta\in\mathbf{Z}_{p}\) be the roots of the Hecke polynomial \(X^{2}-a_{p}(E)X+p\). Since \(a_{p}(E)\) is a \(p\)-adic unit, one can assume \(\alpha\in\mathbf{Z}_{p}^{*}\) and \(\beta\in p\mathbf{Z}_{p}\). Then \(T_{f}^{\bullet}\cong\mathbf{Z}_{p}(\chi_{\text{cyc}}\cdot\psi^{-1})\) and \(T_{f}^{\circ}\cong\mathbf{Z}_{p}(\psi)\), where \(\psi:G_{\mathbf{Q}_{p}}\rightarrow\mathbf{Z}_{p}^{*}\) is the unramified character which sends an arithmetic Frobenius to \(\alpha\). Set \(\mathbf{T}_{f}^{\star}=T_{f}^{\star}\otimes_{\mathbf{Z}_{p}}\Lambda(\epsilon_{\infty}^{-1})\) for \(\star\in\{\bullet,\circ\}\), so that there is an exact sequence of \(\Lambda[G_{K}]\)-modules \(\mathbf{T}_{f}^{\bullet}\hookrightarrow\mathbf{T}_{f}\twoheadrightarrow\mathbf{T}_{f}^{\circ}\). According to a result of Greenberg (cf. Proposition 2.4 of [12])
\[H^{1}_{\operatorname{fin}}(K_{\mathfrak{p}},\mathbf{T}_{f})=\text{Image}\big{(}H^ {1}(K_{\mathfrak{p}},\mathbf{T}_{f}^{\bullet})\longrightarrow H^{1}(K_{ \mathfrak{p}},\mathbf{T}_{f})\big{)}\cong H^{1}(K_{\mathfrak{p}},\mathbf{T}_{f}^ {\bullet}). \tag{5.2}\]
Let \(I\) be the augmentation ideal of \(\Lambda\). Since \(\mathbf{T}_{f}^{\bullet}/I=T_{f}^{\bullet}\) and \(H^{0}(K_{\mathfrak{p}},T_{f}^{\bullet})=0\), one has \(H^{1}(K_{\mathfrak{p}},\mathbf{T}_{f}^{\bullet})[I]=0\). Moreover \(H^{1}(K_{\mathfrak{p}},\mathbf{T}_{f}^{\bullet})/I\) is a free \(\mathbf{Z}_{p}\)-module, because it is isomorphic to a submodule of the free \(\mathbf{Z}_{p}\)-module \(H^{1}(K_{\mathfrak{p}},T_{f}^{\bullet})\). This implies that \(H^{1}(K_{\mathfrak{p}},\mathbf{T}_{f}^{\bullet})\) is a free \(\Lambda\)-module of rank \([K_{\mathfrak{p}}:\mathbf{Q}_{p}]\), hence so is \(H^{1}_{\operatorname{fin}}(K_{\mathfrak{p}},\mathbf{T}_{f})\) by Equation (5.2). The short exact sequence \(\mathbf{T}_{f}^{\bullet}\hookrightarrow\mathbf{T}_{f}\twoheadrightarrow\mathbf{T}_{f}^ {\circ}\) and Equation (5.2) induce an exact sequence of \(\Lambda\)-modules
\[0\longrightarrow H^{1}_{\operatorname{sing}}(K_{\mathfrak{p}},\mathbf{T}_{f}) \longrightarrow H^{1}(K_{\mathfrak{p}},\mathbf{T}_{f}^{\circ})\longrightarrow H^{2}(K_{ \mathfrak{p}},\mathbf{T}_{f}^{\bullet})\longrightarrow 0,\]
where the zero on the right follows from \(H^{2}(K_{\mathfrak{p}},\mathbf{T}_{f})/I\cong H^{2}(K_{\mathfrak{p}},T_{f})=0\) and Nakayama's Lemma. Hypothesis 5.2(1) implies \(H^{2}(K_{\mathfrak{p}},T_{f}^{\bullet})=0\), hence \(H^{2}(K_{\mathfrak{p}},\mathbf{T}_{f}^{\bullet})=0\). Then \(H^{1}_{\operatorname{sing}}(K_{\mathfrak{p}},\mathbf{T}_{f})\) is isomorphic to \(H^{1}(K_{\mathfrak{p}},\mathbf{T}_{f}^{\circ})\). Moreover \(H^{0}(K_{\mathfrak{p}},T_{f}^{\circ})=0\) and another application of Hypothesis 5.2(1) shows that \(H^{1}(K_{\mathfrak{p}},T_{f}^{\circ})\) is a free \(\mathbf{Z}_{p}\)-module. As above this implies that \(H^{1}(K_{\mathfrak{p}},\mathbf{T}_{f}^{\circ})\) is a free \(\Lambda\)-module of rank \([K_{\mathfrak{p}}:\mathbf{Q}_{p}]\), hence so is \(H^{1}_{\operatorname{sing}}(K_{\mathfrak{p}},\mathbf{T}_{f})\).
_Step 2._ The inclusion \(A_{f,\mathscr{O}}(\mathfrak{P})\rightarrow\mathbf{A}_{f,\mathscr{O}}\) induces isomorphisms of \(\Lambda_{\mathscr{O}}/\mathfrak{P}\)-modules
\[H^{1}(K_{\mathfrak{p}},A_{f,\mathscr{O}}(\mathfrak{P}))\cong H^{1}(K_{\mathfrak{p}},\mathbf{A}_{f,\mathscr{O}})[\mathfrak{P}],\]
\[H^{1}_{\text{fin},\varepsilon}(K_{\mathfrak{p}},A_{f,\mathscr{O}}(\mathfrak{P})) \cong H^{1}_{\text{fin},\varepsilon}(K_{\mathfrak{p}},\mathbf{A}_{f,\mathscr{O}}) [\mathfrak{P}].\]
If \(E\) has good supersingular reduction the \(G_{K_{\mathfrak{p}}}=\operatorname{Gal}(\bar{K}_{\mathfrak{p}}/K_{\mathfrak{p}})\)-representation \(E_{p}\) is irreducible (see for example [29, Proposition 12]). If \(E/\mathbf{Q}_{p}\) has good ordinary reduction, the kernel of the reduction modulo \(p\) on \(E_{p}\) is isomorphic to \(\mathbf{F}_{p}(1)\) as a representation of the inertia subgroup of \(G_{K_{\mathfrak{p}}}\). Then Hypothesis 5.2(1) implies that \(H^{0}(K_{\mathfrak{p}},E_{p})\) vanishes in all cases. Because \(\mathbf{A}_{f}[\mathfrak{m}_{\Lambda}]\) is isomorphic to \(E_{p}\) this gives \(H^{0}(K_{\mathfrak{p}},\mathbf{A}_{f})=0\), and therefore \(H^{0}(K_{\mathfrak{p}},\mathbf{A}_{f,\mathscr{O}})=0\), which in turn easily implies that \(H^{1}(K_{\mathfrak{p}},A_{f,\mathscr{O}}(\mathfrak{P}))\) is isomorphic to the \(\mathfrak{P}\)-torsion submodule of \(H^{1}(K_{\mathfrak{p}},\mathbf{A}_{f,\mathscr{O}})\). The claim follows directly from this and the definitions.
_Step 3._ Since \(H^{1}(K_{\mathfrak{p}},\mathbf{A}_{f})\) is a co-free \(\Lambda\)-module of rank \(2[K_{\mathfrak{p}}:\mathbf{Q}_{p}]\) (_cf. Step 1_), it follows from _Step 2_ and the flatness of \(\mathscr{O}/\mathbf{Z}_{p}\) that \(H^{1}(K_{\mathfrak{p}},A_{f,\mathscr{O}}(\mathfrak{P}))\) is a co-free \(\Lambda_{\mathscr{O}}/\mathfrak{P}\)-module of rank \(2[K_{\mathfrak{p}}:\mathbf{Q}_{p}]\). To conclude the proof it is then sufficient to show that \(H^{1}_{\text{fin},\varepsilon}(K_{\mathfrak{p}},A_{f,\mathscr{O}}(\mathfrak{P }))\) is a co-free \(\Lambda_{\mathscr{O}}/\mathfrak{P}\)-module of rank \([K_{\mathfrak{p}}:\mathbf{Q}_{p}]\). This is a consequence of the previous two steps.
**Corollary 5.5**.: _Let \(\mathscr{F}\) denote one of the symbols \(\emptyset\), [...]_
### Control theorems
Let \(\mathfrak{P}\) be a principal ideal of \(\Lambda_{\mathscr{O}}\), which is either zero or generated by an element coprime with \(p\). Let \(1\leqslant t\leqslant k\) with \(k\in\mathbf{N}\cup\{\infty\}\) and denote by \(h\in S_{2}(N^{+},LN^{-};\mathbf{Z}/p^{t}\mathbf{Z})\) the reduction of \(g\) modulo \(p^{t}\).
**Proposition 5.8**.: _Let \(S\in\mathscr{S}_{k}\) be a (possibly empty) squarefree product of \(k\)-admissible primes. Then_
\[\operatorname{Sel}_{\varepsilon}^{S}(K,A_{h,\mathscr{O}}(\mathfrak{P})) \cong\operatorname{Sel}_{\varepsilon}^{S}(K,\mathbf{A}_{g,\mathscr{O}})[ \mathfrak{P},p^{t}].\]
Proof.: Identify as usual \(A_{f,k}\) with \(A_{g}\), hence \(A_{f,t}\) with \(A_{h}\). Let \(\mathfrak{G}=\operatorname{Gal}(K_{SLNp}/K)\) be the Galois group of the maximal algebraic extension of \(K\) which is unramified at every finite place \(w\nmid SLNp\) of \(K\). Since \(H^{0}(\mathfrak{G},E_{p})\) vanishes by Hypothesis 1.1(1), the same is true for the \(\mathfrak{G}\)-module \(E_{p}\otimes_{\mathbf{Z}_{p}}\mathscr{O}\), and therefore for every ideal \(J\) of \(\Lambda_{\mathscr{O}}\) generated by a regular sequence the natural map gives an isomorphism \(H^{1}(\mathfrak{G},\mathbf{A}_{f,\mathscr{O}}[J])\cong H^{1}(\mathfrak{G},\mathbf{A}_{f,\mathscr{O}})[J]\) (cf. Step 2 in the proof of Proposition 5.4). As a consequence \(H^{1}(\mathfrak{G},A_{h,\mathscr{O}}(\mathfrak{P}))\) is isomorphic to the \((\mathfrak{P},p^{t})\)-torsion submodule of \(H^{1}(\mathfrak{G},\mathbf{A}_{g,\mathscr{O}})\). This implies that the natural map \(\operatorname{Sel}_{\varepsilon}^{S}(K,A_{h,\mathscr{O}}(\mathfrak{P}))\to\operatorname{Sel}_{\varepsilon}^{S}(K,\mathbf{A}_{g,\mathscr{O}})[\mathfrak{P},p^{t}]\) is injective, and its cokernel is isomorphic to a submodule of the kernel of
\[\oplus_{w|p}H^{1}_{\operatorname{sing},\varepsilon}(K_{w},A_{h,\mathscr{O}}( \mathfrak{P}))\oplus_{w|\frac{LN^{-}}{L,S}}H^{1}(K_{w},A_{h,\mathscr{O}}^{[w] }(\mathfrak{P}))\oplus_{w|SN^{+}}H^{1}(K_{w},A_{h,\mathscr{O}}(\mathfrak{P} ))\longrightarrow\]
\[\longrightarrow\oplus_{w|p}H^{1}_{\operatorname{sing},\varepsilon}(K_{w}, \mathbf{A}_{g,\mathscr{O}})\oplus_{w|\frac{LN^{-}}{L,S}}H^{1}(K_{w},\mathbf{A }_{g,\mathscr{O}}^{[w]})\oplus_{w|SN^{+}}H^{1}(K_{w},\mathbf{A}_{g,\mathscr{O }}).\]
The proposition then follows from Corollary 5.5(2), Lemma 5.6 and Lemma 5.7.
**Proposition 5.9**.: _Let \(S\) be an integer coprime with \(LNp\). The canonical map_
\[\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{g,\mathscr{O}})\otimes_{ \Lambda_{\mathscr{O}}}\Lambda_{\mathscr{O}}/\mathfrak{P}\longrightarrow \mathfrak{Sel}_{S}^{\varepsilon}(K,T_{g,\mathscr{O}}(\mathfrak{P}))\]
_is injective._
Proof.: Let \(\mathfrak{G}\) be the Galois group of the maximal algebraic extension of \(K\) unramified outside \(SLNp\infty\). Since \(\Lambda_{\mathscr{O}}/p^{k}\) has no nontrivial \(\mathfrak{P}\)-torsion the morphism
\[H^{1}(\mathfrak{G},\mathbf{T}_{g,\mathscr{O}})\otimes_{\Lambda_{\mathscr{O}}} \Lambda_{\mathscr{O}}/\mathfrak{P}\longrightarrow H^{1}(\mathfrak{G},\mathbf{ T}_{g,\mathscr{O}}(\mathfrak{P}))\]
is injective. Moreover the cokernel of the inclusion \(\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{g,\mathscr{O}})\to H^{1}( \mathfrak{G},\mathbf{T}_{g,\mathscr{O}})\) is isomorphic to a submodule of
\[\oplus_{w|p}H^{1}_{\operatorname{sing},\varepsilon}(K_{w},\mathbf{T}_{g, \mathscr{O}})\oplus_{w|LN^{-}}H^{1}(K_{w},\mathbf{T}_{g,\mathscr{O}}^{[w]}). \tag{5.3}\]
To prove the proposition it is then sufficient to show that each summand of the direct sum (5.3) has no non-trivial \(\mathfrak{P}\)-torsion. For \(H^{1}_{\operatorname{sing},\varepsilon}(K_{w},\mathbf{T}_{g,\mathscr{O}})\) this is a consequence of Proposition 5.4. Assume that \(w=\ell\cdot\mathcal{O}_{K}\) for a rational prime \(\ell\) dividing \(LN^{-}\). In this case \(\mathbf{T}_{g,\mathscr{O}}^{[w]}\) is isomorphic to \(\Lambda_{\mathscr{O}}\otimes_{\mathscr{O}}\mathscr{O}/p^{k}\mathscr{O}\). Then \(H^{1}(K_{w},\mathbf{T}_{g,\mathscr{O}}^{[w]})\) is isomorphic to \(H^{1}(K_{\ell},\mathscr{O}/p^{k}\mathscr{O})\otimes_{\mathscr{O}}\Lambda_{\mathscr{O}}\), hence \(H^{1}(K_{w},\mathbf{T}_{g,\mathscr{O}}^{[w]})[\mathfrak{P}]=0\).
### Global freeness
Assume in this section that \(k<\infty\) and let \(\bar{g}\in S_{2}(N^{+},LN^{-};\mathbf{F}_{p})\) be the reduction of \(g\) modulo \(p\). Let also \(\mathfrak{m}_{\Lambda_{\mathscr{O}}}\) denote the maximal ideal of \(\Lambda_{\mathscr{O}}\).
**Definition 5.10**.: A _freeing set relative to \(g\)_ is an integer \(S\in\mathscr{S}_{k}\) coprime with \(L\) such that the Selmer group \(\operatorname{Sel}_{\varepsilon}^{S}(K,A_{\bar{g},\mathscr{O}})=\operatorname {Sel}_{\varepsilon}^{S}(K,\mathbf{A}_{g,\mathscr{O}}[\mathfrak{m}_{\Lambda_{ \mathscr{O}}}])\) is trivial.
An easy generalization of Theorem 3.2 of [3] guarantees the existence of infinitely many freeing sets relative to \(g\) (see also the proof of [4, Lemma 2.23]). Let \(n\) denote either a nonnegative integer or \(\infty\). Set \(\omega_{\infty}=0\), \(I_{\mathscr{O},n}=\omega_{n}\cdot\Lambda_{\mathscr{O}}\), \(\Lambda_{\mathscr{O},n,k}=\Lambda_{\mathscr{O}}/(I_{\mathscr{O},n},p^{k})\) and
\[\mathfrak{Sel}_{S}^{\varepsilon}(K_{n},T_{g,\mathscr{O}})=\mathfrak{Sel}_{S}^{\varepsilon}(K,T_{g,\mathscr{O}}(I_{\mathscr{O},n})),\]
\[\mathfrak{Sel}_{Sp}(K_{n},T_{g,\mathscr{O}})=\mathfrak{Sel}_{Sp}(K,T_{g,\mathscr{O} }(I_{\mathscr{O},n})).\]
**Proposition 5.11**.: _Let \(S\) be a freeing set relative to \(g\) and set \(\delta(S)=\#\{\text{prime divisors of }S\}\)._
1. _The Selmer group_ \(\mathfrak{Sel}_{S}^{\varepsilon}(K_{n},T_{g,\mathscr{O}})\) _is a free_ \(\Lambda_{\mathscr{O},n,k}\)_-module of rank_ \(\delta(S)\)_._
2. _The Selmer group_ \(\mathfrak{Sel}_{Sp}(K_{n},T_{g,\mathscr{O}})\) _is a free_ \(\Lambda_{\mathscr{O},n,k}\)_-module of rank_ \(\delta(S)+2\)_._
Proof.: If \(E/\mathbf{Q}_{p}\) has ordinary reduction this is a variant of [3, Proposition 3.3] (see also Section 3 of [4]). We use the computations of the preceding sections to give a proof which works in our more general setting. Without loss of generality assume \(n\neq\infty\) throughout the proof.
_Step 1._ Let \(\operatorname{Sel}_{\varepsilon}^{S}(K_{n},A_{g,\mathscr{O}})=\operatorname{ Sel}_{\varepsilon}^{S}(K,A_{g,\mathscr{O}}(I_{\mathscr{O},n}))\). We show that
\[\operatorname{Sel}_{\varepsilon}^{S}(K_{n},A_{g,\mathscr{O}})=0. \tag{5.4}\]
Indeed Proposition 5.8 yields
\[\operatorname{Sel}_{\varepsilon}^{S}(K,A_{\bar{g},\mathscr{O}})=\operatorname {Sel}_{\varepsilon}^{S}(K,\mathbf{A}_{g,\mathscr{O}})[\mathfrak{m}_{\Lambda_{ \mathscr{O}}}]=\operatorname{Sel}_{\varepsilon}^{S}(K_{n},A_{g,\mathscr{O}}) [\mathfrak{m}_{\Lambda_{\mathscr{O}}}].\]
Since \(\operatorname{Sel}_{\varepsilon}^{S}(K,A_{\bar{g},\mathscr{O}})\) is trivial by the definition of freeing set, Equation (5.4) follows from Nakayama's Lemma.
_Step 2._ We show that
\[\mathfrak{Sel}_{Sp}(K,T_{g,1,\mathscr{O}})\cong\mathfrak{Sel}_{Sp}(K_{n},T_{g,k,\mathscr{O}})[\mathfrak{m}_{\Lambda_{\mathscr{O}}}]. \tag{5.5}\]
Denote by \(K_{SLNp}\) the maximal algebraic extension of \(K\) which is unramified outside \(SLNp\) and by \(\mathfrak{G}_{s}\) the Galois group of \(K_{SLNp}/K_{s}\), for every \(0\leqslant s\leqslant\infty\). If one identifies \(H^{1}(\mathfrak{G}_{0},T_{f,\mathscr{O}}(I_{\mathscr{O},n,k}))\) with \(H^{1}(\mathfrak{G}_{n},T_{f,k,\mathscr{O}})\) under the Shapiro isomorphism, then
\[\mathfrak{Sel}_{Sp}(K_{n},T_{g,k,\mathscr{O}})=\ker\Big{(}H^{1}(\mathfrak{G}_ {n},T_{f,k,\mathscr{O}})\longrightarrow\bigoplus_{\mathfrak{l}\mid LN^{-}}H^{ 1}(K_{n,\mathfrak{l}},T_{f,k,\mathscr{O}}^{[\ell]})\Big{)},\]
where the direct sum is taken over the primes \(\mathfrak{l}\) of \(K_{n}\) which divide \(LN^{-}\). Hypothesis 1.1(1) guarantees that \(H^{0}(K_{s},T_{f,r,\mathscr{O}})\) vanishes for every \(r\geqslant 1\) and \(s\geqslant 0\). As a consequence, the map
\[H^{1}(\mathfrak{G}_{0},T_{f,1,\mathscr{O}})\longrightarrow H^{1}(\mathfrak{G }_{n},T_{f,k,\mathscr{O}})[\mathfrak{m}_{\Lambda_{\mathscr{O}}}]\]
induced by restriction from \(K\) to \(K_{n}\) and the inclusion \(p^{k-1}:T_{f,1}\hookrightarrow T_{f,k}\) is an isomorphism. To prove Equation (5.5) it is then sufficient to verify that the map \(\beta_{\mathfrak{l}}:H^{1}(K_{\ell},T_{f,1,\mathscr{O}}^{[\ell]})\to H^{1}(K_{n,\mathfrak{l}},T_{f,k,\mathscr{O}}^{[\ell]})\) is injective for every prime \(\mathfrak{l}\) of \(K_{n}\) lying over \(\ell|LN^{-}\). The representation \(T_{f,r,\mathscr{O}}^{[\ell]}\) is isomorphic to \(\mathscr{O}/p^{r}\mathscr{O}\) for every \(r\geqslant 1\), and \(K_{n,\mathfrak{l}}=K_{\ell}\) since \(\ell\) splits completely in \(K_{n}/K\). Then \(H^{1}(K_{\ell},T_{f,1,\mathscr{O}}^{[\ell]})=\operatorname{Hom}_{\operatorname{cont}}(G_{K_{\ell}},\mathscr{O}/p\mathscr{O})\), \(H^{1}(K_{n,\mathfrak{l}},T_{f,k,\mathscr{O}}^{[\ell]})=\operatorname{Hom}_{\operatorname{cont}}(G_{K_{\ell}},\mathscr{O}/p^{k}\mathscr{O})\), and \(\beta_{\mathfrak{l}}\) is identified with the injective morphism induced by the inclusion \(p^{k-1}:\mathscr{O}/p\mathscr{O}\hookrightarrow\mathscr{O}/p^{k}\mathscr{O}\).
_Step 3._ We now show (2). Since the local conditions defining \(\mathfrak{Sel}_{Sp}(K_{n},T_{g,\mathscr{O}})\) are dual to those defining \(\operatorname{Sel}^{Sp}(K_{n},A_{g,\mathscr{O}})\) with respect to local Tate duality, Theorem 2.19 of [9], Hypothesis 1.1(1) and Lemma 5.7 yield
\[\frac{\#\operatorname{Sel}_{\varepsilon}^{Sp}(K_{n},A_{g,\mathscr{O}})}{\# \mathfrak{Sel}_{Sp}(K_{n},T_{g,\mathscr{O}})}=\#T_{g,\mathscr{O}}(I_{ \mathscr{O},n})\cdot\prod_{w\mid Sp}\frac{\#H^{0}(K_{w},T_{g,\mathscr{O}}(I_ {n}))}{\#H^{1}(K_{w},T_{g,\mathscr{O}}(I_{n}))}\cdot\prod_{\ell\mid LN^{-}} \frac{\#H^{0}(K_{\ell},T_{g,\mathscr{O}}(I_{n}))}{\#H^{1}(K_{\ell},T_{g, \mathscr{O}}(I_{n}))}.\]
Step 1 implies that \(\operatorname{Sel}_{\varepsilon}^{Sp}(K_{n},A_{g,\mathscr{O}})\) vanishes. For each \(w\mid pSLN^{-}\), define
\[h_{w}=\frac{\#H^{0}(K_{w},T_{g,\mathscr{O}}(I_{n}))}{\#H^{1}(K_{w},T_{g, \mathscr{O}}(I_{n}))}\]
and let \(h_{\infty}=\#T_{g,\mathscr{O}}(I_{\mathscr{O},n}).\) For every prime \(\ell\) dividing \(S\), \(T_{g,\mathscr{O}}(I_{\mathscr{O},n})=T_{g,\mathscr{O}}\otimes_{\mathscr{O}/p^{k}} \Lambda_{\mathscr{O},n,k}\), hence Lemma 3.1 and the local Euler characteristic formula give \(h_{w}^{-1}=\#H^{2}(K_{\ell},T_{g,\mathscr{O}}(I_{\mathscr{O},n}))=\#\Lambda_{ \mathscr{O},n,k}.\) For every prime \(\ell|L\) Lemma 3.1 similarly gives \(h_{w}=1.\) If \(\ell|N^{-}\), considering the long exact cohomology sequence associated with \(\mu_{p^{k}}\hookrightarrow T_{g}\twoheadrightarrow\mathbf{Z}/p^{k}\) one easily proves that \(h_{\ell}=1\) even in this case. If \(\mathfrak{p}\) is a prime dividing \(p\) the local Euler characteristic formula shows that \(h_{\mathfrak{p}}^{-1}=\#\Lambda_{\mathscr{O},n,k}^{2\cdot[K_{\mathfrak{p}} \cdot\mathbf{Q}_{p}]}\cdot\#H^{2}(K_{\mathfrak{p}},T_{g,\mathscr{O}}(I_{ \mathscr{O},n}))\). On the other hand \(H^{2}(K_{\mathfrak{p}},T_{g,\mathscr{O}}(I_{\mathscr{O},n}))\) has the same cardinality as \(H^{0}(K_{\mathfrak{p}},A_{g,\mathscr{O}}(I_{\mathscr{O},n}))\), which vanishes since its \(\mathfrak{m}_{\Lambda_{\mathscr{O}}}\)-torsion submodule is equal to \(H^{0}(K_{\mathfrak{p}},E_{p}\otimes_{\mathbf{F}_{p}}\mathbf{F})=0\) (thanks to Hypothesis 5.2(1) in the ordinary case), where \(\mathbf{F}\) is the residue field of \(\mathscr{O}\). The previous equation then yields
\[\#\mathfrak{Sel}_{Sp}(K_{n},T_{g,\mathscr{O}})=h_{\infty}^{-1}\cdot\#\Lambda_{ \mathscr{O},n,k}^{\delta(S)+4}=\#\Lambda_{\mathscr{O},n,k}^{\delta(S)+2}. \tag{5.6}\]
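For the reader's convenience, we recall the local Euler characteristic formula used repeatedly in the computation above: for a finite place \(v\) of \(K\) and a finite \(G_{K_{v}}\)-module \(M\) of \(p\)-power order,

\[\frac{\#H^{0}(K_{v},M)\cdot\#H^{2}(K_{v},M)}{\#H^{1}(K_{v},M)}=\begin{cases}(\#M)^{-[K_{v}:\mathbf{Q}_{p}]}&\text{if }v\mid p,\\ 1&\text{if }v\nmid p.\end{cases}\]

Applied to \(M=T_{g,\mathscr{O}}(I_{\mathscr{O},n})\), which has cardinality \(\#\Lambda_{\mathscr{O},n,k}^{2}\) because \(T_{g,\mathscr{O}}\) is free of rank \(2\) over \(\mathscr{O}/p^{k}\mathscr{O}\), this gives \(h_{w}^{-1}=\#H^{2}(K_{\ell},T_{g,\mathscr{O}}(I_{\mathscr{O},n}))\) at the places away from \(p\) and \(h_{\mathfrak{p}}^{-1}=\#\Lambda_{\mathscr{O},n,k}^{2\cdot[K_{\mathfrak{p}}:\mathbf{Q}_{p}]}\cdot\#H^{2}(K_{\mathfrak{p}},T_{g,\mathscr{O}}(I_{\mathscr{O},n}))\) at the places above \(p\), as claimed above.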
Since \(\Lambda_{\mathscr{O}}/I_{\mathscr{O},n}\) is a complete intersection, hence a Gorenstein local ring, it is isomorphic as a \(\Lambda_{\mathscr{O}}\)-module to \(\operatorname{Hom}_{\mathscr{O}}(\Lambda_{\mathscr{O}}/I_{\mathscr{O},n},\mathscr{O})\), so that \(\Lambda_{\mathscr{O},n,k}\) is isomorphic to its own Pontrjagin dual \(\mathscr{D}_{\mathscr{O},n,k}=\operatorname{Hom}_{\mathscr{O}}(\Lambda_{\mathscr{O},n,k},\mathscr{K}/\mathscr{O})\) as a \(\Lambda_{\mathscr{O}}\)-module. Taking
\(n=0\) and \(k=1\) in Equation (5.6) shows that \(\mathfrak{Sel}_{Sp}(K,T_{f,1,\mathscr{O}})\) has dimension \(\delta(S)+2\) over \(\mathbf{F}\), hence \(\mathfrak{Sel}_{Sp}(K_{n},T_{f,k,\mathscr{O}})[\mathfrak{m}_{\Lambda_{\mathscr{O}}}]\cong\mathbf{F}^{\delta(S)+2}\) by Step 2. Since \(\Lambda_{\mathscr{O},n,k}\cong\mathscr{D}_{\mathscr{O},n,k}\), Equation (5.6) and Nakayama's Lemma then conclude the proof of (2).
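One way to carry out this last step, using self-injectivity in place of Nakayama's Lemma, is the following sketch. The isomorphism \(\Lambda_{\mathscr{O},n,k}\cong\mathscr{D}_{\mathscr{O},n,k}\) means that \(\Lambda_{\mathscr{O},n,k}\) is self-injective with one-dimensional socle, so the inclusion of the \(\mathfrak{m}_{\Lambda_{\mathscr{O}}}\)-torsion \(\mathbf{F}^{\delta(S)+2}\) into the socle of \(\Lambda_{\mathscr{O},n,k}^{\delta(S)+2}\) extends to a morphism

\[\mathfrak{Sel}_{Sp}(K_{n},T_{f,k,\mathscr{O}})\longrightarrow\Lambda_{\mathscr{O},n,k}^{\delta(S)+2},\]

which is injective (a nonzero kernel would contain nonzero \(\mathfrak{m}_{\Lambda_{\mathscr{O}}}\)-torsion, on which the map is injective) and hence bijective, because both sides have the same cardinality by Equation (5.6).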
_Step 4._ There is an exact sequence of \(\Lambda_{\mathscr{O}}\)-modules
\[0\longrightarrow\mathfrak{S}\mathfrak{e}\mathfrak{l}_{S}^{\varepsilon}(K_{n}, T_{g,\mathscr{O}})\longrightarrow\mathfrak{S}\mathfrak{e}\mathfrak{l}_{Sp}(K_{n},T_{f,k, \mathscr{O}})\stackrel{{\partial_{p}}}{{\longrightarrow}} \bigoplus_{\mathfrak{p}\mid p}H^{1}_{\operatorname{sing},\varepsilon}(K_{ \mathfrak{p}},T_{f,\mathscr{O}}(I_{\mathscr{O},n,k}))\longrightarrow 0. \tag{5.7}\]
The only nontrivial fact is the surjectivity of the residue map \(\partial_{p}\). By construction (cf. Section 5.2) the local conditions defining \(\mathfrak{S}\mathfrak{e}\mathfrak{l}_{S}^{\varepsilon}(K_{n},T_{g,\mathscr{O}})\) are dual to those defining \(\operatorname{S}\mathfrak{e}\mathfrak{l}_{\varepsilon}^{S}(K_{n},A_{g, \mathscr{O}})\), hence Poitou-Tate duality implies that the cokernel of \(\partial_{p}\) is isomorphic to a submodule of the Pontrjagin dual of \(\operatorname{S}\mathfrak{e}\mathfrak{l}_{\varepsilon}^{S}(K_{n},A_{g, \mathscr{O}})\) (see e.g. Theorem 7.3 of [28]), which is trivial according to Step 1.
_Step 5._ Step 3, Step 4 and Proposition 5.4 (1) prove that \(\mathfrak{Sel}_{S}^{\varepsilon}(K_{n},T_{g,\mathscr{O}})\) is a projective, hence free, \(\Lambda_{\mathscr{O},n,k}\)-module of rank \(\delta(S)\), thus concluding the proof of the proposition.
## 6. Ramified classes and reciprocity laws
To simplify the notation, if \(M\) is an \(R\)-module (where \(R\) is a commutative ring with unity) and \(I=(x)\) is a principal ideal of \(R\), we sometimes write \(M/x\) for \(M/(x)=M/xM\). We thus write, for example, \(\mathbf{Z}/p^{k}\) in place of \(\mathbf{Z}/p^{k}\mathbf{Z}\), for an integer \(k\geqslant 1\).
### 6.1. Global classes
Let \(k\) be a positive integer and let \(L\in\mathscr{S}_{k}\) be a squarefree product of \(k\)-admissible primes. Assume that \(L\in\mathscr{S}_{k}^{\operatorname{ind}}\) is _indefinite_ (i.e. \(\epsilon_{K}(LN^{-})=+1\)), so that \(J_{N^{+},LN^{-}}\) is the Picard variety of the Shimura curve \(X_{N^{+},LN^{-}}\). Let \(g=f_{L}\in S_{2}(N^{+},LN^{-};\mathbf{Z}/p^{k})\) be the \(L\)-level raising of \(f\) modulo \(p^{k}\). Let \(I_{g}\subset\mathbf{T}_{N^{+},LN^{-}}\) denote the kernel of \(g\). Proposition 4.4 of [26], a slight generalization of [3, Theorem 5.15], shows that there is an isomorphism of \(\mathbf{Z}_{p}[G_{\mathbf{Q}}]\)-modules
\[\pi_{g}:\operatorname{Ta}_{p}(J_{N^{+},LN^{-}})/I_{g}\cong T_{f,k},\]
which is unique up to multiplication by a \(p\)-adic unit by Hypothesis 1.1(1). For every integer \(n\geqslant 0\) define
\[\psi_{g,n}:J_{N^{+},LN^{-}}(K_{n})\longrightarrow H^{1}(K_{n},\operatorname{ Ta}_{p}(J_{N^{+},LN^{-}})/I_{g})\cong H^{1}(K_{n},T_{f,k}),\]
where the first (resp., second) map is induced by the Kummer map (resp., by \(\pi_{g}\)). It follows from Proposition 2.7.12 of [22] (see also Section 7 of [3] and Theorem 3.10 of [10]) that for every \(x\in J_{N^{+},LN^{-}}(K_{n})\) the class \(\psi_{g,n}(x)\) is finite at every prime of \(K_{n}\) dividing \(p\). (To apply Proposition 2.7.12 of [22], note that all results in Appendix A of loc. cit. on flat cohomology of finite flat group schemes hold for the \(p\)-divisible group of the elliptic curve \(E/K_{\mathfrak{p}}\) for any prime \(\mathfrak{p}\mid p\) of \(K\), because \(K_{\mathfrak{p}}/\mathbf{Q}_{p}\), being unramified, has ramification index smaller than \(p-1\).) Moreover, since \(J_{N^{+},LN^{-}}\) has purely toric reduction at every prime divisor of \(LN^{-}\), Mumford's theory of \(p\)-adic uniformisation implies that these classes are ordinary at every such prime. In particular, for every multiple \(S\in\mathscr{S}_{k}\) of \(L\), \(\psi_{g,n}\) gives a morphism (cf. Section 5.2.5)
\[\psi_{g,n}^{S}:J_{N^{+},LN^{-}}(K_{n})\longrightarrow\mathfrak{S} \mathfrak{e}\mathfrak{l}_{S}(K_{n},T_{f,k}).\]
Recall the compatible sequence of Heegner points \(P_{n}(L)\), for \(n\geqslant 0\), introduced in Section 2.5 and define
\[\tilde{\kappa}_{n}(L)=\psi_{g,n}^{S}(P_{n}(L)).\]
#### 6.1.1. Ordinary case
Suppose that \(E\) has ordinary reduction at \(p\). Choose a freeing set \(S\in\mathscr{S}_{k}\) relative to \(g\) such that \(L\mid S\). In this case, we define
\[\kappa_{n}(L)=\frac{1}{\alpha_{p}(g)^{n}}\Big{(}\tilde{\kappa}_{n-1}(L)-\alpha_{p}(g)\cdot\tilde{\kappa}_{n}(L)\Big{)}. \tag{6.1}\]
By the previous discussion \(\kappa_{n}(L)\) belongs to the compact Selmer group \(\mathfrak{S}\mathfrak{e}\mathfrak{l}_{S}(K_{n},T_{f,k})\). A simple computation using (2.5) shows that the canonical norm map takes \(\kappa_{n+1}(L)\) to \(\kappa_{n}(L)\) for all \(n\geqslant 1\). We define \(\kappa_{\infty}(L)\) to be the inverse limit of these classes with respect to the canonical norm maps:
\[\kappa_{\infty}(L)=\varprojlim_{n}\kappa_{n}(L)\in\varprojlim_{n}\mathfrak{S} \mathfrak{e}\mathfrak{l}_{S}(K_{n},T_{f,k})\cong\Lambda_{k}^{\delta(S)} \tag{6.2}\]
where \(\Lambda_{k}=\Lambda/p^{k}\).
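For the reader's convenience, here is the computation behind the norm compatibility just asserted, under the assumption that (2.5) takes the usual Heegner form \(\operatorname{cor}_{K_{n+1}/K_{n}}(\tilde{\kappa}_{n+1}(L))=a_{p}(g)\tilde{\kappa}_{n}(L)-\tilde{\kappa}_{n-1}(L)\) and that \(\alpha_{p}(g)\) is the unit root of \(X^{2}-a_{p}(g)X+p\). Using \(\operatorname{cor}_{K_{n+1}/K_{n}}\circ\operatorname{res}_{K_{n+1}/K_{n}}=p\) and \(\alpha_{p}(g)a_{p}(g)-p=\alpha_{p}(g)^{2}\), one finds

\[\operatorname{cor}_{K_{n+1}/K_{n}}\big{(}\kappa_{n+1}(L)\big{)}=\frac{p\cdot\tilde{\kappa}_{n}(L)-\alpha_{p}(g)\big{(}a_{p}(g)\tilde{\kappa}_{n}(L)-\tilde{\kappa}_{n-1}(L)\big{)}}{\alpha_{p}(g)^{n+1}}=\frac{\tilde{\kappa}_{n-1}(L)-\alpha_{p}(g)\tilde{\kappa}_{n}(L)}{\alpha_{p}(g)^{n}}=\kappa_{n}(L).\]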
#### 6.1.2. Supersingular case
Suppose now that \(E\) has supersingular reduction at \(p\). Choose a freeing set \(S\in\mathscr{S}_{k}\) relative to \(g\) such that \(L\mid S\). Since the classes \(\tilde{\kappa}_{n}(L)\) satisfy the norm relations (2.3), (2.4) and (2.5) in \(\mathfrak{Sel}_{Sp}(K_{n},T_{f,k})\), an inductive argument shows that \(\omega_{n}^{\varepsilon}\tilde{\kappa}_{n}(L)=0\) if \(\varepsilon=(-1)^{n}\). Since \(\mathfrak{Sel}_{Sp}(K_{n},T_{f,k})\) is free over \(\Lambda_{n,k}\), there exists a class
\[\bar{\kappa}_{n}^{\varepsilon}(L)\in\mathfrak{Sel}_{Sp}(K_{n},T_{f,k})/\omega _{n}^{\varepsilon}\]
such that
* \(\tilde{\omega}_{n}^{-\varepsilon}\bar{\kappa}_{n}^{\varepsilon}(L)=\tilde{ \kappa}_{n}(L)\) if either \(p\) is split in \(K\) or \(p\) is inert in \(K\) and \(\varepsilon=-1\) (the non-exceptional case);
* \(\omega_{n}^{-}\bar{\kappa}_{n}^{+}(L)=\tilde{\kappa}_{n}(L)\) if \(p\) is inert in \(K\) and \(\varepsilon=+1\) (the exceptional case).
By A.2.6 of [22], \(\tilde{\kappa}_{n}(L)\in\mathfrak{Sel}_{S}(K_{n},T_{f,k})\). By definition, \(\mathfrak{Sel}_{S}(K_{n},T_{f,k})\subseteq\mathfrak{Sel}_{S}^{\varepsilon}(K_ {n},T_{f,k})\), and we conclude that \(\tilde{\kappa}_{n}(L)\in\mathfrak{Sel}_{S}^{\varepsilon}(K_{n},T_{f,k})\). We have a commutative diagram:
in which the vertical arrows are the multiplication maps by:
* \(\tilde{\omega}_{n}^{-\varepsilon}\) if either \(p\) is split in \(K\) or \(p\) is inert in \(K\) and \(\varepsilon=-1\);
* \(\omega_{n}^{-}\) if \(p\) is inert in \(K\) and \(\varepsilon=+1\).
Recall that \(\mathfrak{Sel}_{S}^{\varepsilon}(K_{n},T_{f,k})\cong\Lambda_{n,k}^{\delta(S)}\) and \(\mathfrak{Sel}_{Sp}(K_{n},T_{f,k})\cong\Lambda_{n,k}^{\delta(S)+2}\) by Proposition 5.11, and that \(H^{1}_{\mathrm{sing},\varepsilon}(K_{\mathfrak{p}},T_{f}(I_{n,k}))\cong\Lambda_{n,k}^{[K_{\mathfrak{p}}:\mathbf{Q}_{p}]}\) by Proposition 5.4 and Corollary 5.5. It follows from the freeness of the modules involved in the above diagram that the vertical arrows are all injective. Since \(\tilde{\kappa}_{n}(L)=\tilde{\omega}_{n}^{-\varepsilon}\bar{\kappa}_{n}^{\varepsilon}(L)\) belongs to \(\mathfrak{Sel}_{S}^{\varepsilon}(K_{n},T_{f,k})\) (in the split and non-exceptional cases), and \(\tilde{\kappa}_{n}(L)=\omega_{n}^{-}\bar{\kappa}_{n}^{+}(L)\) belongs to \(\mathfrak{Sel}_{S}^{\varepsilon}(K_{n},T_{f,k})\) (in the exceptional case), we conclude from the injectivity of the vertical arrows of the above diagram that \(\bar{\kappa}_{n}^{\varepsilon}(L)\) belongs to \(\mathfrak{Sel}_{S}^{\varepsilon}(K_{n},T_{f,k})/\omega_{n}^{\varepsilon}\). Define \(\kappa_{n}^{+}(L)=(-1)^{n/2}\bar{\kappa}_{n}^{+}(L)\) if \(n\) is even and \(\kappa_{n}^{-}(L)=(-1)^{(n-1)/2}\bar{\kappa}_{n}^{-}(L)\) if \(n\) is odd. A simple computation as in §4.3 shows that the classes \(\kappa_{n}^{\varepsilon}(L)\) are compatible under the canonical norm maps
\[\mathfrak{Sel}_{S}^{\varepsilon}(K_{n+2},T_{f,k})/\omega_{n+2}^{\varepsilon} \longrightarrow\mathfrak{Sel}_{S}^{\varepsilon}(K_{n},T_{f,k})/\omega_{n}^{\varepsilon}\]
for all \(n\geqslant 0\) with \(\varepsilon=(-1)^{n}\). We may therefore define \(\kappa_{\infty}^{\varepsilon}(L)\) as the inverse limit of these classes with respect to the canonical maps:
\[\kappa_{\infty}^{\varepsilon}(L)=\varprojlim_{n}\kappa_{n}^{\varepsilon}(L) \in\varprojlim_{n\in\mathbb{N}^{\varepsilon}}\mathfrak{Sel}_{S}^{\varepsilon}(K _{n},T_{f,k})/\omega_{n}^{\varepsilon}\cong\left(\varprojlim_{n}\Lambda_{n,k} /\omega_{n}^{\varepsilon}\right)^{\delta(S)}\cong\Lambda_{k}^{\delta(S)} \tag{6.3}\]
where \(\Lambda_{k}=\Lambda/p^{k}\) as before and \(\mathbb{N}^{\varepsilon}\) is the set of positive integers verifying the condition \((-1)^{n}=\varepsilon\). By the freeness of \(\mathfrak{Sel}_{S}^{\varepsilon}(K_{n},T_{f,k})\) and \(\mathfrak{Sel}_{S}^{\varepsilon}(K_{\infty},T_{f,k})\) and the fact that \(n\mapsto\omega_{n}^{\varepsilon}\) converges to \(0\),
\[\varprojlim_{n\in\mathbb{N}^{\varepsilon}}\mathfrak{Sel}_{S}^{\varepsilon}(K_{n},T_{f,k})/\omega_{n}^{\varepsilon}\cong\mathfrak{Sel}_{S}^{\varepsilon}(K_{ \infty},T_{f,k}).\]
Therefore \(\kappa_{\infty}^{\varepsilon}(L)\) gives a class in \(\mathfrak{Sel}_{S}^{\varepsilon}(K_{\infty},T_{f,k})\).
#### 6.1.3. \(\Lambda\)-adic global classes
Recall that \(\varepsilon=\emptyset\) in the ordinary case and \(\varepsilon=\pm\) in the supersingular case. Equations (6.2) and (6.3) define global Selmer classes \(\kappa_{\infty}^{\varepsilon}(L)\in\mathfrak{Sel}_{S}^{\varepsilon}(K_{\infty},T_{g})\). For each \(k\)-admissible prime \(\ell\), Lemma 3.1 allows us to define morphisms
\[v_{\ell}:H^{1}(K,T_{f,k})\to H^{1}_{\mathrm{fin}}(K_{\ell},T_{f,k})\cong \mathbf{Z}/p^{k}\mathbf{Z},\]
\[\partial_{\ell}:H^{1}(K,T_{f,k})\longrightarrow H^{1}_{\mathrm{ord}}(K_{\ell},T_{ f,k})\cong\mathbf{Z}/p^{k}\mathbf{Z},\]
defined by composing the restriction map at \(\ell\) with the projection onto the finite and the ordinary (or singular) part respectively. Given a global class \(x\in H^{1}(K,T_{f,k})\), we call \(v_{\ell}(x)\) its _finite part_ at \(\ell\), and \(\partial_{\ell}(x)\) its _residue_ at \(\ell\). If \(L=\prod_{i}\ell_{i}\in\mathscr{S}_{k}\) is a squarefree product of admissible primes \(\ell_{i}\), then we write \(\partial_{L}=\oplus_{i}\partial_{\ell_{i}}\) and \(v_{L}=\oplus_{i}v_{\ell_{i}}\).
**Proposition 6.1**.: \(\kappa_{\infty}^{\varepsilon}(L)\in\mathfrak{S}\mathfrak{e}\mathfrak{l}_{L}^{ \varepsilon}(K_{\infty},T_{g})\) _and \(v_{L}(\kappa_{\infty}^{\varepsilon}(L))=0\)._
Proof.: In the ordinary case this is Theorem 4.1 of [3], which is proved in Section 8 of [3]. In the supersingular case, the second part follows from the first by the strategy developed in loc. cit. (see also Proposition 4.4 of [10]). We then only need to show that the class \(\kappa_{\infty}^{\varepsilon}(L)\) belongs to \(\mathfrak{Sel}_{L}^{\varepsilon}(K_{\infty},T_{g})\). For this, it is enough to show that each class \(\kappa_{n}^{\varepsilon}(L)\) belongs to \(\mathfrak{Sel}_{L}^{\varepsilon}(K_{n},T_{g})/\omega_{n}^{\varepsilon}\). Fix a prime number \(q\mid(S/L)\). Since \(q\) is inert in \(K\), it splits completely in \(K_{n}\), and therefore
\[H^{1}_{\mathrm{ord}}(K_{n,q},T_{g})\cong H^{1}_{\mathrm{ord}}(K_{q},T_{g}) \otimes\Lambda_{n,k}\]
is a free \(\Lambda_{n,k}\)-module of rank \(1\) by Lemma 3.1. The restriction at \(q\) of \(\tilde{\kappa}_{n}(L)\), say \(x_{n}\), is killed by \(\omega_{n}^{\varepsilon}\), and therefore there exists \(y_{n}\in H^{1}_{\mathrm{ord}}(K_{n,q},T_{g})\) such that \(\tilde{\omega}_{n}^{-\varepsilon}\cdot y_{n}=x_{n}\) if \(p\) is split in \(K\) or \(p\) is inert in \(K\) and \(\varepsilon=-1\), and \(\omega_{n}^{-}\cdot y_{n}=x_{n}\) if \(p\) is inert in \(K\) and \(\varepsilon=+1\). Since \(\Lambda_{n,k}^{\varepsilon}\) has no non-trivial \(\tilde{\omega}_{n}^{-\varepsilon}\)-torsion if \(p\) is split or \(p\) is inert and \(\varepsilon=-1\), and has no non-trivial \(\omega_{n}^{-}\)-torsion if \(p\) is inert and \(\varepsilon=+1\), \(y_{n}\) coincides with the restriction at \(q\) of \(\kappa_{n}^{\varepsilon}(L)\) in \(H^{1}(K_{n,q},T_{g})/\omega_{n}^{\varepsilon}\), so that \(\kappa_{n}^{\varepsilon}(L)\) satisfies the required local condition at \(q\) and belongs to \(\mathfrak{Sel}_{L}^{\varepsilon}(K_{n},T_{g})/\omega_{n}^{\varepsilon}\).
### 6.2. Reciprocity laws
The cohomology classes \(\kappa_{\infty}^{\varepsilon}(L)\) for \(L\) indefinite, and the square-root \(p\)-adic \(L\)-functions \(\mathcal{L}_{g}^{\varepsilon}\) for \(L\) definite are related by the following explicit reciprocity laws.
**Theorem 6.2** (First Reciprocity Law).: _Assume that \(L\in\mathscr{S}^{\mathrm{def}}_{k}\) is definite, let \(g=f_{L}\) be the \(L\)-level raising of \(f\) modulo \(p^{k}\), and let \(\ell\nmid L\) be an admissible prime relative to \(g\) and \(K\), so that \(L\ell\in\mathscr{S}^{\mathrm{ind}}_{k}\) is indefinite. The equality_
\[\partial_{\ell}\left(\kappa_{\infty}^{\varepsilon}(L\ell)\right)=\mathcal{L}_ {g}^{\varepsilon}\]
_holds in \(\Lambda/p^{k}\) up to units._
Proof.: By Theorem 4.1 of [3] (whose proof works in the supersingular case) we have \(\partial_{\ell}\left(\kappa_{n}(L\ell)\right)=\mathcal{L}_{g,n}\), thus completing the proof in the ordinary case. In the supersingular case, recall that by definition we have for all \(n\) such that \(\varepsilon=(-1)^{n}\):
* If \(p\) is split in \(K\) or \(p\) is inert in \(K\) and \(\varepsilon=-1\) (the non-exceptional case): \[\kappa_{n}(L\ell)=\begin{cases}(-1)^{n/2}\tilde{\omega}_{n}^{-\varepsilon} \kappa_{n}^{\varepsilon}(L\ell),\text{ if }n\text{ is even;}\\ (-1)^{(n-1)/2}\tilde{\omega}_{n}^{-\varepsilon}\kappa_{n}^{\varepsilon}(L \ell),\text{ if }n\text{ is odd;}\end{cases}\] \[\mathcal{L}_{g,n}=\begin{cases}(-1)^{n/2}\tilde{\omega}_{n}^{-\varepsilon} \mathcal{L}_{g,n}^{\varepsilon},\text{ if }n\text{ is even;}\\ (-1)^{(n-1)/2}\tilde{\omega}_{n}^{-\varepsilon}\mathcal{L}_{g,n}^{\varepsilon}, \text{ if }n\text{ is odd;}\end{cases}\]
* If \(p\) is inert in \(K\) and \(\varepsilon=+1\) (the exceptional case): \[\kappa_{n}(L\ell)=(-1)^{n/2}\omega_{n}^{-}\kappa_{n}^{+}(L\ell)\] \[\mathcal{L}_{g,n}=(-1)^{n/2}\omega_{n}^{-}\mathcal{L}_{g,n}^{+}\]
In both cases, since \(\Lambda_{n,k}^{\varepsilon}\) is \(\tilde{\omega}_{n}^{-\varepsilon}\)-torsion free (split and non-exceptional case) and \(\omega_{n}^{-}\)-torsion free (exceptional case), it follows from \(\partial_{\ell}\left(\kappa_{n}(L\ell)\right)=\mathcal{L}_{g,n}\) that \(\partial_{\ell}\left(\kappa_{n}^{\varepsilon}(L\ell)\right)=\mathcal{L}_{g,n}^{\varepsilon}\) for all \(n\geqslant 0\), and the conclusion follows.
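Spelled out, the cancellation used in the last step reads, in the split and non-exceptional cases (the exceptional case is identical with \(\omega_{n}^{-}\) in place of \(\tilde{\omega}_{n}^{-\varepsilon}\)): since \(\kappa_{n}(L\ell)\) and \(\mathcal{L}_{g,n}\) carry the same sign,

\[\pm\,\tilde{\omega}_{n}^{-\varepsilon}\Big{(}\partial_{\ell}\big{(}\kappa_{n}^{\varepsilon}(L\ell)\big{)}-\mathcal{L}_{g,n}^{\varepsilon}\Big{)}=\partial_{\ell}\big{(}\kappa_{n}(L\ell)\big{)}-\mathcal{L}_{g,n}=0,\]

and the absence of \(\tilde{\omega}_{n}^{-\varepsilon}\)-torsion in \(\Lambda_{n,k}^{\varepsilon}\) forces \(\partial_{\ell}\big{(}\kappa_{n}^{\varepsilon}(L\ell)\big{)}=\mathcal{L}_{g,n}^{\varepsilon}\).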
**Theorem 6.3** (Second Reciprocity Law).: _Assume that \(L\in\mathscr{S}^{\mathrm{ind}}_{k}\) is indefinite and let \(\ell\nmid L\) be an admissible prime relative to \(g\) and \(K\), so that \(L\ell\in\mathscr{S}^{\mathrm{def}}_{k}\) is definite. Let \(g=f_{L\ell}\) be the \(L\ell\)-level raising of \(f\) modulo \(p^{k}\). Then \(\kappa_{\infty}^{\varepsilon}(L)\) is finite at \(\ell\) and the equality_
\[v_{\ell}\left(\kappa_{\infty}^{\varepsilon}(L)\right)=\mathcal{L}_{g}^{\varepsilon}\]
_holds in \(\Lambda/p^{k}\) up to units._
Proof.: The result follows as in the proof of Theorem 6.2 from the relation
\[v_{\ell}\left(\kappa_{\infty}(L)\right)=\mathcal{L}_{g}; \tag{6.4}\]
the details are left to the reader. If \(N^{-}\neq 1\), (6.4) is [3, Theorem 4.2], which is proved in Section 9 of _loc. cit._ using an extension of Ihara's Lemma to indefinite Shimura curves due to Diamond-Taylor [11]. The same argument applies when \(N^{-}=1\) using the standard Ihara's Lemma; alternatively, to prove (6.4) when \(N^{-}=1\) one can adapt the arguments in Section 6 of Vatsal's paper [33], where the case \(n=1\) is considered.
## 7. \(\varepsilon\)-BSD formulae in the definite case
This section is devoted to the proof of BSD formulae for the \(\varepsilon\)-Selmer groups. They are a crucial ingredient in the proof of the main results stated in the Introduction. We adopt the abuse of notation introduced in the previous section, thus writing \(M/x\) instead of \(M/xM\) for any principal ideal \((x)\) of a commutative ring with unity \(R\), and for any \(R\)-module \(M\).
Fix a positive integer \(k\geqslant 1\) and a (possibly empty) _definite_ squarefree product \(L\in\mathscr{S}_{2k}^{\mathrm{def}}\) of \(2k\)-admissible primes relative to \((f,K,p)\) (hence \(\epsilon_{K}(LN^{-})=-1\)). Denote by \(\check{g}=f_{L}\in S_{2}(N^{+},LN^{-};\mathbf{Z}/p^{2k})\) the \(L\)-level raising of the reduction of \(f\) modulo \(p^{2k}\) (cf. Section 3.3) and by \(g\in S_{2}(N^{+},LN^{-};\mathbf{Z}/p^{k})\) the reduction of \(\check{g}\) modulo \(p^{k}\).
Let \(\chi:\Lambda\to\mathscr{O}_{\chi}\) be a morphism of \(\mathbf{Z}_{p}\)-algebras, where \(\mathscr{O}_{\chi}\) is a discrete valuation ring finite over \(\mathbf{Z}_{p}\). Denote by \(\mathfrak{P}_{\chi}\subseteq\Lambda\) the kernel of \(\chi\). We assume throughout this section that \(\mathscr{O}_{\chi}\) is the integral closure of \(\Lambda/\mathfrak{P}_{\chi}\) in its fraction field \(\mathscr{K}_{\chi}=\mathrm{Frac}(\mathscr{O}_{\chi})\) and, by an abuse of notation, we still denote \(\chi:\Lambda_{\mathscr{O}_{\chi}}\twoheadrightarrow\mathscr{O}_{\chi}\) the morphism of \(\mathscr{O}_{\chi}\)-algebras induced by \(\chi\) and by \(\mathfrak{P}_{\chi}\) its kernel. Let \(\mathrm{ord}_{\chi}:\mathscr{K}_{\chi}\twoheadrightarrow\mathbf{Z}\cup\{\infty\}\) be the normalised discrete valuation, let \(\varpi_{\chi}\) be a uniformiser of \(\mathscr{O}_{\chi}\) and let \(\mathbf{F}_{\chi}=\mathscr{O}_{\chi}/\varpi_{\chi}\) be its residue field. If \(M\) is a finitely generated free \(\mathscr{O}_{\chi}/\varpi_{\chi}^{m}\)-module (for some integer \(m\geqslant 1\)) and \(x\) is a non-zero element of \(M\), denote by \(\mathrm{ord}_{\chi}(x)\in\mathbf{N}\) the largest nonnegative integer \(t\geqslant 0\) such that \(x\in\varpi_{\chi}^{t}\cdot M\). After setting \(\mathrm{ord}_{\chi}(0)=\infty\), this defines a \(\mathscr{O}_{\chi}\)_-adic valuation_\(\mathrm{ord}_{\chi}:M\to\{0,1,\ldots,m-1,\infty\}\). To simplify the notation, set
\[T_{g}(\chi) =T_{g,\mathscr{O}_{\chi}}(\mathfrak{P}_{\chi}),\] \[A_{g}(\chi) =A_{g,\mathscr{O}_{\chi}}(\mathfrak{P}_{\chi}).\]
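Concretely, the \(\mathscr{O}_{\chi}\)-adic valuation defined above can be computed coordinatewise: if \(M\) is free over \(\mathscr{O}_{\chi}/\varpi_{\chi}^{m}\) with basis \(e_{1},\ldots,e_{r}\), then

\[\operatorname{ord}_{\chi}\Big{(}\sum_{i=1}^{r}a_{i}e_{i}\Big{)}=\min_{1\leqslant i\leqslant r}\operatorname{ord}_{\chi}(a_{i}),\]

since an element lies in \(\varpi_{\chi}^{t}\cdot M\) exactly when each of its coordinates lies in \(\varpi_{\chi}^{t}\cdot(\mathscr{O}_{\chi}/\varpi_{\chi}^{m})\).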
**Theorem 7.1**.: _Assume that \(\mathcal{L}_{g}^{\varepsilon}(\bar{\chi})\neq 0\). Then \(\mathrm{length}_{\mathscr{O}_{\chi}}\big{(}\mathrm{Sel}_{\varepsilon}(K,A_{g}( \chi))\big{)}\leqslant 2\mathrm{ord}_{\chi}\left(\mathcal{L}_{g}^{ \varepsilon}(\bar{\chi})\right)\), with equality in the non-exceptional case._
The rest of this section is devoted to the proof of Theorem 7.1.
### 7.1. The Kolyvagin system
Assume that the value of the \(p\)-adic \(L\)-function \(\mathcal{L}_{g}^{\varepsilon}\in\Lambda/p^{k}\) at \(\chi\) is non-zero and denote by
\[t_{\chi}^{\varepsilon}(g)=\mathrm{ord}_{\chi}(\mathcal{L}_{g}^{\varepsilon}( \chi))<\infty \tag{7.1}\]
its \(\varpi_{\chi}\)-adic valuation. Let \(\ell\in\mathscr{S}_{2k}\) be a \(2k\)-admissible prime not dividing \(L\), so that \(\ell\cdot L\in\mathscr{S}_{2k}^{\mathrm{ind}}\) is _indefinite_, and let \(S\in\mathscr{S}_{2k}\) be a freeing set relative to \(\check{g}\) which is divisible by \(\ell\cdot L\) (cf. Section 5.5). We simplify the notation and write
\[\mathfrak{S}\mathfrak{e}\mathfrak{l}_{S}^{\varepsilon}(K,\mathbf{T}_{\check{ g}})\otimes\mathscr{O}_{\chi}=\mathfrak{S}\mathfrak{e}\mathfrak{l}_{S}^{ \varepsilon}(K,\mathbf{T}_{\check{g},\mathscr{O}_{\chi}})\otimes_{\Lambda_{ \mathscr{O}_{\chi}}}\mathscr{O}_{\chi},\]
where the tensor product on the right is taken with respect to the canonical map \(\chi:\Lambda_{\mathscr{O}_{\chi}}\to\mathscr{O}_{\chi}\) induced by \(\chi\). Proposition 5.11 shows that \(\mathfrak{S}\mathfrak{e}\mathfrak{l}_{S}^{\varepsilon}(K,\mathbf{T}_{\check{g}}) \otimes\mathscr{O}_{\chi}\) is a free \(\mathscr{O}_{\chi}/p^{2k}\)-module of rank \(\delta(S)\).
Section 6 attaches to \(\ell\) a global cohomology class
\[\kappa_{\infty}^{\varepsilon}(\ell)=\kappa_{\infty}^{\varepsilon}(\check{g},\ell)\in\mathfrak{Sel}_{\ell}^{\varepsilon}(K,\mathbf{T}_{\check{g}})\subset\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{\check{g}})\subset\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{\check{g},\mathscr{O}_{\chi}})\]
(cf. Proposition 6.1). Denote by \(\kappa_{\chi}^{\varepsilon}(\ell)\) the image of \(\kappa_{\infty}^{\varepsilon}(\ell)\) in \(\mathfrak{S}\mathfrak{e}\mathfrak{l}_{S}^{\varepsilon}(K,\mathbf{T}_{\check{g}}) \otimes\mathscr{O}_{\chi}\) under the natural projection, and by
\[t_{\chi}^{\varepsilon}(g,\ell)=\mathrm{ord}_{\chi}(\kappa_{\chi}^{\varepsilon}( \ell)) \tag{7.2}\]
its \(\mathscr{O}_{\chi}\)-adic valuation. Note that \(t_{\chi}^{\varepsilon}(g,\ell)\) is independent of the choice of \(S\) and Theorem 6.2 yields
\[t_{\chi}^{\varepsilon}(g,\ell)\leqslant\mathrm{ord}_{\chi}\big{(}\partial_{\ell }(\kappa_{\chi}^{\varepsilon}(\ell))\big{)}=\mathrm{ord}_{\chi}(\mathcal{L}_{g}^ {\varepsilon}(\chi))=t_{\chi}^{\varepsilon}(g)<\mathrm{ord}_{\chi}(p^{k}), \tag{7.3}\]
where
\[\partial_{\ell}:\mathfrak{S}\mathfrak{e}\mathfrak{l}_{S}^{\varepsilon}(K, \mathbf{T}_{\check{g},\mathscr{O}_{\chi}})\longrightarrow H^{1}_{\mathrm{sing}}(K_{ \ell},T_{\check{g},\mathscr{O}_{\chi}})\cong\Lambda_{\mathscr{O}_{\chi}}/p^{2k}\]
is the scalar extension of the residue map at \(\ell\) introduced in Section 6 and the second equality follows from Equation (7.1). In particular there exists \(\tilde{\kappa}_{\chi}^{\varepsilon}(\ell)\in\mathfrak{S}\mathfrak{e}\mathfrak{l}_ {S}^{\varepsilon}(K,\mathbf{T}_{\check{g}})\otimes\mathscr{O}_{\chi}\) such that
\[\mathrm{ord}_{\chi}(\tilde{\kappa}_{\chi}^{\varepsilon}(\ell))=0, \tag{7.4}\]
\[\kappa_{\chi}^{\varepsilon}(\ell)=\varpi_{\chi}^{t_{\chi}^{\varepsilon}(g,\ell)} \cdot\tilde{\kappa}_{\chi}^{\varepsilon}(\ell). \tag{7.5}\]
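Note that the first inequality in (7.3) simply reflects the \(\mathscr{O}_{\chi}\)-linearity of \(\partial_{\ell}\): writing \(\kappa_{\chi}^{\varepsilon}(\ell)\) as in (7.5) gives

\[\partial_{\ell}\big{(}\kappa_{\chi}^{\varepsilon}(\ell)\big{)}=\varpi_{\chi}^{t_{\chi}^{\varepsilon}(g,\ell)}\cdot\partial_{\ell}\big{(}\tilde{\kappa}_{\chi}^{\varepsilon}(\ell)\big{)},\]

whence \(\operatorname{ord}_{\chi}\big{(}\partial_{\ell}(\kappa_{\chi}^{\varepsilon}(\ell))\big{)}\geqslant t_{\chi}^{\varepsilon}(g,\ell)\).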
While \(\tilde{\kappa}_{\chi}^{\varepsilon}(\ell)\) is not uniquely determined by the previous equations, its image
\[\hat{\kappa}_{\chi}^{\varepsilon}(\ell)\in\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{g})\otimes\mathscr{O}_{\chi}\stackrel{{\mathrm{def}}}{{=}}\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{g,\mathscr{O}_{\chi}})\otimes_{\Lambda_{\mathscr{O}_{\chi}}}\mathscr{O}_{\chi}\]
under the morphism induced by the projection \(T_{\check{g}}\twoheadrightarrow T_{g}\) is independent of any choice. Let
\[\mathbf{s}_{\chi}:\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{g})\otimes\mathscr{O}_{\chi}\longrightarrow\mathfrak{Sel}_{S}^{\varepsilon}(K,T_{g}(\chi))\]
be the specialization map. Define
\[\xi_{\chi}^{\varepsilon}(\ell)=\xi_{\chi}^{\varepsilon}(g,\ell)=\mathbf{s}_{\chi}(\hat{\kappa}_{\chi}^{\varepsilon}(\ell))\in\mathfrak{Sel}_{S}^{\varepsilon}(K,T_{g}(\chi))\]
and
\[\bar{\xi}_{\chi}^{\varepsilon}(\ell)=\bar{\xi}_{\chi}^{\varepsilon}(g,\ell)\in H^{1}(K,T_{\tilde{g}})\otimes_{\mathbf{F}_{p}}\mathbf{F}_{\chi}\]
as the image of \(\hat{\kappa}_{\chi}^{\varepsilon}(\ell)\) under the map induced in cohomology by the map \(\varpi_{\chi}\): \(T_{g}(\chi)\twoheadrightarrow T_{\tilde{g}}\otimes_{\mathbf{F}_{p}}\mathbf{F} _{\chi}\), where \(\tilde{g}\in S_{2}(N^{+},LN^{-};\mathbf{F}_{p})\) is the reduction of \(g\) modulo \(p\).
**Lemma 7.2** (_cf. Lemma 4.5 of [3]_).:
1. \(0\neq\xi_{\chi}^{\varepsilon}(\ell)\in\mathfrak{Sel}_{\ell}^{\varepsilon}(K,T_{g}(\chi))\) _and_ \(v_{\ell}\big{(}\xi_{\chi}^{\varepsilon}(\ell)\big{)}=0\)_._
2. \(\mathrm{ord}_{\chi}\big{(}\partial_{\ell}(\xi_{\chi}^{\varepsilon}(\ell)) \big{)}=t_{\chi}^{\varepsilon}(g)-t_{\chi}^{\varepsilon}(g,\ell)\)_._
3. \(0\neq\bar{\xi}_{\chi}^{\varepsilon}(\ell)\in\mathfrak{Sel}_{\ell}^{\varepsilon}(K,T_{\tilde{g}})\otimes_{\mathbf{F}_{p}}\mathbf{F}_{\chi}\) _and_ \(\partial_{\ell}(\bar{\xi}_{\chi}^{\varepsilon}(\ell))\) _is non-zero if and only if_ \(t_{\chi}^{\varepsilon}(g,\ell)=t_{\chi}^{\varepsilon}(g)\)_._
Proof.: (1) Because the kernel of \(\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{\check{g}})\rightarrow\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{g})\) is killed by \(p^{k}\) and \(\mathrm{ord}_{\chi}(\tilde{\kappa}_{\chi}^{\varepsilon}(\ell))=0\) by Equation (7.4), the class \(\hat{\kappa}_{\chi}^{\varepsilon}(\ell)\) is not zero, hence so is its image \(\xi_{\chi}^{\varepsilon}(\ell)\) under the map \(\mathbf{s}_{\chi}\), which is injective by Proposition 5.9. Let \(q\) be a prime divisor of \(S/\ell\). To prove the first statement one has to show that the residue
\[\partial_{q}(\xi_{\chi}^{\varepsilon}(\ell))\in H^{1}_{\mathrm{sing}}(K_{q},T_ {g,\mathscr{O}_{\chi}})\cong\mathscr{O}_{\chi}/p^{k}\]
of \(\xi_{\chi}^{\varepsilon}(\ell)\) at \(q\) is zero. Fix isomorphisms \(H^{1}_{\mathrm{sing}}(K_{q},\mathbf{T}_{\tilde{g},\mathscr{O}_{\chi}})\cong \Lambda_{\mathscr{O}_{\chi}}/p^{2k}\) and \(H^{1}_{\mathrm{sing}}(K_{q},\mathbf{T}_{g,\mathscr{O}_{\chi}})\cong\Lambda_{ \mathscr{O}_{\chi}}/p^{k}\) such that the map \(H^{1}_{\mathrm{sing}}(K_{q},\mathbf{T}_{\tilde{g},\mathscr{O}_{\chi}})\to H^{1 }_{\mathrm{sing}}(K_{q},\mathbf{T}_{g,\mathscr{O}_{\chi}})\) becomes identified with the natural projection \(\Lambda_{\mathscr{O}_{\chi}}/p^{2k}\twoheadrightarrow\Lambda_{\mathscr{O}_{ \chi}}/p^{k}\). Since \(\partial_{q}(\kappa_{\infty}^{\varepsilon}(\ell))\) is zero by Proposition 6.1 and \(t_{\chi}^{\varepsilon}(g,\ell)<\mathrm{ord}_{\chi}(p^{k})\) by Equation (7.3), it follows that \(\partial_{q}(\tilde{\kappa}_{\chi}^{\varepsilon}(\ell))\in\mathscr{O}_{\chi}/p ^{2k}\) has \(\mathscr{O}_{\chi}\)-adic valuation at least \(\mathrm{ord}_{\chi}(p^{k})\), hence its projection \(\partial_{q}(\xi_{\chi}^{\varepsilon}(\ell))\in\mathscr{O}_{\chi}/p^{k}\) modulo \(p^{k}\) vanishes (here and in the following we wrote \(\partial_{q}\) for the scalar extension \(\partial_{q}\otimes\mathrm{id}\) to simplify the notation as before). This gives
\[\partial_{q}(\xi_{\chi}^{\varepsilon}(\ell))=\partial_{q}\circ\mathbf{s}_{\chi}(\hat{\kappa}_{\chi}^{\varepsilon}(\ell))=\mathbf{s}_{\chi}\circ\partial_{q}(\hat{\kappa}_{\chi}^{\varepsilon}(\ell))=0,\]
as was to be shown. The second statement is proved similarly, using that \(v_{\ell}(\kappa_{\infty}^{\varepsilon}(\ell))=0\) by Proposition 6.1.
(2) Equations (7.3), (7.4) and (7.5) show that \(\partial_{\ell}(\tilde{\kappa}_{\chi}^{\varepsilon}(\ell))\) has \(\mathscr{O}_{\chi}\)-adic valuation \(t_{\chi}^{\varepsilon}(g)-t_{\chi}^{\varepsilon}(g,\ell)\). Since \(\mathrm{ord}_{\chi}(p^{k})>t_{\chi}^{\varepsilon}(g)\) this is also the \(\mathscr{O}_{\chi}\)-adic valuation of \(\partial_{\ell}(\hat{\kappa}_{\chi}^{\varepsilon}(\ell))\), which is equal to that of \(\partial_{\ell}(\xi_{\chi}^{\varepsilon}(\ell))\) (_cf._ the proof of (1)).
(3) Note that the class \(\bar{\xi}_{\chi}^{\varepsilon}(\ell)\) is equal to the image of \(\tilde{\kappa}_{\chi}^{\varepsilon}(\ell)\) under the composition
\[\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{\check{g}})\otimes\mathscr{O}_{\chi}\longrightarrow\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{\tilde{g}})\otimes\mathscr{O}_{\chi}\stackrel{{\mathbf{s}_{\chi}}}{{\longrightarrow}}\mathfrak{Sel}_{S}^{\varepsilon}(K,T_{\tilde{g}}(\chi))\otimes_{\mathbf{F}_{p}}\mathbf{F}_{\chi}.\]
As above this implies that \(\bar{\xi}_{\chi}^{\varepsilon}(\ell)\) is not zero, since \(\mathbf{s}_{\chi}\) is injective and \(\mathrm{ord}_{\chi}(\tilde{\kappa}_{\chi}^{\varepsilon}(\ell))=0\). Together with (1) this implies the first statement. Since \(\partial_{\ell}\big{(}\bar{\xi}_{\chi}^{\varepsilon}(\ell)\big{)}\in H^{1}_{ \mathrm{sing}}(K_{\ell},T_{\tilde{g}}(\chi))\otimes_{\mathbf{F}_{p}}\mathbf{F} _{\chi}\cong\mathbf{F}_{\chi}\) is the projection of \(\partial_{\ell}(\xi_{\chi}^{\varepsilon}(\ell))\in\mathscr{O}_{\chi}/p^{k}\) modulo \(\varpi_{\chi}\), the second statement follows from (2).
### 7.2. Proof of Theorem 7.1
The proof of Theorem 7.1 is divided into several steps. Steps 1, 2 and 3 consist in a generalization to the present context of similar results of [3]. The direct generalization of the techniques of [3] only allows one to prove the inequality \(\mathrm{length}_{\mathscr{O}_{\chi}}\big{(}\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))\big{)}\leqslant 2\mathrm{ord}_{\chi}\left(\mathcal{L}_{g}^{\varepsilon}(\bar{\chi})\right)\) in Theorem 7.1, which also holds in the exceptional case. We can prove the opposite inequality in the non-exceptional case with a further inductive argument on the length of \(\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))\), developed in Steps 4, 5, 6 and 7. The key ingredient is Step 4 (the basis of the inductive argument, _i.e._ the case when \(\mathrm{length}_{\mathscr{O}_{\chi}}\big{(}\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))\big{)}=0\)), which combines Gross' formula and Lemma 4.5 with results of Skinner-Urban (ordinary case) and Wan (supersingular case); the inductive argument then follows in Steps 6 and 7 using a structure theorem for \(\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))\) proved in Step 5.
#### 7.2.1. Step 1
If \(\mathcal{L}_{g}^{\varepsilon}(\bar{\chi})\) is a \(p\)-adic unit then \(\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\) is trivial.
Proof.: (Cf. [3, Proposition 4.7].) Assume _ad absurdum_ that there exists a nontrivial class \(x\) in the Selmer group \(\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\). Choose a \(2k\)-admissible prime \(\ell\) such that \(v_{\ell}(x)\in H^{1}_{\operatorname{fin}}(K_{\ell},A_{g}(\chi))\cong\mathscr{O }_{\chi}/p^{k}\) is not zero, which exists by Theorem 3.2 of [3]. Since \(\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi}))\) is the dual Selmer group of \(\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\), Lemma 7.2(1) and the reciprocity law of global class field theory yield
\[0=\sum_{v}\left\langle\operatorname{res}_{v}(x),\operatorname{res}_{v}(\xi_{ \bar{\chi}}^{\varepsilon}(\ell))\right\rangle_{v}=\left\langle\operatorname{ res}_{\ell}(x),\operatorname{res}_{\ell}(\xi_{\bar{\chi}}^{\varepsilon}(\ell)) \right\rangle_{\ell},\]
where the sum is taken over all the primes of \(K\) and \(\left\langle-,-\right\rangle_{v}\) denotes the local Tate pairing at \(v\) induced by the duality \(T_{g}(\chi)\times A_{g}(\chi)\to\mathscr{O}_{\chi}/p^{k}(1)\) (cf. [21, Chapter 1]). Since \(\partial_{\ell}(x)=0\), \(v_{\ell}(x)\neq 0\) and \(H^{1}_{\operatorname{fin}}(K_{\ell},T_{g}(\bar{\chi}))\) is the orthogonal complement of \(H^{1}_{\operatorname{fin}}(K_{\ell},A_{g}(\chi))\) under the perfect pairing \(\left\langle-,-\right\rangle_{\ell}\), the previous equation implies that the residue at \(\ell\) of \(\xi_{\bar{\chi}}^{\varepsilon}(\ell)\) has positive \(\mathscr{O}_{\chi}\)-adic valuation. According to Lemma 7.2(2) this in turn implies that \(\mathcal{L}_{g}^{\varepsilon}(\bar{\chi})\) has positive \(\mathscr{O}_{\chi}\)-adic valuation, contradicting the assumption.
#### 7.2.2. Step 2
Assume that \(\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\) is non-trivial. Then there exist two distinct \(2k\)-admissible primes \(\ell_{1}\) and \(\ell_{2}\) satisfying the following properties.
1. (\(\mathbf{I}_{1}\)) \(t_{\bar{\chi}}^{\varepsilon}(g,\ell_{1})=t_{\bar{\chi}}^{\varepsilon}(g,\ell_{2})<t_{\bar{\chi}}^{\varepsilon}(g)\).
2. (\(\mathbf{I}_{2}\)) If \(h\in S_{2}(N^{+},L\ell_{1}\ell_{2}N^{-};\mathbf{Z}/p^{k})\) denotes the \(\ell_{1}\ell_{2}\)-level raising of \(g\), then \[\operatorname{Sel}_{\varepsilon}(K,A_{h}(\chi))=\operatorname{Sel}_{\varepsilon}^{\ell_{1}\ell_{2}}(K,A_{g}(\chi)).\]
3. (\(\mathbf{I}_{3}\)) The \(\varpi_{\chi}\)-adic valuation of \(\mathcal{L}_{h}^{\varepsilon}(\bar{\chi})\in\mathscr{O}_{\chi}/p^{k}\) is equal to \(t_{\bar{\chi}}^{\varepsilon}(g,\ell_{i})\) (for \(i=1,2\)): \[t_{\bar{\chi}}^{\varepsilon}(h)=\operatorname{ord}_{\chi}\bigl{(}\mathcal{L}_{h}^{\varepsilon}(\bar{\chi})\bigr{)}=t_{\bar{\chi}}^{\varepsilon}(g,\ell_{i})<\infty.\]
4. (\(\mathbf{I}_{4}\)) Up to multiplication by \(p\)-adic units, \(\big{(}v_{\ell_{1}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{2})),v_{\ell_{2}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{2}))\big{)}=(1,0)\) and \(\big{(}v_{\ell_{1}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1})),v_{\ell_{2}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1}))\big{)}=(0,1)\) (see Equations (7.9) and (7.10) in the proof below).
Proof.: We first prove that there exist infinitely many \(2k\)-admissible primes \(\ell\) such that \(t_{\bar{\chi}}^{\varepsilon}(g,\ell)<t_{\bar{\chi}}^{\varepsilon}(g)\). Let \(\mathfrak{m}_{\Lambda_{\mathscr{O}_{\chi}}}\) be as above the maximal ideal of \(\Lambda_{\mathscr{O}_{\chi}}\), so that \(A_{g}(\chi)[\mathfrak{m}_{\Lambda_{\mathscr{O}_{\chi}}}]\cong A_{\bar{g}}\otimes_{\mathbf{F}_{p}}\mathbf{F}_{\chi}\) (as \(\chi(\gamma)\equiv 1\pmod{\varpi_{\chi}}\) for every \(\gamma\in G_{\infty}\)). The control theorem Proposition 5.8 yields
\[\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g},\mathscr{O}_{\chi}})\cong \operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))[\mathfrak{m}_{\Lambda_{\mathscr{ O}_{\chi}}}],\]
hence \(\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g},\mathscr{O}_{\chi}})\) is nontrivial by Nakayama's Lemma. Fix a non-zero class
\[0\neq x\in\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g},\mathscr{O}_{\chi}}).\]
According to (a slight generalization of) Theorem 3.2 of [3] there exist infinitely many \(2k\)-admissible primes \(\ell\) such that \(v_{\ell}(x)\in H^{1}_{\operatorname{fin}}(K_{\ell},A_{\bar{g},\mathscr{O}_{\chi}})\) is non-zero. We claim that for every such prime \(\ell\) one has
\[t_{\bar{\chi}}^{\varepsilon}(g,\ell)<t_{\bar{\chi}}^{\varepsilon}(g). \tag{7.6}\]
Recall the class \(\bar{\xi}_{\bar{\chi}}^{\varepsilon}(\ell)\in H^{1}(K,T_{\bar{g}})\otimes_{\mathbf{F}_{p}}\mathbf{F}_{\chi}\) constructed in Section 7.1. Lemma 7.2(3) shows that \(\bar{\xi}_{\bar{\chi}}^{\varepsilon}(\ell)\) belongs to \(\mathfrak{Sel}_{\ell}^{\varepsilon}(K,T_{\bar{g}})\otimes_{\mathbf{F}_{p}}\mathbf{F}_{\chi}\), hence (as in the proof of Step 1) the reciprocity law of global class field theory yields
\[\left\langle\partial_{\ell}\bigl{(}\bar{\xi}_{\bar{\chi}}^{\varepsilon}(\ell)\bigr{)},v_{\ell}(x)\right\rangle_{\ell}=0,\]
where \(\left\langle-,-\right\rangle_{\ell}\) is the \(\mathbf{F}_{\chi}\)-linear extension of the perfect local Tate pairing
\[H^{1}_{\operatorname{sing}}(K_{\ell},T_{\bar{g}})\otimes_{\mathbf{F}_{p}}H^{1}_{ \operatorname{fin}}(K_{\ell},A_{\bar{g}})\longrightarrow\mathbf{F}_{p}.\]
Since \(v_{\ell}(x)\neq 0\) this gives \(\partial_{\ell}\bigl{(}\bar{\xi}_{\bar{\chi}}^{\varepsilon}(\ell)\bigr{)}=0\), and the claim (7.6) follows from another application of Lemma 7.2(3).
Fix a \(2k\)-admissible prime \(\ell_{1}\) such that \(t_{\bar{\chi}}^{\varepsilon}(g,\ell_{1})<t_{\bar{\chi}}^{\varepsilon}(g)\), and such that \(t_{\bar{\chi}}^{\varepsilon}(g,\ell_{1})\leqslant t_{\bar{\chi}}^{\varepsilon}(g,\ell)\) for every \(2k\)-admissible prime \(\ell\). Since \(\bar{\xi}_{\bar{\chi}}^{\varepsilon}(\ell_{1})\) is non-zero by Lemma 7.2(3), Theorem 3.2 of [3] proves that there exists a \(2k\)-admissible prime \(\ell_{2}\neq\ell_{1}\) such that \(v_{\ell_{2}}\bigl{(}\bar{\xi}_{\bar{\chi}}^{\varepsilon}(\ell_{1})\bigr{)}\in H^{1}_{\operatorname{fin}}(K_{\ell_{2}},T_{\bar{g}})\otimes_{\mathbf{F}_{p}}\mathbf{F}_{\chi}\cong\mathbf{F}_{\chi}\) is non-zero. By construction (cf. Section 7.1) the latter condition is equivalent to
\[\operatorname{ord}_{\chi}\bigl{(}v_{\ell_{2}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_ {1}))\bigr{)}=0.\]
The second reciprocity law Theorem 6.3 and the definition of \(\xi^{\varepsilon}_{\bar{\chi}}(\ell_{1})\) show that the identities (where we write \(v_{\ell}\) for \(v_{\ell}\otimes\mathrm{id}\) for \(\ell=\ell_{1}\) and \(\ell=\ell_{2}\) as before)
\[\varpi_{\chi}^{t^{\varepsilon}_{\bar{\chi}}(g,\ell_{1})}\cdot v_{\ell_{2}}\big{(}\xi^{\varepsilon}_{\bar{\chi}}(\ell_{1})\big{)}=v_{\ell_{2}}\big{(}\kappa^{\varepsilon}_{\bar{\chi}}(\ell_{1})\big{)}\stackrel{{\text{Thm.~6.3}}}{{=}}\mathcal{L}_{h}^{\varepsilon}(\bar{\chi})\stackrel{{\text{Thm.~6.3}}}{{=}}v_{\ell_{1}}\big{(}\kappa^{\varepsilon}_{\bar{\chi}}(\ell_{2})\big{)}=\varpi_{\chi}^{t^{\varepsilon}_{\bar{\chi}}(g,\ell_{2})}\cdot v_{\ell_{1}}\big{(}\xi^{\varepsilon}_{\bar{\chi}}(\ell_{2})\big{)} \tag{7.7}\]
hold in \(\mathcal{O}_{\chi}/p^{k}\) up to multiplication by \(p\)-adic units (_cf._ the proof of Lemma 7.2(1) for the first and last identities). Since \(t^{\varepsilon}_{\bar{\chi}}(g,\ell)<\mathrm{ord}_{\chi}(p^{k})\) for \(\ell=\ell_{1},\ell_{2}\) by Equation (7.3), and since by construction \(t^{\varepsilon}_{\bar{\chi}}(g,\ell_{1})\leqslant t^{\varepsilon}_{\bar{\chi} }(g,\ell_{2})\), the previous two equations and Lemma 7.2(1) show that
\[t^{\varepsilon}_{\bar{\chi}}(g,\ell_{1})=t^{\varepsilon}_{\bar{\chi}}(g,\ell _{2})<t^{\varepsilon}_{\bar{\chi}}(g) \tag{7.8}\]
and that the identities
\[\big{(}v_{\ell_{1}}\big{(}\xi^{\varepsilon}_{\bar{\chi}}(\ell_{2})\big{)},v_{ \ell_{2}}\big{(}\xi^{\varepsilon}_{\bar{\chi}}(\ell_{2})\big{)}\big{)}=(1,0), \tag{7.9}\]
\[\big{(}v_{\ell_{1}}\big{(}\xi^{\varepsilon}_{\bar{\chi}}(\ell_{1})\big{)},v_{ \ell_{2}}\big{(}\xi^{\varepsilon}_{\bar{\chi}}(\ell_{1})\big{)}\big{)}=(0,1) \tag{7.10}\]
hold in \(\mathscr{O}_{\chi}/p^{k}\oplus\mathscr{O}_{\chi}/p^{k}\) up to multiplication by \(p\)-adic units. (Here for \(\ell=\ell_{1}\) or \(\ell=\ell_{2}\) one fixes an isomorphism \(H^{1}_{\mathrm{fin}}(K_{\ell},T_{g}(\bar{\chi}))\cong\mathscr{O}_{\chi}/p^{k}\).) It follows from the definitions (cf. Section 5.2) that
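In detail, comparing \(\mathscr{O}_{\chi}\)-adic valuations across (7.7) gives

\[t_{\bar{\chi}}^{\varepsilon}(g,\ell_{1})+\operatorname{ord}_{\chi}\big{(}v_{\ell_{2}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1}))\big{)}=t_{\bar{\chi}}^{\varepsilon}(g,\ell_{2})+\operatorname{ord}_{\chi}\big{(}v_{\ell_{1}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{2}))\big{)};\]

here \(\operatorname{ord}_{\chi}\big{(}v_{\ell_{2}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1}))\big{)}=0\) by the choice of \(\ell_{2}\), while \(t_{\bar{\chi}}^{\varepsilon}(g,\ell_{1})\leqslant t_{\bar{\chi}}^{\varepsilon}(g,\ell_{2})\) by the minimality of \(\ell_{1}\), so both sides force \(t_{\bar{\chi}}^{\varepsilon}(g,\ell_{1})=t_{\bar{\chi}}^{\varepsilon}(g,\ell_{2})\) and \(\operatorname{ord}_{\chi}\big{(}v_{\ell_{1}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{2}))\big{)}=0\); the vanishing statements in (7.9) and (7.10) are Lemma 7.2(1).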
\[\mathrm{Sel}^{\ell_{1}\ell_{2}}_{\varepsilon}(K,A_{g}(\chi))=\mathrm{Sel}^{ \ell_{1}\ell_{2}}_{\varepsilon}(K,A_{h}(\chi)),\]
\[\mathfrak{Sel}^{\ell_{1}\ell_{2}}_{\varepsilon}(K,T_{g}(\bar{\chi}))= \mathfrak{Sel}^{\ell_{1}\ell_{2}}_{\varepsilon}(K,T_{h}(\bar{\chi})),\]
and a class \(z\in\mathfrak{Sel}^{\varepsilon}_{\ell_{1}\ell_{2}}(K,T_{g}(\bar{\chi}))\) belongs to \(\mathfrak{Sel}^{\varepsilon}(K,T_{h}(\bar{\chi}))\) precisely if \(v_{\ell_{1}}(z)\) and \(v_{\ell_{2}}(z)\) are both trivial. Poitou-Tate duality (see Theorem 7.3 of [28] or Chapter 1 of [21]) then yields a short exact sequence of \(\mathscr{O}_{\chi}/p^{k}\)-modules
\[\mathfrak{Sel}^{\varepsilon}_{\ell_{1}\ell_{2}}(K,T_{g}(\bar{\chi}))\stackrel{{ v_{\ell_{1}}\oplus v_{\ell_{2}}}}{{\longrightarrow}}\mathscr{O}_{\chi}/p^{k}\oplus\mathscr{O}_{\chi}/p^{k}\stackrel{{\partial^{\vee}_{\ell_{1}}\oplus\partial^{\vee}_{\ell_{2}}}}{{\longrightarrow}}\operatorname{Sel}_{\varepsilon}(K,A_{h}(\chi))^{\vee}\longrightarrow\operatorname{Sel}^{\ell_{1}\ell_{2}}_{\varepsilon}(K,A_{g}(\chi))^{\vee}\longrightarrow 0, \tag{7.11}\]
where \((\cdot)^{\vee}=\mathrm{Hom}_{\mathbf{Z}_{p}}(\cdot,\mathbf{Q}_{p}/\mathbf{Z}_{p})\) and for \(\ell=\ell_{1},\ell_{2}\) one identifies \(H^{1}_{\mathrm{sing}}(K_{\ell},A_{g}(\chi))\) with the Pontrjagin dual of \(H^{1}_{\mathrm{fin}}(K_{\ell},T_{g}(\bar{\chi}))\cong\mathscr{O}_{\chi}/p^{k}\) under the local Tate duality. Equations (7.9) and (7.10) show that the first map is surjective, hence
\[\mathrm{Sel}_{\varepsilon}(K,A_{h}(\chi))=\mathrm{Sel}^{\ell_{1}\ell_{2}}_{ \varepsilon}(K,A_{g}(\chi)).\]
Together with Equations (7.7)-(7.10) this concludes the proof.
#### 7.2.3. Step 3
\(\mathrm{length}_{\mathscr{O}_{\chi}}\big{(}\mathrm{Sel}_{\varepsilon}(K,A_{g}( \chi))\big{)}\leqslant 2t^{\varepsilon}_{\bar{\chi}}(g)\).
Proof.: As in [3] one proceeds by induction on \(t_{\bar{\chi}}^{\varepsilon}(g)\). Step 1 shows that the statement holds if \(t_{\bar{\chi}}^{\varepsilon}(g)=0\). Assume then \(t_{\bar{\chi}}^{\varepsilon}(g)>0\). If \(\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))=0\) the statement is trivially verified, hence assume that \(\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))\) is non-trivial. According to Step 2 there exist two distinct \(2k\)-admissible primes \(\ell_{1}\) and \(\ell_{2}\) satisfying the properties \(\mathbf{I}_{1}\)-\(\mathbf{I}_{3}\). As in loc. cit. denote by \(h\in S_{2}(N^{+},L\ell_{1}\ell_{2}N^{-};\mathbf{Z}/p^{k})\) the \(\ell_{1}\ell_{2}\)-level raising of \(g\).
Let \(\zeta^{\varepsilon}_{\bar{\chi}}(\ell_{1})\in\mathfrak{Sel}^{\varepsilon}_{\ell_{1}}(K,T_{g}(\bar{\chi}))\) be a global class such that \(\partial_{\ell_{1}}\big{(}\zeta^{\varepsilon}_{\bar{\chi}}(\ell_{1})\big{)}\) generates the image of the residue map \(\partial_{\ell_{1}}:\mathfrak{Sel}^{\varepsilon}_{\ell_{1}}(K,T_{g}(\bar{\chi}))\to H^{1}_{\mathrm{sing}}(K_{\ell_{1}},T_{g}(\bar{\chi}))\cong\mathscr{O}_{\chi}/p^{k}\), viz. \(\partial_{\ell_{1}}\) induces an isomorphism
\[\partial_{\ell_{1}}:\mathfrak{Sel}^{\varepsilon}_{\ell_{1}}(K,T_{g}(\bar{\chi}) )\big{/}\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi}))\cong\partial_{\ell_{1}} \big{(}\zeta^{\varepsilon}_{\bar{\chi}}(\ell_{1})\big{)}\cdot\mathscr{O}_{\chi}/p^ {k}. \tag{7.12}\]
Since \(\xi^{\varepsilon}_{\bar{\chi}}(\ell_{1})\) belongs to the Selmer group \(\mathfrak{Sel}^{\varepsilon}_{\ell_{1}}(K,T_{g}(\bar{\chi}))\) by Lemma 7.2(1), multiplying \(\zeta^{\varepsilon}_{\bar{\chi}}(\ell_{1})\) by a \(p\)-adic unit if necessary one can assume that there exists an integer \(m_{1}\geqslant 0\) such that
\[\xi^{\varepsilon}_{\bar{\chi}}(\ell_{1})-\varpi_{\chi}^{m_{1}}\cdot\zeta^{ \varepsilon}_{\bar{\chi}}(\ell_{1})\in\mathfrak{Sel}^{\varepsilon}(K,T_{g}( \bar{\chi})).\]
Equation (7.12), Lemma 7.2(2) and property \(\mathbf{I}_{3}\) then yield
\[\mathrm{length}_{\mathscr{O}_{\chi}}\big{(}\mathfrak{Sel}^{\varepsilon}_{\ell_ {1}}(K,T_{g}(\bar{\chi}))\big{/}\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{ \chi}))\big{)}=\mathrm{ord}_{\chi}(p^{k})-t^{\varepsilon}_{\bar{\chi}}(g)+t^{ \varepsilon}_{\bar{\chi}}(h)+m_{1}. \tag{7.13}\]
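In detail: by Lemma 7.2(2) and property \(\mathbf{I}_{3}\) one has \(\operatorname{ord}_{\chi}\big{(}\partial_{\ell_{1}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1}))\big{)}=t_{\bar{\chi}}^{\varepsilon}(g)-t_{\bar{\chi}}^{\varepsilon}(h)\), while \(\partial_{\ell_{1}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1}))=\varpi_{\chi}^{m_{1}}\cdot\partial_{\ell_{1}}(\zeta_{\bar{\chi}}^{\varepsilon}(\ell_{1}))\), so that

\[\operatorname{ord}_{\chi}\big{(}\partial_{\ell_{1}}(\zeta_{\bar{\chi}}^{\varepsilon}(\ell_{1}))\big{)}=t_{\bar{\chi}}^{\varepsilon}(g)-t_{\bar{\chi}}^{\varepsilon}(h)-m_{1},\]

and the cyclic module \(\partial_{\ell_{1}}\big{(}\zeta_{\bar{\chi}}^{\varepsilon}(\ell_{1})\big{)}\cdot\mathscr{O}_{\chi}/p^{k}\) of (7.12) has length \(\operatorname{ord}_{\chi}(p^{k})\) minus this quantity, which is Equation (7.13).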
Similarly let \(\zeta^{\varepsilon}_{\bar{\chi}}(\ell_{2})\in\mathfrak{Sel}^{\varepsilon}_{\ell_ {1}\ell_{2}}(K,T_{g}(\bar{\chi}))\) be a class such that the residue map at \(\ell_{2}\) induces an isomorphism
\[\partial_{\ell_{2}}:\mathfrak{Sel}^{\varepsilon}_{\ell_{1}\ell_{2}}(K,T_{g}(\bar{\chi}))\big{/}\mathfrak{Sel}^{\varepsilon}_{\ell_{1}}(K,T_{g}(\bar{\chi}))\cong\partial_{\ell_{2}}\big{(}\zeta^{\varepsilon}_{\bar{\chi}}(\ell_{2})\big{)}\cdot\mathscr{O}_{\chi}/p^{k}.\]

As above, multiplying \(\zeta^{\varepsilon}_{\bar{\chi}}(\ell_{2})\) by a \(p\)-adic unit if necessary, fix an integer \(m_{2}\geqslant 0\) such that \(\xi^{\varepsilon}_{\bar{\chi}}(\ell_{2})-\varpi_{\chi}^{m_{2}}\cdot\zeta^{\varepsilon}_{\bar{\chi}}(\ell_{2})\in\mathfrak{Sel}^{\varepsilon}_{\ell_{1}}(K,T_{g}(\bar{\chi}))\),
and apply as above Lemma 7.2(2) and property \(\mathbf{I}_{3}\) to deduce the equality
\[\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\mathfrak{S}\mathfrak{e} \mathfrak{t}\mathfrak{l}^{\varepsilon}_{\bar{\xi}_{1}\ell_{2}}(K,T_{g}(\bar{ \chi}))\bigr{/}\mathfrak{S}\mathfrak{e}\mathfrak{t}\mathfrak{l}^{\varepsilon}_ {\bar{\xi}_{1}}(K,T_{g}(\bar{\chi}))\bigr{)}=\operatorname{ord}_{\chi}(p^{k} )-t^{\varepsilon}_{\bar{\chi}}(g)+t^{\varepsilon}_{\bar{\chi}}(h)+m_{2}. \tag{7.14}\]
Combining Equations (7.13) and (7.14) with the additivity of lengths along the chain \(\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi}))\subseteq\mathfrak{Sel}^{\varepsilon}_{\ell_{1}}(K,T_{g}(\bar{\chi}))\subseteq\mathfrak{Sel}^{\varepsilon}_{\ell_{1}\ell_{2}}(K,T_{g}(\bar{\chi}))\) gives the equality
\[\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\mathfrak{S}\mathfrak{e} \mathfrak{t}\mathfrak{l}^{\varepsilon}_{\bar{\xi}_{1}\ell_{2}}(K,T_{g}(\bar{ \chi}))\bigr{/}\mathfrak{S}\mathfrak{e}\mathfrak{t}^{\varepsilon}(K,T_{g}( \bar{\chi}))\bigr{)}=2\cdot\operatorname{ord}_{\chi}(p^{k})-2\cdot t^{ \varepsilon}_{\bar{\chi}}(g)+2\cdot t^{\varepsilon}_{\bar{\chi}}(h)+m_{1}+m_{2}. \tag{7.15}\]
By construction \(\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\) is the dual Selmer group of \(\mathfrak{S}\mathfrak{e}\mathfrak{l}^{\varepsilon}(K,T_{g}(\bar{\chi}))\), hence Poitou-Tate duality gives a short exact sequence of \(\mathscr{O}_{\chi}/p^{k}\)-modules (cf. Equation (7.11) in the proof of Step 2)
\[0\longrightarrow\frac{\mathfrak{S}\mathfrak{e}\mathfrak{l}^{\varepsilon}_{ \bar{\xi}_{1}\ell_{2}}(K,T_{g}(\bar{\chi}))}{\mathfrak{S}\mathfrak{e} \mathfrak{l}^{\varepsilon}(K,T_{g}(\bar{\chi}))}\stackrel{{ \partial_{t_{1}}\oplus\partial_{t_{2}}}}{{\longrightarrow}}\mathscr{O}_{ \chi}/p^{k}\oplus\mathscr{O}_{\chi}/p^{k}\stackrel{{ v^{\vee}_{t_{1}}\oplus v^{\vee}_{t_{2}}}}{{ \longrightarrow}}\left(\frac{\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))}{ \operatorname{Sel}_{\varepsilon}^{\mathfrak{l}_{\varepsilon}\ell_{2}}(K,A_{g} (\chi))}\right)^{\vee}\longrightarrow 0,\]
where for \(\ell=\ell_{1},\ell_{2}\) one identifies \(H^{1}_{\operatorname{sing}}(K_{\ell},T_{g}(\bar{\chi}))\cong H^{1}_{ \operatorname{fin}}(K_{\ell},A_{g}(\chi))^{\vee}\) with \(\mathscr{O}_{\chi}/p^{k}\) under a fixed isomorphism. Together with Equation (7.15) and property \(\mathbf{I}_{2}\) this implies
\[\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel}_{ \varepsilon}(K,A_{g}(\chi))\bigr{)}-2\cdot t^{\varepsilon}_{\bar{\chi}}(g)= \operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel}_{ \varepsilon}(K,A_{h}(\chi))\bigr{)}-2\cdot t^{\varepsilon}_{\bar{\chi}}(h)-m_ {1}-m_{2}. \tag{7.16}\]
Properties \(\mathbf{I}_{1}\) and \(\mathbf{I}_{3}\) give \(t^{\varepsilon}_{\bar{\chi}}(h)<t^{\varepsilon}_{\bar{\chi}}(g)\), hence
\[\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel}_{ \varepsilon}(K,A_{h}(\chi))\bigr{)}-2\cdot t^{\varepsilon}_{\bar{\chi}}(h)\leqslant 0 \tag{7.17}\]
by the induction hypothesis. The statement follows from Equations (7.16) and (7.17).
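Explicitly, since \(m_{1},m_{2}\geqslant 0\), combining Equation (7.16) with the induction hypothesis (7.17) gives

\[\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\bigr{)}-2\cdot t_{\bar{\chi}}^{\varepsilon}(g)\leqslant-m_{1}-m_{2}\leqslant 0.\]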
#### 7.2.4. Step 4
Assume that \((f,K,p,\varepsilon)\) is not exceptional and that \(\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))=0\). Then \(t^{\varepsilon}_{\bar{\chi}}(g)=0\).
Proof.: Theorem B of [11] implies that there exists a newform \(\xi=\sum_{n=1}^{\infty}a_{n}(\xi)\cdot q^{n}\) in \(S_{2}(\Gamma_{0}(NL))^{\operatorname{new}}\) which is congruent to \(f\) modulo \(p\). More precisely, if \(\mathbf{Q}(\xi)\) denotes the field generated over \(\mathbf{Q}\) by the Fourier coefficients of \(\xi\), then there exists a prime \(\bar{\mathfrak{P}}\) of \(\bar{\mathbf{Q}}\) dividing \(p\) such that \(a_{l}(\xi)\equiv a_{l}(E)\pmod{\bar{\mathfrak{P}}}\) for every rational prime \(l\nmid NLp\). (_Loc. cit._ proves the existence of an eigenform \(\xi\in S_{2}(\Gamma_{1}(N)\cap\Gamma_{0}(L))\) of conductor divisible by \(L\) which is congruent to \(f\) modulo \(p\). It is not difficult to prove that an eigenform with these properties has trivial character and conductor \(NL\).) Let \(J^{o}_{\xi}/\mathbf{Q}\) be the quotient of \(\operatorname{Pic}^{0}(X_{0}(NL)/\mathbf{Q})\) associated to \(\xi\) by the Eichler-Shimura construction. It is an abelian variety of dimension \([\mathbf{Q}(\xi):\mathbf{Q}]\) equipped with a morphism of \(\mathbf{Q}\)-algebras \(\mathbf{Q}(\xi)\to\operatorname{End}_{\mathbf{Q}}(J^{o}_{\xi})\otimes_{\mathbf{Z}}\mathbf{Q}\). Let \(J_{\xi}/\mathbf{Q}\) be an abelian variety in the isogeny class of \(J^{o}_{\xi}\) which has real multiplication by the ring of integers \(\mathcal{O}\) of \(\mathbf{Q}(\xi)\) and set \(\mathfrak{P}=\bar{\mathfrak{P}}\cap\mathcal{O}\). Since \(E_{p}\) is an irreducible \(\mathbf{F}_{p}[G_{\mathbf{Q}}]\)-module by Hypothesis 1.1(1), the Eichler-Shimura relations and the Brauer-Nesbitt theorem imply that there are isomorphisms of \(\mathcal{O}/\mathfrak{P}[G_{\mathbf{Q}}]\)-modules
\[J_{\xi}[\mathfrak{P}]\cong E_{p}\otimes_{\mathbf{F}_{p}}\mathcal{O}/\mathfrak{ P}\cong A_{\bar{g}}\otimes_{\mathbf{F}_{p}}\mathcal{O}/\mathfrak{P}.\]
Identify in what follows \(J_{\xi}[\mathfrak{P}]\) and \(A_{\bar{g}}\otimes_{\mathbf{F}_{p}}\mathcal{O}/\mathfrak{P}\) under a fixed isomorphism, and let
\[\operatorname{Sel}_{\mathfrak{P}}(J_{\xi}/K)\subset H^{1}(K,A_{\bar{g}})\]
be the \(\mathfrak{P}\)-Selmer group of \(J_{\xi}\) over \(K\) (cf. [13]). It follows from the results of [13, Sections 3-5] that
\[\operatorname{Sel}_{\mathfrak{P}}(J_{\xi}/K)=\operatorname{Sel}(K,A_{\bar{g}}) \otimes_{\mathbf{F}_{p}}\mathcal{O}/\mathfrak{P} \tag{7.18}\]
inside \(H^{1}(K,A_{\bar{g}})\otimes_{\mathbf{F}_{p}}\mathcal{O}/\mathfrak{P}\), where \(\operatorname{Sel}(K,A_{\bar{g}})\) is the Selmer group defined by imposing the finite local condition \(H^{1}_{\operatorname{fin}}(K_{\mathfrak{p}},A_{\bar{g}})\cong E(K_{\mathfrak{p}})\otimes\mathbf{F}_{p}\) at every prime \(\mathfrak{p}\) of \(K\) dividing \(p\) (viz. \(\operatorname{Sel}(K,A_{\bar{g}})=\operatorname{Sel}_{\mathfrak{Q}}(K,A_{\bar{g}})\) with the notations of Section 5.2, independently of whether \(E\) has ordinary or supersingular reduction at \(p\)). Note that since \((f,K,p,\varepsilon)\) is not exceptional, by definition \(E(K_{\mathfrak{p}})_{\varepsilon}=E(K_{\mathfrak{p}})\), hence
\[H^{1}_{\operatorname{fin},\varepsilon}(K_{\mathfrak{p}},A_{\bar{g}})=H^{1}_{ \operatorname{fin},\varepsilon}(K_{\mathfrak{p}},\mathbf{A}_{f})[\mathfrak{m}_{ \Lambda}]=E(K_{\mathfrak{p}})_{\varepsilon}\otimes\mathbf{F}_{p}=E(K_{ \mathfrak{p}})\otimes\mathbf{F}_{p}=H^{1}_{\operatorname{fin}}(K_{\mathfrak{p}},A_{ \bar{g}})\]
where the first equality follows from Corollary 5.5(2) and the second from Proposition 5.3. It follows that \(\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g}})=\operatorname{Sel}(K,A_{\bar{g}})\). We have an isomorphism \(H^{1}(K,A_{\bar{g}})\otimes_{\mathbf{Z}_{p}}\mathscr{O}_{\chi}\cong H^{1}(K,A_{\bar{g},\mathscr{O}_{\chi}})\) and an injection \(\operatorname{Sel}(K,A_{\bar{g}})\otimes_{\mathbf{Z}_{p}}\mathscr{O}_{\chi}\hookrightarrow H^{1}(K,A_{\bar{g}})\otimes_{\mathbf{Z}_{p}}\mathscr{O}_{\chi}\) by the flatness of \(\mathscr{O}_{\chi}/\mathbf{Z}_{p}\), and therefore \(\operatorname{Sel}(K,A_{\bar{g}})\otimes_{\mathbf{Z}_{p}}\mathscr{O}_{\chi}\) injects into \(\operatorname{Sel}(K,A_{\bar{g},\mathscr{O}_{\chi}})\). Since by assumption \(\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\) vanishes, the control theorem Proposition 5.8 shows that \(\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g},\mathscr{O}_{\chi}})=\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))[\mathfrak{m}_{\Lambda_{\mathscr{O}_{\chi}}}]\) vanishes as well; as \(\operatorname{Sel}(K,A_{\bar{g}})=\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g}})\), it follows that \(\operatorname{Sel}(K,A_{\bar{g}})=0\), and Equation (7.18) yields

\[\operatorname{Sel}_{\mathfrak{P}}(J_{\xi}/K)=0. \tag{7.19}\]
Let \(L(\xi/K,1)_{\rm alg}\) denote the algebraic part of the special value of the complex \(L\)-function of \(\xi\) over \(K\), normalized as in Section 4 of [1]. Results of Skinner-Urban-Wan ([31] and Theorem B of [32] in the ordinary case, [34] and [35] in the supersingular case) prove the inequality
\[{\rm ord}_{\mathfrak{P}}\big{(}L(\xi/K,1)_{\rm alg}\big{)}\leqslant{\rm length }_{\mathcal{O}_{\mathfrak{P}}}\big{(}{\rm Sel}_{\mathfrak{P}^{\infty}}(J_{ \xi}/K)\big{)}+\sum_{q|NL}t_{\xi}(q).\]
Because \(J_{\xi}[\mathfrak{P}]\) is an irreducible \(G_{K}\)-module, \({\rm Sel}_{\mathfrak{P}}(J_{\xi}/K)\) is equal to the \(\mathfrak{P}\)-torsion submodule of the Selmer group \({\rm Sel}_{\mathfrak{P}^{\infty}}(J_{\xi}/K)\), so that \({\rm Sel}_{\mathfrak{P}^{\infty}}(J_{\xi}/K)\) is trivial by Equation (7.19). In addition \(t_{\xi}(q)=0\) for every prime \(q|N^{+}\) under our assumptions, and the previous equation yields
\[{\rm ord}_{\mathfrak{P}}\big{(}L(\xi/K,1)_{\rm alg}\big{)}\leqslant\sum_{q| LN^{-}}t_{\xi}(q).\]
On the other hand Gross' formula (see Theorem 4.2 of [1] for the formulation in the form required in this paper) gives the identity
\[{\rm ord}_{\mathfrak{P}}\big{(}L(\xi/K,1)_{\rm alg}\big{)}=2\cdot{\rm ord}_{ \mathfrak{P}}\big{(}\psi_{\xi}\big{(}P_{K}(L)\big{)}\big{)}+\sum_{q|LN^{-}}t _{\xi}(q).\]
It follows combining the two previous formulas that \(\psi_{\xi}\big{(}P_{K}(L)\big{)}\) has trivial \(\mathfrak{P}\)-adic valuation:
\[\psi_{\xi}\big{(}P_{K}(L)\big{)}\in\mathcal{O}_{\mathfrak{P}}^{*}. \tag{7.20}\]
Since \(\xi\) is congruent to \(f\) modulo \(p\), one has \(\psi_{\xi}\big{(}P_{K}(L)\big{)}\equiv\psi_{\bar{g}}\big{(}P_{K}(L)\big{)}\pmod{\mathfrak{P}}\). In addition, as \((f,K,p,\varepsilon)\) is not exceptional, Lemmas 4.1 and 4.5 show that the equalities \(\psi_{\bar{g}}\big{(}P_{K}(L)\big{)}=\mathcal{L}_{\bar{g}}^{\varepsilon}(\mathbf{1})=\mathcal{L}_{g}^{\varepsilon}(\mathbf{1})\pmod{p}\) hold in \(\mathbf{F}_{p}\) up to multiplication by non-zero elements. Equation (7.20) then yields \(\mathcal{L}_{g}^{\varepsilon}(\mathbf{1})\in\mathbf{Z}_{p}^{*}\). Since \(\Lambda/p^{k}\) is a local ring and evaluation at \(\mathbf{1}\) is its augmentation map, an element whose augmentation is a unit lies outside the maximal ideal; hence the \(p\)-adic \(L\)-function \(\mathcal{L}_{g}^{\varepsilon}\) is a unit in \(\Lambda/p^{k}\), which in turn gives \(t_{\bar{\chi}}^{\varepsilon}(g)=0\).
#### 7.2.5. Step 5
There exist an \(\mathscr{O}_{\chi}/p^{k}\)-module \(\mathtt{M}\) and an integer \(s\in\{0,1\}\) such that
\[{\rm Sel}_{\varepsilon}(K,A_{g}(\chi))\cong(\mathscr{O}_{\chi}/p^{k})^{s}\oplus\mathtt{M}\oplus\mathtt{M}\cong\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi})).\]
Proof.: To prove this result, we need to show that the Selmer groups \({\rm Sel}_{\varepsilon}(K,A_{g}(\chi))\) and \(\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi}))\) are isomorphic and that the structure result holds for one of them. To prove the structure result \(\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi}))\cong(\mathscr{O}_{\chi}/p^{k})^{s}\oplus\mathtt{M}\oplus\mathtt{M}\), we use Theorem 1.4.2 of [17], for which we need to show that the local conditions defining \(\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi}))\) are maximal isotropic for the local Tate pairing. It turns out that proving this isotropy condition is sufficient to also show the isomorphism \({\rm Sel}_{\varepsilon}(K,A_{g}(\chi))\simeq\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi}))\).
To begin with, fix an isomorphism of \(\mathbf{Z}/p^{k}[G_{K}]\)-modules \(T_{g}\cong E_{p^{k}}\), and let \((\cdot,\cdot):T_{g}\otimes_{\mathbf{Z}_{p}}T_{g}\to\mu_{p^{k}}\) be the perfect, skew-symmetric Kummer duality corresponding to the Weil pairing on \(E_{p^{k}}\). As \(\mathscr{O}_{\chi}\cong\operatorname{Hom}_{\mathbf{Z}_{p}}(\mathscr{O}_{ \chi},\mathbf{Z}_{p})\) as \(\mathscr{O}_{\chi}\)-modules, \((\cdot,\cdot)\) induces an isomorphism of \(\mathscr{O}_{\chi}/p^{k}[G_{K}]\)-modules
\[A_{g}(\chi)=\operatorname{Hom}(T_{g}\otimes_{\mathbf{Z}_{p}}\mathscr{O}_{\chi} (\chi),\mu_{p^{k}})\cong\operatorname{Hom}(T_{g},\mu_{p^{k}})\otimes_{\mathbf{ Z}_{p}}\mathscr{O}_{\chi}(\bar{\chi})\cong T_{g}\otimes_{\mathbf{Z}_{p}} \mathscr{O}_{\chi}(\bar{\chi})=T_{g}(\bar{\chi}) \tag{7.21}\]
under which one identifies \(A_{g}(\chi)\) with \(T_{g}(\bar{\chi})\) as \(\mathscr{O}_{\chi}/p^{k}[G_{K}]\)-modules. We show that this isomorphism induces an isomorphism
\[{\rm Sel}_{\varepsilon}(K,A_{g}(\chi))\cong\mathfrak{S}\mathfrak{e}\mathfrak{e} \mathfrak{e}^{\varepsilon}(K,T_{g}(\bar{\chi})). \tag{7.22}\]
To check (7.22), it is enough to show that the local conditions defining the two Selmer groups correspond to each other under the previous identification of \(A_{g}(\chi)\) and \(T_{g}(\bar{\chi})\) as \(\mathscr{O}_{\chi}/p^{k}[G_{K}]\)-modules. First, recall that given any finite field extension \(F\) of \(\mathbf{Q}_{\ell}\), where \(\ell\) is a prime, the Weil pairing identifies the subgroup \(E(F)\otimes_{\mathbf{Z}}(\mathbf{Q}_{p}/\mathbf{Z}_{p})_{p^{k}}\) of \(H^{1}(F,A_{g})\) with the subgroup \(E(F)\otimes_{\mathbf{Z}}\mathbf{Z}/p^{k}\mathbf{Z}\) of \(H^{1}(F,T_{g})\), and these groups are the exact annihilators of each other under the local Tate pairing. For rational primes \(\ell\nmid Np\) and prime ideals \(w\mid\ell\), it is well known that \(H^{1}_{\rm fin}(K_{w},\mathbf{A}_{g})\) is equal to the image of \(E(K_{\infty,w})\otimes_{\mathbf{Z}}(\mathbf{Q}_{p}/\mathbf{Z}_{p})_{p^{k}}\), where \(K_{\infty,w}\) is the completion of \(K_{\infty}\) above a prime dividing \(w\); for primes \(\ell\mid N^{-}\), the same follows from the theory of \(\ell\)-adic uniformisation, and finally for primes \(\ell\mid N^{+}\) the local cohomology vanishes. By definition \(H^{1}_{\rm fin}(K_{w},T_{g}(\bar{\chi}))\) is the orthogonal complement of \(H^{1}_{\rm fin}(K_{w},A_{g}(\chi))\) under the local Tate pairing, hence the local conditions in \(H^{1}(K_{w},T_{g}(\bar{\chi}))\) corresponding to \(H^{1}_{\rm fin}(K_{w},A_{g}(\chi))\) under the isomorphism \(A_{g}(\chi)\cong T_{g}(\bar{\chi})\) coincide with \(H^{1}_{\rm fin}(K_{w},T_{g}(\bar{\chi}))\). Let now \(\ell=p\) and \(w\mid p\) be a prime of \(K\). Recall that \(H^{1}_{\rm fin,\varepsilon}(K_{w},\mathbf{A}_{g})\) is a subgroup of \(E(K_{\infty,w})\otimes_{\mathbf{Z}}(\mathbf{Q}_{p}/\mathbf{Z}_{p})_{p^{k}}\) (where again \(K_{\infty,w}\) is the completion of \(K_{\infty}\) at
the unique prime above \(w\)). It follows that the isomorphism \(A_{g}(\chi)\cong T_{g}(\bar{\chi})\) takes \(H^{1}_{\operatorname{fin},\varepsilon}(K_{w},A_{g}(\chi))\) to a subgroup of \(H^{1}(K_{w},T_{g}(\bar{\chi}))\) contained in the orthogonal complement of \(H^{1}_{\operatorname{fin},\varepsilon}(K_{w},A_{g}(\chi))\); by definition, this orthogonal complement is \(H^{1}_{\operatorname{fin},\varepsilon}(K_{w},T_{g}(\bar{\chi}))\). However, \(H^{1}_{\operatorname{fin},\varepsilon}(K_{w},A_{g}(\chi))\) and \(H^{1}_{\operatorname{fin},\varepsilon}(K_{w},T_{g}(\bar{\chi}))\) have the same cardinality by Proposition 5.4, and therefore the isomorphism \(A_{g}(\chi)\cong T_{g}(\bar{\chi})\) takes \(H^{1}_{\operatorname{fin},\varepsilon}(K_{w},A_{g}(\chi))\) exactly to \(H^{1}_{\operatorname{fin},\varepsilon}(K_{w},T_{g}(\bar{\chi}))\). This shows the isomorphism (7.22), and to conclude Step 5 it is therefore enough to prove that, for any character \(\chi\), we have
\[\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\chi))\cong(\mathscr{O}_{\chi}/p^{k})^{s}\oplus\mathtt{M}\oplus\mathtt{M}.\]
This follows from Theorem 1.4.2 of [17] since, as noticed above, the local conditions defining \(\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\chi))\) are maximal isotropic for the local Tate pairing.
#### 7.2.6. Step 6
Assume that \((f,K,p,\varepsilon)\) is not exceptional and let \(\ell_{1}\) and \(\ell_{2}\) be \(2k\)-admissible primes which satisfy the conditions \(\mathbf{I}_{1}\)-\(\mathbf{I}_{4}\) (cf. Step 2). Then (with the notations of _loc. cit._)
\[\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel}_{ \varepsilon}(K,A_{g}(\chi))\bigr{)}-2\cdot t_{\bar{\chi}}^{\varepsilon}(g)= \operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel}_{ \varepsilon}(K,A_{h}(\chi))\bigr{)}-2\cdot t_{\bar{\chi}}^{\varepsilon}(h). \tag{7.23}\]
Proof.: We first prove that the dimension of \(\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g}})\) over \(\mathbf{F}_{p}\) is even:
\[\dim_{\mathbf{F}_{p}}\bigl{(}\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g}}) \bigr{)}\equiv 0\pmod{2}. \tag{7.24}\]
Let \(x\in\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g}})\) be a nonzero class. Choose an admissible prime \(\ell\) such that
\[v_{\ell}(x)\in H^{1}_{\operatorname{fin}}(K_{\ell},A_{\bar{g}})\cong\mathbf{F}_{p}\]
is non-zero; such a prime exists by Theorem 3.2 of [3]. Let \(h\in S_{2}(N^{+},\ell LN^{-};\mathbf{F}_{p})\) be the \(\ell\)-level raising of \(\bar{g}\). Note that \(\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g}})\) is identified with \(\mathfrak{Sel}^{\varepsilon}(K,T_{\bar{g}})\) under the isomorphism \(T_{\bar{g}}\cong A_{\bar{g}}\) induced by the Weil pairing, viz. \(\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g}})\) is equal to its own dual Selmer group. (This can either be seen as a special case of Step 5 or, more simply, follows from the discussion in the proof of Step 4 under the current assumptions.) As in the proof of Step 1, Poitou-Tate duality then implies that \(\operatorname{Sel}_{\varepsilon}(K,A_{h})\) is equal to \(\operatorname{Sel}_{\varepsilon}^{\ell}(K,A_{\bar{g}})\), hence
\[\dim_{\mathbf{F}_{p}}\bigl{(}\operatorname{Sel}_{\varepsilon}(K,A_{h})\bigr{)} =\dim_{\mathbf{F}_{p}}\bigl{(}\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g}}) \bigr{)}-1\]
since \(v_{\ell}(x)\neq 0\). If \(\operatorname{Sel}_{\varepsilon}(K,A_{h})\neq 0\) we can apply the same argument after replacing \(\bar{g}\) with \(h\). In this way one constructs a squarefree product \(T\in\mathscr{S}_{1}\) of \(\dim_{\mathbf{F}_{p}}\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g}})\) admissible primes such that \(\operatorname{Sel}_{\varepsilon}(K,A_{h})=0\), where \(h\in S_{2}(N^{+},TLN^{-};\mathbf{F}_{p})\) now denotes the \(T\)-level raising of \(\bar{g}\). As in the proof of Step 4, the theorem of Skinner-Urban-Wan then implies that \(L(\xi/K,1)\neq 0\), where \(\xi\) is a newform of level \(\Gamma_{0}(TLN)\) which is congruent to \(f\) modulo \(p\). As a consequence
\[-1=\epsilon_{K}(TLN^{-})=\epsilon_{K}(LN^{-})\cdot(-1)^{\dim_{\mathbf{F}_{p}}\bigl{(}\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g}})\bigr{)}},\]
and since by assumption \(LN^{-}\) has an _odd_ number of prime divisors, this proves (7.24).
We now show (7.23), which is equivalent to showing that the integers \(m_{1}\) and \(m_{2}\) in Equation (7.16) are both equal to \(0\). Preliminarily, note that if \(\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\bigr{)}=2\cdot t_{\bar{\chi}}^{\varepsilon}(g)\), then, since \(\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel}_{\varepsilon}(K,A_{h}(\chi))\bigr{)}\leqslant 2\cdot t_{\bar{\chi}}^{\varepsilon}(h)\) by Step 3, Equation (7.16) directly gives \(m_{1}+m_{2}=0\) (where \(m_{1}\) and \(m_{2}\) are defined in Step 3), and hence also \(\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel}_{\varepsilon}(K,A_{h}(\chi))\bigr{)}=2\cdot t_{\bar{\chi}}^{\varepsilon}(h)\). Therefore we assume in the following that \(\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\bigr{)}<2\cdot t_{\bar{\chi}}^{\varepsilon}(g)\).
We first show that \(m_{1}=0\). By (7.1), \(t_{\bar{\chi}}^{\varepsilon}(g)<\operatorname{ord}_{\chi}(p^{k})\). Since the \(\mathbf{F}_{p}\)-dimension of \(\operatorname{Sel}_{\varepsilon}(K,A_{\bar{g}})\) is even, combining Step 5 and Nakayama's Lemma shows that
\[\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\cong\mathtt{M}\oplus\mathtt{M}.\]
It follows that \(\operatorname{length}_{\mathscr{O}_{\chi}}(\mathtt{M})<\operatorname{ord}_{\chi}(p^{k})\), and therefore
\[\varpi_{\chi}^{\operatorname{ord}_{\chi}(p^{k})-1}\cdot\operatorname{Sel}_{ \varepsilon}(K,A_{g}(\chi))=0.\]
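For the reader's convenience, we record the standard fact behind this vanishing: a finite torsion \(\mathscr{O}_{\chi}\)-module is killed by \(\varpi_{\chi}\) raised to its length, since
\[\mathtt{M}\cong\bigoplus_{i}\mathscr{O}_{\chi}/\varpi_{\chi}^{a_{i}}\qquad\text{with}\qquad\operatorname{length}_{\mathscr{O}_{\chi}}(\mathtt{M})=\sum_{i}a_{i}\geqslant\max_{i}a_{i},\]
and \(\operatorname{length}_{\mathscr{O}_{\chi}}(\mathtt{M})\leqslant\operatorname{ord}_{\chi}(p^{k})-1\) by the inequality above.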
We now consider the class \(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1})-\varpi_{\chi}^{m_{1}}\cdot\zeta_{\bar{\chi}}^{\varepsilon}(\ell_{1})\) in \(\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi}))\) appearing in the proof of Step 3. Since \(\varpi_{\chi}^{\operatorname{ord}_{\chi}(p^{k})-1}\) kills \(\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\), and since \(\operatorname{Sel}_{\varepsilon}(K,A_{g}(\chi))\) and \(\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi}))\) are dual to each other, the same is true for \(\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi}))\). Therefore we obtain the equality
\[\varpi_{\chi}^{\operatorname{ord}_{\chi}(p^{k})-1}\cdot\xi_{\bar{\chi}}^{ \varepsilon}(\ell_{1})=\varpi_{\chi}^{\operatorname{ord}_{\chi}(p^{k})-1+m_{1}} \cdot\zeta_{\bar{\chi}}^{\varepsilon}(\ell_{1}). \tag{7.25}\]
We now show that the left hand side of this equality is always non-trivial. First, enlarge \(\{\ell_{1}\}\) to a freeing set \(S\) as in Section 5.5; then by Proposition 5.11, \(\mathfrak{Sel}_{S}^{\varepsilon}\big{(}K,T_{g}(\chi)\big{)}\) is free over \(\mathscr{O}_{\chi}/p^{k}\) of rank \(\delta(S)\), the number of prime divisors of \(S\). By Lemma 7.2(3), the class \(\bar{\xi}_{\chi}^{\varepsilon}(\ell_{1})\) in \(\mathfrak{Sel}_{S}^{\varepsilon}(K,T_{\bar{g}})\otimes_{\mathbf{F}_{p}}\mathbf{F}_{\chi}\) is not trivial, therefore \(\varpi_{\chi}\) does not divide \(\xi_{\chi}^{\varepsilon}(\ell_{1})\), and it follows that \(\varpi_{\chi}^{\mathrm{ord}_{\chi}(p^{k})-1}\cdot\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1})\neq 0\) from the freeness result \(\mathfrak{Sel}_{S}^{\varepsilon}\big{(}K,T_{g}(\chi)\big{)}\cong(\mathscr{O}_{\chi}/p^{k})^{\delta(S)}\) recalled above. On the other hand, if \(m_{1}>0\), then the right hand side of (7.25) is zero, which is a contradiction. Therefore \(m_{1}\) must be equal to \(0\).
We now show that \(m_{2}=0\) with a similar argument. Consider the class \(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{2})-\varpi_{\chi}^{m_{2}}\cdot\zeta_{\bar{\chi}}^{\varepsilon}(\ell_{2})\) in \(\mathfrak{Sel}_{\ell_{1}}^{\varepsilon}(K,T_{g}(\bar{\chi}))\) appearing in the proof of Step 3. Since \(m_{1}=0\), we know that \(\partial_{\ell_{1}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1}))\) generates the image of the residue map \(\partial_{\ell_{1}}:\mathfrak{Sel}_{\ell_{1}}^{\varepsilon}(K,T_{g}(\bar{\chi}))\to H^{1}_{\mathrm{sing}}(K_{\ell_{1}},T_{g}(\bar{\chi}))\), and therefore there exists an integer \(m_{3}\geqslant 0\) such that the class \(\partial_{\ell_{1}}\left(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{2})-\varpi_{\chi}^{m_{2}}\cdot\zeta_{\bar{\chi}}^{\varepsilon}(\ell_{2})\right)\) is equal to the class \(\varpi_{\chi}^{m_{3}}\cdot\partial_{\ell_{1}}(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1}))\), i.e.
\[\partial_{\ell_{1}}\left(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{2})-\varpi_{ \chi}^{m_{2}}\cdot\zeta_{\bar{\chi}}^{\varepsilon}(\ell_{2})-\varpi_{\chi}^{ m_{3}}\cdot\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1})\right)=0.\]
Therefore by definition the class \(\xi_{\bar{\chi}}^{\varepsilon}(\ell_{2})-\varpi_{\chi}^{m_{2}}\cdot\zeta_{\bar{\chi}}^{\varepsilon}(\ell_{2})-\varpi_{\chi}^{m_{3}}\cdot\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1})\) belongs to \(\mathfrak{Sel}^{\varepsilon}(K,T_{g}(\bar{\chi}))\). Since this group is annihilated by \(\varpi_{\chi}^{\mathrm{ord}_{\chi}(p^{k})-1}\) we obtain the equality
\[\varpi_{\chi}^{\mathrm{ord}_{\chi}(p^{k})-1}\cdot\xi_{\bar{\chi}}^{\varepsilon }(\ell_{2})-\varpi_{\chi}^{\mathrm{ord}_{\chi}(p^{k})-1+m_{2}}\cdot\zeta_{\bar {\chi}}^{\varepsilon}(\ell_{2})=\varpi_{\chi}^{\mathrm{ord}_{\chi}(p^{k})-1+m _{3}}\cdot\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1}).\]
We now suppose _ad absurdum_ that \(m_{2}>0\). Then the above equation implies
\[\varpi_{\chi}^{\mathrm{ord}_{\chi}(p^{k})-1}\cdot\xi_{\bar{\chi}}^{\varepsilon }(\ell_{2})=\varpi_{\chi}^{\mathrm{ord}_{\chi}(p^{k})-1+m_{3}}\cdot\xi_{\bar{ \chi}}^{\varepsilon}(\ell_{1}). \tag{7.26}\]
By \(\mathbf{I}_{4}\), \(\mathrm{ord}_{\chi}\big{(}v_{\ell_{1}}\big{(}\xi_{\bar{\chi}}^{\varepsilon}( \ell_{2})\big{)}\big{)}=0\), and therefore, again using the freeness argument as above, we see that \(\varpi_{\chi}^{\mathrm{ord}_{\chi}(p^{k})-1}\cdot\xi_{\bar{\chi}}^{\varepsilon }(\ell_{2})\) is not trivial. Equation (7.26) then shows that \(m_{3}=0\). Therefore, applying \(v_{\ell_{1}}\) to Equation (7.26), we obtain the equality
\[\varpi_{\chi}^{\mathrm{ord}_{\chi}(p^{k})-1}\cdot v_{\ell_{1}}\big{(}\xi_{\bar {\chi}}^{\varepsilon}(\ell_{2})\big{)}=\varpi_{\chi}^{\mathrm{ord}_{\chi}(p^{k })-1}\cdot v_{\ell_{1}}\big{(}\xi_{\bar{\chi}}^{\varepsilon}(\ell_{1})\big{)}\]
By \(\mathbf{I}_{4}\), the left hand side of this equality is not trivial, while by Lemma 7.2(1), the right hand side is trivial, which is a contradiction. Therefore, \(m_{2}=0\), concluding the proof of Step 6.
#### 7.2.7. Step 7
Assume that \((f,K,p,\varepsilon)\) is not exceptional. Then
\[\mathrm{length}_{\mathscr{O}_{\chi}}\big{(}\mathrm{Sel}_{\varepsilon}(K,A_{g}( \chi))\big{)}=2\cdot t_{\bar{\chi}}^{\varepsilon}(g).\]
Proof.: The proof is by induction on \(\mathrm{length}_{\mathscr{O}_{\chi}}\big{(}\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))\big{)}\). If \(\mathrm{length}_{\mathscr{O}_{\chi}}\big{(}\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))\big{)}=0\), then the equality follows from Step 4. When \(\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))\) is not trivial, choose a pair of \(2k\)-admissible primes \(\ell_{1}\) and \(\ell_{2}\) satisfying conditions \(\mathbf{I}_{1}\)-\(\mathbf{I}_{4}\), and let \(h\) be the \(\ell_{1}\ell_{2}\)-level raising of \(g\). Since \(t_{\bar{\chi}}^{\varepsilon}(h)<t_{\bar{\chi}}^{\varepsilon}(g)\), we see from Step 6 that \(\mathrm{length}_{\mathscr{O}_{\chi}}\big{(}\mathrm{Sel}_{\varepsilon}(K,A_{h}(\chi))\big{)}\) is strictly smaller than \(\mathrm{length}_{\mathscr{O}_{\chi}}\big{(}\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))\big{)}\), and therefore by the inductive hypothesis \(\mathrm{length}_{\mathscr{O}_{\chi}}\big{(}\mathrm{Sel}_{\varepsilon}(K,A_{h}(\chi))\big{)}=2t_{\bar{\chi}}^{\varepsilon}(h)\). A further application of the equality in Step 6 then implies the result.
**Definition 7.3**.: Let \(\mathfrak{X}_{p}^{\varepsilon}(f)\) be the Pontryagin dual of \(\mathrm{Sel}_{\varepsilon}(K,\mathbf{A}_{f})\), which is a compact \(\Lambda\)-module, and denote by \(\mathrm{Char}_{p}^{\varepsilon}(f)\) its characteristic power series.
## 8. \(\varepsilon\)-BSD formulas in the indefinite case
Assume that \(N^{-}\) is _indefinite_ (i.e. \(\epsilon_{K}(N^{-})=+1\)).
**Proposition 8.1**.: _The compact \(\Lambda_{\mathscr{O}}\)-module \(\mathfrak{Sel}^{\varepsilon}(K,\mathbf{T}_{f,\mathscr{O}})\) is free of finite rank._
Proof.: By Proposition 5.9 (for \(\mathfrak{P}\) equal to the augmentation ideal of \(\Lambda_{\mathscr{O}}\)) and Shapiro's Lemma, the \(\Lambda_{\mathscr{O}}\)-quotient of \(G_{\infty}\)-coinvariants of \(\mathfrak{Sel}^{\varepsilon}(K,\mathbf{T}_{f,\mathscr{O}})\) injects into \(\mathfrak{Sel}^{\varepsilon}(K,T_{f,\mathscr{O}})\), and therefore is \(\mathscr{O}\)-free. Since \(E_{p}(K)=0\), we have \(\mathbf{T}_{f,\mathscr{O}}^{G_{\infty}}=0\), so the \(\Lambda_{\mathscr{O}}\)-module \(\mathfrak{Sel}^{\varepsilon}(K,\mathbf{T}_{f,\mathscr{O}})\) is torsion free by [25, Lemma 1.3.3], hence its \(\Lambda_{\mathscr{O}}\)-submodule of \(G_{\infty}\)-invariants is trivial. The result then follows from a standard argument (_e.g._ [23, Proposition 5.3.19(ii)]).
### \(\Lambda\)-adic classes
We first construct global classes, in a way similar to §6.1. Since \(\epsilon_{K}(N^{-})=+1\), \(J_{N^{+},N^{-}}\) is the Picard variety of the Shimura curve \(X_{N^{+},N^{-}}\). Let \(I_{f}\subset\mathbf{T}_{N^{+},N^{-}}\) denote the kernel of \(f\). Modularity implies that there is an isomorphism of \(\mathbf{Z}_{p}[G_{\mathbf{Q}}]\)-modules
\[\pi_{f}:\mathrm{Ta}_{p}(J_{N^{+},N^{-}})/I_{f}\cong T_{f},\]
which is unique up to multiplication by a \(p\)-adic unit by Hypothesis 1.1(1). For every integer \(n\geqslant 0\) define
\[\psi_{f,n}:J_{N^{+},N^{-}}(K_{n})\longrightarrow H^{1}(K_{n},\mathrm{Ta}_{p}( J_{N^{+},N^{-}})/I_{f})\cong H^{1}(K_{n},T_{f}),\]
where the first (resp., second) map is induced by the Kummer map (resp., by \(\pi_{f}\)). For every point \(x\in J_{N^{+},N^{-}}(K_{n})\) the class \(\psi_{f,n}(x)\) is finite at every prime of \(K_{n}\) dividing \(p\). Moreover, since \(J_{N^{+},N^{-}}\) has purely toric reduction at every prime divisor of \(N^{-}\), Mumford-Tate theory of \(p\)-adic uniformisation implies that these classes are ordinary at every such prime. Therefore, we obtain a map
\[\psi_{f,n}:J_{N^{+},N^{-}}(K_{n})\longrightarrow\mathfrak{Sel}(K_{n},T_{f}).\]
Recall the compatible sequence of Heegner points \(P_{n}=P_{n}(1)\), for \(n\geqslant 0\), introduced in Section 2.5 and define \(\tilde{\kappa}_{n}=\psi_{f,n}(P_{n})\).
#### 8.1.1. Ordinary case
Suppose that \(E\) has ordinary reduction at \(p\). The classes
\[\kappa_{n}=\frac{1}{\alpha_{p}(f)^{n}}\Big{(}\tilde{\kappa}_{n-1}-\alpha_{p}(f)\cdot\tilde{\kappa}_{n}\Big{)}\]
belong to \(\mathfrak{Sel}(K_{n},T_{f})\) by the previous discussion, and Equation (2.5) shows that they are norm-compatible. As in §6.1, define
\[\kappa_{\infty}=\varprojlim_{n}\kappa_{n}\in\varprojlim_{n}\mathfrak{Sel}(K_{ n},T_{f}) \tag{8.1}\]
where the inverse limit is computed with respect to the canonical norm maps.
#### 8.1.2. Supersingular case
Using the freeness result of Proposition 8.1, by the same argument in §6.1 one can define classes
\[\tilde{\kappa}_{n}^{\varepsilon}\in\mathfrak{Sel}^{\varepsilon}(K_{n},T_{f})/ \omega_{n}^{\varepsilon}\]
such that \(\tilde{\omega}_{n}^{-\varepsilon}\cdot\tilde{\kappa}_{n}^{\varepsilon}=\tilde{\kappa}_{n}\) if \(p\) is split in \(K\) or \(p\) is inert in \(K\) and \(\varepsilon=-1\) (the non-exceptional case), and \(\omega_{n}^{-}\cdot\tilde{\kappa}_{n}^{+}=\tilde{\kappa}_{n}\) if \(p\) is inert in \(K\) and \(\varepsilon=+1\) (the exceptional case). Define \(\kappa_{n}^{+}=(-1)^{n/2}\tilde{\kappa}_{n}^{+}\) if \(n\) is even and \(\kappa_{n}^{-}=(-1)^{(n-1)/2}\tilde{\kappa}_{n}^{-}\) if \(n\) is odd. A calculation using Equation (2.5) shows that the classes \(\kappa_{n}^{\varepsilon}\) are compatible with respect to the canonical projection maps. Define as in §6.1
\[\kappa_{\infty}^{\varepsilon}=\varprojlim_{n}\kappa_{n}^{\varepsilon}\in \varprojlim_{n\in\mathbf{N}^{\varepsilon}}\mathfrak{Sel}^{\varepsilon}(K_{n},T _{f})/\omega_{n}^{\varepsilon},\]
where \(\mathbf{N}^{\varepsilon}\) is the set of positive integers verifying the condition \((-1)^{n}=\varepsilon\).
### Lengths of Selmer groups
Fix a morphism \(\chi:\Lambda\to\mathscr{O}_{\chi}\) of \(\mathbf{Z}_{p}\)-algebras, where as above \(\mathscr{O}_{\chi}\) is the integral closure of \(\Lambda/\mathfrak{P}_{\chi}\), and \(\mathfrak{P}_{\chi}=\ker(\chi)\). Denote
\[\kappa_{\chi}^{\varepsilon}\in\mathfrak{Sel}^{\varepsilon}(K,\mathbf{T}_{f}) \otimes\mathscr{O}_{\chi}\stackrel{{\mathrm{def}}}{{=}}\mathfrak{Sel }^{\varepsilon}(K,\mathbf{T}_{f,\mathscr{O}_{\chi}})\otimes_{\Lambda_{\mathscr{O }_{\chi}}}\mathscr{O}_{\chi}\]
the image of \(\kappa_{\infty}^{\varepsilon}\) via the canonical map described above, where recall that the tensor product \(\otimes_{\Lambda_{\mathscr{O}_{\chi}}}\) is taken with respect to \(\chi\). We assume that \(\mathrm{ord}_{\chi}(\kappa_{\chi}^{\varepsilon})\) is finite. Using that \(\mathfrak{Sel}^{\varepsilon}(K,\mathbf{T}_{f,\mathscr{O}_{\chi}})\) is \(\Lambda_{\mathscr{O}_{\chi}}\)-free by Proposition 8.1, define the integer
\[t_{\chi}^{\varepsilon}(f)=\mathrm{ord}_{\chi}\left(\kappa_{\chi}^{\varepsilon} \right)<\infty.\]
For any \(p\)-power torsion group \(G\), let \(G_{/\mathrm{div}}\) denote the quotient of \(G\) by its maximal \(p\)-divisible subgroup.
**Theorem 8.2**.: _Suppose that \(t_{\chi}^{\varepsilon}(f)<\infty\). Then the \(\mathscr{O}_{\chi}\)-corank of \(\mathrm{Sel}_{\varepsilon}(K,A_{f}(\chi))\) is \(1\) and we have_
\[\mathrm{length}_{\mathscr{O}_{\chi}}\left(\mathrm{Sel}_{\varepsilon}(K,A_{f}( \chi))_{/\mathrm{div}}\right)\leqslant 2\cdot\mathrm{length}_{\mathscr{O}_{\chi}}\left(( \mathfrak{Sel}^{\varepsilon}(K,\mathbf{T}_{f})\otimes\mathscr{O}_{\chi})/ \mathscr{O}_{\chi}\cdot\kappa_{\bar{\chi}}^{\varepsilon}\right),\]
_and the equality holds if \((f,K,p,\varepsilon)\) is not exceptional._
Proof.: It follows from the freeness of \(\mathfrak{Sel}^{\varepsilon}(K,\mathbf{T}_{f,\mathscr{O}_{\chi}})\) that there exists \(\tilde{\kappa}_{\chi}^{\varepsilon}\) in \(\mathfrak{Sel}^{\varepsilon}(K,\mathbf{T}_{f})\otimes\mathscr{O}_{\chi}\) such that \(\mathrm{ord}_{\chi}(\tilde{\kappa}_{\chi}^{\varepsilon})=0\) and \(\kappa_{\chi}^{\varepsilon}=\varpi_{\chi}^{t_{\chi}^{\varepsilon}(f)}\cdot\tilde{\kappa}_{\chi}^{\varepsilon}\). Define \(\xi_{\chi}^{\varepsilon}\in\mathfrak{Sel}^{\varepsilon}(K,T_{f}(\chi))\) to be the image of \(\tilde{\kappa}_{\chi}^{\varepsilon}\) under the (injective) specialization map \(\mathfrak{s}_{\chi}:\mathfrak{Sel}_{S}^{\varepsilon}(K,\mathbf{T}_{f})\otimes\mathscr{O}_{\chi}\hookrightarrow\mathfrak{Sel}^{\varepsilon}(K,T_{f}(\chi))\). We also denote \(\kappa_{\chi,k}^{\varepsilon}\) the image of \(\kappa_{\chi}^{\varepsilon}\) in \(\mathfrak{Sel}^{\varepsilon}(K,T_{f,k}(\chi))\), for all integers \(k\geqslant 1\), and \(\xi_{\chi,k}^{\varepsilon}\) the image of \(\xi_{\chi}^{\varepsilon}\) in \(H^{1}(K,T_{f,k}(\chi))\). If \(k=1\), the element \(\xi_{\chi,1}^{\varepsilon}\) will be denoted \(\bar{\xi}_{\chi}^{\varepsilon}\). As before (_cf._ Step 5 in §7.2.5) we have
\[\mathrm{Sel}_{\varepsilon}(K,A_{f}(\chi))\cong(\mathscr{K}_{\chi}/\mathscr{O} _{\chi})^{s}\oplus M_{\chi}\oplus M_{\chi}.\]
for some integer \(s\) and a finite torsion \(\mathscr{O}_{\chi}\)-module \(M_{\chi}\). Choose an integer
\[k>\max\{\mathrm{length}_{\mathscr{O}_{\chi}}(M_{\chi}),t_{\bar{\chi}}^{ \varepsilon}(f)\}.\]
Using [3, Theorem 3.2], choose an admissible prime \(\ell\in\mathscr{S}_{k}\) such that \(v_{\ell}(\bar{\xi}_{\bar{\chi}}^{\varepsilon})\neq 0\). Let \(g\) be the \(\ell\)-level raising of \(f\). Then since \(H^{1}_{\mathrm{fin}}(K_{\ell},T_{f,\mathscr{O}_{\chi}})\) is a free \(\mathscr{O}_{\chi}\)-module of rank \(1\), and \(v_{\ell}(\bar{\xi}_{\bar{\chi}}^{\varepsilon})\neq 0\), using Proposition 5.5, we have
\[\mathrm{ord}_{\chi}\left(v_{\ell}\left(\kappa_{\bar{\chi},k}^{\varepsilon} \right)\right)=\mathrm{ord}_{\chi}\left(v_{\ell}\left(\kappa_{\bar{\chi}}^{ \varepsilon}\right)\right). \tag{8.2}\]
_Step 1._ Theorem 7.1 for \(g\) shows that
\[\mathrm{length}_{\mathscr{O}_{\chi}}\left(\mathrm{Sel}_{\varepsilon}(K,A_{g}( \chi))\right)\leqslant 2\cdot\mathrm{ord}_{\chi}\left(\mathcal{L}_{g}^{ \varepsilon}(\bar{\chi})\right),\]
with equality in the non-exceptional case, and Theorem 6.3 shows that
\[\mathrm{ord}_{\chi}\left(v_{\ell}\left(\kappa_{\bar{\chi},k}^{\varepsilon} \right)\right)=\mathrm{ord}_{\chi}\left(\mathcal{L}_{g}^{\varepsilon}(\bar{ \chi})\right).\]
Thus, by (8.2) and the injectivity of the map \(v_{\ell}\) (which follows from \(v_{\ell}(\bar{\xi}_{\bar{\chi}}^{\varepsilon})\neq 0\)), we have
\[\mathrm{length}_{\mathscr{O}_{\chi}}\left(\mathrm{Sel}_{\varepsilon}(K,A_{g}( \chi))\right)\leqslant 2\cdot\mathrm{ord}_{\chi}\left(v_{\ell}\left(\kappa_{\bar{\chi}}^{ \varepsilon}\right)\right)=2\cdot\mathrm{length}_{\mathscr{O}_{\chi}}\left( \mathfrak{S}\mathfrak{e}\mathfrak{l}^{\varepsilon}(K,\mathbf{T}_{f})\otimes \mathscr{O}_{\chi}/\mathscr{O}_{\chi}\cdot\kappa_{\bar{\chi}}^{\varepsilon}\right)\]
with equality in the non-exceptional case.
_Step 2._ Recall the relaxed Selmer group \(\mathrm{Sel}_{\varepsilon}^{(\ell)}(K,A_{f,k}(\chi))\supseteq\mathrm{Sel}_{\varepsilon}(K,A_{f,k}(\chi))\), _i.e._ the group of cohomology classes defined by requiring the same conditions as \(\mathrm{Sel}_{\varepsilon}(K,A_{f,k}(\chi))\) at primes different from \(\ell\), and no condition at \(\ell\). We claim that
\[\mathrm{Sel}_{\varepsilon}(K,A_{f,k}(\chi))=\mathrm{Sel}_{\varepsilon}^{(\ell) }(K,A_{f,k}(\chi)). \tag{8.3}\]
To prove this, let \(x\in\mathrm{Sel}_{\varepsilon}^{(\ell)}(K,A_{f,k}(\chi))\). We have to show that \(x\) is in the kernel of the residue map at \(\ell\). By global class field theory, using the orthogonality of \(\mathrm{res}_{v}(x)\) and \(\mathrm{res}_{v}(\xi_{\bar{\chi},k}^{\varepsilon})\) outside \(\ell\) as in Step 1 of the proof of Theorem 7.1, one then obtains
\[\left\langle\partial_{\ell}(x),v_{\ell}\big{(}\xi_{\bar{\chi},k}^{\varepsilon}\big{)}\right\rangle_{\ell}=0.\]
Since \(v_{\ell}\big{(}\bar{\xi}_{\bar{\chi}}^{\varepsilon}\big{)}\neq 0\), and since \(\left\langle-,-\right\rangle_{\ell}\) is a perfect pairing, this implies that \(\partial_{\ell}(x)=0\), as was to be shown.
_Step 3._ We claim that there is an exact sequence:
\[0\longrightarrow\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))\longrightarrow \mathrm{Sel}_{\varepsilon}(K,A_{f,k}(\chi))\overset{v_{\ell}}{\longrightarrow}H^{1} _{\mathrm{fin}}(K_{\ell},A_{f,k}(\chi))\longrightarrow 0.\]
To show this, first note that
\[\mathrm{Sel}_{\varepsilon}(K,A_{g}(\chi))\subseteq\mathrm{Sel}_{\varepsilon}^{( \ell)}(K,A_{g}(\chi))=\mathrm{Sel}_{\varepsilon}^{(\ell)}(K,A_{f,k}(\chi))= \mathrm{Sel}_{\varepsilon}(K,A_{f,k}(\chi))\]
where the last equality follows from Step 2; this shows the exactness on the left. By definition, the kernel of the map \(v_{\ell}:\mathrm{Sel}_{\varepsilon}(K,A_{f,k}(\chi))\to H^{1}_{\mathrm{fin}}(K_{ \ell},A_{f,k}(\chi))\) is \(H^{1}_{\mathrm{ord}}(K_{\ell},A_{f,k}(\chi))\), proving the exactness in the middle. Finally, \(v_{\ell}\) is surjective because, under the isomorphism \(T_{f,k}(\chi)\simeq A_{f,k}(\chi)\), \(\xi_{\bar{\chi}}^{\varepsilon}\) is a class in \(\mathrm{Sel}_{\varepsilon}(K,A_{f,k}(\chi))\) which satisfies \(v_{\ell}\big{(}\bar{\xi}_{\bar{\chi}}^{\varepsilon}\big{)}\neq 0\).
_Step 4._ From Step 3 we obtain the equality
\[\mathrm{length}_{\mathscr{O}_{\chi}}\left(\mathrm{Sel}_{\varepsilon}(K,A_{g}( \chi))\right)=\mathrm{length}_{\mathscr{O}_{\chi}}\left(\mathrm{Sel}_{ \varepsilon}(K,A_{f,k}(\chi))\right)-\mathrm{length}_{\mathscr{O}_{\chi}} \left(\mathscr{O}_{\chi}/p^{k}\mathscr{O}_{\chi}\right)\]
and combining with Step 1 we get
\[\mathrm{length}_{\mathscr{O}_{\chi}}\left(\mathrm{Sel}_{\varepsilon}(K,A_{f,k}(\chi))\right)-\mathrm{length}_{\mathscr{O}_{\chi}}\left(\mathscr{O}_{\chi}/p^{k}\mathscr{O}_{\chi}\right)\leqslant 2\cdot\mathrm{length}_{\mathscr{O}_{\chi}}\left(\left(\mathfrak{Sel}^{\varepsilon}(K,\mathbf{T}_{f})\otimes\mathscr{O}_{\chi}\right)/\mathscr{O}_{\chi}\cdot\kappa_{\bar{\chi}}^{\varepsilon}\right)\]
with equality in the non-exceptional case. Since the left hand side has finite order, bounded independently of \(k\), we see that the \(\mathscr{O}_{\chi}\)-corank of \(\operatorname{Sel}_{\varepsilon}(K,A_{f}(\chi))\) is \(1\). By the choice of \(k\),
\[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}_{\varepsilon}(K,A_{f,k}(\chi))\right)-\operatorname{length}_{\mathscr{O}_{\chi}}\left(\mathscr{O}_{\chi}/p^{k}\mathscr{O}_{\chi}\right)=\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}_{\varepsilon}(K,A_{f}(\chi))_{/\operatorname{div}}\right),\]
concluding the proof.
The next two definitions enter the statement of the IAMC.
**Definition 8.3**.: Let \(L^{\varepsilon}_{p}(f)\) denote the characteristic power series of \(\mathfrak{Sel}(K,\mathbf{T}_{f})/\Lambda\cdot\kappa^{\varepsilon}_{\infty}\).
Let \(\mathfrak{X}^{\varepsilon}_{p}(f)\) be the Pontryagin dual of \(\operatorname{Sel}_{\varepsilon}(K,\mathbf{A}_{f})\). Then the compact \(\Lambda\)-module \(\mathfrak{X}^{\varepsilon}_{p}(f)\) is pseudo-isomorphic to \(\Lambda\oplus\mathfrak{M}\oplus\mathfrak{M}\) for a torsion \(\Lambda\)-module \(\mathfrak{M}\), supported only on primes of height \(1\); this follows from Theorem 8.2, the structure results in Step 5 of the proof of Theorem 7.1, and Proposition 5.8.
**Definition 8.4**.: Let \(\operatorname{Char}^{\varepsilon}_{p}(f)\) be the characteristic ideal of the \(\Lambda\)-module \(\mathfrak{M}\).
## 9. Proof of Theorems B and C
Fix throughout this section a finite order character \(\chi:G_{\infty}\twoheadrightarrow\mathscr{O}_{\chi}^{\times}\) of conductor \(p^{n}\).
### Comparison of Selmer groups
Suppose that \(p\) is a supersingular prime for \(E\). The aim of this section is to compare the discrete Selmer groups \(\operatorname{Sel}_{\varepsilon}(K,A_{f}(\chi))\) and \(\operatorname{Sel}(K,A_{f}(\chi))\) and the compact Selmer groups \(\mathfrak{Sel}^{\varepsilon}(K,T_{f}(\chi))\) and \(\mathfrak{Sel}(K,T_{f}(\chi))\). Let \(\mathfrak{p}\mid p\) be a prime and fix \(k\in\mathbf{N}\cup\{\infty\}\). Set as before \(\Phi=K_{\mathfrak{p}}\), \(\Phi_{n}=K_{n,\mathfrak{p}}\) and \(\Phi_{\infty}=K_{\infty,\mathfrak{p}}\). Let \(\mathfrak{P}_{\chi}=(\mathfrak{p}_{\chi})\) be the kernel of the character \(\chi:\Lambda_{\mathscr{O}_{\chi}}\twoheadrightarrow\mathscr{O}_{\chi}\) obtained from \(\chi\), where \(\mathfrak{p}_{\chi}=\gamma-\chi(\gamma)\). We also view \(\chi\) as a character \(\chi:\mathscr{O}_{\chi}[G_{n}]\twoheadrightarrow\mathscr{O}_{\chi}\), whose kernel we still denote by \(\mathfrak{P}_{\chi}=(\mathfrak{p}_{\chi})\). To simplify the notation, define
\[\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\operatorname{div}} =\mathbf{E}(\Phi_{n})\otimes_{\mathbf{Z}}(\mathscr{K}_{\chi}/ \mathscr{O}_{\chi}),\] \[\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\pm,\operatorname{div}} =\mathbf{E}(\Phi_{n})_{\pm}\otimes_{\mathbf{Z}}(\mathscr{K}_{ \chi}/\mathscr{O}_{\chi}).\]
**Lemma 9.1**.: _Let \(\varepsilon=(-1)^{n}\)._
1. _In the non-exceptional case,_ \(\frac{(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\operatorname{div}})[ \mathfrak{P}_{\chi}]}{(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\operatorname{div}})[\mathfrak{P}_{\chi}]}\) _is finite and_ \[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\frac{\left(\mathbf{E}_{ \mathscr{O}_{\chi}}(\Phi_{n})_{\operatorname{div}}\right)[\mathfrak{P}_{\chi }]}{(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\operatorname{div }})[\mathfrak{P}_{\chi}]}\right)=[\Phi:\mathbf{Q}_{p}]\cdot\operatorname{ord}_{ \chi}(\tilde{\omega}_{n}^{-\varepsilon}).\]
2. _In the exceptional case,_ 1. _If_ \(n=0\)_, so_ \(\chi=\mathbf{1}\) _is the trivial character,_ \[\frac{\left(\mathbf{E}_{\mathbf{Z}_{p}}(\Phi)_{\operatorname{div}}\right)[(\gamma-1)]}{\left(\mathbf{E}_{\mathbf{Z}_{p}}(\Phi)_{+,\operatorname{div}}\right)[(\gamma-1)]}=\mathbf{E}(\Phi)\otimes_{\mathbf{Z}_{p}}\mathbf{Q}_{p}/\mathbf{Z}_{p}\cong(\mathbf{Q}_{p}/\mathbf{Z}_{p})^{[\Phi:\mathbf{Q}_{p}]},\] 2. _If_ \(n\geqslant 2\)_,_ \(\frac{(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\operatorname{div}})[\mathfrak{P}_{\chi}]}{(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{+,\operatorname{div}})[\mathfrak{P}_{\chi}]}\) _is finite and_ \[\operatorname{length}\left(\frac{\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\operatorname{div}}\right)[\mathfrak{P}_{\chi}]}{\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{+,\operatorname{div}}\right)[\mathfrak{P}_{\chi}]}\right)=[\Phi:\mathbf{Q}_{p}]\cdot\operatorname{ord}_{\chi}(\omega_{n}^{-}).\]
Proof.: Suppose first \(n=0\). Then \(\chi\) is the trivial character, \(\mathscr{O}_{\chi}=\mathbf{Z}_{p}\) and we suppress the index \(\mathscr{O}_{\chi}\) from the notation, thus writing \(\mathbf{E}(\Phi)_{\operatorname{div}}\) for \(\mathbf{E}_{\mathbf{Z}_{p}}(\Phi)_{\operatorname{div}}\) and \(\mathbf{E}(\Phi)_{\pm,\operatorname{div}}\) for \(\mathbf{E}_{\mathbf{Z}_{p}}(\Phi)_{\pm,\operatorname{div}}\). If \(p\) is split, then \(\mathbf{E}(\Phi)_{\operatorname{div}}=\mathbf{E}(\Phi)_{+,\operatorname{div}}\), so the quotient in the statement is trivial; on the other hand, \(\tilde{\omega}_{0}^{-}=1\), and the statement is proved. If \(p\) is inert, \(\mathbf{E}(\Phi)_{\operatorname{div}}=\mathbf{E}(\Phi)_{-,\operatorname{div}}\) and \(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi)_{+,\operatorname{div}}=0\), so the quotient in the statement is \(\mathbf{E}(\Phi)_{\operatorname{div}}[(\gamma-1)]\cong(\mathbf{Q}_{p}/\mathbf{Z }_{p})^{[\Phi:\mathbf{Q}_{p}]}\), where the last isomorphism follows from Theorem 5.1.
Suppose \(n\geqslant 1\). We first observe that we have an exact sequence:
\[0\longrightarrow C\longrightarrow\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]\oplus\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{-\varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]\longrightarrow\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\mathrm{div}}\right)[\mathfrak{P}_{\chi}]\longrightarrow 0, \tag{9.1}\]
where \(C=0\) if \(p\) is inert in \(K\) and \(C=\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi)_{\operatorname{div}}\right)[ \mathfrak{P}_{\chi}]\) if \(p\) is split in \(K\); in this exact sequence the second arrow is the map \(x\mapsto(x,x)\), and the third arrow is the map \((x,y)\mapsto x-y\). If \(p\) is inert in \(K\), it
follows from Theorem 5.1 that \(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\mathrm{div}}\) is the direct sum of \(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\mathrm{div}}\) and \(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{-\varepsilon,\mathrm{div}}\), which proves (9.1) (also in the exceptional case). In the split case, it follows again from Theorem 5.1 that
\[\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\mathrm{div}}\cap \mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{-\varepsilon,\mathrm{div}}=\mathbf{ E}_{\mathscr{O}_{\chi}}(\Phi)_{\mathrm{div}},\]
so we need to show that the exact sequence
\[0\longrightarrow\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi)_{\mathrm{div}}\longrightarrow\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\mathrm{div}}\oplus\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{-\varepsilon,\mathrm{div}}\longrightarrow\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\mathrm{div}}\longrightarrow 0\]
remains exact after taking \(\mathfrak{P}_{\chi}\)-torsion; that is, we need to show that the map
\[\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]\oplus\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{-\varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]\longrightarrow\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\mathrm{div}}\right)[\mathfrak{P}_{\chi}]\]
is surjective. The cokernel of this map injects into the quotient \(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi)_{\mathrm{div}}/\mathfrak{p}_{\chi} \mathbf{E}_{\mathscr{O}_{\chi}}(\Phi)_{\mathrm{div}}\), and we need to show that this group is trivial. Since \(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi)\) is \(\mathscr{O}_{\chi}\)-free, it is enough to show that
\[\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi)/\mathfrak{p}_{\chi}\mathbf{E}_{ \mathscr{O}_{\chi}}(\Phi)\right)\otimes_{\mathscr{O}_{\chi}}\mathscr{K}_{ \chi}/\mathscr{O}_{\chi}=0. \tag{9.2}\]
Now, \(\mathfrak{p}_{\chi}\) acts on the \(\mathscr{O}_{\chi}\)-free module \(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi)\) as multiplication by \(1-\chi(\gamma)\), and since \(\chi(\gamma)\) is a primitive \(p^{n}\)-root of unity, and \(n\geqslant 1\), the quotient \(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi)/\mathfrak{p}_{\chi}\mathbf{E}_{ \mathscr{O}_{\chi}}(\Phi)\) is finite, and (9.2) follows.
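Concretely, \(\chi(\gamma)=\zeta_{p^{n}}\) is a primitive \(p^{n}\)-th root of unity, and the standard valuation computation
\[v_{p}\bigl{(}1-\zeta_{p^{n}}\bigr{)}=\frac{1}{p^{n-1}(p-1)}>0\]
shows that \(1-\chi(\gamma)\) is a nonzero non-unit of \(\mathscr{O}_{\chi}\); multiplication by \(\mathfrak{p}_{\chi}\) on the free module \(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi)\) therefore has finite cokernel, as claimed.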
We therefore have a commutative diagram with exact rows
\[\begin{array}{ccccccccc}0&\longrightarrow&0&\longrightarrow&\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]&\overset{\mathrm{id}}{\longrightarrow}&\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]&\longrightarrow&0\\ &&\downarrow&&\downarrow&&\downarrow&&\\ 0&\longrightarrow&C&\longrightarrow&\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]\oplus\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{-\varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]&\longrightarrow&\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\mathrm{div}}\right)[\mathfrak{P}_{\chi}]&\longrightarrow&0\end{array}\]
where \(C\) is defined before, the middle vertical arrow is the map \(x\mapsto(x,0)\), and the right vertical arrow is the natural inclusion. By the snake lemma we obtain an exact sequence
\[0\longrightarrow C\longrightarrow\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi _{n})_{-\varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]\longrightarrow \frac{\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\mathrm{div}}\right)[ \mathfrak{P}_{\chi}]}{\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{ \varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]}\longrightarrow 0.\]
The Pontryagin dual of the middle term is \(\Lambda_{\mathscr{O}_{\chi}}/(\omega_{n}^{-\varepsilon},\mathfrak{p}_{\chi})\), because the Pontryagin dual of \(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{-\varepsilon,\mathrm{div}}\) is \(\Lambda_{\mathscr{O}_{\chi}}/(\omega_{n}^{-\varepsilon})\); since \(\Lambda_{\mathscr{O}_{\chi}}/\mathfrak{P}_{\chi}\cong\mathscr{O}_{\chi}\), the length of the middle term is equal to the length of \(\mathscr{O}_{\chi}/\chi(\omega_{n}^{-\varepsilon})\). Similarly, if \(p\) is split, the length of \(C=\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi)_{\mathrm{div}}\right)[\mathfrak{P}_{\chi}]\) is equal to the length of \(\mathscr{O}_{\chi}/\chi(\gamma-1)\) and therefore the length of the quotient is equal to the length of \(\mathscr{O}_{\chi}/\chi(\tilde{\omega}_{n}^{-\varepsilon})\), completing the proof in this case. If \(p\) is inert, then \(C\) is trivial. If \(\varepsilon=-1\) (the non-exceptional case), then \(\tilde{\omega}_{n}^{+}=\omega_{n}^{+}\), and the length of the last term is equal to the length of \(\mathscr{O}_{\chi}/\chi(\omega_{n}^{+})=\mathscr{O}_{\chi}/\chi(\tilde{\omega}_{n}^{+})\), while if \(\varepsilon=+1\) (the exceptional case) then the length of the last term is equal to the length of \(\mathscr{O}_{\chi}/\chi(\omega_{n}^{-})\), completing the proof.
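The length computations used in this proof all reduce to the isomorphisms
\[\Lambda_{\mathscr{O}_{\chi}}/(\omega_{n}^{-\varepsilon},\mathfrak{p}_{\chi})\cong\bigl{(}\Lambda_{\mathscr{O}_{\chi}}/\mathfrak{P}_{\chi}\bigr{)}\big{/}\chi(\omega_{n}^{-\varepsilon})\cong\mathscr{O}_{\chi}/\chi(\omega_{n}^{-\varepsilon})\mathscr{O}_{\chi},\]
whose common length is \(\operatorname{ord}_{\chi}(\omega_{n}^{-\varepsilon})\).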
**Proposition 9.2**.: _In the exceptional case, assume that \(n\neq 0\). The discrete Selmer groups \(\mathrm{Sel}_{\varepsilon}(K,A_{f}(\chi))\) and \(\mathrm{Sel}(K,A_{f}(\chi))\) have the same \(\mathscr{O}_{\chi}\)-corank. Moreover,_
1. _If_ \(p\) _is split in_ \(K\) _or_ \(p\) _is inert in_ \(K\) _and_ \(\varepsilon=-1\) _(the non-exceptional case),_ \[\mathrm{length}_{\mathscr{O}_{\chi}}\left(\frac{\mathrm{Sel}(K,A_{f}(\chi))}{ \mathrm{Sel}_{\varepsilon}(K,A_{f}(\chi))}\right)=2\cdot\mathrm{ord}_{\chi}( \omega_{n}^{-\varepsilon}).\]
2. _If_ \(p\) _is inert in_ \(K\) _and_ \(\varepsilon=+1\) _(the exceptional case),_ \[\mathrm{length}_{\mathscr{O}_{\chi}}\left(\frac{\mathrm{Sel}(K,A_{f}(\chi))}{ \mathrm{Sel}_{+}(K,A_{f}(\chi))}\right)=2\cdot\mathrm{ord}_{\chi}(\omega_{n}^ {-}).\]
Proof.: We have the Poitou-Tate exact sequence
\[0\longrightarrow\mathrm{Sel}_{\varepsilon}(K,A_{f}(\chi))\longrightarrow\mathrm{Sel}(K,A_{f}(\chi))\longrightarrow\prod_{\mathfrak{p}|p}\frac{\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\mathrm{div}}\right)[\mathfrak{P}_{\chi}]}{\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]}\longrightarrow\left(\mathfrak{Sel}^{\varepsilon}(K,T_{f}(\bar{\chi}))\right)^{\vee}\longrightarrow\left(\mathfrak{Sel}(K,T_{f}(\bar{\chi}))\right)^{\vee}\longrightarrow 0.\]
Now \(\mathfrak{Sel}^{\varepsilon}(K,T_{f}(\bar{\chi}))\) is \(\mathscr{O}_{\chi}\)-free, and therefore \(\left(\mathfrak{Sel}^{\varepsilon}(K,T_{f}(\bar{\chi}))\right)^{\vee}\) is \(p\)-divisible. If we show that the kernel of the map \(\left(\mathfrak{Sel}^{\varepsilon}(K,T_{f}(\bar{\chi}))\right)^{\vee}\to\left(\mathfrak{Sel}(K,T_{f}(\bar{\chi}))\right)^{\vee}\) is divisible, then, since the local quotient in the middle of the above exact sequence is finite by Lemma 9.1, we have an exact sequence
\[0\longrightarrow\operatorname{Sel}_{\varepsilon}(K,A_{f}(\chi))\longrightarrow\operatorname{Sel}(K,A_{f}(\chi))\longrightarrow\prod_{\mathfrak{p}\mid p}\frac{\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\mathrm{div}}\right)[\mathfrak{P}_{\chi}]}{\left(\mathbf{E}_{\mathscr{O}_{\chi}}(\Phi_{n})_{\varepsilon,\mathrm{div}}\right)[\mathfrak{P}_{\chi}]}\longrightarrow 0\]
and the result follows from Lemma 9.1. So to complete the proof we need to show that the kernel of the (surjective) map
\[\left(\mathfrak{Sel}^{\varepsilon}(K,T_{f}(\bar{\chi}))\right)^{\vee}\longrightarrow\left(\mathfrak{Sel}(K,T_{f}(\bar{\chi}))\right)^{\vee}\]
is divisible. For this, it is enough to show that the cokernel of the (injective) map
\[\mathfrak{Sel}(K,T_{f}(\bar{\chi}))\hookrightarrow\mathfrak{Sel}^{\varepsilon}(K,T_{f}(\bar{\chi})) \tag{9.3}\]
is torsion free. Let \(x\in\mathfrak{Sel}^{\varepsilon}(K,T_{f}(\bar{\chi}))\) and let \(M\geqslant 1\) be such that \(\varpi_{\chi}^{M}\cdot x\in\mathfrak{Sel}(K,T_{f}(\bar{\chi}))\); to conclude that the cokernel of the map (9.3) is torsion-free, it is then enough to show that \(x\in\mathfrak{Sel}(K,T_{f}(\bar{\chi}))\). Since \(\varpi_{\chi}^{M}\cdot x\in\mathfrak{Sel}(K,T_{f}(\bar{\chi}))\), we have (writing \(\langle-,-\rangle\) for the local Tate pairing \(\langle-,-\rangle_{\mathfrak{p}}\) to simplify the notation) \(\langle\operatorname{res}_{\mathfrak{p}}(\varpi_{\chi}^{M}\cdot x),y\rangle=0\) for all \(y\in H^{1}_{\operatorname{fin}}(K_{\mathfrak{p}},A_{f}(\chi))\), and since \(\langle\operatorname{res}_{\mathfrak{p}}(\varpi_{\chi}^{M}\cdot x),y\rangle=\langle\operatorname{res}_{\mathfrak{p}}(x),\varpi_{\chi}^{M}\cdot y\rangle\), we also have \(\langle\operatorname{res}_{\mathfrak{p}}(x),\varpi_{\chi}^{M}\cdot y\rangle=0\) for all \(y\in H^{1}_{\operatorname{fin}}(K_{\mathfrak{p}},A_{f}(\chi))\). Recall that \(H^{1}_{\operatorname{fin}}(K_{\mathfrak{p}},A_{f}(\chi))\) is co-free over \(\mathscr{O}_{\chi}\) by Proposition 5.4, hence \(H^{1}_{\operatorname{fin}}(K_{\mathfrak{p}},A_{f}(\chi))\) is \(\varpi_{\chi}\)-divisible. So the function \(y\mapsto\langle\operatorname{res}_{\mathfrak{p}}(x),y\rangle\) is zero on \(H^{1}_{\operatorname{fin}}(K_{\mathfrak{p}},A_{f}(\chi))\) and therefore \(x\) belongs to \(\mathfrak{Sel}(K,T_{f}(\bar{\chi}))\), concluding the proof.
### Proof of Theorem B
Recall \(L_{p,n}(f)=\mathcal{L}_{f,n}\cdot(\mathcal{L}_{f,n})^{\iota}\in\mathbf{Z}_{p} [G_{n}]\). By Remark 1.3 we may assume \(n\neq 0\) in the exceptional case.
_Step 1._ We first show that
\[\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel}(K,A_{f}( \chi))\bigr{)}\leqslant\operatorname{ord}_{\chi}\bigl{(}\chi(L_{p,n}(f)) \bigr{)}\]
with equality in the non-exceptional case. Take \(g=f_{k}\) for \(L=\emptyset\) in Theorem 7.1. By Theorem 7.1, \(\operatorname{Sel}_{\varepsilon}(K,A_{f,k}(\chi))\) is finite, of order bounded independently of \(k\), so \(\operatorname{Sel}_{\varepsilon}(K,A_{f}(\chi))\) is finite, and by Proposition 9.2 the Selmer group \(\operatorname{Sel}(K,A_{f}(\chi))\) is finite. Let \(t^{\varepsilon}_{\bar{\chi}}(f)=\operatorname{ord}_{\bar{\chi}}\bigl{(}\bar{ \chi}(\mathcal{L}^{\varepsilon}_{f})\bigr{)}\) and choose
\[k>\max\left\{\operatorname{length}_{\mathscr{O}_{\chi}}\bigl{(}\operatorname{Sel }_{\varepsilon}(K,A_{f}(\chi))\bigr{)},t^{\varepsilon}_{\bar{\chi}}(f), \operatorname{ord}_{\chi}(\tilde{\omega}_{n}^{-\varepsilon})\right\}.\]
For such a \(k\), we have \(\operatorname{Sel}_{\varepsilon}(K,A_{f}(\chi))\cong\operatorname{Sel}_{ \varepsilon}(K,A_{f,k}(\chi))\), and, by Proposition 9.2,
* \(\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}(K,A_{f}( \chi))\right)=\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel }_{\varepsilon}(K,A_{f,k}(\chi))\right)+2\cdot\operatorname{ord}_{\chi}(\tilde{ \omega}_{n}^{-\varepsilon})\) in the non-exceptional case;
* \(\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}(K,A_{f}( \chi))\right)=\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel }_{\varepsilon}(K,A_{f,k}(\chi))\right)+2\cdot\operatorname{ord}_{\chi}(\omega _{n}^{-\varepsilon})\) in the exceptional case.
By [2, Proposition 2.6], \((\mathcal{L}^{\varepsilon}_{f})^{\iota}=\pm\gamma_{\infty}\mathcal{L}^{ \varepsilon}_{f}\), for some \(\gamma_{\infty}\in G_{\infty}\), and therefore \(t^{\varepsilon}_{\chi}(f)=t^{\varepsilon}_{\bar{\chi}}(f)\). We thus have \(\operatorname{ord}_{\chi}\bigl{(}\chi(L^{\varepsilon}_{p}(f))\bigr{)}=2\cdot t ^{\varepsilon}_{\bar{\chi}}(f)\). If \(p\) is split in \(K\) or \(p\) is inert in \(K\) and \(\varepsilon=-1\) (non-exceptional case) we have \(\tilde{\omega}_{n}^{-\varepsilon}\cdot\mathcal{L}^{\varepsilon}_{f,n}\equiv\pm \mathcal{L}_{f,n}\) modulo \(\omega_{n}\), and therefore
\[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}(K,A_{f}(\chi))\right)=\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}_{\varepsilon}(K,A_{f,k}(\chi))\right)+2\cdot\operatorname{ord}_{\chi}(\tilde{\omega}_{n}^{-\varepsilon})=2\cdot\operatorname{ord}_{\chi}\left(\mathcal{L}^{\varepsilon}_{f,n}(\bar{\chi})\right)+2\cdot\operatorname{ord}_{\chi}(\tilde{\omega}_{n}^{-\varepsilon})\]
by Theorem 7.1, and therefore
\[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}(K,A_{f}(\chi))\right)=2\cdot\operatorname{ord}_{\chi}\left(\mathcal{L}_{f,n}(\bar{\chi})\right)=\operatorname{ord}_{\chi}\bigl{(}\chi(L_{p,n}(f))\bigr{)},\]
the last equality by the \(\iota\)-symmetry recalled above. In the exceptional case the same computation, with the inequality of Theorem 7.1, gives the corresponding upper bound. This proves Step 1.
_Step 2._ By Gross' special value formula,
\[\operatorname{ord}_{\chi}\bigl{(}\chi(L_{p,n}(f))\bigr{)}=\operatorname{ord}_{\chi}\left(\frac{L(E/K,\chi,1)}{C}\right),\]
where the constant \(C=\frac{\sqrt{D}p^{n}}{\Omega}\), and \(\Omega\) is Gross' period, defined in Lemma 2.5 of [33]; see Sections 2.4 and 2.5 of [33] for a complete description of these constants. From Theorem A and Gross' formula we see that \(\operatorname{Sel}(K,A_{f}(\chi))\) is finite if and only if \(L(E/K,\chi,1)\neq 0\) and
\[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}(K,A_{f}(\chi))\right)\leqslant\operatorname{ord}_{\chi}\left(\frac{L(E/K,\chi,1)}{C}\right)\]
with equality in the non-exceptional case.
### Proof of Theorem C
As noted in Remark 1.3 we may assume that \(n\neq 0\) in the exceptional case. We define the _regulator_
\[\operatorname{Reg}_{\chi}(E/K)=\frac{h_{\operatorname{NT}}(P_{\bar{\chi}})}{2\cdot\operatorname{length}_{\mathscr{O}_{\chi}}\left(\mathfrak{Sel}(K,T_{f}(\chi))/\mathscr{O}_{\chi}\cdot\tilde{\kappa}_{\bar{\chi}}\right)}\]
and the _Shafarevich-Tate group_
\[\Sha(K,A_{f}(\chi))=\operatorname{Sel}(K,A_{f}(\chi))_{/\operatorname{div}}.\]
_Step 1._ We first show that
\[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}(K,A_{f}(\chi))_{/\operatorname{div}}\right)\leqslant\operatorname{length}_{\mathscr{O}_{\chi}}\left(\mathfrak{Sel}(K,T_{f}(\chi))/\mathscr{O}_{\chi}\cdot\tilde{\kappa}_{\bar{\chi}}\right)\]
with equality in the non-exceptional case. Combining Theorem 8.2 and Proposition 9.2, if \(p\) is split in \(K\) or \(p\) is inert in \(K\) and \(\varepsilon=-1\) (non-exceptional case) we have
\[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}(K,A_{f}( \chi))_{/\operatorname{div}}\right)=2\cdot\operatorname{length}_{\mathscr{O} _{\chi}}\left(\left(\mathfrak{Sel}^{\varepsilon}(K,\mathbf{T}_{f,\mathscr{O}_ {\chi}})\otimes_{\Lambda_{\mathscr{O}_{\chi}}}\Lambda_{\mathscr{O}_{\chi}}/ \mathfrak{P}_{\tilde{\chi}}\right)\big{/}\left(\mathscr{O}_{\chi}\cdot\kappa _{\tilde{\chi}}^{\varepsilon}\right)\right)+2\cdot\operatorname{ord}_{\chi}( \tilde{\omega}_{n}^{-\varepsilon}). \tag{9.4}\]
From (9.4), using Proposition 5.9 and Proposition 9.2(1), we obtain
\[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}(K,A_{f}( \chi))_{/\operatorname{div}}\right)=2\cdot\operatorname{length}_{\mathscr{O} _{\chi}}\left(\mathfrak{Sel}(K,T_{f}(\chi))/\left(\mathscr{O}_{\chi}\cdot \kappa_{\tilde{\chi}}^{\varepsilon}\right)\right)+2\cdot\operatorname{ord}_{ \chi}(\tilde{\omega}_{n}^{-\varepsilon}). \tag{9.5}\]
Now \(\tilde{\kappa}_{\bar{\chi}}=\chi(\tilde{\omega}_{n}^{-\varepsilon})\cdot\kappa_{\bar{\chi}}^{\varepsilon}\), and the result follows. If \(p\) is inert in \(K\) and \(\varepsilon=+1\) (the exceptional case), combining Theorem 8.2 and Proposition 9.2, we have
\[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}(K,A_{f}( \chi))_{/\operatorname{div}}\right)\leqslant 2\cdot\operatorname{length}_{ \mathscr{O}_{\chi}}\left(\left(\mathfrak{Sel}^{\varepsilon}(K,\mathbf{T}_{f, \mathscr{O}_{\chi}})\otimes_{\Lambda_{\mathscr{O}_{\chi}}}\Lambda_{\mathscr{O} _{\chi}}/\mathfrak{P}_{\tilde{\chi}}\right)\big{/}\left(\mathscr{O}_{\chi} \cdot\kappa_{\tilde{\chi}}^{+}\right)\right)+2\cdot\operatorname{ord}_{\chi}( \omega_{n}^{-}). \tag{9.6}\]
From (9.6), using Proposition 5.9 and Proposition 9.2(2), we obtain
\[\operatorname{length}_{\mathscr{O}_{\chi}}\left(\operatorname{Sel}(K,A_{f}( \chi))_{/\operatorname{div}}\right)\leqslant 2\cdot\operatorname{length}_{ \mathscr{O}_{\chi}}\left(\mathfrak{Sel}(K,T_{f}(\chi))/\left(\mathscr{O}_{ \chi}\cdot\kappa_{\tilde{\chi}}^{+}\right)\right)+2\cdot\operatorname{ord}_{ \chi}(\omega_{n}^{-}). \tag{9.7}\]
Now \(\tilde{\kappa}_{\bar{\chi}}=\chi(\omega_{n}^{-})\cdot\kappa_{\bar{\chi}}^{+}\), and the result follows.
_Step 2._ We now use Gross-Zagier formulas to conclude the proof of Theorem C. By the Gross-Zagier formula ([14], [37])
\[L^{\prime}(E/K,\chi,1)=C\cdot h_{\operatorname{NT}}(P_{\bar{\chi}})\]
where the non-zero complex constant \(C\) is defined by
\[C=\begin{cases}\frac{8\pi^{2}(f,f)}{h_{K}u_{K}^{2}\sqrt{D_{K}}},&\text{if }N^{-}=1\\ \frac{4(\phi^{\sharp},\phi^{\sharp})^{2}}{\sqrt{D_{K}}},&\text{if }N^{-}\neq 1;\end{cases}\]
here \(h_{\operatorname{NT}}:E(K_{n})\otimes_{\mathbf{Z}}\mathscr{O}_{\chi}\to\mathbf{R}\) is the \(\mathscr{O}_{\chi}\)-linear extension of the usual Neron-Tate height on \(E(K_{n})\), \((f,f)\) is the Petersson norm of \(f\), \(\phi^{\sharp}\) is the quasi-newform associated with \(f\) as in [37], Theorem 1.2.1, and \((\phi^{\sharp},\phi^{\sharp})\) is the \(L^{2}\)-norm of \(\phi^{\sharp}\) with respect to the Haar measure normalised as in _loc. cit._; see also [36], Theorem 1.3.1, and [38], Theorem 8.1 and the discussion that follows. The result follows by combining Step 1 with the definitions of \(\operatorname{Reg}_{\chi}(E/K)\) and \(\Sha(K,A_{f}(\chi))\) introduced above and the Gross-Zagier formula.
## 10. Statements and declarations
* On behalf of all authors, the corresponding author states that there is no conflict of interest.
* Data sharing not applicable to this article as no datasets were generated or analysed during the current study. |
2309.12079 | Pair Production in time-dependent Electric field at Finite times | We investigate the finite-time behavior of pair production from the vacuum by
a time-dependent Sauter pulsed electric field using the spinor quantum
electrodynamics (QED). In the adiabatic basis, the one-particle distribution
function in momentum space is determined by utilizing the exact analytical
solution of the Dirac equation. By examining the temporal behavior of the
one-particle distribution function and the momentum spectrum of created pairs
in the sub-critical field limit $(E_0 = 0.2E_c)$, we observe oscillatory
patterns in the longitudinal momentum spectrum(LMS) of particles at finite
times. These oscillations arise due to quantum interference effects resulting
from the dynamical tunneling. Furthermore, we derive an approximate and
simplified analytical expression for the distribution function at finite times,
which allows us to explain the origin and behavior of these oscillations.
Additionally, we discuss the role of the vacuum polarization function and its
counter term to the oscillations in LMS vacuum excitation. We also analyse the
transverse momentum spectrum (TMS). | Deepak Sah, Manoranjan P. Singh | 2023-09-21T13:49:43Z | http://arxiv.org/abs/2309.12079v3 | # Pair Production in time-dependent Electric field at Finite times
###### Abstract
We investigate the finite-time behavior of pair production from the vacuum by a time-dependent Sauter pulsed electric field using the spinor quantum electrodynamics (QED). In the adiabatic basis, the one-particle distribution function in momentum space is determined by utilizing the exact analytical solution of the Dirac equation. By examining the temporal behavior of the one-particle distribution function and the momentum spectrum of created pairs in the sub-critical field limit (\(E_{0}=0.2E_{c}\)), we observe oscillatory patterns in the longitudinal momentum spectrum(LMS) of particles at finite times. These oscillations arise due to quantum interference effects resulting from the dynamical tunneling. Furthermore, we derive an approximate and simplified analytical expression for the distribution function at finite times, which allows us to explain the origin and behavior of these oscillations. Additionally, we discuss the role of the vacuum polarization function and its counter term to the oscillations in LMS vacuum excitation. We also analyse the transverse momentum spectrum (TMS).
+
Footnote †: E-mail address: [email protected] (Deepak).
## I Introduction
The concept of pair production in an electromagnetic field has its roots in the mid-1920s, after the invention of quantum mechanics, with the formulation of the relativistic wave equation for electrons by Paul Dirac in 1928 [1]. The Dirac sea model was proposed to explain the enigma of negative-energy solutions. F. Sauter's work in 1931 [2] demonstrated that strong electric fields can lead to pair creation through tunneling with exponential suppression. This paved the way for quantum field theory, recognizing the vacuum as a polarizable medium influenced by constant fluctuations. In 1935, W. Heisenberg and H. Euler further explored the peculiarities of the Dirac equation, revealing non-linear modifications of Maxwell's equations due to the interaction of electromagnetic fields with the electron vacuum loop [3]. J. Schwinger's groundbreaking work in 1951 precisely calculated the imaginary part of the one-loop effective Lagrangian in the presence of a static electric field [4]. As a result of his seminal work, the phenomenon of vacuum pair creation by electric fields has since become widely known as the Schwinger effect; it is also referred to as the Sauter-Schwinger effect in recognition of F. Sauter's prior work on solving the Dirac equation in the presence of an electric field. Schwinger's pioneering calculation opened up new avenues of research in quantum field theory and has profoundly impacted our understanding of particle physics in the presence of strong fields. This extraordinary property of the quantum vacuum producing spontaneous particle-antiparticle pairs has far-reaching implications for understanding the generation of particle-antiparticle pairs in the presence of a strong electric field [5]; particle creation in the expanding universe [6]; black hole evaporation as a result of Hawking radiation [7]; and Unruh radiation, in which particle production is seen by an accelerating observer [8]. The study of electron-positron pair generation in a spatially constant electric background field was extended to electric fields with various time dependences. In the 1970s, researchers explored pair production from the vacuum in the presence of an oscillating time-dependent electric field [9][10]. Their investigations revealed different qualitative behaviors for this process in different interaction regimes. The interaction regimes can be distinguished by the value of the dimensionless Keldysh parameter \(\gamma=\frac{m\omega}{|e|E_{0}}\)[11], with field amplitude \(E_{0}\), field frequency \(\omega\), electron charge \(e\) and mass \(m\). When \(\gamma\gg 1\), the process probability shows a perturbative power-law scaling with field intensity. Instead, for \(\gamma\ll 1\), it exhibits a manifestly non-perturbative exponential dependence on \(\frac{1}{E_{0}}\), similar to the case of a constant electric field [4]. Particle production in a spatially homogeneous single pulse of an electric field has been explored [12], [13], and methods for tackling particle creation in an arbitrary time-dependent electric field have been developed [14],[15].
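The following minimal sketch evaluates this classification directly in natural units, where \(m=|e|=1\) and \(E_{0}\) is measured in units of the critical field, so that \(\gamma=\omega/E_{0}\); the sample \((\omega,E_{0})\) values are hypothetical and not taken from the cited works.

```python
# Illustrative sketch: classifying the interaction regime via the Keldysh
# parameter gamma = m*omega/(|e|*E0).  In natural units (hbar = c = m = |e| = 1),
# with E0 expressed in units of the critical field E_c, this reduces to
# gamma = omega/E0.  The sample (omega, E0) pairs below are hypothetical.
def keldysh(omega, E0):
    """Return gamma; gamma << 1: tunneling (non-perturbative) regime,
    gamma >> 1: multi-photon (perturbative) regime."""
    return omega / E0

for omega, E0 in [(0.02, 0.2), (1.0, 0.1)]:
    gamma = keldysh(omega, E0)
    regime = "tunneling" if gamma < 1.0 else "multi-photon"
    print(f"omega = {omega:5.2f} m, E0 = {E0:4.2f} E_c -> gamma = {gamma:5.2f} ({regime})")
```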
Because a single pulse of an electric field is an idealized form of the electric field created by two colliding laser beams, particle generation in an alternating electric field has also been explored as a more realistic scenario [16]. However, it was found that the pair creation rate via the Schwinger mechanism is exponentially suppressed by the mass of the created particle, necessitating a very strong electric field to observe this phenomenon. Therefore, this effect has not yet been observed experimentally, which leaves it unclear how well theoretical predictions capture the physics of pair production. Nevertheless, the subject of pair production in strong electric fields has attracted sustained interest from theoreticians in recent years due to the extraordinary progress in the development of ultra-intense laser techniques and the strong-field QED experiments planned at upcoming high-intensity laser facilities, such as the European X-Ray Free-Electron Laser [17], the Extreme-Light Infrastructure [18, 19], and the Exawatt Center for Extreme Light Studies [20]. The attainable electric field strength is fast approaching the critical value. Additionally, it has been suggested that the Schwinger mechanism can be indirectly tested in the condensed matter system of a single monolayer of graphene, where the electrons are approximately described by the massless pseudo-relativistic Dirac equation [21, 22, 23].
Particle production can be viewed as evolving a quantum system from an initial equilibrium configuration to a new final equilibrium configuration via an intermediate non-equilibrium evolution caused by a strong background field. In the intermediate non-equilibrium states, when matter fields interact with a time-dependent external field, the classical Hamiltonian loses its time translation invariance, leading to various choices for annihilation and creation operators (and consequently, the vacuum) in the Fock quantization. This lack of uniqueness of the vacuum state poses a challenge in describing the evolution of the vacuum resulting in the creation of particle-antiparticle pairs as a function of time. Various vacuum choices have been explored in the literature, and the selection depends on the specific properties of the system under study and the quantum theory adopted. Adiabatic vacua are commonly employed in cosmology and in the context of the Schwinger effect [24]. These adiabatic modes are established using a semiclassical WKB-type approximation. Reference mode functions are chosen as plane waves [25], proving particularly useful when the external background field changes slowly over time. Parker was among the first to propose a prominent alternative known as adiabatic vacua [26]. Subsequently, Lüders and Roberts formalized Parker's proposal in [27]. In this standard approach, the asymptotic analysis of particle states in the remote past (or in-states, before the external field is switched on) and future (or out-states, long after the interaction with the external field has finished) is well understood using the expression of the quantum field operator in terms of creation and annihilation operators, connected to one-particle states in both the past and the future. Then, using the relationship between the two sets of operators in the past and in the future, one can compute the S-matrix of the process and the number of particles produced throughout it.
A variety of methodologies have been devised to investigate pair production in external fields. These techniques encompass the proper-time method [4, 28], the canonical method [29], Green's function methods [30], semiclassical tunneling [7], the Schrödinger-functional approach [31], functional techniques [32, 33], the mean-field treatment [34], and worldline instanton techniques [35, 36]. In the literature [37][38], pair production in an intense laser field is studied using analytical (or numerical) calculations, and theoretical predictions concern pair-production rates as time averages over an infinite period; such studies focus only on pair formation in the final equilibrium state at asymptotic times. This picture, however, is not fully adequate for understanding pair production, at least theoretically, for several reasons. Firstly, it is rarely discussed how pair production from the vacuum evolves over time. This raises the question: can particles manifest instantly during pair production? And can the times at which the formation of physical pairs occurs be accessed in experiments? Motivated by these questions, we provide a way of understanding the dynamics of pair production at all times. We prepare the quantum system in the vacuum state at some initial time \(t_{0}\) and ask: what are the properties of the quantum system at a finite time \(t\), and how do they evolve at all finite times? For this purpose, the temporal evolution of the particle distribution function in momentum space is a suitable observable: it gives information about the asymptotic states of the quantum field and also provides a complete description of the process while the matter field and the strong background field are still interacting. Information about the properties of the vacuum state before the production of real pairs unveils a previously undiscovered dimension of quantum non-equilibrium physics, with influence across various research domains concerning the emergence of quasi-particles. These specific time-dependent vacuum states pose questions about the physical interpretation of time-dependent observables in pair production studies, and the discussion in the literature is still open [39],[40]. Secondly, can we give physical meaning to a definition of the number of particles in terms of the number measured well after the finite time \(t\) at which the external background has been switched off? Various authors [30],[41],[42] derived the adiabatic number of pairs created after a time \(T\) larger than the electric field pulse duration, which is a good approximation for measuring real pairs. The second motivation for our work is thus the possibility of the formation of real pairs at finite times when the electric field nearly vanishes, which may be experimentally accessible in the future. In the present work, we consider the production of electron-positron pairs from the vacuum in a time-varying, spatially uniform pulsed electric field.
field given by \(E(t)=E_{0}/\cosh^{2}(t/\tau)\), with height \(E_{0}\) and width \(\tau\). Such a background field has received extensive attention in the literature [30, 13], with a focus on the asymptotic behavior of the pair-production probability. This naturally raises questions about the instantaneous appearance of particles in pair production and their behavior at intermediate times when a dynamical formalism is used. To this end, we study the evolution of the one-particle distribution function \(f(\mathbf{p},t)\) in momentum space, which is rigorously derived from QED by canonical quantization of the Dirac field and a subsequent Bogoliubov transformation to a quasi-particle representation. For the time-dependent Sauter-pulse electric field, the one-particle Dirac equation can be solved exactly, and using this solution we analytically compute the particle distribution function \(f(\mathbf{p},t)\) in terms of Gauss hypergeometric functions. It is well known that pair production from the vacuum passes through three different stages of evolution: the quasi-electron-positron plasma (QEPP) stage, the transient stage, and the final residual electron-positron plasma (REPP) stage. From the temporal evolution of \(f(\mathbf{p},t)\), we find that the occurrence of these three stages depends qualitatively and quantitatively on the longitudinal and transverse momentum values. Next, we analyze the longitudinal momentum spectrum (LMS) of the created particles at finite times. In the tunneling regime (\(\gamma<1\)), we observe an oscillatory structure in the LMS at times \(t>2\tau\), and this oscillation pattern changes continuously from \(t>2\tau\) up to \(t<6\tau\). This oscillatory behavior at finite times clearly illustrates the quantum interference effects associated with particle production, explained in terms of the vacuum polarization function \(u(\mathbf{p},t)\) and its counterpart, the depolarization function \(v(\mathbf{p},t)\), using dynamical tunneling in the momentum-space representation. We emphasize that the oscillations seen in the LMS are not artifacts but possess significant physical relevance. In the multi-photon regime, the LMS at finite times near \(t=3\tau\) shows a multi-modal structure. We also investigate the impact of the transverse momentum on the LMS and find that the oscillations present at finite times disappear for large values of \(p_{\perp}\). Finally, we find that the transverse momentum spectrum (TMS) does not show this quantum interference effect; only some deformation occurs during the transient stage.
This work is organized as follows: in Sec. II, the detailed theoretical formulation of our problem is given, largely following the derivation of [43; 44]. In Sec. III, we present expressions for the particle momentum distribution function. The results are discussed in Sec. IV, and the article is concluded in Sec. V.
Throughout the paper, we use natural units, setting \(\hbar=c=m=|e|=1\) with electric charge \(e<0\), and express all quantities in units of the electron mass.
## II Theoretical framework
In this section, we will briefly discuss the formalism required for our present work based on the original literature [43; 44]. This formalism serves as the theoretical framework that underlies our study and enables us to analyze and interpret the phenomenon of electron-positron creation under a strong electric field. To construct one-particle solutions of the Dirac equation in such a field, we start by writing the Dirac equation for a particle in an electromagnetic field, which takes the following form:
\[(\mathrm{i}\gamma^{\mu}\partial_{\mu}-e\gamma^{\mu}A_{\mu}-m)\Psi(x)=0. \tag{1}\]
where \(A^{\mu}\) is the electromagnetic four-potential, \(m\) is the electron mass, \(e\) is the electron charge, and \(\Psi(x)\) is a four-component spinor. For the \(\gamma\)-matrices we choose the Weyl basis [45]
\[\gamma^{0}=\begin{pmatrix}0&\mathbb{I}\\ \mathbb{I}&0\end{pmatrix},\quad\gamma^{i}=\begin{pmatrix}0&-\sigma^{i}\\ \sigma^{i}&0\end{pmatrix},\quad i=1,2,3, \tag{2}\]
where \(\mathbb{I}\) is the \(2\times 2\) identity matrix and \(\sigma^{i}\) are the Pauli matrices. The \(\gamma\)-matrices satisfy the anti-commutation relations:
\[\{\gamma^{\mu},\gamma^{\nu}\}=2g^{\mu\nu} \tag{3}\]
with the metric tensor \(g^{\mu\nu}=\mathrm{diag}(+,-,-,-)\).
The Dirac equation yields four coupled differential equations for the spinor, and it is typically challenging to find exact analytical solutions in the presence of external fields. Feynman and Gell-Mann circumvented this difficulty by considering a two-component form of the Dirac equation [46].
Accordingly, we turn this equation into a second-order differential equation by assuming the existence of a bispinor \(\chi(x)\) such that
\[\Psi(x)=(\mathrm{i}\gamma^{\nu}\partial_{\nu}-e\gamma^{\nu}A_{\nu}+m)\chi(x). \tag{4}\]
and inserting Eq. (4) into Eq. (1), it follows that \(\chi(x)\) satisfies the quadratic Dirac equation
\[[(i\partial_{\mu}-eA_{\mu})^{2}-\frac{e}{2}\sigma^{\mu\nu}\mathcal{F}_{\mu\nu} -m^{2}]\chi(x)=0 \tag{5}\]
where \(\chi(x)\) is a four-component spinor. Here, we consider the case where the electromagnetic field tensor is \(\mathcal{F}^{\mu 0}=(0,0,0,E(t))\), with \(E(t)\) a linearly polarized, time-dependent, quasi-classical, spatially uniform electric field along the \(z\)-axis. It is characterized by the four-vector potential \(A^{\mu}(x)=(0,\mathbf{A}(t))\equiv(0,0,0,A(t))\), with \(A(t)\) such that \(E(t)=-\frac{dA(t)}{dt}\).
Then, the equation can be simplified to
\[\big{(}\Box+e^{2}A^{2}(t)+2\mathrm{i}eA(t)\partial_{3}-\mathrm{i}e\,\partial_{t}A(t)\,\gamma^{0}\gamma^{3}+m^{2}\big{)}\chi(x)=0. \tag{6}\]
Here, \(\Box=\partial_{\mu}\partial^{\mu}\) is the d'Alembert operator. It is worth noting that this equation has twice as many solutions as the Dirac equation. As a result, in order to have a one-to-one correspondence between both sets of solutions, we must reduce the number of solutions of Eq. (6). The method used here reduces the problem to solving a differential equation for a single scalar function. To see this, we first note that the problem is spatially homogeneous, which implies that there exist solutions of the form
\[\chi(x)=\mathrm{e}^{\mathrm{i}\mathbf{p}\cdot\mathbf{x}}\chi_{\mathbf{p}}(t), \tag{7}\]
where \(\chi_{\mathbf{p}}(t)\) is independent of the position \(\mathbf{x}\) and is labeled by the particle momentum \(\mathbf{p}\).
Now, Eq. (6) becomes
\[\Big{(}\partial_{t}^{2}+\mathrm{i}eE(t)\gamma^{0}\gamma^{3}+\omega^{2}(\mathbf{p},t)\Big{)}\chi_{\mathbf{p}}(t)=0. \tag{8}\]
with
\[\omega^{2}(\mathbf{p},t)=\epsilon_{\perp}^{2}(\mathbf{p})+P(t)^{2},\qquad\epsilon_{ \perp}^{2}(\mathbf{p})=\mathbf{p}_{\perp}^{2}+m^{2},\qquad P(t)=p_{\parallel}-eA(t) \tag{9}\]
Here, \(\mathbf{p}_{\perp}\) and \(p_{\parallel}\) are the momentum components perpendicular and parallel to the external field direction. We now expand the function \(\chi_{\mathbf{p}}(t)\) in the basis of eigenvectors of \(\gamma^{0}\gamma^{3}\). The matrix representation of \(\gamma^{0}\gamma^{3}\) is
\[\gamma^{0}\gamma^{3}=\begin{pmatrix}0&\mathbb{I}\\ \mathbb{I}&0\end{pmatrix}\begin{pmatrix}0&-\sigma^{3}\\ \sigma^{3}&0\end{pmatrix}=\begin{pmatrix}\sigma^{3}&0\\ 0&-\sigma^{3}\end{pmatrix}, \tag{10}\]
which is diagonal, making its eigenvectors easy to recognize. They are given by
\[R_{1}=\begin{pmatrix}1\\ 0\\ 0\\ 0\end{pmatrix},\quad R_{2}=\begin{pmatrix}0\\ 0\\ 0\\ 1\end{pmatrix},\quad R_{3}=\begin{pmatrix}0\\ 1\\ 0\\ 0\end{pmatrix},\quad R_{4}=\begin{pmatrix}0\\ 0\\ 1\\ 0\end{pmatrix}, \tag{11}\]
The matrix \(\gamma^{0}\gamma^{3}\) has two doubly degenerate eigenvalues: \(R_{1},R_{2}\) have eigenvalue \(+1\), while \(R_{3},R_{4}\) have eigenvalue \(-1\). It turns out that picking one of them is sufficient [43]. We therefore seek the solutions of Eq. (8) in the form
\[\chi_{\mathbf{p}}(t)\equiv\chi_{\mathbf{p}r}(t)=\psi_{\mathbf{p}}(t)R_{r}, \tag{12}\]
where \(\gamma^{0}\gamma^{3}R_{r}=R_{r}\). Solving a differential equation for a scalar function \(\psi_{\mathbf{p}}(t)\)[66] simplifies the problem,
\[\Big{(}\partial_{t}^{2}+\mathrm{i}eE(t)+\omega^{2}(\mathbf{p},t)\Big{)}\psi_{\mathbf{ p}}(t)=0. \tag{13}\]
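As a convenience check (not part of the derivation), the following sketch verifies the anticommutation relations (3) for the representation (2) and confirms that \(\gamma^{0}\gamma^{3}\) is diagonal, so the \(R_{r}\) of Eq. (11) are indeed its eigenvectors:

```python
import numpy as np

I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Weyl-basis gamma matrices as in Eq. (2)
g0 = np.block([[Z2, I2], [I2, Z2]]).astype(complex)
gi = [np.block([[Z2, -s], [s, Z2]]) for s in sigma]
gammas = [g0] + gi

# Clifford algebra, Eq. (3): {gamma^mu, gamma^nu} = 2 g^{mu nu}
metric = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * metric[mu, nu] * np.eye(4))

# gamma^0 gamma^3 is diagonal, diag(+1, -1, -1, +1), so R_1, R_2
# (eigenvalue +1) and R_3, R_4 (eigenvalue -1) are its eigenvectors
print(np.real(np.diag(g0 @ gi[2])))   # [ 1. -1. -1.  1.]
```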
Now, let us examine the resulting solutions. It follows from Eq. (13) that, for a vanishing electric field as \(t\rightarrow-\infty\), the frequency \(\omega(\mathbf{p},t)\) becomes time-independent, \(\omega(\mathbf{p},t)\rightarrow\omega(\mathbf{p})\), and the scalar function \(\psi_{\mathbf{p}}(t)\) satisfies the asymptotic equation
\[\Big{(}\partial_{t}^{2}+\omega^{2}(\mathbf{p})\Big{)}\psi_{\mathbf{p}}(t)=0, \tag{14}\]
This harmonic oscillator equation has two linearly independent solutions, corresponding to energies \(\pm\omega(\mathbf{p})\). In what follows, we label these solutions with superscripts \(\lambda=+\) and \(\lambda=-\), respectively. The solutions of this equation are plane waves, and thus
\[\psi^{(\lambda)}_{\mathbf{p}}(t)\underset{t\rightarrow-\infty}{\sim}\mathrm{e}^{- \mathrm{i}\lambda\omega(\mathbf{p})t}. \tag{15}\]
We interpret these solutions as describing an electron (\(\lambda=+\)) and its anti-particle, a positron (\(\lambda=-\)), in the electric field. Finally, the corresponding solutions of Eq. (6) have the form
\[\chi^{(\lambda)}_{\mathbf{p}r}(x)=\mathrm{e}^{\mathrm{i}\mathbf{p}\cdot\mathbf{x}}\psi^{( \lambda)}_{\mathbf{p}}(t)R_{r}, \tag{16}\]
while those of the Dirac equation are obtained from Eq. (4):
\[\Psi^{(\lambda)}_{\mathbf{p}r}(x)=\Big{[}\mathrm{i}\gamma^{0}\partial_{t}-\mathbf{p} \cdot\mathbf{\gamma}+eA(t)\gamma^{3}+m\Big{]}\mathrm{e}^{\mathrm{i}\mathbf{p}\cdot\mathbf{ x}}\psi^{(\lambda)}_{\mathbf{p}}(t)R_{r}. \tag{17}\]
where the spinor solutions \(\Psi^{(\lambda)}_{\mathbf{p}r}(x)\) are normalized according to the product:
\[\int d^{3}\mathbf{x}[\Psi^{(\lambda)}_{\mathbf{p}r}(x)]^{\dagger}\Psi^{(\lambda^{ \prime})}_{\mathbf{p}^{\prime}r^{\prime}}(x)=(2\pi)^{3}\delta(\mathbf{p}-\mathbf{p}^{ \prime})\delta_{rr^{\prime}}\delta_{\lambda\lambda^{\prime}} \tag{18}\]
Hence, the newly constructed eigenstates of the Dirac equation, representing an electron or positron in a time-dependent electric field, satisfy the completeness relation
\[\sum_{\lambda=\pm}\sum_{r=\pm}\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\,\Psi^{( \lambda)}_{\mathbf{p}r}(x)[\Psi^{(\lambda)}_{\mathbf{p}r}(x^{\prime})]^{\dagger}= \delta(\mathbf{x}-\mathbf{x}^{\prime}). \tag{19}\]
Now that we have established that the \(\Psi^{(\lambda)}_{\mathbf{p}r}(x)\) provide a complete set of orthonormal solutions of the Dirac equation (1) in a time-dependent electric field, we can construct the Dirac fermion field operator \(\hat{\Psi}(x)\) in the framework of second quantization. The field operator is quantized and written in the form
\[\hat{\Psi}(x)=\sum_{r}\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\Big{(}\Psi^{(+)}_{\mathbf{p}r}(x)\hat{b}_{\mathbf{p}r}+\Psi^{(-)}_{-\mathbf{p}r}(x)\hat{d}^{\dagger}_{\mathbf{p}r}\Big{)}, \tag{20}\]
where \(\Psi^{(\lambda)}_{\mathbf{p}r}(x)\) are the single particle solutions of the Dirac equation, whereas \(\hat{b}_{\mathbf{p}r}\) and \(\hat{d}_{\mathbf{p}r}\) are the annihilation operators of electron and positron with momentum \(\mathbf{p}\) and spin \(r\). The operators satisfy the standard fermionic anti-commutation relations,
\[\{\hat{b}_{\mathbf{p}r},\hat{b}^{\dagger}_{\mathbf{p}^{\prime}r^{\prime}}\}=\{\hat{d}_ {\mathbf{p}r},\hat{d}^{\dagger}_{\mathbf{p}^{\prime}r^{\prime}}\}=\delta(\mathbf{p}-\mathbf{p} ^{\prime})\delta_{rr^{\prime}}, \tag{21}\]
The field operator \(\hat{\Psi}(x)\) then satisfies the anti-commutation relation
\[\{\hat{\Psi}_{n}(t,\mathbf{x}),\hat{\Psi}_{m}^{\dagger}(t,\mathbf{x}^{\prime})\}=(2\pi)^{ 3}\delta(\mathbf{x}-\mathbf{x}^{\prime})\delta_{mn}. \tag{22}\]
Now, the Hamiltonian can be calculated from the energy-momentum tensor, which yields

\[\hat{H}(t)=\mathrm{i}\int\hat{\Psi}^{\dagger}(t,\mathbf{x})\,\partial_{t}\hat{\Psi}(t,\mathbf{x})\,d^{3}\mathbf{x} \tag{23}\]
After a lengthy calculation, we can split the Hamiltonian into its diagonal and off-diagonal parts,

\[\hat{H}_{diag}(t)=\mathrm{i}\sum_{r}\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\Big{[}\varepsilon_{\mathbf{p}}^{(++)}(t)\hat{b}_{\mathbf{p}r}^{\dagger}\hat{b}_{\mathbf{p}r}+\varepsilon_{\mathbf{p}}^{(--)}(t)\hat{d}_{-\mathbf{p}r}\hat{d}_{-\mathbf{p}r}^{\dagger}\Big{]}, \tag{24}\]

\[\hat{H}_{offdiag}(t)=\mathrm{i}\sum_{r}\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\Big{[}\varepsilon_{\mathbf{p}}^{(+-)}(t)\hat{b}_{\mathbf{p}r}^{\dagger}\hat{d}_{-\mathbf{p}r}^{\dagger}+\varepsilon_{\mathbf{p}}^{(-+)}(t)\hat{d}_{-\mathbf{p}r}\hat{b}_{\mathbf{p}r}\Big{]}, \tag{25}\]
where the factors \(\varepsilon_{\mathbf{p}}^{(\lambda\lambda^{\prime})}(t)\) are expressed as
\[\varepsilon_{\mathbf{p}}^{(\lambda\lambda^{\prime})}(t)=\left\{\begin{array}{ll}\omega^{2}(\mathbf{p},t)\Big{(}\dot{\psi}_{\mathbf{p}}^{(\lambda)}(t)[\psi_{\mathbf{p}}^{(\lambda)}(t)]^{*}-\psi_{\mathbf{p}}^{(\lambda)}(t)[\dot{\psi}_{\mathbf{p}}^{(\lambda)}(t)]^{*}\Big{)}+\mathrm{i}P(t)\Big{(}|\dot{\psi}_{\mathbf{p}}^{(\lambda)}(t)|^{2}+\omega^{2}(\mathbf{p},t)|\psi_{\mathbf{p}}^{(\lambda)}(t)|^{2}\Big{)}&\mbox{if }\lambda=\lambda^{\prime},\\ \omega^{2}(\mathbf{p},t)\Big{(}\dot{\psi}_{\mathbf{p}}^{(\lambda)}(t)\psi_{-\mathbf{p}}^{(\lambda^{\prime})}(t)-\psi_{\mathbf{p}}^{(\lambda)}(t)\dot{\psi}_{-\mathbf{p}}^{(\lambda^{\prime})}(t)\Big{)}+\mathrm{i}P(t)\Big{(}\dot{\psi}_{\mathbf{p}}^{(\lambda)}(t)\dot{\psi}_{-\mathbf{p}}^{(\lambda^{\prime})}(t)+\omega^{2}(\mathbf{p},t)\psi_{\mathbf{p}}^{(\lambda)}(t)\psi_{-\mathbf{p}}^{(\lambda^{\prime})}(t)\Big{)}&\mbox{if }\lambda\neq\lambda^{\prime}.\end{array}\right. \tag{26}\]
We stress that, with the above Hamiltonian having non-vanishing off-diagonal elements, the positive- and negative-energy modes mix, and a clear interpretation in terms of particles and antiparticles is difficult. In order to calculate the spectrum, we have to diagonalize the Hamiltonian; this is achieved by a basis transformation in which we introduce new time-dependent operators \(\hat{B}_{\mathbf{p}r}(t)\) and \(\hat{D}_{\mathbf{p}r}(t)\). The relation between the \(\hat{b}_{\mathbf{p}r},\hat{d}_{\mathbf{p}r}\) and \(\hat{B}_{\mathbf{p}r}(t),\hat{D}_{\mathbf{p}r}(t)\) operators is given by a Bogoliubov transformation
\[\hat{B}_{\mathbf{p}r}(t) =\alpha_{\mathbf{p}}(t)\hat{b}_{\mathbf{p}r}+\beta_{\mathbf{p}}(t)\hat{d}_{- pr}^{\dagger}, \tag{27}\] \[\hat{D}_{\mathbf{p}r}(t) =\alpha_{-\mathbf{p}}(t)\hat{d}_{\mathbf{p}r}-\beta_{-\mathbf{p}}(t)\hat{b}_{ -\mathbf{p}r}^{\dagger}. \tag{28}\]
It introduces a new set of creation and annihilation operators for quasiparticles at time \(t\), such that the Hamiltonian is diagonal in this "quasi-particle representation" (the new basis). Hence, the instantaneous vacuum state is defined by \(\hat{B}_{\mathbf{p}r}(t)|0_{t}\rangle=0\) and \(\hat{D}_{\mathbf{p}r}(t)|0_{t}\rangle=0\). Note that this transformation preserves the anti-commutation relations of the creation and annihilation operators provided that, at every time \(t\), the unknown functions \(\alpha_{\mathbf{p}}(t)\) and \(\beta_{\mathbf{p}}(t)\) satisfy the condition
\[|\alpha_{\mathbf{p}}(t)|^{2}+|\beta_{\mathbf{p}}(t)|^{2}=1. \tag{29}\]
Since the Bogoliubov transformation provides the desired change of basis, we can also express \(\hat{\Psi}(x)\) in terms of the new operators:
\[\hat{\Psi}(x)=\sum_{r}\int\frac{d^{3}\mathbf{p}}{(2\pi)^{3}}\Big{[}\Phi^{(+)}_{\mathbf{p }r}(x)\hat{B}_{\mathbf{p}r}(t)+\Phi^{(-)}_{-pr}(x)\hat{D}^{\dagger}_{\mathbf{p}r}(t) \Big{]}, \tag{30}\]
with the spinors \(\Phi^{(\lambda)}_{\mathbf{p}r}(x)\) such that
\[\Phi^{(+)}_{\mathbf{p}r}(x) =\alpha^{*}_{\mathbf{p}}(t)\Psi^{(+)}_{\mathbf{p}r}(x)+\beta^{*}_{\mathbf{p}} (t)\Psi^{(-)}_{\mathbf{p}r}(x), \tag{31}\] \[\Phi^{(-)}_{\mathbf{p}r}(x) =\alpha_{\mathbf{p}}(t)\Psi^{(-)}_{\mathbf{p}r}(x)-\beta_{\mathbf{p}}(t)\Psi^ {(+)}_{\mathbf{p}r}(x). \tag{32}\]
It follows from here that \(\Phi^{(\lambda)}_{\mathbf{p}r}(x)\) should have the same spinor form as \(\Psi^{(\lambda)}_{\mathbf{p}r}(x)\). Thus, we propose that
\[\Phi^{(\lambda)}_{\mathbf{p}r}(x)=\Big{[}\mathrm{i}\gamma^{0}\partial_{t}-\mathbf{p} \cdot\mathbf{\gamma}+eA(t)\gamma^{3}+m\Big{]}\mathrm{e}^{\mathrm{i}\mathbf{p}\cdot\bm {x}}\phi^{(\lambda)}_{\mathbf{p}}(t)R_{r}, \tag{33}\]
where the \(\phi^{(\lambda)}_{\mathbf{p}}(t)\) are unknown functions: the mode functions in the quasiparticle representation, chosen according to the ansatz
\[\phi^{(\lambda)}_{\mathbf{p}}(t)=\frac{e^{-\mathrm{i}\lambda\Theta_{\mathbf{p}}(t)}}{ \sqrt{2\omega(\mathbf{p},t)(\omega(\mathbf{p},t)-\lambda P(t))}} \tag{34}\]
The functions \(\phi^{(\lambda)}_{\mathbf{p}}(t)\) are chosen such that they coincide with the mode functions \(\psi^{\lambda}_{\mathbf{p}}(t)\) in the case of a vanishing vector potential. Now, combining Eqs. (17), (31), (32), and (33), we obtain that
\[\psi^{(+)}_{\mathbf{p}}(t) =\alpha_{\mathbf{p}}(t)\mathrm{e}^{-\mathrm{i}\Theta_{\mathbf{p}}(t)}\phi ^{(+)}_{\mathbf{p}}(t)-\beta^{*}_{\mathbf{p}}(t)\mathrm{e}^{\mathrm{i}\Theta_{\mathbf{p} }(t)}\phi^{(-)}_{\mathbf{p}}(t), \tag{35}\] \[\psi^{(-)}_{\mathbf{p}}(t) =\beta_{\mathbf{p}}(t)\mathrm{e}^{-\mathrm{i}\Theta_{\mathbf{p}}(t)}\phi ^{(+)}_{\mathbf{p}}(t)+\alpha^{*}_{\mathbf{p}}(t)\mathrm{e}^{\mathrm{i}\Theta_{\mathbf{p} }(t)}\phi^{(-)}_{\mathbf{p}}(t). \tag{36}\]
with the accumulated dynamical phase \(\Theta_{\mathbf{p}}(t_{0},t)=\int_{t_{0}}^{t}dt^{\prime}\,\omega(\mathbf{p},t^{\prime})\), and the coefficients \(\alpha_{\mathbf{p}}(t)\) and \(\beta_{\mathbf{p}}(t)\) given by
\[\alpha_{\mathbf{p}}(t)=i\phi^{(-)}_{\mathbf{p}}(t)\epsilon_{\perp}(\mathbf{p},t)\mathrm{e }^{\mathrm{i}\Theta_{\mathbf{p}}(t)}(\partial_{t}-i\omega(\mathbf{p},t))\psi^{(+)}_{ \mathbf{p}}(t) \tag{37}\]
\[\beta_{\mathbf{p}}(t)=-i\phi^{(+)}_{\mathbf{p}}(t)\epsilon_{\perp}(\mathbf{p},t)\mathrm{e }^{-\mathrm{i}\Theta_{\mathbf{p}}(t)}(\partial_{t}+i\omega(\mathbf{p},t))\psi^{(+)}_{ \mathbf{p}}(t) \tag{38}\]
Thus, once \(\psi_{\mathbf{p}}(t)\) is known from a solution of the differential equation (13) for a specific electric field, we can find the Bogoliubov transformation coefficients and, correspondingly, the momentum distribution function of the created particles. Let us introduce the occupation number of electrons in the given eigenmode \(\mathbf{p}r\) of the fermionic field, using the time-dependent creation and annihilation operators and the initial vacuum state:
\[f_{r}(\mathbf{p},t)=\langle 0_{in}|\hat{B}^{\dagger}_{\mathbf{p}r}(t)\hat{B}_{\mathbf{p}r}(t)| 0_{in}\rangle \tag{39}\]
Similarly, one can also define the occupation number of the positron,
\[\tilde{f}_{r}(-\mathbf{p},t)=\langle 0_{in}|\hat{D}_{-\mathbf{p}r}^{\dagger}(t)\hat{D}_{-\mathbf{p }r}(t)|0_{in}\rangle \tag{40}\]
Because of charge-conjugation invariance,
\[f_{r}(\mathbf{p},t)=\tilde{f}_{r}(-\mathbf{p},t) \tag{41}\]
Within the quasi-particle model, \(f_{r}(\mathbf{p},t)\) and \(\tilde{f}_{r}(-\mathbf{p},t)\) act as one-particle distribution functions [47]. Summing over the spin index,

\[f(\mathbf{p},t)=\sum_{r}|\beta_{\mathbf{p}}(t)|^{2}=2|\beta_{\mathbf{p}}(t)|^{2}, \tag{42}\]

and the asymptotic distribution of the created particles is

\[f_{out}(\mathbf{p})=\lim_{t\rightarrow\infty}2|\beta_{\mathbf{p}}(t)|^{2}. \tag{43}\]

We emphasize that \(f(\mathbf{p},t)\) is interpreted as the distribution function of real particles only at asymptotic times, where the electric field has vanished.
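Before turning to the Sauter pulse, it is instructive to see how Eqs. (13), (37), (38), and (42) translate into a concrete computation. The following minimal Python sketch (our own illustration, not part of the derivation) integrates the mode equation for a sample Sauter-type pulse, anticipating Eqs. (44) and (45); for readability we take \(e=+1\), so \(P(t)=p_{\parallel}-A(t)\), and we check the unitarity condition (29) along the way:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sample parameters (natural units, m = 1): E0 = 0.2 E_c, tau = 10, p = 0
E0, tau = 0.2, 10.0
p_par, p_perp = 0.0, 0.0
eps_perp2 = p_perp**2 + 1.0                       # epsilon_perp^2 = p_perp^2 + m^2

E = lambda t: E0 / np.cosh(t / tau)**2            # Sauter pulse, Eq. (44)
A = lambda t: -E0 * tau * (1 + np.tanh(t / tau))  # gauge potential, Eq. (45)
P = lambda t: p_par - A(t)                        # quasi-momentum, Eq. (9), e = +1
omega = lambda t: np.sqrt(eps_perp2 + P(t)**2)

def rhs(t, u):
    # u = (Re psi, Im psi, Re dpsi, Im dpsi); Eq. (13): psi'' = -(i e E + omega^2) psi
    psi, dpsi = u[0] + 1j * u[1], u[2] + 1j * u[3]
    ddpsi = -(1j * E(t) + omega(t)**2) * psi
    return [dpsi.real, dpsi.imag, ddpsi.real, ddpsi.imag]

t0, t1 = -10 * tau, 10 * tau
w0 = omega(t0)
psi0 = 1.0 / np.sqrt(2 * w0 * (w0 - P(t0)))       # positive-frequency initial data
sol = solve_ivp(rhs, (t0, t1), [psi0, 0.0, 0.0, -w0 * psi0],
                rtol=1e-10, atol=1e-12, dense_output=True)

def f(t):
    """One-particle distribution f = 2|beta|^2, cf. Eqs. (38) and (42)."""
    psi = sol.sol(t)[0] + 1j * sol.sol(t)[1]
    dpsi = sol.sol(t)[2] + 1j * sol.sol(t)[3]
    w, p = omega(t), P(t)
    beta2 = eps_perp2 / (2 * w * (w - p)) * abs(dpsi + 1j * w * psi)**2
    alpha2 = eps_perp2 / (2 * w * (w + p)) * abs(dpsi - 1j * w * psi)**2
    assert abs(alpha2 + beta2 - 1) < 1e-6         # unitarity, Eq. (29)
    return 2 * beta2

print(f(0.0), f(5 * tau))   # f at the field maximum and in the REPP stage
```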
## III Pair production in Sauter-pulse electric field
A spatially uniform external background is a common approximation to the electromagnetic field of two counter-propagating laser pulses forming a standing wave [48; 49]. In general, the pair-production process takes place close to the electric field maximum (near the critical field limit), where the magnetic field vanishes. Although laser fields typically contain many optical cycles, here we examine a relatively simple model of the external field given by the Sauter profile, which can be thought of as an extremely short laser pulse:
\[E\left(t\right)=E_{0}\text{sech}^{2}\left(\frac{t}{\tau}\right), \tag{44}\]
where \(\tau\) is the pulse duration and \(E_{0}\) the field strength. This electric field goes to zero exponentially for \(|t|\gg\tau\); in the limit \(\tau\rightarrow\infty\) it becomes homogeneous in time. We choose a gauge in which \(A_{0}=0\), so the vector potential associated with the electric field is \(\mathbf{A}(t)=(0,0,A_{3})\) with \(A_{3}=-\int dt\,E(t)\). After the integration, we find the Sauter-type gauge potential
\[A\left(t\right)=-E_{0}\tau\left(1+\tanh\left(\frac{t}{\tau}\right)\right). \tag{45}\]
Figure 1 illustrates the corresponding time dependence of (44) and (45). The pulse attains its peak height at \(t=0\), reaches half its height at \(t=\pm 0.88\tau\), and at \(t=\pm\tau\) its amplitude has already dropped to \(42\%\) of the peak height, followed by a further dramatic reduction to well below \(10\%\) at \(t=\pm 2\tau\).
Now, in the presence of the external electric field (44), the equation of motion (13) for the mode function reads

\[\left(\partial_{t}^{2}+\mathrm{i}eE_{0}\,\mathrm{sech}^{2}\left(\frac{t}{\tau}\right)+\omega^{2}(\mathbf{p},t)\right)\psi_{\mathbf{p}}(t)=0. \tag{46}\]
where we have dropped the irrelevant index \(\lambda\) of the mode function \(\psi_{\mathbf{p}}(t)\). This equation can be solved by converting it into a hypergeometric differential equation [50] through the change of time variable \(y=\frac{1}{2}\left(1+\tanh(\frac{t}{\tau})\right)\). In the new variable \(y\), the equation becomes
\[\left(\frac{4}{\tau^{2}}\,y\left(1-y\right)\partial_{y}\,y\left(1-y\right)\partial_{y}+\omega^{2}(\mathbf{p},y)+4\mathrm{i}eE_{0}\,y\left(1-y\right)\right)\psi_{\mathbf{p}}(y)=0. \tag{47}\]
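The structure of Eq. (47) follows from \(dy/dt=2y(1-y)/\tau\) and \(E_{0}\,\mathrm{sech}^{2}(t/\tau)=4E_{0}\,y(1-y)\); both relations can be verified symbolically, e.g.:

```python
import sympy as sp

t, tau, E0 = sp.symbols("t tau E0", positive=True)
y = (1 + sp.tanh(t / tau)) / 2

# dy/dt = 2 y (1 - y) / tau, the substitution behind Eq. (47)
print(sp.simplify(sp.diff(y, t) - 2 * y * (1 - y) / tau))            # 0
# E0 sech^2(t/tau) = 4 E0 y (1 - y), the last term of Eq. (47)
print(sp.simplify(E0 / sp.cosh(t / tau)**2 - 4 * E0 * y * (1 - y)))  # 0
```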
Further, by using the following ansatz
\[\psi_{\mathbf{p}}(y)=y^{k}\left(1-y\right)^{l}\eta_{\mathbf{p}}(y) \tag{48}\]
with \(k=\frac{-\mathrm{i}\tau\omega_{-}}{2}\), \(l=\frac{\mathrm{i}\tau\omega_{+}}{2}\), and \(\omega_{\pm}^{2}=m^{2}+\mathbf{p}_{\perp}^{2}+(p_{\parallel}\pm eE_{0}\tau)^{2}\). Combining Eqs. (47) and (48), we find that \(\eta_{\mathbf{p}}(y)\) satisfies the hypergeometric differential equation
\[\left(y\left(1-y\right)\partial_{y}^{2}+\left(c-\left(a+b+1\right)y\right) \partial_{y}-ab\right)\eta_{\mathbf{p}}(y)=0. \tag{49}\]
Here,
\[a =-\mathrm{i}E_{0}\tau^{2}-\frac{\mathrm{i}\tau\omega_{-}}{2}+ \frac{\mathrm{i}\tau\omega_{+}}{2}=i\zeta_{1}\] \[b =1+\mathrm{i}E_{0}\tau^{2}-\frac{\mathrm{i}\tau\omega_{-}}{2}+ \frac{\mathrm{i}\tau\omega_{+}}{2}=1+i\zeta_{2}, \tag{50}\] \[c =1-\mathrm{i}\tau\omega_{-}=1+i\zeta_{3},\]
Figure 1: Time evolution of \(E(t)\) (left) and \(A(t)\) (right) for the field parameters \(E_{0}=0.2E_{c}\) and \(\tau=10[m^{-1}]\).
The two linearly independent solutions of Eq. (49) in the neighborhood of the singular point \(y=0\) are \(\eta_{p}^{(\pm)}(y)\):
\[\eta_{p}^{(+)}(y)=N^{(+)}{}_{2}\mathcal{F}_{1}\left(a,b,c;y\right), \tag{51}\]

\[\eta_{p}^{(-)}(y)=N^{(-)}{}_{2}\mathcal{F}_{1}\left(1-a,1-b,2-c;y\right). \tag{52}\]
with \({}_{2}\mathcal{F}_{1}\left(a,b,c;y\right)\) denoting the Gauss hypergeometric function and \(N^{(\pm)}\) normalization constants fixed by the initial conditions.
To get the mode functions \(\psi_{p}^{(\pm)}(y)\), we substitute \(\eta_{p}^{(\pm)}(y)\) back into Eq. (48):
\[\psi_{p}^{(+)}(y)=N^{(+)}(\mathbf{p})y^{k}\left(1-y\right)^{l}{}_{2} \mathcal{F}_{1}\left(a,b,c;y\right), \tag{53}\] \[\psi_{p}^{(-)}(y)=N^{(-)}(\mathbf{p})y^{-k}\left(1-y\right)^{-l}{}_{2 }\mathcal{F}_{1}\left(1-a,1-b,2-c;y\right) \tag{54}\]
where the \(N^{(\pm)}(\mathbf{p})\) are normalization constants, chosen such that
\[\psi_{p}^{(\pm)}(y\to 0)=\phi_{\mathbf{p}}^{(\pm)}(y\to 0)\]
Using this initial condition, we get
\[N^{(\pm)}=\frac{e^{\pi\mathrm{i}\tilde{\Theta}_{p}(y_{0},0)}}{\sqrt{2\omega_{ -}\left(\omega_{-}\mp P\left(0\right)\right)}} \tag{55}\]
with the accumulated phase \(\tilde{\Theta}_{p}(y_{0},0)=\frac{\tau}{2}\int_{y_{0}}^{0}dy^{\prime}\,\frac{\omega(\mathbf{p},y^{\prime})}{y^{\prime}(1-y^{\prime})}\).
We can now proceed with the calculation of the one-particle distribution function using Eq. (42),
\[f(\mathbf{p},t)=2|\beta(\mathbf{p},t)|^{2}. \tag{56}\]
To continue, we convert all functions in \(\beta(\mathbf{p},t)\), given by Eq. (38), to the new time variable \(y\). This transformation yields
\[|\beta(\mathbf{p},y)|^{2}=\frac{\epsilon_{\perp}^{2}(\mathbf{p},y)}{2\omega(\mathbf{p},y) (\omega(\mathbf{p},y)-P(p_{\parallel},y))}\left|\Big{(}\frac{2}{\tau}y(1-y)\partial _{y}+\mathrm{i}\omega(\mathbf{p},y)\Big{)}\psi^{(+)}(\mathbf{p},y)\right|^{2} \tag{57}\]
By utilizing the equation above and carrying out a detailed calculation, we obtain an analytical expression for the one-particle distribution function in terms of the transformed time variable \(y\) :
\[f(\mathbf{p},y) =|N^{+}(\mathbf{p})|^{2}\Big{(}1+\frac{(p_{\parallel}-eA(y))}{\omega(\mathbf{p},y)}\Big{)}\bigg{(}\frac{4}{\tau^{2}}y^{2}(1-y)^{2}\Big{|}\frac{ab}{c}f_{1}\Big{|}^{2}+(\omega(\mathbf{p},y)-(1-y)\omega_{-}-y\omega_{+})^{2}|f_{2}|^{2}\]
\[+\frac{4}{\tau}y(1-y)(\omega(\mathbf{p},y)-(1-y)\omega_{-}-y\omega_{+})\Re\Big{(}\frac{ab}{c}f_{1}\tilde{f}_{2}\Big{)}\bigg{)}, \tag{58}\]

where \(f_{1}={}_{2}\mathcal{F}_{1}\left(1+a,1+b,1+c;y\right)\) and \(f_{2}={}_{2}\mathcal{F}_{1}\left(a,b,c;y\right)\).
## IV Results and discussion
### Temporal evolution of particle distribution
The quantum vacuum becomes unstable under the action of an external electric field. As a consequence, virtual particle-antiparticle pairs are created in an off-mass-shell configuration. These virtual charged particles are accelerated by the electric field to sufficient energy to become real particles in an on-shell configuration. While the external field acts, pair annihilation processes occur simultaneously with pair creation, giving rise to a dynamical quasiparticle plasma. This results in different in- and out-vacuum states. The most complete description of vacuum pair creation from the in-state to the out-state is given by the one-particle distribution function. The time evolution of the one-particle distribution function shows that the virtual electron-positron plasma (EPP) excited from the vacuum passes through three different temporal stages: the quasiparticle electron-positron plasma (QEPP) stage, the transient stage, and the final residual electron-positron plasma (REPP) in the out-state, as pointed out by the authors of [51]; see Fig. 2. The initial QEPP stage and the final REPP stage are separated by the transient stage, marked by fast oscillations of the EPP. The transient stage is considered to begin at the time \(t_{in}\) where the oscillating \(f(t)\) attains the REPP level for the first time, and to end at the time \(t_{out}\) when the average level of the oscillating \(f(t)\) hits the REPP level; after that, the REPP stage begins.
Figure 2: Time evolution of \(f(\mathbf{p},t)\) for \(E_{0}=0.2E_{c}\) and \(\tau=10[m^{-1}]\); all units are in electron mass units.
At the REPP stage, quasi-particles become independent, and real particle-antiparticle pairs are observed, with a smaller value of \(f\) than at the electric field maximum at \(t=0\).
Each of the three stages contributes to various physical effects, such as vacuum polarization effects [52], the emission of annihilation photons originating from the focal point of colliding laser beams [53; 54; 55; 51], the birefringence effect [56], and various other secondary effects. In order to estimate the contributions of the various stages to measurable effects, a detailed analysis of each period of the EPP's evolution is quite helpful.
The dependence of the temporal evolution of the quasi-particle distribution function \(f(\mathbf{p},t)\) on the momentum \(\mathbf{p}=(p_{\parallel},p_{\perp})\) is shown in Fig. 3. As depicted in the left panel of Fig. 3, the distribution function \(f(p_{\parallel},t)\) increases steadily with time within the QEPP region, and it shows a higher value for negative longitudinal momenta than for positive ones. On closer inspection, it becomes evident that the distribution function reaches its peak precisely when the longitudinal quasi-momentum \(P(t)\) becomes zero; the moment of this peak depends on the chosen \(p_{\parallel}\) value. After \(t=0[m^{-1}]\), where the electric field reaches its maximum value, \(f(p_{\parallel},t)\) decreases and, after dropping to a certain value, shows rapid oscillations, thus passing from the QEPP to the transient region. One observes a gradual narrowing and disappearance of the fluctuations in the transient domain for higher \(p_{\parallel}\) values. This is because the role of the vacuum polarization effect decreases with increasing longitudinal momentum \(p_{\parallel}\), and the transient region appears later for higher \(p_{\parallel}\). We confirm this by quantifying the boundaries of the transient stage, \(t_{in}\) and \(t_{out}\), as shown in Table 1. We can also say that the transient region has a duration of \(\approx 10[m^{-1}]\) and starts roughly after \(t\approx\tau\) (see Table 1).
Figure 3: The relationship between momentum and the time of occurrence of the transient domain. Left panel: longitudinal momentum. Right panel: transverse momentum. The field parameters are \(E_{0}=0.2E_{c}\) and \(\tau=10[m^{-1}]\); all units are in electron mass units.
After that, the system reaches the REPP stage, where the distribution function \(f_{out}\) becomes constant; \(f_{out}\) is largest for \(p_{\parallel}=0\) and is the same for positive and negative \(p_{\parallel}\) values.
Next, we also point out the influence of the transverse momentum on the temporal stages, as seen in the right panel of Fig. 3. We observe an interesting behavior for higher transverse momentum values: \(f(p_{\perp},t)\) reaches the REPP stage more slowly, with a residual value \(f(p_{\perp},t>t_{out})\) that is lowest in comparison with small transverse momentum values. This behavior of the stages is mainly decided by the double quasi-energy \(2\omega(p_{\parallel},p_{\perp},t)\): as we increase the transverse momentum, the corresponding transverse energy \(\epsilon_{\perp}(p_{\perp})\) entering the dynamical energy gap \(2\omega(p_{\parallel},p_{\perp},t)\) also increases, and therefore reaching the on-shell condition (the REPP stage) takes a longer time.
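As a rough illustration of how \(t_{in}\) and \(t_{out}\) can be extracted in practice, the sketch below post-processes a sampled \(f(t)\) (reusing \(f\) and \(tau\) from the solver sketch in Sec. II); the sampling range, averaging width, and tolerance are arbitrary choices of ours:

```python
import numpy as np

ts = np.linspace(0.0, 6 * tau, 4000)
fs = np.array([f(t) for t in ts])            # f(t) from the Sec. II sketch
f_repp = fs[-1]                              # residual (REPP) level

# t_in: first time the oscillating f(t) drops to the REPP level
t_in = ts[np.argmax(fs <= f_repp)]

# t_out: first time after t_in where the running average of f(t)
# settles at the REPP level (5% tolerance)
win = 101
avg = np.convolve(fs, np.ones(win) / win, mode="same")
mask = (ts > t_in) & np.isclose(avg, f_repp, rtol=5e-2)
t_out = ts[np.argmax(mask)] if mask.any() else ts[-1]
print(t_in, t_out)
```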
### Longitudinal momentum
In this section, we discuss the longitudinal momentum spectrum (LMS) of the created particles.
Figure 4 shows how particle creation proceeds. At early times, \(t=-10[m^{-1}]\), in the QEPP region where the electric field is increasing, the created particles show a smooth Gaussian-like momentum spectrum with a peak around \(p_{\parallel}\approx 2[m]\). The electric field propels the newly created particles in the negative \(z\)-direction. The movement of the distribution function's peak from \(p_{\parallel}=+2\) (\(+eE_{0}\tau\)) to \(-2\) (\(-eE_{0}\tau\)) can be understood through the longitudinal quasi-momentum \(P(t)=p_{\parallel}-eA(t)\), as depicted in the time evolution shown in Figs. 4(a) to (c). This implies that the momentum distribution of the generated particles is expected to be spread over a range \(\Delta p_{\parallel}=2\), determined by the electric field's magnitude. At \(t=0[m^{-1}]\), where the electric field is maximal (\(E(t)=E_{0}\)), the magnitude of the longitudinal momentum distribution function is \(f(p_{\parallel}=0,t=0)\approx 5\times 10^{-3}\), which follows the tendency \(E^{2}(t)/8\) [57].
| \(p_{\parallel}[m]\) | \(t_{in}[m^{-1}]\) | \(f(p_{\parallel},t_{in})\) | \(t_{out}[m^{-1}]\) | \(f(p_{\parallel},t_{out})\) |
|---|---|---|---|---|
| 0.00 | 12.57 | \(7.203\times 10^{-7}\) | 24.038 | \(3.309\times 10^{-7}\) |
| 0.25 | 13.37 | \(5.576\times 10^{-7}\) | 24.803 | \(3.112\times 10^{-7}\) |
| 0.50 | 13.78 | \(2.242\times 10^{-7}\) | 24.320 | \(1.599\times 10^{-7}\) |
| 0.75 | 14.53 | \(7.769\times 10^{-8}\) | 25.799 | \(7.033\times 10^{-8}\) |
| 1.00 | 16.76 | \(1.748\times 10^{-8}\) | 26.866 | \(1.082\times 10^{-8}\) |

Table 1: Transient-region times labeled by the longitudinal momentum \(p_{\parallel}\).
For \(t>0\), when the strength of the field decreases, an interesting phenomenon occurs in the momentum distribution function \(f(p_{\parallel})\). By \(t=10[m^{-1}]\) it has rapidly dropped by \(99.2\%\) of its maximum value, driven by its dependence on the strength of the electric field, as illustrated in Fig. 4(c). The peak of \(f(p_{\parallel})\) corresponds to newly generated particles. However, within a narrow range \(-1<p_{\parallel}<1\), during what we term the transient stage, the typically smooth Gaussian shape of the distribution is disrupted. This disruption coincides with a weakening of the accelerating field, which can be seen in Fig. 4(d). Near \(t\approx 2\tau\), when the electric field's magnitude has dwindled by roughly \(93\%\) of its maximum, a secondary peak emerges in the LMS. This peak appears around zero longitudinal momentum, accompanied by observable oscillations within a confined range of \(p_{\parallel}\), as shown in Fig. 4(e). In the vicinity of \(p_{\parallel}=0\), this secondary peak grows over time as the electric field weakens, while the dominant peak begins to diminish. At \(t=3\tau\), the small peak has become the dominant one and shows oscillatory behavior near the transient stage and at the beginning of the REPP stage; as seen in Fig. 4(f), this oscillatory behavior occurs in a small window of longitudinal momentum, where the electric field has diminished to around one-hundredth of its maximum magnitude. Intriguingly, this oscillation exhibits an asymmetry, its amplitude being more pronounced for negative longitudinal momentum than for positive. Within Fig. 4(f), the Gaussian bump centered around \(p_{\parallel}\approx-2[m]\) arises from particles generated during the initial phases of the process. Conversely, the dominant peak, characterized by the onset of oscillation, comprises particles formed at later instances, which have experienced relatively little acceleration since their creation. This oscillation in the LMS originates in a quantum interference effect, a phenomenon stemming from dynamical tunneling, as elucidated in reference [58]. Around \(t\approx 4\tau\), the minor peak at \(p_{\parallel}=-2[m]\) has nearly disappeared. Only the dominant peak at \(p_{\parallel}=0\) persists, with a faint oscillatory pattern superimposed on a Gaussian-like structure; eventually, as depicted in Figs. 4(g) to (h), the oscillation gradually fades away by \(t=50[m^{-1}]\). Dynamical tunneling can be understood as a time-dependent inter-band process in the particle momentum representation [59]. Within momentum space, various channels offer possibilities for particle tunneling. As time advances, distinct scenarios emerge: (i) at the time \(t\), a particle can tunnel directly, adopting a momentum value denoted \(p^{\prime}_{\parallel}\); (ii) at early times, a particle with lower momentum (\(p_{\parallel}<p^{\prime}_{\parallel}\)) can tunnel and subsequently be accelerated to the momentum \(p^{\prime}_{\parallel}\); (iii) conversely, a particle initially possessing higher momentum (\(p_{\parallel}>p^{\prime}_{\parallel}\)) can tunnel at a time \(t_{1}\) and subsequently decelerate to the momentum \(p^{\prime}_{\parallel}\). When the individual probability amplitudes of these processes are added together, they produce quantum interference at time \(t\). At asymptotic times, these processes no longer share the same phase information; the phases become random upon averaging over particle paths, the quantum interference effects disappear, and the distribution function becomes smooth. Many research articles, including the work by Dumlu et al. [60], have pointed out that the LMS does not exhibit an oscillatory structure for the Sauter pulsed field as time approaches infinity; instead, it displays a single-peaked Gaussian-like structure. This observation is consistent with the outcome illustrated in Fig. 4(i) at \(t=100[m^{-1}]\).

Figure 4: Time evolution of the particle distribution function in momentum space, \(f(p_{\parallel},t)\), at different times. The transverse momentum is taken to be zero, and all units are in electron mass units. The field parameters are \(E_{0}=0.2E_{c}\) and \(\tau=10[m^{-1}]\).
Certainly, we can establish certain time scales linked to the quantum signature manifesting as oscillations within the LMS of the generated particles during the REPP stage. Based on the emergence of the secondary peak, we define three distinct time scales: (i) \(t_{cp}\) (central peak formation), characterized by the emergence of the secondary peak and the beginning of its development; (ii) \(t_{sep}\) (peak separation), the time at which the central peak becomes dominant, or after which the two peaks are distinctly separated; (iii) \(t_{dis}\) (disappearance of oscillation), the time at which the oscillations within the central peak fade away, or after which the primary (left-side) peak ceases to exist. By identifying and quantifying these time scales, we can better characterize the intricate quantum dynamics reflected in the LMS during the REPP stage. Furthermore, it is important to highlight that these time scales are influenced by the
electric field strength \(E_{0}\), as indicated in Table 2. A remarkable pattern becomes apparent upon examining the table: the three time scales behave consistently as \(E_{0}\) increases, with a clear trend for these events to occur earlier in time for larger \(E_{0}\).
Moreover, our observations extend to the spread \(\Delta p\) of the LMS, which changes with the value of \(E_{0}\). This relationship is intuitively sensible, as higher values of \(E_{0}\) correspond to greater kinetic momentum.
During the acceleration of the quasi-particles, we see the first Gaussian peak shift in the negative \(z\)-direction, and a deformation in its tail produces the second peak in the spectrum around \(t=2.3\tau\), clearly visible with an interference visibility of 0.5. There are two distinct peaks here: one shows the smooth Gaussian-like structure, while the other shows the interference effect as an oscillation at time \(t=2.65\tau\), with a visibility (or degree of coherence) of 0.314. The Gaussian-like peak with onset oscillations becomes dominant by \(t=3\tau\), with the visibility reduced to 0.15. The oscillation observed on this Gaussian-like structure fades at \(t=3.34\tau\), where the visibility is 0.08. From this we can say that these oscillations are seen in the LMS on the Compton time scale during the formation of electrons and positrons. The LMS shows that the coherence abruptly disappears at \(t=4.15\tau\), where the visibility is 0.02.
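The visibility (degree of coherence) values quoted above can be estimated with a simple fringe-contrast measure; the helper below is our own rough definition, to be applied to LMS samples restricted to the oscillatory window around \(p_{\parallel}=0\):

```python
import numpy as np

def visibility(f_window):
    """Fringe visibility V = (f_max - f_min) / (f_max + f_min), evaluated
    on LMS samples restricted to the oscillatory window of the spectrum;
    a rough proxy for the degree of coherence quoted in the text."""
    fmax, fmin = float(np.max(f_window)), float(np.min(f_window))
    return (fmax - fmin) / (fmax + fmin)
```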
So far we have not discussed the Keldysh parameter \(\gamma\), which determines whether the pair-production process is governed by the multi-photon or the tunneling mechanism. We also test whether the oscillatory structure depends explicitly on \(\gamma\). We choose the value of \(\gamma\) using two different combinations of \(E_{0}\) and \(\tau\) to examine the behavior at finite times. To show that the same value of the Keldysh parameter gives qualitatively different pictures of the LMS at finite times, we take \(\gamma=1\) realized by two different configurations of the Sauter-pulse parameters \((E_{0},\tau)\). As is well known, \(\gamma=1\) corresponds to the intermediate regime between pair production via tunneling and via multi-photon processes, known as the nonperturbative regime [61].
| \(E_{0}[E_{c}]\) | \(t_{cp}[m^{-1}]\) | \(t_{sep}[m^{-1}]\) | \(t_{dis}[m^{-1}]\) |
|---|---|---|---|
| 0.1 | 47 | 57 | 70 |
| 0.2 | 22 | 30 | 50 |
| 0.3 | 15 | 20 | 35 |
| 0.4 | 9 | 15 | 28 |
| 0.5 | 6 | 12 | 23 |

Table 2: Characteristic time scales for different electric field strengths \(E_{0}\).
Figure 5 shows the LMS for a Sauter-pulse electric field with \(E_{0}=0.2\) and \(\tau=5[m^{-1}]\). From Figs. 5(a) to 5(c) we see that, due to the electric field, the smooth unimodal Gaussian structure is accelerated in the negative \(z\)-direction, and after the pulse duration, at \(t=6[m^{-1}]\), we observe some disturbance near \(p_{\parallel}=1\). At \(t=11.5[m^{-1}]\), an interference effect is observed, whereby the smooth Gaussian structure becomes multi-modal, as shown in Figs. 5(d) and 5(e). This interference effect nearly disappears at \(t\approx 4\tau\). Comparing Figs. 5(d)-(e) with Figs. 6(d)-(e), distinct structural differences become evident. These disparities demonstrate two qualitatively distinct behaviors at finite times, both under the condition \(\gamma=1\).
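For reference, assuming the standard definition \(\gamma=m/(|e|E_{0}\tau)\) (in the units \(m=|e|=1\) used here), the quoted values of \(\gamma\) for the parameter sets of Figs. 4-7 are reproduced by the following snippet:

```python
# gamma = m / (|e| E0 tau) with m = |e| = 1
for E0, tau in [(0.2, 10.0), (0.2, 5.0), (0.1, 10.0), (0.1, 4.0)]:
    print(f"E0 = {E0}, tau = {tau}:  gamma = {1.0 / (E0 * tau):.2f}")
# -> 0.50, 1.00, 1.00, 2.50
```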
It is important to emphasize that this finite-time behavior is primarily influenced by the pulse width \(\tau\). Depending on the classification into short and long pulses, we can outline two distinct behaviors: (i) long-pulse behavior (\(\tau>8\)): in this regime, two Gaussian-pulse profiles are observable, one of which forms at the origin, potentially with the onset of oscillations; (ii) short-pulse behavior (\(\tau<8\)): in contrast, shorter pulses lead to a unique outcome, where an initially smooth unimodal structure splits near the REPP region, forming a multi-modal Gaussian structure that eventually converges into a single-peaked Gaussian profile as time progresses towards the asymptotic state.
Figure 5: Time evolution of the quasi-particle distribution function in longitudinal momentum space, \(f(p_{\parallel},t)\). The transverse momentum is taken to be zero, and all units are in electron mass units. The field parameters are \(E_{0}=0.2E_{c}\) and \(\tau=5[m^{-1}]\).
#### LMS in the multi-photon regime
In this section, we explore the LMS of the created particles in the multi-photon regime. We choose the laser pulse parameters such that the Keldysh parameter \(\gamma\gg 1\). Figure 7 shows the LMS for a short pulse duration \(\tau=4[m^{-1}]\) and \(E_{0}=0.1E_{c}\). The Keldysh parameter in this case is close to \(2.5\); \(\gamma\gg 1\) corresponds to \(n\)th-order perturbation theory, with \(n\) the minimum number of photons that must be absorbed to overcome the threshold energy for pair creation, \(n\omega>2m\). At early times in the creation of pairs, \(t=-4[m^{-1}]\), the spectrum has a unimodal Gaussian-like profile peaked at \(p_{\parallel}\simeq-0.4\); as time proceeds, the peak shifts from \(p_{\parallel}\simeq-eE_{0}\tau\) to \(p_{\parallel}\simeq eE_{0}\tau\) under the action of the Lorentz force, and its height is maximal at \(t=0\), i.e., \(f(p_{\parallel})=1.4\times 10^{-3}\) (see Figs. 7(a)-(c)). However, the smooth unimodal spectrum then shows slight modulation, Fig. 7(d), at \(t=9[m^{-1}]\) (which is greater than the effective pulse duration \(2\tau\)). At the finite time \(t=12[m^{-1}]\), the spectrum has a quad-modal profile, as seen in Fig. 7(e). The central peak, located at \(p_{\parallel}=0\), is much more prominent than the two unequal peaks at \(p_{\parallel}\simeq\pm 0.4\) and another very small peak at \(p_{\parallel}\simeq 0.7\). Figures 7(f)-(g) show the merging of these multi-modal peaks as the different peaks fade, and a single smooth Gaussian peak is observed at \(t>5\tau\), as shown in Figs. 7(h)-(i). An interesting qualitative contrast becomes evident when comparing the previous scenario with \(\gamma=0.5\) to the current situation: a multi-modal pattern with more than two peaks emerges at a specific moment, approximately at \(3\tau\).

Figure 6: Time evolution of the quasi-particle distribution function in longitudinal momentum space, \(f(p_{\parallel},t)\). The transverse momentum is taken to be zero, and all units are in electron mass units. The field parameters are \(E_{0}=0.1E_{c}\) and \(\tau=10[m^{-1}]\).
As time progresses, the multi-modal pattern gradually dissipates, eventually leaving a single smooth peak for \(t>5\tau\). Intriguingly, a similar evolution is observed in the \(\gamma=0.5\) case, signifying a shared progression in both scenarios even though the latter is not a multi-photon process.
### Approximate analytical expression for \(f(\mathbf{p},t)\) at finite time
The one-particle distribution function as a function of the transformed time variable \(y\) is given by

\[\begin{split} f(\mathbf{p},y)&=|N^{+}(\mathbf{p})|^{2}\Big{(}1+\frac{(p_{\parallel}-eA(y))}{\omega(\mathbf{p},y)}\Big{)}\bigg{(}\frac{4}{\tau^{2}}y^{2}(1-y)^{2}\Big{|}\frac{ab}{c}f_{1}\Big{|}^{2}+(\omega(\mathbf{p},y)-(1-y)\omega_{-}-y\omega_{+})^{2}|f_{2}|^{2}\\ &+\frac{4}{\tau}y(1-y)(\omega(\mathbf{p},y)-(1-y)\omega_{-}-y\omega_{+})\Re\Big{(}\frac{ab}{c}f_{1}\tilde{f}_{2}\Big{)}\bigg{)}\end{split} \tag{59}\]

where \(f_{1}={}_{2}\mathcal{F}_{1}\left(1+a,1+b,1+c;y\right)\) and \(f_{2}={}_{2}\mathcal{F}_{1}\left(a,b,c;y\right)\).
Figure 7: Time evolution of the quasi-particle distribution function in longitudinal momentum space, \(f(p_{\parallel},t)\), at different times. The transverse momentum is taken to be zero, and all units are in electron mass units. The field parameters are \(E_{0}=0.1E_{c}\) and \(\tau=4[m^{-1}]\).

To investigate the behavior of the function \(f(\mathbf{p},t)\) at late times, we employ a variety of approximations based on standard formulas for the Gamma and Gauss hypergeometric functions. These approximations enable us to deduce simplified analytical expressions for \(f(\mathbf{p},t)\) in the late-time regime. First, we approximate the Gauss hypergeometric function as \(y\) approaches \(1\). It is crucial to ensure smooth convergence towards the limit \({}_{2}\mathcal{F}_{1}\left(a,b,c;y\to 1\right)\), which requires a thorough grasp of the limit itself. This task is complicated by the complex nature of the parameters \(a\), \(b\), and \(c\) in this specific context, making it essential to exercise caution when dealing with this limit. It is therefore beneficial to transform the argument by substituting \(y\) with \((1-y)\). This transition can be accomplished by employing the following mathematical identity:
\[{}_{2}\mathcal{F}_{1}\left(a,b,c;y\right) =\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}\,{}_{2}\mathcal{F}_{1}\left(a,b,1-c+a+b;1-y\right)+\frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)}(1-y)^{c-a-b}\,{}_{2}\mathcal{F}_{1}\left(c-a,c-b,1+c-a-b;1-y\right) \tag{60}\]
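As a sanity check (ours, with sample values), the connection formula can be verified numerically for complex parameters of the type appearing in Eq. (50), e.g. with mpmath:

```python
import mpmath as mp

# Sample complex parameters mimicking the structure of Eq. (50)
a, b, c, y = -0.7j, 1 + 0.9j, 1 - 0.4j, 0.3

lhs = mp.hyp2f1(a, b, c, y)
rhs = (mp.gamma(c) * mp.gamma(c - a - b) / (mp.gamma(c - a) * mp.gamma(c - b))
       * mp.hyp2f1(a, b, 1 - c + a + b, 1 - y)
       + mp.gamma(c) * mp.gamma(a + b - c) / (mp.gamma(a) * mp.gamma(b))
       * mp.power(1 - y, c - a - b) * mp.hyp2f1(c - a, c - b, 1 + c - a - b, 1 - y))
print(lhs - rhs)   # ~0 to working precision
```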
In general, the Gauss hypergeometric function is
\[{}_{2}\mathcal{F}_{1}\left(a,b,c;z\right)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b) _{n}}{(c)_{n}}\frac{z^{n}}{n!} \tag{61}\]
where \(()_{n}\) denotes the Pochhammer symbol.
\[{}_{2}\mathcal{F}_{1}\left(a,b,c;z\right)=1+\frac{ab}{c}z+\frac{a(a+1)b(b+1)} {c(c+1)}\frac{z^{2}}{2!}+\frac{a(a+1)(a+2)b(b+1)(b+2)}{c(c+1)(c+2)}\frac{z^{3} }{3!}+... \tag{62}\]
The series continues with additional terms involving higher powers of \(z\). Each term in the series involves the parameters \(a,b\), and \(c\), as well as the variable \(z\) raised to a specific power. Using the above relations, we approximate the Gauss hypergeometric functions \(f_{1}\) and \(f_{2}\) that appear in the expression for the particle distribution function and compute simple analytical expressions near \(y\to 1\).
\[\frac{2}{\tau}y(1-y)\frac{ab}{c}f_{1} =\frac{2}{\tau}y(1-y)\frac{ab}{c}\,{}_{2}\mathcal{F}_{1}\left(1+a,1+b,1+c;y\right)\]
\[=\frac{2}{\tau}y(1-y)\frac{ab}{c}\Bigg{[}\frac{\Gamma(1+c)\Gamma(c-a-b-1)}{\Gamma(c-a)\Gamma(c-b)}\,{}_{2}\mathcal{F}_{1}\left(1+a,1+b,2+a+b-c;1-y\right)+\frac{\Gamma(1+c)\Gamma(1+a+b-c)}{\Gamma(1+a)\Gamma(1+b)}(1-y)^{(c-a-b-1)}\,{}_{2}\mathcal{F}_{1}\left(c-a,c-b,c-a-b;1-y\right)\Bigg{]} \tag{63}\]
Now, taking the limit \(y\to 1\), we approximate the hypergeometric functions \({}_{2}\mathcal{F}_{1}\left(1+a,1+b,2+a+b-c;1-y\right)\) and \({}_{2}\mathcal{F}_{1}\left(c-a,c-b,c-a-b;1-y\right)\) using the series representation (62), keeping only the zeroth-order term. We have
\[\frac{2}{\tau}y(1-y)\frac{ab}{c}f_{1} =\frac{2}{\tau}y(1-y)ab\frac{\Gamma(c)\Gamma(c-a-b-1)}{\Gamma(c- a)\Gamma(c-b)}+\frac{2}{\tau}y(1-y)^{c-a-b}(a+b-c)\frac{\Gamma(c)\Gamma(a+b-c)}{ \Gamma(a)\Gamma(b)}\] \[=\frac{2}{\tau}y(1-y)ab\Gamma_{1}+\frac{2}{\tau}y(1-y)^{c-a-b}( a+b-c)\Gamma_{2} \tag{64}\]
where \(\Gamma_{1}=\frac{\Gamma(c)\Gamma(c-a-b-1)}{\Gamma(c-a)\Gamma(c-b)}\) and \(\Gamma_{2}=\frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)}\).
Similarly,
\[(\omega(\mathbf{p},y)-(1-y)\omega_{-}-y\omega_{+})f_{2} =(\omega(\mathbf{p},y)-(1-y)\omega_{-}-y\omega_{+})\Bigg{[}\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}\,{}_{2}\mathcal{F}_{1}\left(a,b,1-c+a+b;1-y\right)+(1-y)^{c-a-b}\frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)}\,{}_{2}\mathcal{F}_{1}\left(c-a,c-b,1+c-a-b;1-y\right)\Bigg{]}\]
\[=(\omega(\mathbf{p},y)-(1-y)\omega_{-}-y\omega_{+})\Bigg{[}(c-a-b-1)\Gamma_{1}+(1-y)^{c-a-b}\Gamma_{2}\Bigg{]} \tag{65}\]
We recall here, in anticipation of the Gamma functions \(\Gamma(z)\) appearing below, that the Gamma function obeys the relations
\[\Gamma(1+z)=z\Gamma(z), \tag{66}\] \[\Gamma(1-z)\Gamma(z)=\frac{\pi}{\sin(\pi z)} \tag{67}\]
from which we can derive the following useful relations,
\[|\Gamma(\mathrm{i}z)|^{2} =\frac{\pi}{z\sinh(\pi z)}\] \[|\Gamma(1+\mathrm{i}z)|^{2} =\frac{\pi z}{\sinh\pi z}\] \[|\Gamma(\frac{1}{2}+\mathrm{i}z)|^{2} =\frac{\pi}{\cosh\pi z} \tag{68}\]
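These modulus identities are straightforward to verify numerically, for instance:

```python
import mpmath as mp

z = 1.7   # arbitrary test value
print(abs(mp.gamma(1j * z))**2,       mp.pi / (z * mp.sinh(mp.pi * z)))
print(abs(mp.gamma(1 + 1j * z))**2,   mp.pi * z / mp.sinh(mp.pi * z))
print(abs(mp.gamma(0.5 + 1j * z))**2, mp.pi / mp.cosh(mp.pi * z))
```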
Using these identities, we can derive the following set of relations:
\[|\Gamma_{1}|^{2} =\frac{\delta_{2}\tau\omega_{-}}{\delta_{0}\delta_{1}(1+\delta_{0 }^{2})}\Big{(}\frac{\sinh\pi\delta_{1}\sinh\pi\delta_{2}}{\sinh\left(\pi\tau \omega_{-}\right)\sinh\left(\pi\delta_{0}\right)}\Big{)}\] \[|\Gamma_{2}|^{2} =\frac{\delta_{3}\tau\omega_{-}}{\delta_{0}\delta_{4}}\Big{(} \frac{\sinh\pi\delta_{3}\sinh\pi\delta_{4}}{\sinh\left(\pi\delta_{0}\right) \sinh\left(\pi\tau\omega_{-}\right)}\Big{)} \tag{69}\]
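Equation (69) can be cross-checked against a direct evaluation of \(\Gamma_{1}\) built from the parameters (50); the following sketch uses sample values of ours (with \(e=+1\) in \(\omega_{\pm}\)):

```python
import mpmath as mp

E0, tau, p_par, p_perp = 0.2, 10.0, 0.3, 0.0
wm = mp.sqrt(1 + p_perp**2 + (p_par - E0 * tau)**2)   # omega_-
wp = mp.sqrt(1 + p_perp**2 + (p_par + E0 * tau)**2)   # omega_+
a = -1j * E0 * tau**2 + 1j * tau * (wp - wm) / 2      # Eq. (50)
b = 1 + 1j * E0 * tau**2 + 1j * tau * (wp - wm) / 2
c = 1 - 1j * tau * wm

G1 = mp.gamma(c) * mp.gamma(c - a - b - 1) / (mp.gamma(c - a) * mp.gamma(c - b))

d0 = tau * wp
d1 = tau / 2 * (wm + wp - 2 * E0 * tau)
d2 = tau / 2 * (wm + wp + 2 * E0 * tau)
rhs = (d2 * tau * wm / (d0 * d1 * (1 + d0**2))
       * mp.sinh(mp.pi * d1) * mp.sinh(mp.pi * d2)
       / (mp.sinh(mp.pi * tau * wm) * mp.sinh(mp.pi * d0)))
print(abs(G1)**2 / rhs)   # -> 1.0
```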
To compute \(\Gamma_{1}\bar{\Gamma}_{2}\), we employ approximate formulas for the Gamma function, specifically Stirling's formula,
\[\Gamma(z)\approx z^{z-1/2}e^{-z}\sqrt{2\pi} \tag{70}\]
We then derive the asymptotic forms of the Gamma functions needed in the computation of the distribution function:
\[\Gamma(1+\mathrm{i}x) \sim \sqrt{2\pi}\,e^{\left(\frac{1}{2}\ln x-\frac{\pi}{2}x\right)+\mathrm{i}\left(x(\ln x-1)+\frac{\pi}{4}\right)}\]
\[\Gamma(-\mathrm{i}x) \sim \sqrt{2\pi}\,e^{\left(-\frac{\pi}{2}x-\frac{1}{2}\ln x\right)+\mathrm{i}\left(x(1-\ln x)+\frac{\pi}{4}\right)}\]
\[\Gamma(\mathrm{i}x) \sim \sqrt{2\pi}\,e^{\left(-\frac{\pi}{2}x-\frac{1}{2}\ln x\right)+\mathrm{i}\left(x(\ln x-1)-\frac{\pi}{4}\right)} \tag{71}\]
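These asymptotic forms follow from Stirling's formula (70); as a quick numerical sanity check of the first line:

```python
import mpmath as mp

x = 6.0
asym = mp.sqrt(2 * mp.pi) * mp.exp((mp.log(x) / 2 - mp.pi * x / 2)
                                   + 1j * (x * (mp.log(x) - 1) + mp.pi / 4))
print(mp.gamma(1 + 1j * x) / asym)   # -> 1 + O(1/x)
```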
Using these asymptotic forms, we then have
\[\Gamma_{1}\bar{\Gamma}_{2}=\bigg{(}\frac{\Gamma(c)\Gamma(c-a-b-1)}{\Gamma(c-a)\Gamma(c-b)}\bigg{)}\overline{\bigg{(}\frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)}\bigg{)}} \tag{72}\]
Subsequently, after some algebraic manipulation, we obtain:
\[\Gamma_{1}\bar{\Gamma}_{2}=\frac{\tau\omega_{-}}{2\sinh\big{(}\pi\tau\omega_{-}\big{)}}\frac{e^{\rho}}{\sqrt{1+\tau^{2}\omega_{+}^{2}}}\exp\Big{[}\mathrm{i}\Big{(}\pi+\ln\Big{(}\frac{\delta_{1}^{\delta_{1}}\delta_{2}^{\delta_{2}}\delta_{4}^{\delta_{4}}}{\delta_{0}^{2\delta_{0}}\delta_{3}^{\delta_{3}}}\Big{)}+\arctan(-\tau\omega_{+})\Big{)}\Big{]} \tag{73}\]
where \(\rho=\ln\Big{(}\sqrt{\frac{\delta_{2}\delta_{3}}{\delta_{1}\delta_{4}\delta_{0}^{2}}}\Big{)}\), \(\delta_{0}=\tau\omega_{+}\), \(\delta_{1}=\frac{\tau}{2}\left((\omega_{-}+\omega_{+})-2E_{0}\tau\right)\), \(\delta_{2}=\frac{\tau}{2}\left((\omega_{-}+\omega_{+})+2E_{0}\tau\right)\), \(\delta_{3}=\frac{\tau}{2}\left((\omega_{-}-\omega_{+})+2E_{0}\tau\right)\), \(\delta_{4}=\frac{\tau}{2}\left((\omega_{+}-\omega_{-})+2E_{0}\tau\right)\). Now, employing this approximation, Eq. (59) can be re-expressed as follows:
\[f(\mathbf{p},t)=|N^{+}(\mathbf{p})|^{2}\bigg{(}1+\frac{(p_{\parallel}-eA(t))}{\omega(\mathbf{p},t)}\bigg{)}\bigg{(}|\mathcal{Z}_{1}(\mathbf{p},t)|^{2}+|\mathcal{Z}_{2}(\mathbf{p},t)|^{2}+2\Re(\mathcal{Z}_{1}(\mathbf{p},t)\bar{\mathcal{Z}}_{2}(\mathbf{p},t))\bigg{)} \tag{74}\]
where,
\[|\mathcal{Z}_{1}(\mathbf{p},t)|^{2} =|\Gamma_{1}|^{2}\Big{(}\frac{4}{\tau^{2}}y^{2}(1-y)^{2}(\mu^{2}+\nu^{2})+(1+\tau^{2}\omega_{+}^{2})(\omega(\mathbf{p},y)-y\omega_{+}-(1-y)\omega_{-})^{2}\]
\[+\frac{4}{\tau}y(1-y)(\mu\tau\omega_{+}-\nu)(\omega(\mathbf{p},y)-y\omega_{+}-(1-y)\omega_{-})\Big{)} \tag{75}\]
\[|\mathcal{Z}_{2}(\mathbf{p},t)|^{2} =|\Gamma_{2}|^{2}\big{(}(\omega(\mathbf{p},y)+y\omega_{+})^{2}-2\omega _{-}(\omega(\mathbf{p},y)+y\omega_{+})(1-y)+\omega_{-}^{2}(1-y)^{2}\big{)} \tag{76}\]
\[\Re(\mathcal{Z}_{1}(\mathbf{p},t)\bar{\mathcal{Z}}_{2}(\mathbf{p},t)) =\frac{2}{\tau}\big{(}y(1-y)\omega-y(1-y)^{2}\omega_{-}+y^{2}(1-y)\omega_{+}\big{)}\Big{[}\sin\big{(}\tau\omega_{+}\ln(1-y)\big{)}\big{(}\mu\Re(\Gamma_{1}\bar{\Gamma}_{2})-\nu\Im(\Gamma_{1}\bar{\Gamma}_{2})\big{)}+\cos\big{(}\tau\omega_{+}\ln(1-y)\big{)}\big{(}\nu\Re(\Gamma_{1}\bar{\Gamma}_{2})+\mu\Im(\Gamma_{1}\bar{\Gamma}_{2})\big{)}\Big{]}\]
\[+\big{(}\omega^{2}+(1-y)^{2}\omega_{-}^{2}-2\omega_{-}(1-y)\omega-y^{2}\omega_{+}^{2}\big{)}\Big{[}\sin\big{(}\tau\omega_{+}\ln(1-y)\big{)}\big{(}\tau\omega_{+}\Re(\Gamma_{1}\bar{\Gamma}_{2})+\Im(\Gamma_{1}\bar{\Gamma}_{2})\big{)}+\cos\big{(}\tau\omega_{+}\ln(1-y)\big{)}\big{(}-\Re(\Gamma_{1}\bar{\Gamma}_{2})+\tau\omega_{+}\Im(\Gamma_{1}\bar{\Gamma}_{2})\big{)}\Big{]} \tag{77}\]
Using Eq. (73) to approximate \(\Gamma_{1}\bar{\Gamma}_{2}\), we obtain
\[\Re(\mathcal{Z}_{1}(\mathbf{p},t)\bar{\mathcal{Z}}_{2}(\mathbf{p},t)) =\frac{e^{\rho}\tau\omega_{-}}{2\sinh\big{(}\pi\tau\omega_{-}\big{)}\sqrt{1+\tau^{2}\omega_{+}^{2}}}\Bigg{[}\frac{2}{\tau}\big{(}y(1-y)\omega-y(1-y)^{2}\omega_{-}+y^{2}(1-y)\omega_{+}\big{)}\sqrt{\mu^{2}+\nu^{2}}\sin\big{(}\tau\omega_{+}\ln(1-y)+\Xi_{1}\big{)}\]
\[+\big{(}\omega^{2}+(1-y)^{2}\omega_{-}^{2}-2\omega_{-}(1-y)\omega-y^{2}\omega_{+}^{2}\big{)}\sin\big{(}\tau\omega_{+}\ln(1-y)-\Xi_{2}\big{)}\sqrt{1+\tau^{2}\omega_{+}^{2}}\Bigg{]} \tag{78}\]
where, \(\mu=\Big{(}E_{0}\tau^{2}-\frac{\tau}{2}(\omega_{-}-\omega_{+})\Big{)}\Big{(}E_{0} \tau^{2}+\frac{\tau}{2}(\omega_{-}-\omega_{+})\Big{)}\), \(\nu=-\Big{(}E_{0}\tau^{2}-\frac{\tau}{2}(\omega_{-}-\omega_{+})\Big{)}\), \(\Xi_{1}=\pi+\arctan\Big{(}\frac{1-\omega_{0}\delta_{3}}{\delta_{0}-\delta_{3}} \Big{)}+\ln\Big{(}\frac{\delta_{1}^{2}\delta_{2}^{2}\delta_{3}^{4}}{\delta_{0}^ {2}\delta_{3}^{2}}\Big{)}\), \(\Xi_{2}=\ln\Big{(}\frac{\delta_{0}^{2}\delta_{0}^{2}\delta_{3}^{2}}{\delta_{1}^ {2}\delta_{2}^{2}\delta_{4}^{2}}\Big{)}-\frac{\pi}{2}\)
To further simplify,
\[f(\mathbf{p},t) =|N^{+}(\mathbf{p})|^{2}\Bigg{(}1+\frac{(p_{\parallel}-eA(y))}{\omega( \mathbf{p},y)}\Bigg{)}\Bigg{(}|\Gamma_{1}|^{2}(\frac{4}{\tau}y^{2}(1-y)^{2}(\mu^{2 }+\nu^{2})+(1+\tau^{2}\omega_{+}^{2})(\omega(p,y)-y\omega_{+}-(1-y)\omega_{-}) ^{2})\] \[+\frac{4}{\tau}y(1-y)(\omega-y\omega_{+}-(1-y)\omega_{-})+| \Gamma_{2}|^{2}((\omega+y\omega_{+})^{2}-2\omega_{-}(\omega+y\omega_{+})(1-y) +\omega_{-}^{2}(1-y)^{2})\] \[+\frac{(-e^{\rho})\tau\omega_{-}}{2\sinh{(\pi\tau\omega_{-})} \sqrt{1+\tau^{2}\omega_{+}^{2}}}\Big{(}\frac{2}{\tau}\sqrt{\mu^{2}+\nu^{2}} \sin{(\tau\omega_{+}ln(1-y)+\Xi_{1})}(y(1-y)\omega-y(1-y)^{2}\omega_{-}\] \[+y^{2}(1-y)\omega_{+})+\sin{(\tau\omega_{+}ln(1-y)-\Xi_{2})}\sqrt {1+\tau\omega_{+}^{2}}(\omega^{2}+(1-y)^{2}\omega_{-}^{2}\] \[-2\omega_{-}(1-y)-y^{2}\omega_{+}^{2})\Big{)}\Bigg{)} \tag{79}\]
Our goal is to analyze the behavior of the vacuum state at finite times, particularly at late times. To achieve this, we derive an analytical expression for the longitudinal momentum distribution function \(f(p_{\parallel},t)\) as a power series \(\sum_{n}C_{n}(1-y)^{n}\). Here, \(y\) changes with time according to \(y=\frac{1}{2}\left(1+\tanh(\frac{t}{\tau})\right)\). As time progresses, especially in the large-time limit, e.g., \(t=10\tau\), \((1-y)\) approaches zero. Consequently, the most significant contribution to the distribution function arises from the zeroth-order terms: the coefficient of \((1-y)^{0}\) precisely describes \(f(p_{\parallel},t\to\infty)\). By considering the higher-order terms, we can further investigate the distribution function's behavior at finite times.
In this context, we set the transverse momentum to zero and focus solely on the longitudinal direction. The quasi-energy \(\omega(p_{\parallel},y)\) can then be approximated as follows:
\[\omega(p_{\parallel},y)=\sqrt{1+(p_{\parallel}+yE_{0}\tau)^{2}} \Bigg{(}1-(1-y)\frac{E_{0}\tau(p_{\parallel}+yE_{0}\tau)}{1+(p_{\parallel}+yE _{0}\tau)^{2}}\Bigg{)} \tag{80}\]
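Equation (80) can be checked symbolically. Assuming the Sauter-gauge potential \(eA(t)=-E_{0}\tau\tanh(t/\tau)=-E_{0}\tau(2y-1)\) (our reading of the conventions used in Eqs. (81)-(84); this identification is an assumption, not stated explicitly here), the exact quasi-energy \(\sqrt{1+(p_{\parallel}-eA)^{2}}\) and the right-hand side of Eq. (80) agree through first order in \((1-y)\). A minimal sympy sketch:

```python
import sympy as sp

p, E0, tau, y = sp.symbols('p_parallel E_0 tau y', real=True)
c = E0 * tau

# exact quasi-energy, assuming eA(t) = -E0*tau*(2y - 1)  (Sauter gauge; our assumption)
omega_exact = sp.sqrt(1 + (p + c*(2*y - 1))**2)

# right-hand side of Eq. (80)
u = p + c*y
omega_approx = sp.sqrt(1 + u**2) * (1 - (1 - y)*c*u/(1 + u**2))

# both agree through first order in (1 - y): the truncated series coincide
s_exact  = sp.series(omega_exact,  y, 1, 2).removeO()
s_approx = sp.series(omega_approx, y, 1, 2).removeO()
print(sp.simplify(s_exact - s_approx))   # -> 0, i.e. the difference is O((1-y)^2)
```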
Subsequently, through algebraic manipulation, we derive:
\[|\mathcal{Z}_{1}(\mathbf{p},t)|^{2} =|\Gamma_{1}|^{2}\Bigg{[}\frac{4}{\tau^{2}}y^{2}(1-y)^{2}(\mu^{2 }+\nu^{2})\] \[+(1+\tau^{2}\omega_{+}^{2})\Bigg{(}y^{2}\omega_{+}^{2}+2y(1-y) \omega_{+}\omega_{-}-2y\omega_{+}\sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}\] \[\Bigg{(}1-(1-y)\frac{E_{0}\tau(p_{\parallel}+E_{0}\tau y)}{1+(p_{ \parallel}+E_{0}\tau y)^{2}}\Bigg{)}\Bigg{)}+(1+(p_{\parallel}+E_{0}\tau y)^{2 })\Bigg{(}1-2(1-y)\frac{E_{0}\tau(p_{\parallel}+E_{0}\tau y)}{1+(p_{\parallel}+E_{0} \tau y)^{2}}\] \[+(1-y)^{2}\frac{(E_{0}\tau(p_{\parallel}+E_{0}\tau y))^{2}}{(1+(p _{\parallel}+E_{0}\tau y)^{2})^{2}}+(1-y)^{2}\omega_{-}^{2}-2\omega_{-}(1-y) \sqrt{(1+(p_{\parallel}+E_{0}\tau y)^{2})}\] \[\Bigg{(}1-(1-y)\frac{E_{0}\tau(p_{\parallel}+E_{0}\tau y)}{1+(p_{ \parallel}+E_{0}\tau y)^{2}}\Bigg{)}\Bigg{)}+\frac{4}{\tau}y(1-y)(\mu\tau\omega_{ +}-\nu)\Big{(}\sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}\] \[\Bigg{(}1-(1-y)\frac{E_{0}\tau(p_{\parallel}+E_{0}\tau y)}{1+(p_{ \parallel}+E_{0}\tau y)^{2}}\Bigg{)}-(1-y)\omega_{-}-y\omega_{+}\Big{)}\Bigg{]} \tag{81}\]
\[|\mathcal{Z}_{2}(\mathbf{p},t)|^{2} =|\Gamma_{2}|^{2}\Bigg{[}y^{2}\omega_{+}^{2}-2y(1-y)\omega_{+}\omega_ {-}+2y\omega_{+}\sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}\left(1-(1-y)\frac{E_{0} \tau(p_{\parallel}+E_{0}\tau y)}{1+(p_{\parallel}+E_{0}\tau y)^{2}}\right)\] \[+(1+(p_{\parallel}+E_{0}\tau y)^{2})\left(1-2(1-y)\frac{E_{0} \tau(p_{\parallel}+E_{0}\tau y)}{1+(p_{\parallel}+E_{0}\tau y)^{2}}+(1-y)^{2} \frac{(E_{0}\tau(p_{\parallel}+E_{0}\tau y))^{2}}{(1+(p_{\parallel}+E_{0}\tau y )^{2})^{2}}\right)\] \[+(1-y)^{2}\omega_{-}^{2}-2(1-y)\omega_{-}\sqrt{1+(p_{\parallel}+ E_{0}\tau y)^{2}}\left(1-(1-y)\frac{E_{0}\tau(p_{\parallel}+E_{0}\tau y)}{1+(p_{ \parallel}+E_{0}\tau y)^{2}}\right)\Bigg{]} \tag{82}\]
\[\Re(\mathcal{Z}_{1}(\mathbf{p},t)\bar{\mathcal{Z}}_{2}(\mathbf{p},t)) =\frac{\tau\omega_{-}e^{\rho}}{2\sinh(\pi\tau\omega_{-})}\Bigg{[} \frac{2}{\tau}\sqrt{\frac{\mu^{2}+\nu^{2}}{1+\tau^{2}\omega_{+}^{2}}}\left(y(1-y) \sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}+y^{2}(1-y)\omega_{+}\right)\] \[\sin\left(\tau\omega_{+}\ln(1-y)+\Xi_{1}\right)+\sin\left(\tau \omega_{+}\ln(1-y)-\Xi_{2}\right)\!\!\left((1+(p_{\parallel}+E_{0}\tau y)^{2})\right.\] \[\left.\left(1-2(1-y)\frac{E_{0}\tau(p_{\parallel}+E_{0}\tau y)}{ 1+(p_{\parallel}+E_{0}\tau y)^{2}}\right)-y^{2}\omega_{+}^{2}-2(1-y)\omega_{- }\sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}\right)\Bigg{]} \tag{83}\]
We can now obtain simplified algebraic expressions for the distribution function at finite times (\(t>2\tau\)). The expression can be represented as a power series in \((1-y)^{n}\) with \(n=0,1,2\); higher-order terms are excluded from our analysis because the influence of the \((1-y)\) terms diminishes as time advances.
\[f(p_{\parallel},t) =\Bigg{(}\frac{|N^{+}(\mathbf{p})|^{2}}{2\left[p_{\parallel}+E_{0} \tau(y-(1-y))\right]^{2}}\Bigg{)}\Bigg{[}(1-y)^{2}|\Gamma_{1}|^{2}\Bigg{(} \frac{4y^{2}(\mu^{2}+\nu^{2})}{\tau^{2}}+\Big{(}\omega_{-}+\frac{E_{0}\tau(p_{ \parallel}+E_{0}\tau y)}{\sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}}\Big{)}^{2}\] \[-\frac{4}{\tau}y(\mu\tau\omega_{+}-\nu)\Big{(}\omega_{-}+\frac{E_{0}\tau(p_{\parallel}+E_{0}\tau y)}{\sqrt{1+(p_{\parallel}+E_{0} \tau y)^{2}}}\Big{)}\Bigg{)}+|\Gamma_{1}|^{2}(1-y)\Bigg{(}2y\omega_{-}\omega_{+}(1+ \tau^{2}\omega_{+}^{2})\] \[+2y\omega_{+}(1+\tau^{2}\omega_{+}^{2})\frac{E_{0}\tau(p_{ \parallel}+E_{0}\tau y)}{\sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}}-2E_{0} \tau(1+\tau^{2}\omega_{+}^{2})(p_{\parallel}+E_{0}\tau y)\] \[-2(1+\tau^{2}\omega_{+}^{2})\omega_{-}\sqrt{1+(p_{\parallel}+E_{0} \tau y)^{2}}-\frac{4}{\tau}y^{2}\omega_{+}(\mu\tau\omega_{+}-\nu)+\frac{4}{ \tau}y(\mu\tau\omega_{+}-\nu)\sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}\Bigg{)}\] \[-2|\Gamma_{2}|^{2}(1-y)\Bigg{(}y\omega_{+}\omega_{-}+y\omega_{+} \frac{E_{0}\tau(p_{\parallel}+E_{0}\tau y)}{\sqrt{1+(p_{\parallel}+E_{0}\tau y )^{2}}}+E_{0}\tau(p_{\parallel}+E_{0}\tau y)+\omega_{-}\sqrt{1+(p_{\parallel}+ E_{0}\tau y)^{2}}\Bigg{)}\] \[+|\Gamma_{1}|^{2}(1+\tau^{2}\omega_{+}^{2})\Big{(}y\omega_{+}- \sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}\Big{)}^{2}+|\Gamma_{2}|^{2}\Big{(}y \omega_{+}+\sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}\Big{)}^{2}\] \[-\frac{e^{\rho}\tau\omega_{-}}{2\sinh\left(\pi\tau\omega_{-} \right)\sqrt{1+\tau^{2}\omega_{+}^{2}}}\Bigg{(}\sqrt{1+\tau^{2}\omega_{+}^{2}} \sin\left(\tau\omega_{+}\ln(1-y)-\Xi_{2}\right)\!\left(1+(p_{\parallel}+E_{0} \tau y)^{2}-y^{2}\omega_{+}^{2}\right)\] \[+(1-y)\Bigg{(}\frac{2}{\tau}\sqrt{\mu^{2}+\nu^{2}}\sin\left(\tau \omega_{+}\ln(1-y)+\Xi_{1}\right)\!\left(y\sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}+y^{2} \omega_{+}\right)\] \[-2\sqrt{1+\tau^{2}\omega_{+}^{2}}\left(E_{0}\tau(p_{\parallel}+E_{0 }\tau y)+\omega_{-}\sqrt{1+(p_{\parallel}+E_{0}\tau y)^{2}}\right)\sin\left( \tau\omega_{+}\ln(1-y)-\Xi_{2}\right)\!\Bigg{)}\Bigg{)}\Bigg{]} \tag{84}\]
On carefully examining the approximate expression for the distribution function (74), we can say that in the early stages of electron-positron formation, the first term \(\mathcal{Z}_{1}(p_{\parallel},t)\) is responsible for the primary peak at \(p_{\parallel}\approx-2[m]\), owing to the presence of \(|\Gamma_{1}|^{2}\), which depends on \((E_{0},\tau)\); as time progresses, this primary peak diminishes. A secondary peak at \(p_{\parallel}=0\) then builds up due to the term \(\mathcal{Z}_{2}(p_{\parallel},t)\), while \(\Re(\mathcal{Z}_{1}(p_{\parallel},t)\bar{\mathcal{Z}}_{2}(p_{\parallel},t))\) is responsible for the onset of oscillations in that peak. The oscillation pattern of \(\Re(\mathcal{Z}_{1}(p_{\parallel},t)\bar{\mathcal{Z}}_{2}(p_{\parallel},t))\) is transformed over time, primarily because of the presence of "\(\ln(1-y)\)" in the sinusoidal function. As time progresses towards infinity, \(\Re(\mathcal{Z}_{1}(p_{\parallel},t)\bar{\mathcal{Z}}_{2}(p_{\parallel},t))\) is suppressed. Consequently, we observe only a secondary peak at \(p_{\parallel}=0\) due to the dominance of the \(\mathcal{Z}_{2}(p_{\parallel},t)\) term. This observation is explicitly confirmed in Figure 8. It is important to note that \(\Re(\mathcal{Z}_{1}(p_{\parallel},t)\bar{\mathcal{Z}}_{2}(p_{\parallel},t))\) is an oscillatory finite function whose magnitude depends on \(t\); this magnitude plays a crucial role in determining the dynamics of \(f(p_{\parallel},t)\) in \(p_{\parallel}\)-space at finite times.
We further derive an asymptotic expression for the distribution function in the limit \(t\to\infty\) (\(y\approx 1\)). The terms \(\mathcal{Z}_{1}(p_{\parallel},t)\) and \(\Re(\mathcal{Z}_{1}(p_{\parallel},t)\bar{\mathcal{Z}}_{2}(p_{\parallel},t))\) vanish, and only the \(\mathcal{Z}_{2}(p_{\parallel},t)\) term survives. After a lengthy calculation, we find the following analytical expression for the distribution function at asymptotic times:
\[f(p_{\parallel},y\to 1)=\frac{2\sinh(\pi\tau(2E_{0}\tau+\omega_{-}-\omega_{+})/ 2)\sinh(\pi\tau(2E_{0}\tau-\omega_{-}+\omega_{+})/2)}{\sinh(\pi\tau\omega_{-}) \sinh(\pi\tau\omega_{+})} \tag{85}\]
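The asymptotic formula (85) is straightforward to evaluate numerically. The sketch below assumes the standard Sauter-pulse asymptotic energies \(\omega_{\pm}=\sqrt{1+(p_{\parallel}\pm E_{0}\tau)^{2}}\) at \(p_{\perp}=0\) with \(m=e=1\) (an assumption about the undisplayed definitions of \(\omega_{\pm}\); note that Eq. (85) is symmetric under \(\omega_{+}\leftrightarrow\omega_{-}\), so the sign convention does not matter), and the parameter values are illustrative:

```python
import numpy as np

E0, tau = 0.2, 10.0            # illustrative field strength and pulse width (m = e = 1)
p = np.linspace(-6, 6, 601)    # longitudinal momentum in units of m

# assumed asymptotic energies for the Sauter pulse at p_perp = 0
w_m = np.sqrt(1 + (p - E0*tau)**2)
w_p = np.sqrt(1 + (p + E0*tau)**2)

# Eq. (85), transcribed verbatim
f_inf = (2*np.sinh(np.pi*tau*(2*E0*tau + w_m - w_p)/2)
           *np.sinh(np.pi*tau*(2*E0*tau - w_m + w_p)/2)
         / (np.sinh(np.pi*tau*w_m)*np.sinh(np.pi*tau*w_p)))

print("peak f =", f_inf.max(), "at p_parallel =", p[np.argmax(f_inf)])
```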
### Correlation function
To describe the generation and elimination of particles within our physical system, it is essential to introduce a quantity that can characterize these processes in the presence of an external field. This quantity is the time-dependent pair correlation function and its complex conjugate, which describe the creation and annihilation of particle pairs, respectively.
\[\mathcal{C}(\mathbf{p},t)=\langle 0|\hat{D}^{\dagger}_{-\mathbf{p}r}(t)\hat{B}^{ \dagger}_{\mathbf{p}r}(t)|0\rangle=2\alpha^{*}_{\mathbf{p}}(t)\beta_{\mathbf{p}}(t) \tag{86}\]
\[\mathcal{C}^{*}(\mathbf{p},t)=\langle 0|\hat{D}_{-\mathbf{p}r}(t)\hat{B}_{\mathbf{p}r}(t)|0 \rangle=2\beta^{*}_{\mathbf{p}}(t)\alpha_{\mathbf{p}}(t) \tag{87}\]
As can easily be seen, the function \(\mathcal{C}(\mathbf{p},t)\), consisting of creation operators for a particle and an antiparticle with opposite momentum, describes the production of an \(e^{-}e^{+}\) pair. In several research papers [47, 62, 63, 64, 65], the particle-antiparticle correlation function is redefined by incorporating the slowly varying component of the time-dependent creation and annihilation operators in the adiabatic basis.
\[\hat{\mathcal{B}}_{\mathbf{p}r}(t)=\hat{B}_{\mathbf{p}r}(t)\mathrm{e}^{- \mathrm{i}\Theta_{\mathbf{p}}(t)}\] \[\hat{\mathcal{D}}_{-\mathbf{p}r}(t)=\hat{D}_{-\mathbf{p}r}(t)\mathrm{e}^ {-\mathrm{i}\Theta_{\mathbf{p}}(t)}\]
where \(\Theta_{\mathbf{p}}(t)=\int^{t}dt^{\prime}\,\omega(\mathbf{p},t^{\prime})\).
So that,
\[\mathcal{C}(\mathbf{p},t) =\langle 0|\hat{\mathcal{D}}^{\dagger}_{-\mathbf{p}r}(t)\hat{\mathcal{B }}^{\dagger}_{\mathbf{p}r}(t)|0\rangle\] \[=\mathrm{e}^{2\mathrm{i}\Theta_{\mathbf{p}}(t)}\langle 0|\hat{D}^{ \dagger}_{-\mathbf{p}r}(t)\hat{B}^{\dagger}_{\mathbf{p}r}(t)|0\rangle \tag{88}\]
Now, using the above relation, the pair correlation function in the presence of a Sauter-pulse electric field reads
\[\mathcal{C}(\mathbf{p},t) =|N^{+}(\mathbf{p})|^{2}\sqrt{1-\frac{(p_{\parallel}-eA(t))^{2}}{ \omega^{2}(\mathbf{p},t)}}\bigg{(}\frac{4}{\tau^{2}}y^{2}(1-y)^{2}|\frac{ab}{c}|^{ 2}|f_{1}|^{2}+(\omega^{2}(\mathbf{p},y)-(y\omega_{+}+(1-y)\omega_{-})^{2} )|f_{2}|^{2}\] \[+\frac{2\mathrm{i}}{\tau}y(1-y)((1-y)\omega_{-}+y\omega_{+})( \frac{ab}{c}f_{1}\bar{f}_{2}-\frac{\bar{ab}}{\bar{c}}\bar{f}_{1}f_{2})\bigg{)} \tag{89}\]
As we know, vacuum polarization effects play a crucial role in the pair production process. These effects are described through the functions \(u(\mathbf{p},t)=\Re(\mathcal{C}(\mathbf{p},t))\) and \(v(\mathbf{p},t)=\Im(\mathcal{C}(\mathbf{p},t))\).
\[u(\mathbf{p},t) =|N^{+}(\mathbf{p})|^{2}\sqrt{1-\frac{(p_{\parallel}-eA(t))^{2}}{ \omega^{2}(\mathbf{p},t)}}\bigg{(}\frac{4}{\tau^{2}}y^{2}(1-y)^{2}|\frac{ab}{c}|^{ 2}|f_{1}|^{2}+(\omega^{2}(\mathbf{p},y)-(y\omega_{+}+(1-y)\omega_{-})^{2} )|f_{2}|^{2}\] \[+\frac{2}{\tau}y(1-y)((1-y)\omega_{-}+y\omega_{+})(\Im(\frac{ \bar{ab}}{\bar{c}}\bar{f}_{1}f_{2})-\Im(\frac{ab}{c}f_{1}\bar{f}_{2}))\bigg{)} \tag{90}\]
\[v(\mathbf{p},t)=|N^{+}(\mathbf{p})|^{2}\sqrt{1-\frac{(p_{\parallel}-eA(t))^{2}}{ \omega^{2}(\mathbf{p},t)}}\bigg{(}\frac{2}{\tau}y(1-y)((1-y)\omega_{-}+y\omega_{+ })(\Re(\frac{ab}{c}f_{1}\bar{f}_{2})-\Re(\frac{\bar{ab}}{\bar{c}}\bar{f}_{1}f _{2}))\bigg{)} \tag{91}\]
The function \(u(\mathbf{p},t)\) depicts vacuum polarization effects and pair production phenomena. The function \(v(\mathbf{p},t)\) serves as counter terms to the pair production, effectively representing the pair annihilation in the vacuum excitation process. By considering these functions, we can gain valuable insights into the interplay between particle creation and annihilation in the complex dynamics of the vacuum polarization phenomenon.
Through algebraic manipulation, it becomes feasible to deduce the analytical expressions for \(u(\mathbf{p},t)\) and \(v(\mathbf{p},t)\) which are as follows:
\[u(\mathbf{p},t) =|N^{+}(\mathbf{p})|^{2}\sqrt{1-\frac{\big{(}p_{\parallel}-eA(t) \big{)}^{2}}{\omega^{2}(\mathbf{p},t)}}\Big{(}\frac{4}{\tau^{2}}y^{2}(1-y)^{2}( \mu^{2}+\nu^{2})|\Gamma_{1}|^{2}+\tau\omega_{+}|\Gamma_{2}|^{2}\] \[-2(1-y)\sqrt{\frac{\mu^{2}+\nu^{2}}{1+\tau^{2}\omega_{+}^{2}}} \frac{e^{\rho}\tau^{2}\omega_{+}^{2}}{\sinh\big{(}\pi\tau\omega_{+}\big{)}} \sin\big{(}\tau\omega_{+}\ln(1-y)-\Xi_{1}\big{)}\] \[+(\omega^{2}-((1-y)\omega_{-}+y\omega_{+})^{2})\Big{(}(1+\tau^{2} \omega_{+}^{2})|\Gamma_{1}|^{2}+|\Gamma_{2}|^{2}+2\frac{e^{\rho}\tau^{2}\omega_ {+}^{2}}{\sinh\big{(}\pi\tau\omega_{+}\big{)}}\cos\big{(}\tau\omega_{+}\ln(1- y)-\Xi_{1}\big{)}\Big{)}\] \[+(1-y)\sqrt{\frac{\mu^{2}+\nu^{2}}{1+\tau^{2}\omega_{+}^{2}}} \frac{e^{\rho}\tau^{2}\omega_{+}^{2}}{\sinh\big{(}\pi\tau\omega_{+}\big{)}} \cos\big{(}\Xi_{1}-\xi\big{)}\sin\big{(}\tau\omega_{+}\ln(1-y)+\xi\big{)}\Big{)} \tag{92}\]
\[v(\mathbf{p},t) =-\frac{2y}{\tau}|N^{+}(\mathbf{p})|^{2}\sqrt{1-\frac{\big{(}p_{ \parallel}-eA(t)\big{)}^{2}}{\omega^{2}(\mathbf{p},t)}}(\omega+(1-y)\omega_{-}+y\omega _{+})\bigg{(}2(1-y)\sqrt{(1+\tau^{2}\omega_{+}^{2})(\mu^{2}+\nu^{2})}\] \[|\Gamma_{1}|^{2}\cos\big{(}\xi-\eta\big{)}-(1-y)\sqrt{\frac{\mu^ {2}+\nu^{2}}{1+\tau^{2}\omega_{+}^{2}}}\frac{e^{\rho}\tau^{2}\omega_{+}^{2}}{ \sinh\big{(}\pi\tau\omega_{+}\big{)}}\cos\big{(}\Xi_{1}-\xi\big{)}\cos\big{(} \tau\omega_{+}\ln(1-y)+\xi\big{)}\] \[+\frac{e^{\rho}\tau^{2}\omega_{+}^{2}}{\sinh\big{(}\pi\tau\omega_ {+}\big{)}}\sin\big{(}\tau\omega_{+}\ln(1-y)+(\xi+\eta-\Xi_{1}) \big{)}\bigg{)}. \tag{93}\]
where \(\xi=\arctan(-1/\delta_{3})\) and \(\eta=\arctan(\delta_{0})\).
For a better understanding of the phenomenon of particle creation under a strong electric field, we also trace the evolution of the vacuum polarization function \(u(\mathbf{p},t)\) and its counter term \(v(\mathbf{p},t)\). Figure 9 shows the time evolution of \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\) for different values of \(p_{\parallel}\) at zero transverse momentum. Because pair annihilation is stronger than pair creation at early times, the depolarization function \(v(p_{\parallel},t)\) dominates over \(u(p_{\parallel},t)\) [Figs. 9(a)-(c)]. The polarization function exhibits a sinusoidal-type structure, whereas the depolarization function \(v(p_{\parallel},t)\) shows a unimodal Gaussian peak in its temporal evolution. Both \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\) show oscillations with varying amplitudes during the initial transient stage. These oscillations are particularly pronounced when \(p_{\parallel}=0\), as evident from Figures 9(a) and 9(c). As time progresses, irregular oscillations are observed in the transient stage. However, the oscillations become regular and stable as the system enters the REPP stage. Both functions, \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\), then exhibit regular oscillations centered around zero. Moreover, as the momentum value \(p_{\parallel}\) increases, the amplitudes of these oscillations diminish, as shown in Figures 9(b) and 9(d). During the REPP stage, one interesting finding is that \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\) demonstrate balancing behavior characterized by similar oscillatory patterns. This balance is a result of the formation of real independent electron-positron pairs.
#### iv.2.1 Momentum Spectra of \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\)
The longitudinal momentum significantly impacts the vacuum polarization function's qualitative traits. To understand its dependence on \(p_{\parallel}\), we comprehensively analyze the momentum spectra of both \(u(p_{\parallel},t)\) and its associated counter term \(v(p_{\parallel},t)\). Figure 10 displays the LMS of \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\). In the initial stages of particle formation, specifically at \(t<0\), both \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\) display asymmetric Gaussian peaks with roughly similar profiles. During this period, \(v(p_{\parallel},t)\) dominates over \(u(p_{\parallel},t)\). Upon closer examination, it becomes evident that the peaks occur at different positions around \(p_{\parallel}\approx 1[m]\), as illustrated in Figure 10(a). At \(t=0\), the initial
Gaussian-shaped structure of \(u(p_{\parallel},t)\) is deformed, becoming a bi-modal asymmetric Gaussian-like structure: \(u(p_{\parallel})\) displays a peak at \(p_{\parallel}\simeq-0.35[m]\) and a dip (or valley) at \(p_{\parallel}\simeq+0.35[m]\). In contrast, the \(v(p_{\parallel},t)\) spectrum retains its Gaussian-like unimodal profile, with the peak at \(p_{\parallel}=0\). Thus \(u(p_{\parallel},t)\) becomes markedly asymmetric at this stage, while \(v(p_{\parallel},t)\) remains unchanged in its overall shape. Due to the force factor "\(eE(t)\)" and the corresponding longitudinal quasi-momentum \(P(t)=(p_{\parallel}-eA(t))\), the spectrum moves to the left of the origin: near the transient stage the peak of \(v(p_{\parallel},t)\) is located at \(p_{\parallel}\approx-2\), whereas the LMS of \(u(p_{\parallel},t)\) shows a small dip there with some disruption in the tail (\(-1<p_{\parallel}<1\)), as shown in Figures 10(c)-(d). The behaviors of \(v(p_{\parallel},t)\) and \(u(p_{\parallel},t)\) at this stage explicitly counter each other, as they exhibit distinct shifts and variations due to the applied electric field and longitudinal quasi-momentum. At \(t=24[m^{-1}]\), we observe two distinct structures in the polarization and depolarization functions. The first structure occurs at \(p_{\parallel}=-2[m]\), as a consequence of the earlier vacuum excitation responsible for pair formation. The second structure is located in the range \(-1<p_{\parallel}<1\), exhibiting oscillations of varying amplitude, with the maximum occurring at zero longitudinal momentum throughout the process. At this stage, \(u(p_{\parallel})\) and \(v(p_{\parallel})\)
compete with each other, as shown in Figure 10(d) and 10(e). As we approach the beginning of the REPP stage, both functions, \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\), indeed exhibit a single oscillating structure resembling a sine or cosine function with varying amplitude. The maximum now appears near \(p_{\parallel}=0\), dominating in comparison to the secondary peak (and dip for \(u(p_{\parallel},t)\)) that was present on the left side of the origin at \(p_{\parallel}\approx-2[m]\). As time progresses further, this left side structure slowly vanishes, as shown in Figure 10(e) and 10(f). Eventually, as \(A(t)\) reaches a constant value, a balance is achieved between the processes of particle creation and annihilation, as depicted in Figure 10(g). In the late REPP stage, where the particle distribution function \(f(p_{\parallel},t)\) is constant, the polarization function \(u(p_{\parallel},t)\) is balanced by its counterpart \(v(p_{\parallel},t)\). This results in very regular and rapid oscillations with varying amplitude within the Gaussian envelope, as explicitly shown in Figure 10(h-i). During this stage, the system exhibits a stable oscillatory behavior, indicating a well-established equilibrium between particle creation and annihilation processes. The overall behavior observed in both \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\) during this process provides valuable insights into the complex dynamics of particle-antiparticle pair creation and annihilation under the influence of the electric field.
iv.2.2 Examining quantum interference effect in LMS of \(f(p_{\parallel},t)\): Role of Vacuum Polarization and Depolarization Functions
During the formation of electron-positron pairs from the quantum vacuum, various processes can occur. Electrons are prominently produced and annihilated simultaneously, resulting in electron acceleration and deceleration throughout the formation of real, independent \(e^{-}e^{+}\) pairs from virtual \(e^{-}e^{+}\) pairs. In this situation, the polarization function \(u(\mathbf{p},t)\) and its counter term \(v(\mathbf{p},t)\) are responsible for the acceleration and deceleration of electrons, respectively. Electrons are typically created in the direction of the external field with positive momentum, are subsequently decelerated, and may then, as soon as \(p_{\parallel}<0\), be annihilated again; these processes can be understood in terms of \(u(\mathbf{p},t)\) and \(v(\mathbf{p},t)\), which represent the acceleration and deceleration of electrons. In quasi-momentum space, there are several ways a particle can arrive at a specific momentum \(p_{0}\) at time \(t_{0}\): a particle at a lower momentum \((p_{0}-\delta p_{0})\) at an earlier time may reach the momentum value \(p_{0}\) through the acceleration process described by \(u(p_{\parallel},t)\), and it is equally possible that a particle at a higher momentum \((p_{0}+\delta p_{0})\) undergoes the deceleration process and finally comes to the momentum value \(p_{0}\); see Figure 11.
In the quasi-particle longitudinal momentum spectrum at a specific time \(t_{0}\), two possible events
can give rise to the quantum interference effect. This expectation is explicitly confirmed in the middle panel of Figure 4, where oscillations are observed in a bell-shaped profile around \(t\approx 2\tau\). These oscillations in the LMS are observed for a very short duration, during which coherence is maintained in the LMS of \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\), as evident from Figure 10(e-f). However, these oscillations slowly fade away in the late REPP region around \(t=4\tau\). The disappearance of oscillations in the particle LMS can be understood in terms of the loss of coherence that was maintained in the LMS of \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\). After the loss of coherence, the LMS of \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\) become identical at \(t=6\tau\) (see Figure 10 (i)). The envelope of the Gaussian with a cosine or sine variation results in a smooth bell-shaped particle momentum distribution. This behavior suggests a well-established equilibrium in the system, with interference effects subsiding and a stable particle-antiparticle distribution being achieved.
### Dependence on Transverse momentum
In this section, we extensively investigate the effect of transverse momentum on the particle momentum distribution function by plotting the time evolution of the LMS for different fixed values of transverse momentum, \(p_{\perp}\).
In Figure 12, it is evident that the distribution function diminishes as \(p_{\perp}\) increases, akin to the addition of extra mass to electrons and positrons. The higher transverse momentum necessitates more energy to create \(e^{-}e^{+}\) pairs, reducing the number of particles produced at that specific moment.
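The "extra mass" picture can be made quantitative: with \(m=1\), creating a pair with transverse momentum \(p_{\perp}\) costs at least \(2\sqrt{1+p_{\perp}^{2}}\) in energy. A one-line check for the values plotted in Figure 12 (our own illustration):

```python
import math

for p_perp in (0.0, 0.4, 0.8):                 # values used in Figure 12
    eps = math.sqrt(1.0 + p_perp**2)           # transverse effective mass (m = 1)
    print(f"p_perp = {p_perp:.1f}: pair threshold 2*eps = {2*eps:.3f} m")
# thresholds 2.000, 2.154, 2.561: the energy cost of a pair grows with p_perp,
# which suppresses the yield, consistent with the trend in Figure 12.
```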
Figure 11: A schematic representation of a particle occupying different momentum states in different scenarios.
Notably, in Figure 12(c), an intriguing feature arises due to the elevated transverse momentum: the oscillations at \(t=3\tau\) are almost completely attenuated. This loss of coherence can be attributed to the reduced production and acceleration of particles compared to instances with lower transverse momentum.
Furthermore, as we previously identified some time scales relative to the occurrence of quantum interference patterns in Section IV.2, we now observe that the quantum signature depends on the transverse momentum, as indicated in Table 3. Lower \(p_{\perp}\) values exhibit oscillation patterns that become visible earlier than those at higher momenta.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(p_{\perp}[m]\) & \(t_{cp}[m^{-1}]\) & \(t_{sep}[m^{-1}]\) & \(t_{dis.}[m^{-1}]\) \\ \hline 0.00 & 22 & 30 & 50 \\ \hline 0.25 & 24 & 32 & 53 \\ \hline 0.50 & 31 & 38 & 63 \\ \hline 0.75 & 39 & 47 & 71 \\ \hline 1.00 & 53 & 60 & 83 \\ \hline \end{tabular}
\end{table}
Table 3: Impact of the transverse mode on different time scales.
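The monotone trend in Table 3 can be made explicit; the short sketch below simply transcribes the table (times in \(m^{-1}\)) and prints how much each time scale shifts relative to \(p_{\perp}=0\):

```python
# data transcribed from Table 3: p_perp -> (t_cp, t_sep, t_dis), all in 1/m
table3 = {0.00: (22, 30, 50), 0.25: (24, 32, 53), 0.50: (31, 38, 63),
          0.75: (39, 47, 71), 1.00: (53, 60, 83)}

base = table3[0.00]
for p_perp, times in table3.items():
    shift = tuple(t - b for t, b in zip(times, base))
    print(f"p_perp = {p_perp:.2f}: (t_cp, t_sep, t_dis) = {times}, shift vs p_perp = 0: {shift}")
# every time scale grows monotonically with p_perp: the interference pattern
# appears, separates, and disappears later for larger transverse momentum.
```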
Figure 12: LMS of created particles for different values of the transverse momentum \(p_{\perp}\) (\(p_{\perp}=0\) (red), \(p_{\perp}=0.4\) (blue), and \(p_{\perp}=0.8\) (brown)).
### Transverse momentum spectrum
In the previous section we discussed the dependence of the longitudinal momentum distribution function on the transverse momentum \(p_{\perp}\) at finite times. Understanding the transverse momentum spectrum (TMS) of the created pairs can provide valuable insights into the pair production process and offer more information about the particles involved. The time evolution of the TMS of created quasi-particle pairs is depicted in Figure 13. The shape of the transverse momentum distribution changes when the electric field is switched on. At the initial time, \(t=-10[m^{-1}]\), the TMS shows a Gaussian structure peaked at \(p_{\perp}=0\). After \(t\approx\tau/2\), the smooth Gaussian structure becomes distorted, and the spectrum fluctuates between two shapes, either a dip at the origin with an off-axis maximum or a peak at zero transverse momentum with small side peaks, up to \(t\approx 2\tau\). During the transient stage and at the beginning of the REPP stage, the spectrum shows a prominent peak at zero transverse momentum, accompanied by weakly pronounced peaks at \(p_{\perp}\approx\pm 0.67[m]\) and \(p_{\perp}\approx\pm 0.84[m]\), as seen in Figures 13(c),(d). The width of the momentum spectra changes with time, with the momentum distribution of quasi-particle pairs having a width of the order of the electron mass. The half-width of the \(p_{\perp}\) distribution is determined by the field strength at that time, which is explicitly confirmed by Figure 13. The motion in the longitudinal and transverse directions remains coupled through the quasi-energy \(\omega(p_{\parallel},p_{\perp},t)\) during the depletion of the accelerating field (i.e., the transient and early REPP stages), leading to the observed substructure in the TMS. As the electric field decays to 98.6% of its maximum value, the width of the \(p_{\perp}\) distribution becomes of order 1, and the substructure in the TMS disappears in the REPP stage (see Figure 13(g)). At this stage, the TMS shows a smooth Gauss-like distribution, with the maximum value of \(f(p_{\perp})\) occurring at \(p_{\perp}=0\), as shown in Figure 13(h). The distribution function \(f(p_{\perp})\) can be well understood by assuming that particle creation is exponentially suppressed with \(\exp{(-\frac{m^{2}+p_{\perp}^{2}}{eE_{0}})}\). Furthermore, in the absence of an electric field, the transverse particle spectrum exhibits a Gaussian distribution, as shown in Figure 13(i). In this case, the absence of quantum interference arises because, although all particles have acquired a phase, the distribution of phase information varies only in the direction of \(p_{\parallel}\); hence, when summing over the phases in the direction of \(p_{\perp}\), no interference pattern emerges.
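To make the exponential suppression quoted above concrete, a small sketch (our illustration; the value of \(E_{0}\) is an assumption, chosen only for demonstration) evaluates the suppression factor at the transverse momenta used in Figure 12:

```python
import math

E0 = 0.2   # illustrative peak field strength in critical-field units (assumption)
s0 = math.exp(-1.0/E0)
for p_perp in (0.0, 0.4, 0.8):
    s = math.exp(-(1.0 + p_perp**2)/E0)        # suppression factor quoted above
    print(f"p_perp = {p_perp:.1f}: factor = {s:.3e}, relative to p_perp=0: {s/s0:.3f}")
# the relative yield drops as exp(-p_perp^2/E0): ~0.449 at p_perp=0.4, ~0.041 at 0.8
```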
## V Conclusion
We have conducted a detailed analysis of the electron-positron pair creation from the vacuum under the influence of a time-dependent Sauter pulse electric field. By deriving the exact analytic
solution for the mode function, we computed the one-particle distribution function in momentum space for the Sauter-pulse electric field. Our investigation reveals that the transition from initially virtual particles to real particles occurs in three distinct stages, which crucially depend on the longitudinal and transverse modes in momentum space. Moreover, we have quantified the initiation of the REPP stage, determined by the momentum value, with higher momenta resulting in narrower oscillations in the transient region. We meticulously examined the LMS and TMS to understand the momentum-dependent behavior during these stages. In the LMS, one interesting feature at the beginning of the REPP stage is that the longitudinal momentum distribution function exhibits an oscillating structure, an imprint of a quantum signature, at finite times where the electric field is nearly zero; this can be understood in the dynamical tunneling picture. The spectrum develops a two-peaked structure in which the central Gaussian peak acquires onset oscillations, and this quantum interference pattern evolves and then fades away. Based on this observation, we identified three distinct time scales associated with this behavior. Consequently, we concluded that these oscillations are prominent in the LMS on the Compton time scale during the formation of electron-positron pairs. Additionally, we investigated the impact of different electric field strengths (\(E_{0}\)) on
the duration of the interference pattern's formation and disappearance. Subsequently, we examined whether or not the oscillation-structure behavior at finite time explicitly depends on the Keldysh parameter (\(\gamma\)). Remarkably, we found that two different configurations of the parameters (\(E_{0},\tau\)) with the same \(\gamma=1\) show distinct behaviors, owing to their placement in the intermediate regime of pair production. In the multiphoton regime, for \(\gamma=2.5\), we explored the time evolution of the LMS in different stages of pair production; the spectrum shows the splitting of a smooth unimodal structure into a multi-modal Gaussian structure near the REPP region, after which it merges into a single-peak Gaussian profile in the asymptotic time limit. We also derived approximate analytical expressions for the one-particle distribution function \(f(\mathbf{p},t)\) at finite times. Utilizing these expressions, we unveiled that the LMS structure mainly comprises three distinct functional behaviors. One function dominantly governs the early times, while the combination of the second and third functions leads to the central peak structure with onset oscillations. The third term contributes to the oscillations, owing to the presence of a sinusoidal function, but its amplitude diminishes with time, resulting in a smooth spectrum profile at late times. Furthermore, we investigated the role of the vacuum polarization \(u(p_{\parallel},t)\) and its counterpart \(v(p_{\parallel},t)\) by plotting their time evolution. In the REPP region, both \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\) exhibit nearly identical oscillations with the same amplitude, and this oscillation amplitude decays for higher \(p_{\parallel}\) values. This observation implies that the qualitative nature of the vacuum polarization function is significantly influenced by the longitudinal momentum. To elucidate this further, we plotted the LMS of \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\), showcasing two distinct structures at the beginning of the REPP stage. One structure was characterized by a Gaussian-like profile, while the other contained a deformed oscillating profile within the Gaussian envelope. These oscillations, exhibiting varying amplitudes, were confined to a small window of longitudinal momentum (\(-1<p_{\parallel}<1\)). Over time, the regular oscillations of \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\) became balanced, as explicitly observed in the plot of the LMS of the created particles at late times, where the oscillating behavior is absent due to the equilibrium between \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\). As we recognize the pivotal roles of the \(u(p_{\parallel},t)\) and \(v(p_{\parallel},t)\) functions in accelerating (creation of \(e^{-}\)) and decelerating (annihilation of \(e^{-}\)) electrons, respectively, the observed oscillations in the LMS of \(f(p_{\parallel},t)\) at finite times can be attributed to the characteristic imprints of these functions, resulting from the acceleration and deceleration of electrons in the momentum representation. We also discussed the influence of the transverse momentum \(p_{\perp}\) on the LMS: it diminishes the value of \(f(p_{\parallel})\) and smooths out the oscillations for higher \(p_{\perp}\) values. Moreover, we emphasized that these oscillations appear later in the REPP stage as the transverse momentum increases, which also impacts the formation time of the pairs.
Finally, we studied the TMS, uncovering interesting features not previously reported. In the TMS, fluctuating substructures change regularly in the transient region up to the start of the REPP region, after which they disappear. The appearance of these substructures in the otherwise smooth Gauss-like distribution is attributed to the coupling, in the transient region, between the electron's longitudinal motion (i.e., in the \(z\)-direction) and its transverse motion, where this dependence of the electron-positron pairs is evident. Upon reaching the REPP stage, however, the coupling between the longitudinal and transverse directions is no longer observed, as real independent electron-positron pairs are formed once the electric field has decayed away.
## VI Acknowledgments
Deepak gratefully acknowledges the financial support from Homi Bhabha National Institute (HBNI) for carrying out this research work.
|
2309.05511 | Poisson valuations | We study Poisson valuations and provide their applications in solving
problems related to rigidity, automorphisms, Dixmier property, isomorphisms,
and embeddings of Poisson algebras and fields. | Hongdi Huang, Xin Tang, Xingting Wang, James J. Zhang | 2023-09-11T14:53:36Z | http://arxiv.org/abs/2309.05511v1 | # Poisson valuations
###### Abstract.
We study Poisson valuations and provide their applications in solving problems related to rigidity, automorphisms, Dixmier property, isomorphisms, and embeddings of Poisson algebras and fields.
Key words and phrases:Poisson field, valuation, filtration, isolated singularity 2020 Mathematics Subject Classification: Primary 17B63, 17B40, 16W20
## Introduction
Poisson algebras are ubiquitous and an essential area of study with great significance. Various aspects of Poisson algebras have been researched in the works [Ba1, Ba2, Go1, Go2, GLa, GLe, JO, LS, LuWW1, LuWW2, LvWZ, PS], focusing on (twisted) Poincaré duality and the modular derivation, Poisson Dixmier-Moeglin equivalence, Poisson enveloping algebras, Poisson spectrum and more. The role of Poisson algebras has been featured in the study of the representation theory of PI Sklyanin algebras [WWY1, WWY2] and discriminants of noncommutative algebras [BY1, BY2, LY1, NTY]. Additionally, the isomorphism problem and cancellation problem have been introduced and studied in the context of Poisson algebras [GVW, GW, GWY].
This paper presents the concept of a Poisson valuation applied to a Poisson field. Poisson valuations can be perceived as an invariant of Poisson fields (or Poisson algebras) connected to the idea of a prime divisor in algebraic geometry. We explore the applications of Poisson valuations by solving problems concerning rigidity, automorphisms, Dixmier property, isomorphisms, and embeddings of Poisson algebras and fields.
### Definitions
Let \(\Bbbk\) be a base field, and algebraic objects are over \(\Bbbk\). After the middle of Section 3, we assume throughout that \(\Bbbk\) is of characteristic zero.
**Definition 0.1**.: Let \(K\) be a Poisson field. A map
\[\nu:K\to\mathbb{Z}\cup\{\infty\}\]
is called a _Poisson valuation_ on \(K\) if, for all \(a,b\in K\),
1. \(\nu(a)=\infty\) if and only if \(a=0\),
2. \(\nu(a)=0\) for all \(a\in\Bbbk^{\times}:=\Bbbk\setminus\{0\}\),
3. \(\nu(ab)=\nu(a)+\nu(b)\),
4. \(\nu(a+b)\geq\min\{\nu(a),\nu(b)\}\),
5. \(\nu(\{a,b\})\geq\nu(a)+\nu(b)\).
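As a toy illustration of these axioms (not taken from the paper), consider \(K=\Bbbk(t)\) with the trivial Poisson bracket and \(\nu\) the order of vanishing at \(t=0\). The following Python/sympy sketch spot-checks axioms (3)-(5) on sample elements; all function names are ours:

```python
import sympy as sp

t = sp.symbols('t')

def ord0(poly):
    # order of vanishing of a polynomial at t = 0:
    # index of the first nonzero coefficient, counting from the constant term
    coeffs = sp.Poly(poly, t).all_coeffs()[::-1]
    return next(i for i, c in enumerate(coeffs) if c != 0)

def nu(f):
    # valuation on k(t): nu(p/q) = ord0(p) - ord0(q), with nu(0) = oo
    f = sp.cancel(f)
    if f == 0:
        return sp.oo
    num, den = sp.fraction(f)
    return ord0(num) - ord0(den)

a = t**2 * (1 + t) / (1 - t)          # nu(a) = 2
b = (t**3 - t**5) / (2 + t)           # nu(b) = 3

assert nu(a * b) == nu(a) + nu(b)     # axiom (3)
assert nu(a + b) >= min(nu(a), nu(b)) # axiom (4)
assert nu(sp.Integer(0)) == sp.oo     # axiom (1); with the trivial bracket
# {a,b} = 0, axiom (5) reads nu(0) = oo >= nu(a) + nu(b), so it holds too
print(nu(a), nu(b), nu(a*b), nu(a+b))  # -> 2 3 5 2
```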
In Definition 1.1(3), we will introduce the notion of a \(w\)-valuation for any \(w\in\mathbb{Z}\). The Poisson valuation described earlier is essentially a \(0\)-valuation. Given a Poisson
###### Contents
* 1 Introduction
* 2 Preliminaries
In addition, our focus is on the Poisson fraction fields of some quotients of three-dimensional Poisson polynomial algebras, which have unimodular Poisson structures defined by a homogeneous potential \(\Omega\).
**Construction 0.6**.: _Let \(\Omega\in\Bbbk[x,y,z]\) be a non-constant homogeneous polynomial. We recall some well-known constructions involving \(\Omega\) as follows._
1. _First, we define a Poisson bracket on the polynomial ring_ \(\Bbbk[x,y,z]\) _by_ (E0.6.1) \[\{f,g\}_{\Omega}=\ \det\begin{pmatrix}f_{x}&f_{y}&f_{z}\\ g_{x}&g_{y}&g_{z}\\ \Omega_{x}&\Omega_{y}&\Omega_{z}\end{pmatrix}\quad\text{for all }f,g\in\Bbbk[x,y,z].\] _This Poisson polynomial algebra_ \((\Bbbk[x,y,z],\{-,-\}_{\Omega})\) _is denoted by_ \(A_{\Omega}\)_. This definition is dependent on the set of generators_ \((x,y,z)\)_. However, if we use a new set of generators, the new Poisson bracket is a scalar multiple of_ \(\{-,-\}_{\Omega}\)_, see_ _[_11_, Definition 3.1]__._
2. _It is easy to check that_ \(\Omega\) _is in the Poisson center of_ \(A_{\Omega}\)_. Hence the Poisson polynomial algebra_ \(A_{\Omega}\) _has a Poisson factor ring_ \(P_{\Omega-\xi}:=A_{\Omega}/(\Omega-\xi)\) _where_ \(\xi\in\Bbbk\)_. If_ \(\xi=0\)_, we use_ \(P_{\Omega}\) _instead of_ \(P_{\Omega-0}\)_. Suppose the (Adams) degrees of_ \(x\)_,_ \(y\)_, and_ \(z\) _are 1. If_ \(\Omega\) _is homogeneous of degree 3, then_ \(P_{\Omega-\xi}\cong P_{\Omega-1}\) _when_ \(\xi\neq 0\)_. So, in this case, we can assume that_ \(\xi\) _is either 0 or_ \(1\)_._
_Unless otherwise stated, \(A_{\Omega}\), \(P_{\Omega}\), \(P_{\Omega-1}\) and \(P_{\Omega-\xi}\) will be the Poisson algebras defined as above._
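The bracket (E0.6.1) and the claim in Construction 0.6(2) that \(\Omega\) lies in the Poisson center are easy to verify symbolically. The following sympy sketch (our own check, with a generic test polynomial) does so for the elliptic potential \(\Omega=x^{3}+y^{3}+z^{3}+\lambda xyz\):

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda')
Omega = x**3 + y**3 + z**3 + lam*x*y*z

def bracket(f, g, Om=Omega):
    # Poisson bracket {f, g}_Omega of (E0.6.1), a 3x3 Jacobian determinant
    M = sp.Matrix([[sp.diff(h, v) for v in (x, y, z)] for h in (f, g, Om)])
    return sp.expand(M.det())

# the defining brackets on the generators:
print(bracket(x, y))        # Omega_z = 3*z**2 + lam*x*y
print(bracket(y, z))        # Omega_x = 3*x**2 + lam*y*z
print(bracket(z, x))        # Omega_y = 3*y**2 + lam*x*z

# Omega lies in the Poisson center: {Omega, g}_Omega = 0 for every g,
# since the determinant then has two identical rows
g = x**2*y - 5*z + 7        # an arbitrary test polynomial
assert bracket(Omega, g) == 0
```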
We will consider two special cases. Case 1: \(\Omega:=x^{3}+y^{3}+z^{3}+\lambda xyz\) with \(\lambda^{3}\neq-3^{3}\). So \(\Omega\) has an isolated singularity at the origin. In this case, \(A_{\Omega}\) is called an _elliptic Poisson algebra_ which has been studied by several authors [FO, MTU, Pi, Po, TWZ]. Case 2: \(\Omega\) is a homogeneous element of degree \(\geq 5\) with an isolated singularity at the origin (in this case, we say that \(\Omega\) is an i.s. potential of degree \(\geq 5\)). The following result concerns the Poisson algebras in Case 1.
**Theorem 0.7**.: _Let \(\Omega:=x^{3}+y^{3}+z^{3}+\lambda xyz\) with \(\lambda^{3}\neq-3^{3}\)._
1. _(Theorem 3.8) If \(K\) is \(Q(P_{\Omega})\), then \(\mathbf{d}(K)=0\) and \(\mathbf{w}(K)=1\)._
2. _(Theorem 3.11) If \(K\) is \(Q(P_{\Omega-1})\), then \(\mathbf{d}(K)=\mathbf{w}(K)=1\)._
Both \(Q(P_{\Omega})\) and \(Q(P_{\Omega-1})\) in the above theorem play an important role in the partial classification in [11]. In Case 2, we have a series of results [Theorems 0.8 - 0.12]. (A few results also hold when \(\deg\Omega=4\).) Let \(\Omega\) be in Case 2 for the rest of this subsection. If \(P\) is a Poisson algebra, the group of Poisson algebra automorphisms of \(P\) is denoted \(\operatorname{Aut}_{Poi}(P)\).
**Theorem 0.8** (Theorem 8.1).: _Let \(\Omega\) be an i.s. potential of degree \(\geq 5\)._
1. \[\operatorname{Aut}_{Poi}(Q(P_{\Omega}))=\operatorname{Aut}_{Poi}(P_{\Omega}).\]
2. _If_ \(\xi\neq 0\)_, then_ \[\operatorname{Aut}_{Poi}(Q(P_{\Omega-\xi}))=\operatorname{Aut}_{Poi}(P_{ \Omega-\xi}).\]
The automorphism groups of other Poisson fields (and Nambu Poisson fields) are computed in [11, 12, 13] using Poisson valuations. Motivated by the Dixmier conjecture and the Poisson conjecture (see Section 8.2), we consider the Dixmier property [Definition 8.6].
**Theorem 0.9** (Theorem 8.7).: _Let \(\Omega\) be an i.s. potential of degree \(\geq 5\). Then, for any \(\xi\in\Bbbk\), every injective endomorphism of \(P_{\Omega-\xi}\)\((\)resp. \(Q(P_{\Omega-\xi}))\) is an automorphism._
Other properties, such as the uniqueness of grading/filtration, are motivated by some work on noncommutative algebra, for example, [BZ, Corollary 0.3]. We refer to such a property as the rigidity of grading/filtration. The next two results are in this direction. Some undefined terms can be found in Sections 1 and 2. We say that a \(\mathbb{Z}\)-graded algebra \(A=\bigoplus_{i\in\mathbb{Z}}A_{i}\) is _connected graded_ if \(A_{i}=0\) for all \(i>0\) and \(A_{0}=\Bbbk\). So, \(A\) lives in nonpositive degrees. Note that \(P_{\Omega}\) is connected graded if we set the degree of \(x\), \(y\), and \(z\) to be \(-1\). This is different from the traditional definition of connected graded algebra. The reason for using this non-traditional definition is to match up with our definition of descending filtration. A Poisson \(n\)-graded algebra is defined in Definition 2.1.
**Theorem 0.10** (Theorem 8.10).: _Let \(\Omega\) be an i.s. potential of degree \(\geq 4\). Then \(P_{\Omega}\) has a unique connected grading such that it is Poisson \((\deg\Omega-3)\)-graded._
**Theorem 0.11** (Theorem 8.11).: _Let \(\Omega\) be an i.s. potential of degree \(\geq 4\). If \(\xi\neq 0\), then \(P_{\Omega-\xi}\) has a unique filtration \(\mathbb{F}\) such that the associated graded ring \(\operatorname{gr}_{\mathbb{F}}(P_{\Omega-\xi})\) is a connected graded Poisson \((\deg\Omega-3)\)-graded domain._
The results mentioned above aid in calculating the associated automorphism groups. An instance of this is
**Theorem 0.12** (Theorem 8.2).: _Let \(\Omega\) be an i.s. potential of degree \(\geq 5\). Then \(\operatorname{Aut}_{Poi}(Q(A_{\Omega}))=\operatorname{Aut}_{Poi}(A_{\Omega})= \operatorname{Aut}_{Poi}(P_{\Omega})\) and every Poisson automorphism of \(A_{\Omega}\) is graded. Consequently, \(\operatorname{Aut}_{Poi}(Q(A_{\Omega}))\) is a finite subgroup of \(GL_{3}(\Bbbk)\) of order bounded above by \(42(\deg\Omega)(\deg\Omega-3)^{2}\)._
When \(d\geq 5\) and \(\Omega=x^{d}+y^{d}+z^{d}\), so that \(\Omega=0\) is the Fermat curve, we can explicitly compute the Poisson automorphism groups of \(P_{\Omega-\xi}\) and \(A_{\Omega}\); see Proposition 8.4(1,2,3). There is a close connection between the Poisson automorphism group of \(A_{\Omega}\) and the automorphism group of the projective curve \(X=\operatorname{Proj}(A/(\Omega))\). For instance, the order bound on \(\operatorname{Aut}_{Poi}(Q(A_{\Omega}))\) follows from applying Hurwitz's automorphism theorem to the smooth curve \(X\). We do not know whether Theorems 0.8, 0.9, and 0.12 hold when \(\deg\Omega=4\).
**Remark 0.13**.:
1. There is a Poisson field of transcendence degree \(2\) that does not admit any nontrivial Poisson valuation, see Lemma 4.8(3).
2. We will introduce a weighted version of Poisson valuations so that the Poisson field in part (1) admits a weighted version of a nontrivial Poisson valuation, see Lemma 5.3(2).
3. Weighted valuations are used in the proofs of Theorems 0.8-0.12.
### Secondary Invariants
Our primary objective is to provide effective and comprehensive solutions to significant questions such as the automorphism and embedding problems. To accomplish this, we introduce invariants that stem from Poisson valuations.
1. An \(\alpha\)-type invariant of a Poisson field \(K\) is an invariant defined using the Poisson (\(w\)-)valuations directly on \(K\), e.g., the number of faithful valuations of \(K\) listed in Table 1.
2. A \(\beta\)-type invariant is any invariant related to the arrow \(\longrightarrow_{\nu}\). So the depth and width of \(K\) in Definition 0.2 are \(\beta\)-type invariants.
3. A \(\gamma\)-type invariant is any invariant related to the filtrations \(\mathbb{F}^{\nu}\) associated to valuations \(\nu\).
4. In [10], we also introduce a \(\delta\)-type invariant, which is deduced from the associated graded ring \(\operatorname{gr}_{\nu}(K)\) defined in (E0.1.2).
Secondary invariants are very useful in different Poisson algebra projects. Below is a summary of some Poisson fields and their faithful valuations studied in this paper.
### Applications
Poisson valuations (introduced in this paper) are useful for the following topics.
1. The automorphism problem. In [10], Makar-Limanov, Turusbekova, and Umirbaev computed the Poisson automorphism groups of the elliptic Poisson algebras. The automorphism groups of other Poisson algebras are computed in [10, 10, 11]. Using Poisson valuations, one can also compute the automorphism group of a family of Poisson fields. Theorems 0.8 and 0.12 are the initial outcomes of this type. Note that the valuation method has been used to solve the automorphism problem [11, 12].
2. The isomorphism problem; see Section 6.
3. The embedding problem; see Section 6.
4. The rigidity of grading; see Theorem 0.10.
5. The rigidity of filtration; see Theorem 0.11.
6. The Dixmier problem; see Theorem 0.9.
7. Classification of Poisson fields of transcendence degree two; this is an important project related to Artin's conjecture in noncommutative algebraic geometry. Artin's conjecture concerning division algebras of transcendence degree two [Ar] is (arguably) the most important open problem in noncommutative projective algebraic geometry today, and it has strongly influenced the development of the field. We initiated this project to establish a Poisson version of Artin's conjecture; further details can be found in [10].
| Poisson fields of tr.deg 2 | fields | Poisson brackets | \#\{faithful Poisson valuations\} |
| --- | --- | --- | --- |
| Weyl Poisson field \(K_{Weyl}\) | \(\Bbbk(x,y)\) | \(\{x,y\}=1\) | \(\infty\) |
| over uncountable field \(\Bbbk\) | \(\Bbbk(x,y)\) | \(\{x,y\}=qxy\) | \(\infty\) |
| \(q\)-Skew Poisson field \(K_{q}\) (\(q\neq 0\)) | \(\Bbbk(x,y)\) | \(\{x,y\}=qxy\) | countable |
| Graded elliptic type | \(Q(P_{\Omega})\) | \(\{-,-\}_{\Omega}\) | 2 |
| Elliptic type | \(Q(P_{\Omega-1})\) | \(\{-,-\}_{\Omega}\) | 1 |
| Higher genus type: \(\Omega\) is an i.s. potential of \(\deg\geq 4\) | \(Q(P_{\Omega})\) | \(\{-,-\}_{\Omega}\) | 0 |
|  | \(Q(P_{\Omega-1})\) | \(\{-,-\}_{\Omega}\) | 0 |

Table 1. Some Poisson fields of tr.deg 2 and their faithful valuations
8. Solutions (and non-existence of solutions) to partial differential equations in the field \(\Bbbk(x_{1},\cdots,x_{n})\) [GZ, HTWZ3].
9. Study of \(n\)-Lie Poisson (or Nambu Poisson) algebras [HTWZ3].
The paper is organized as follows. Section 1 introduces basic definitions of valuations and Poisson valuations. Section 2 defines filtrations and degree functions, which are closely related to the notion of a valuation. Examples are worked out in Section 3. Section 4 studies \(\gamma\)-type invariants, and Section 5 defines further invariants, such as the weighted versions of depth and width. The embedding and isomorphism problems are studied in Section 6, and the proofs of several theorems listed above are given in Sections 7 and 8.
## 1. Preliminaries
This section reviews some introductory material related to valuations and Poisson valuations. We refer to the book [LPV] for basic definitions concerning Poisson algebras and to [Bo, Chapter VI] (and [Va]) for basic properties of valuations.
**Definition 1.1**.: Let \(K\) be a commutative algebra (or a field) over \(\Bbbk\). For parts (3)-(5), let \(K\) be a Poisson algebra (or a Poisson field).
1. A _discrete valuation_ or simply _valuation_ on \(K\) is a map \[\nu:K\to\mathbb{Z}\cup\{\infty\}\] which satisfies the following properties: for all \(a,b\in K\), (a) \(\nu(a)=\infty\) if and only if \(a=0\), (b) \(\nu(a)=0\) for all \(a\in\Bbbk^{\times}:=\Bbbk\setminus\{0\}\), (c) \(\nu(ab)=\nu(a)+\nu(b)\) (assuming \(n+\infty=\infty\) when \(n\in\mathbb{Z}\cup\{\infty\}\)), (d) \(\nu(a+b)\geq\min\{\nu(a),\nu(b)\}\) (with equality if \(\nu(a)\neq\nu(b)\)).
2. A valuation \(\nu\) is called _trivial_ if \(\nu(a)=0\) for all \(a\in K\setminus\{0\}\). Otherwise, it is called _nontrivial_.
3. Let \(w\) be a given integer. A valuation \(\nu\) on \(K\) is called a _\(w\)-valuation_ if (e) \(\nu(\{a,b\})\geq\nu(a)+\nu(b)-w\) for all \(a,b\in K\). Any \(0\)-valuation is just a _Poisson valuation_ as given in Definition 0.1. When \(\nu\) is a \(w\)-valuation, we say \(w\) is the _weight_ of \(\nu\). Let \(\mathcal{V}_{w}(K)\) be the set of nontrivial \(w\)-valuations on \(K\).
4. A \(w\)-valuation \(\nu\) on \(K\) is called _classical_ if \(\nu(\{a,b\})>\nu(a)+\nu(b)-w\) for all \(a,b\in K\setminus\{0\}\). Otherwise, it is called _nonclassical_.
5. Let \(\nu_{1}\) (resp. \(\nu_{2}\)) be a \(w_{1}\)-valuation (resp. \(w_{2}\)-valuation) on \(K\). We say \(\nu_{1}\) and \(\nu_{2}\) are _equivalent_ if there are positive integers \(a,b\) such that \(a\nu_{1}=b\nu_{2}\). In this case, we write \(\nu_{1}\sim\nu_{2}\). Otherwise we say \(\nu_{1}\) and \(\nu_{2}\) are _nonequivalent_.
In this paper, we will only study discrete valuations. Thus, the term "discrete" will be omitted throughout this paper. Note that our definition of equivalent valuations differs from the one in [Va, Proposition 1.5]. By convention, a \(w\)-valuation is defined on a Poisson algebra, while a valuation is defined on a commutative algebra without considering the Poisson structure. Although \(0\)-valuations play an important role in this paper, \(w\)-valuations with \(w\neq 0\) also provide useful information. For instance, there are Poisson fields such that (a) there is no nontrivial \(0\)-valuation [Lemma 4.8(3)] and (b) there are nontrivial Poisson \(w\)-valuations for some \(w>0\) [Lemmas 3.10 and 5.3]. Similar to Definition 1.1(3), we define
(E1.1.1) \[\mathcal{V}_{cw}(K):=\text{the set of nontrivial classical $w$-valuations on $K$}\]
and
(E1.1.2) \[\mathcal{V}_{ncw}(K):=\text{the set of nontrivial nonclassical $w$-valuations on $K$.}\]
It is clear that \(\mathcal{V}_{w}(K)=\mathcal{V}_{cw}(K)\sqcup\mathcal{V}_{ncw}(K)\) for all \(w\). Further, we define
(E1.1.3) \[w(K):=\inf\{w\mid\mathcal{V}_{w}(K)\neq\emptyset\}.\]
By Remark 4.4(2), \(w(K)\) is either \(-\infty\), \(0\) or \(1\). Similarly, one can define \(cw(K)\) and \(ncw(K)\).
When \(K\) is a field with a valuation \(\nu\), the _valuation ring_ \(D_{\nu}(K):=\{a\in K\mid\nu(a)\geq 0\}\) is local with maximal ideal \(\mathfrak{m}_{\nu}(K):=\{a\in K\mid\nu(a)\geq 1\}\). The corresponding _residue field_ is \(R_{\nu}(K):=D_{\nu}(K)/\mathfrak{m}_{\nu}(K)\). If \(\nu\) is nontrivial, then the transcendence degree of \(R_{\nu}(K)\) is strictly less than that of \(K\). When \(\nu\) is a \(0\)-valuation, \(R_{\nu}(K)\) is a Poisson field. We refer to [KL] for basic definitions and properties related to Gelfand-Kirillov dimension (GKdim or GK-dimension). Note that for any affine commutative algebra, the GK-dimension always equals the Krull dimension.
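For instance, let \(K=\Bbbk(x,y)\) and let \(\nu\) be the \(x\)-adic valuation (the order of vanishing along \(x=0\)). Then \(D_{\nu}(K)=\{p/q\mid p,q\in\Bbbk[x,y],\ x\nmid q\}\), \(\mathfrak{m}_{\nu}(K)=xD_{\nu}(K)\), and \(R_{\nu}(K)\cong\Bbbk(y)\), whose transcendence degree is indeed one less than that of \(K\).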
**Definition 1.2**.: Let \(\nu\) be a \(w\)-valuation on a Poisson field \(K\). We say \(\nu\) is _degenerate_ if \(\operatorname{GKdim}R_{\nu}(K)\leq\operatorname{GKdim}K-2\). Otherwise, we say \(\nu\) is _nondegenerate_.
The following lemma may be well-known.
**Lemma 1.3**.: _Suppose \(\Bbbk\) is algebraically closed. Let \(A\) be a \(\mathbb{Z}\)-graded domain. In parts \((1)\) and \((2)\) we suppose that \(A_{i}\neq 0\) and \(A_{-j}\neq 0\) for some \(i,j>0\)._
1. _Suppose_ \(A_{k}\) _is nonzero and finite-dimensional for some_ \(k\)_. Then_ \(A\cong\Bbbk[s^{\pm 1}]\) _for some nonzero homogeneous element_ \(s\in A\) _of positive degree._
2. _Suppose_ \(\operatorname{GKdim}A=1\)_. Then_ \(A\cong\Bbbk[s^{\pm 1}]\) _for some nonzero homogeneous element_ \(s\in A\) _of positive degree._
3. _Suppose_ \(\operatorname{GKdim}A=1\)_. If_ \(A\) _is_ \(\mathbb{N}\)_-graded with_ \(A_{i}\neq 0\) _for some_ \(i>0\)_, then_ \(A\subseteq\Bbbk[s]\) _for some nonzero homogeneous element_ \(s\in A\) _of positive degree._
Proof.: (1) Let \(0\neq z\in A_{k}\). Since \(A\) is a domain, \(A_{0}z\subseteq A_{k}\) implies that
\[\dim_{\Bbbk}A_{0}=\dim_{\Bbbk}A_{0}z\leq\dim_{\Bbbk}A_{k}<\infty.\]
Then \(A_{0}\) is an algebraic field extension of the base field \(\Bbbk\). Since \(\Bbbk\) is algebraically closed, \(A_{0}=\Bbbk\).
Let \(0\neq x\in A_{i}\) and \(0\neq y\in A_{-j}\). Then \(0\neq x^{j}y^{i}\in A_{0}\). So \(x^{j}y^{i}\in\Bbbk^{\times}\) is invertible, which implies that \(x\) (and \(y\)) is invertible. Consequently, every nonzero homogeneous element is invertible. Thus, \(A\) is a graded division algebra. Since \(A_{0}=\Bbbk\), \(A\) must be \(\Bbbk[s^{\pm 1}]\) for some nonzero homogeneous element \(s\in A\) of positive degree.
(2,3) The proofs are similar to the proof of part (1) and omitted.
**Lemma 1.4**.: _Let \(K\subseteq Q\) be two Poisson fields with the same finite GK-dimension. Let \(w\) be an integer._
1. _There is a natural map induced by restriction_ \[\mathcal{V}_{w}(Q)\to\mathcal{V}_{w}(K).\] _As a consequence, if_ \(\mathcal{V}_{w}(Q)\neq\emptyset\)_, then_ \(\mathcal{V}_{w}(K)\neq\emptyset\)_. This also implies_ \(w(K)\leq w(Q)\)
_._
2. \[\mathcal{V}_{w-1}(K)\subseteq\mathcal{V}_{cw}(K)\subseteq\mathcal{V}_{w}(K).\] _As a consequence, if_ \(\mathcal{V}_{w}(K)=\mathcal{V}_{ncw}(K)\)_, then_ \(\mathcal{V}_{w-1}(K)=\emptyset\)_._
3. _Let_ \(d\) _be a positive integer. Then the assignment_ \(\nu\to d\nu\) _defines an injective map_ \(\mathcal{V}_{w}(K)\to\mathcal{V}_{dw}(K)\)_. As a consequence, if_ \(\mathcal{V}_{w}(K)\neq\emptyset\) _for some_ \(w<0\)_, then_ \(w(K)=-\infty\)_._
4. _Part_ (3) _holds for_ \(\mathcal{V}_{cw}(K)\) _and_ \(\mathcal{V}_{ncw}(K)\)_._
Proof.: (1) Let \(\nu\in\mathcal{V}_{w}(Q)\) and \(\phi:=\nu\mid_{K}\). It is clear that \(\phi\) is a \(w\)-valuation on \(K\). It remains to show that \(\phi\) is nontrivial. Suppose to the contrary that \(\phi\) is trivial, that is, \(\phi(f)=0\) for all \(0\neq f\in K\). We claim that then \(\nu(q)=0\) for every \(0\neq q\in Q\). If not, we may assume \(\nu(q)<0\) (after replacing \(q\) by \(q^{-1}\) if necessary). Since \(K\) and \(Q\) have the same finite GK-dimension, there are \(a_{0},\cdots,a_{n}\in K\) (for some \(n\geq 0\)) with \(a_{0}\neq 0\) such that \(q^{n+1}=a_{0}+a_{1}q+\cdots+a_{n}q^{n}\). Then \((n+1)\nu(q)\geq\min_{i=0}^{n}\{\nu(a_{i}q^{i})\}\geq n\nu(q)\), which contradicts the assumption \(\nu(q)<0\). This proves the claim, so \(\nu\) is trivial, contradicting the hypothesis. Hence \(\phi\) is nontrivial, which is the main assertion. The consequences are clear.
(2,3,4) The proofs are easy and omitted.
**Remark 1.5**.: Let \(K\) be a Poisson field.
1. We can use Lemma 1.4(2,3) to generate classical Poisson valuations by upgrading the weight of one \(w\)-valuation. Therefore, it is reasonable to remove these classical valuations and focus solely on nonclassical ones up to equivalence. To achieve this, we need to understand the "valuation lattice" defined as \[\mathcal{L}(K):=\frac{\bigcup_{w}\mathcal{V}_{ncw}(K)}{\sim}.\] An \(\alpha\)-type invariant of \(K\) is any invariant defined using the (\(w\)-)valuations on \(K\). Hence, \(\mathcal{L}(K)\) is an \(\alpha\)-type invariant of \(K\).
2. We can define other \(\alpha\)-type invariants. For example, let (E1.5.1) \[\alpha_{w}(K):=\#\left\{\frac{\mathcal{V}_{w}(K)}{\sim}\right\},\] which denotes the number (or cardinality) of all nontrivial \(w\)-valuations \(\nu\) on \(K\) where there is no \(w^{\prime}\)-valuation \(\nu^{\prime}\) on \(K\) such that \(\nu^{\prime}=a\nu\) for some positive rational number \(a<1\). Computing \(\alpha_{w}(K)\) for all \(w\) would be interesting. One could also define \(\alpha_{cw}(K)\) (resp. \(\alpha_{ncw}(K)\)) by replacing \(\mathcal{V}_{w}(K)\) with \(\mathcal{V}_{cw}(K)\)- see (E1.1.1) (resp. \(\mathcal{V}_{ncw}(K)\) - see (E1.1.2)) in (E1.5.1). Note that \(w(K)\) in (E1.1.3) is also an \(\alpha\)-type invariant of \(K\).
3. The depth and width defined in Definition 0.3 (or any invariants related to the arrow \(\to_{\nu}\)) are referred to as \(\beta\)-type invariants. Additional invariants will be introduced in later sections.
The following lemma will be used in the sequel.
**Lemma 1.6**.: _Suppose char \(\Bbbk=0\). Let \(K\) be a Poisson field and \(x,y\in K\)._
1. _If_ \(x\) _and_ \(y\) _are algebraically dependent, then_ \(\{x,y\}=0\)_._
2. _Suppose_ \(K\) _has GK-dimension 2 with a nontrivial Poisson bracket. If_ \(\{x,y\}=0\)_, then_ \(x\) _and_ \(y\) _are algebraically dependent._
Proof.: (1) For every \(x\in K\), let \(C_{x}(K):=\{f\in K\ |\ \{x,f\}=0\}\). Since \(\mathrm{char}\,\Bbbk=0\), \(C_{x}(K)\) is a subfield of \(K\) that is integrally closed in \(K\) (namely, \(C_{x}(K)\) is equal to its integral closure in \(K\)). Without loss of generality, we may assume that \(x\not\in\Bbbk\). Since \(x\in C_{x}(K)\) and \(y\) is integral over \(\Bbbk(x)\), \(y\) is integral over \(C_{x}(K)\). Hence \(y\in C_{x}(K)\). This means that \(\{x,y\}=0\).
(2) Suppose to the contrary that \(x\) and \(y\) are algebraically independent. Since \(\{x,y\}=0\), both \(x\) and \(y\) are in \(C_{x}(K)\). Then \(C_{x}(K)\) has a transcendence degree of at least 2. Note that \(K\) has transcendence degree 2. Since \(C_{x}(K)\) is integrally closed in \(K\), we have \(C_{x}(K)=K\). This means that \(x\) is in the center of \(K\). Similarly, \(y\) is in the center of \(K\). For every \(f\in K\), then \(x,y\in C_{f}(K)\). Since \(C_{f}(K)\) is integrally closed in \(K\), we obtain that \(C_{f}(K)=K\). Thus, \(K\) has a trivial Poisson bracket, yielding a contradiction.
An ideal \(I\) of a Poisson algebra \(P\) is called a _Poisson ideal_ if \(\{I,P\}\subseteq I\). A Poisson ideal \(I\) is called _Poisson prime_ if, for any two Poisson ideals \(J_{1},J_{2}\), \(J_{1}J_{2}\subseteq I\) implies that \(J_{1}\subseteq I\) or \(J_{2}\subseteq I\).
**Lemma 1.7**.: _[_6_, Lemma 1.1(c,d)]_ _If P is a noetherian Poisson algebra, then an ideal \(I\) of \(P\) is Poisson prime if and only if it is both Poisson and prime. Moreover, every prime ideal minimal over a Poisson ideal of \(P\) is Poisson prime._
## 2. Grading, degree, filtrations, and valuations
In this section, we establish some foundational concepts to use valuations effectively. First, we recall the notion of a Poisson \(w\)-graded algebra and a slightly weaker version of a valuation called a _filtration_ of an algebra. Throughout, let \(w\) be a given integer unless otherwise stated.
**Definition 2.1**.: A Poisson algebra \(P\) is called _Poisson \(w\)-graded_ if
1. \(P=\oplus_{i\in\mathbb{Z}}P_{i}\) is a \(\mathbb{Z}\)-graded commutative algebra, and
2. \(\{P_{i},P_{j}\}\subseteq P_{i+j-w}\) for all \(i,j\in\mathbb{Z}\).
By Construction 0.6, the Poisson algebras \(A_{\Omega}\) and \(P_{\Omega}\) are \(w\)-graded where \(w=-(\deg\Omega-3)\), if we use the original grading (i.e., the standard Adams grading with \(|x|=|y|=|z|=1\)) of \(A_{\Omega}\) and \(P_{\Omega}\).
**Definition 2.2**.: Let \(A\) be an algebra. Let \(\mathbb{F}:=\{F_{i}\ |\ i\in\mathbb{Z}\}\) be a set of \(\Bbbk\)-subspaces of \(A\).
1. We say \(\mathbb{F}\) is a _filtration_ of \(A\) if it satisfies (a) \(F_{i}\supseteq F_{i+1}\) for all \(i\in\mathbb{Z}\) and \(1\in F_{0}\setminus F_{1}\), (b) \(F_{i}F_{j}\subseteq F_{i+j}\) for all \(i,j\in\mathbb{Z}\), (c) \(\bigcap_{i\in\mathbb{Z}}F_{i}=\{0\}\), (d) \(\bigcup_{i\in\mathbb{Z}}F_{i}=A\).
2. Suppose that \(A\) is a Poisson algebra and \(\mathbb{F}\) is a filtration of \(A\). If further (e) \(\{F_{i},F_{j}\}\subseteq F_{i+j-w}\) for all \(i,j\in\mathbb{Z}\), then \(\mathbb{F}\) is called a _\(w\)-filtration_ of \(A\).
For a moment, let's set aside the Poisson structure. The _associated graded ring_ of a filtration \(\mathbb{F}\) of \(A\) is defined to be
\[\mathrm{gr}_{\mathbb{F}}\,A:=\bigoplus_{i\in\mathbb{Z}}F_{i}/F_{i+1},\]
which is a \(\mathbb{Z}\)-graded algebra. By [KL, Lemma 6.5], \(\operatorname{GKdim}\operatorname{gr}_{\mathbb{F}}A\leq\operatorname{GKdim}A\). For any element \(a\in F_{i}\), let \(\overline{a}\) denote the element \(a+F_{i+1}\) in the \(i\)th degree component \((\operatorname{gr}_{\mathbb{F}}A)_{i}:=F_{i}/F_{i+1}\). One canonical example is the following. Let \(A\) be an algebra generated by a subspace \(V\) containing \(1\). Let
\[F_{i}=\begin{cases}V^{-i}&i\leq 0,\\ 0&i>0.\end{cases}\]
Then \(\mathbb{F}:=\{F_{i}\}\) is a filtration of \(A\) satisfying conditions (a,b,c,d) in Definition 2.2(1). A slightly more general example is given in (E2.8.3). Next, we reintroduce the Poisson structure into our discussion.
**Lemma 2.3**.: _Suppose \(\mathbb{F}\) is a \(w\)-filtration of a Poisson algebra \(A\). Then \(\operatorname{gr}_{\mathbb{F}}A\) is a Poisson \(w\)-graded algebra._
Proof.: It is well-known that \(\operatorname{gr}_{\mathbb{F}}A\) is a graded algebra. The addition and multiplication are defined as follows: \(\overline{f}+\overline{g}=\overline{f+g}\) when \(\overline{f},\overline{g}\in(\operatorname{gr}_{\mathbb{F}}A)_{i}\), and \(\overline{f}\,\overline{g}=\overline{fg}\) when \(\overline{f}\in(\operatorname{gr}_{\mathbb{F}}A)_{i}\) and \(\overline{g}\in(\operatorname{gr}_{\mathbb{F}}A)_{j}\).
Next, we define the \(w\)-graded Poisson structure on \(\operatorname{gr}_{\mathbb{F}}A\). Let \(\overline{f}\) and \(\overline{g}\) be in \((\operatorname{gr}_{\mathbb{F}}A)_{i}\) and \((\operatorname{gr}_{\mathbb{F}}A)_{j}\) respectively where \(f\in F_{i}\) and \(g\in F_{j}\). By definition, \(\{f,g\}\in F_{i+j-w}\). Let \(\overline{\{f,g\}}\) be its class in \((\operatorname{gr}_{\mathbb{F}}A)_{i+j-w}\). We define the Poisson bracket on \(\operatorname{gr}_{\mathbb{F}}A\) by
\[\{\overline{f},\overline{g}\}:=\overline{\{f,g\}}\in(\operatorname{gr}_{ \mathbb{F}}A)_{i+j-w}\]
for all \(\overline{f}\in(\operatorname{gr}_{\mathbb{F}}A)_{i}\) and \(\overline{g}\in(\operatorname{gr}_{\mathbb{F}}A)_{j}\). It is easy to see that \(\{-,-\}\) is well-defined (or is independent of the choices of preimages \(f\) and \(g\)), bilinear, and antisymmetric.
We claim that \(\{-,-\}\) is a Poisson bracket. Let \(a\in F_{i},b\in F_{j},c\in F_{k}\). Then in \((\operatorname{gr}_{\mathbb{F}}A)_{i+j+k-w}\), we have
\[\{\overline{a},\overline{b}\,\overline{c}\}=\{\overline{a},\overline{bc}\}=\overline{\{a,bc\}}=\overline{\{a,b\}c+\{a,c\}b}=\overline{\{a,b\}c}+\overline{\{a,c\}b}=\overline{\{a,b\}}\,\overline{c}+\overline{\{a,c\}}\,\overline{b}=\{\overline{a},\overline{b}\}\,\overline{c}+\{\overline{a},\overline{c}\}\,\overline{b}.\]
So \(\{\overline{a},-\}\) is a derivation. Similarly, the Jacobi identity holds for \(\overline{a},\overline{b},\overline{c}\). Therefore, \(\operatorname{gr}_{\mathbb{F}}A\) is a Poisson \(w\)-graded algebra.
Given a filtration \(\mathbb{F}\), we define the notion of a _degree_ function, denoted by \(\deg\), on elements in \(A\) by
(E2.3.1) \[\deg a:=i,\ \text{if}\ a\in F_{i}\setminus F_{i+1}\quad\text{and}\quad\deg(0)=+\infty.\]
**Lemma 2.4**.: _Let \(\mathbb{F}\) be a filtration of an algebra \(A\) such that \(\operatorname{gr}_{\mathbb{F}}A\) is a domain._
1. _Then \(\deg\) satisfies the following conditions for \(a,b\in A\): (a) \(\deg(a)\in\mathbb{Z}\) for any nonzero element \(a\in A\), and \(\deg a=\infty\) if and only if \(a=0\); (b) \(\deg(c)=0\) for all \(c\in\Bbbk^{\times}\); (c) \(\deg(ab)=\deg(a)+\deg(b)\); (d) \(\deg(a+b)\geq\min\{\deg(a),\deg(b)\}\), with equality if \(\deg(a)\neq\deg(b)\). Namely, \(\deg\) is a valuation on \(A\)._
2. _Suppose_ \(A\) _is a Poisson algebra. If_ \(\mathbb{F}\) _is a_ \(w\)_-filtration of_ \(A\)_, then_ \(\deg\) _is a_ \(w\)_-valuation in the sense of Definition_ 1.1(3)_._
3. \(F_{0}(A)\) _is integrally closed in_ \(A\)_._
Proof.: (1) This follows from a routine verification.
(2) Conditions (a,b,c,d) in Definition 1.1(1) were proved in part (1). Condition (e) in Definition 1.1(3) follows from Definition 2.2(2e). The assertion follows.
(3) Let \(f\in A\) be integral over \(F_{0}(A)\). This means that there are \(a_{0},\cdots,a_{n-1}\in F_{0}(A)\) with \(n>0\), and \(\sum_{i=0}^{n}a_{i}f^{i}=0\) where \(a_{n}=1\). We claim that \(f\in F_{0}(A)\). Denote \(\nu:=\deg\). It suffices to show that \(\nu(f)\geq 0\). Suppose to the contrary that \(\nu(f)<0\). It follows from the equation \(\sum_{i=0}^{n}a_{i}f^{i}=0\) (or \(f^{n}=-\sum_{i=0}^{n-1}a_{i}f^{i}\)) that
\[n\nu(f)=\nu(f^{n})=\nu(-\sum_{i=0}^{n-1}a_{i}f^{i})\geq\min_{i=0}^{n-1}\{i\nu( f)+\nu(a_{i})\}\geq(n-1)\nu(f),\]
which implies that \(\nu(f)\geq 0\), a contradiction. Therefore \(\nu(f)\geq 0\), that is, \(f\in F_{0}(A)\), as desired.
Conversely, if we are given a valuation denoted by \(\nu\) (or more generally, a degree function satisfying (1a), (1b), (1c) and (1d) of Lemma 2.4)
\[\nu:A\to\mathbb{Z}\cup\{\infty\},\]
then we can define a filtration (associated with \(\nu\)) \(\mathbb{F}^{\nu}:=\{F_{i}^{\nu}\mid i\in\mathbb{Z}\}\) of \(A\) by
(E2.4.1) \[F_{i}^{\nu}:=\{a\in A\mid\nu(a)\geq i\}.\]
If no confusion occurs, we will delete \({}^{\nu}\) from \(\mathbb{F}^{\nu}\) and \(F_{i}^{\nu}\).
**Lemma 2.5**.: _Let \(A\) be a Poisson algebra with a valuation \(\nu\). Then \(\mathbb{F}:=\{F_{i}\}\) defined as in (E2.4.1) is a filtration of \(A\) such that \(\operatorname{gr}_{\mathbb{F}}A\) is a domain. If \(\nu\) is a \(w\)-valuation, then \(\operatorname{gr}_{\mathbb{F}}A\) is a Poisson \(w\)-graded domain._
Proof.: This is another routine verification.
A filtration \(\mathbb{F}\) is called _good_ if \(\operatorname{gr}_{\mathbb{F}}A\) is a domain. One can check immediately that there is a one-to-one correspondence between the set of valuations on \(A\) and the set of good filtrations on \(A\). The following result applies in the Poisson setting.
**Lemma 2.6**.: _For a Poisson algebra \(A\), there is a one-to-one correspondence between the set of good \(w\)-filtrations of \(A\) and the set of \(w\)-valuations on \(A\)._
Proof.: Given a good \(w\)-filtration, by Lemma 2.4(2), \(\deg(=:\nu)\) defined in (E2.3.1) is a \(w\)-valuation on \(A\). Conversely, given a \(w\)-valuation \(\nu\), the filtration \(\mathbb{F}\) defined in (E2.4.1) is a good filtration by Lemma 2.5. It is easy to see that \(\mathbb{F}\) is a \(w\)-filtration. Moreover, the correspondence between \(\deg\) and \(\mathbb{F}\) is one-to-one by construction.
Suppose \(\operatorname{gr}_{\mathbb{F}}A\) is a domain for some filtration \(\mathbb{F}\) of \(A\); then \(A\) is a domain as well. The corresponding degree function \(\deg(=:\nu)\) (see (E2.3.1)) satisfies all conditions in Lemma 2.4(1). Let \(Q\) be the fraction field of \(A\). Now, we define a _degree_ on \(Q\) by
(E2.6.1) \[\deg_{Q}(ab^{-1}):=\deg a-\deg b\]
for \(a\in A\) and \(b\in A\setminus\{0\}\). It is easy to check that \(\deg_{Q}\) is well-defined (or only depends on the equivalence class of \(ab^{-1}\)).
If \(A\) is a graded domain, the graded fraction ring of \(A\) is denoted by \(Q_{gr}(A)\).
**Lemma 2.7**.: _Retain the above notations. Suppose \(A\) is a Poisson domain and \(\mathbb{F}\) is a good \(w\)-filtration of \(A\)._
1. \(\deg_{Q}\) _is a_ \(w\)_-valuation on_ \(Q\) _and its associated filtration_ \(\mathbb{F}(Q)\) _is a good_ \(w\)_-filtration of_ \(Q\)
2. _The associated graded ring_ \(\operatorname{gr}_{\mathbb{F}(Q)}Q\) _is canonically isomorphic to the graded fraction ring_ \(Q_{gr}(\operatorname{gr}_{\mathbb{F}}A)\) _as Poisson_ \(w\)_-graded algebras._
Proof.: (1) By Lemma 2.6, it suffices to show that \(\deg_{Q}\) is a \(w\)-valuation on \(Q\). Recall that \(\deg\) is the degree function on \(A\) defined by (E2.3.1). It is clear that \(\deg_{Q}(a)=\deg(a)\) when \(a\in A\). Conditions (a,b,c) in Definition 1.1(1) are clear for \(\deg_{Q}\). For condition (d), let \(a,b\in Q\). We can write \(a=xy^{-1}\) and \(b=zy^{-1}\). Then
\[\deg_{Q}(a+b) =\deg_{Q}(xy^{-1}+zy^{-1})=\deg_{Q}((x+z)y^{-1})\] \[=\deg(x+z)-\deg(y)\geq\min\{\deg(x),\deg(z)\}-\deg(y)\] \[=\min\{\deg(x)-\deg(y),\deg(z)-\deg(y)\}\] \[=\min\{\deg_{Q}(a),\deg_{Q}(b)\}\]
where the " \(=\) " holds when \(\deg(x)\neq\deg(z)\), or equivalently, \(\deg_{Q}(a)\neq\deg_{Q}(b)\). Similarly, for condition (e) in Definition 1.1(3), we have
\[\deg_{Q}(\{a,b\}) =\deg_{Q}(\{xy^{-1},zy^{-1}\})\] \[=\deg_{Q}(\{x,z\}y^{-2}-zy^{-3}\{x,y\}-xy^{-3}\{y,z\})\] \[\geq\min\{\deg_{Q}(\{x,z\}y^{-2}),\deg_{Q}(zy^{-3}\{x,y\}),\deg_ {Q}(xy^{-3}\{y,z\})\}\] \[=\min\{\deg(\{x,z\})-2\deg(y),\deg(z)-3\deg(y)+\deg(\{x,y\}),\] \[\qquad\qquad\deg(x)-3\deg(y)+\deg(\{y,z\})\}\] \[\geq\deg(x)+\deg(z)-w-2\deg(y)\] \[=\deg_{Q}(a)+\deg_{Q}(b)-w.\]
Therefore, the assertion holds.
(2) For any \(x\in F_{i}(A)\setminus F_{i+1}(A)\), let \(\overline{x}\) be \(x+F_{i+1}(A)\). Similarly, for \(y\in F_{i}(Q)\setminus F_{i+1}(Q)\), let \(\overline{y}\) be \(y+F_{i+1}(Q)\). Then \(\overline{x}\to\overline{x}\) (where the second \(\overline{x}\) is in \(\operatorname{gr}_{\mathbb{F}(Q)}Q\)) for all \(x\in A\) induces a canonical inclusion \(\operatorname{gr}_{\mathbb{F}}A\subseteq\operatorname{gr}_{\mathbb{F}(Q)}Q\).
For every element \(y\in Q\), the equation \(yy^{-1}=1\) in \(Q\) implies that \(\overline{y}\overline{y^{-1}}=1\) in \(\operatorname{gr}_{\mathbb{F}(Q)}Q\). Hence \(\operatorname{gr}_{\mathbb{F}(Q)}Q\) is a graded field. As a consequence, \(Q_{gr}(\operatorname{gr}_{\mathbb{F}}A)\subseteq\operatorname{gr}_{\mathbb{F} (Q)}Q\). To prove \(Q_{gr}(\operatorname{gr}_{\mathbb{F}}A)\supseteq\operatorname{gr}_{\mathbb{F} (Q)}Q\), we let \(\overline{y}\in\operatorname{gr}_{\mathbb{F}(Q)}Q\) where \(y=ab^{-1}\) for some \(a,b\in A\). Then \(a=yb\) in \(Q\). This induces \(\overline{a}=\overline{y}\overline{b}\) in \(\operatorname{gr}_{\mathbb{F}(Q)}Q\). As a consequence, \(\overline{y}=\overline{a}\overline{b}^{-1}\). Since both \(\overline{a}\) and \(\overline{b}\) are in \(\operatorname{gr}_{\mathbb{F}}A\), \(\overline{y}\) is in \(Q_{gr}(\operatorname{gr}_{\mathbb{F}}A)\) as desired. The above natural isomorphism preserves the Poisson \(w\)-graded structures, and hence the assertion follows.
By Lemma 2.7, the induced good filtration \(\mathbb{F}(Q)\) on \(Q:=Q(A)\) is completely determined by the good filtration \(\mathbb{F}\) on \(A\), and vice versa. Combining with Lemma 2.6, one sees that a valuation \(\nu\) on \(Q\) is completely determined by its restriction on \(A\), and vice versa. If \(\mathbb{F}\) is the filtration defined by a valuation \(\nu\), we also use \(\operatorname{gr}_{\nu}(A)\) for \(\operatorname{gr}_{\mathbb{F}}(A)\).
**Lemma 2.8**.: _Let \(\mathbb{F}\) and \(\mathbb{G}\) be filtrations of \(A\). Suppose \(\mathbb{G}\) is a subfiltration of \(\mathbb{F}\), namely, \(G_{i}(A)\subseteq F_{i}(A)\) for all \(i\in\mathbb{Z}\)._
1. _There is a naturally graded algebra homomorphism_ \(\phi:\operatorname{gr}_{\mathbb{G}}A\to\operatorname{gr}_{\mathbb{F}}A\)_._
2. \(\mathbb{G}=\mathbb{F}\) _if and only if_ \(\phi\) _is injective._
Proof.: (1) We define a \(\Bbbk\)-linear map \(\phi:(\operatorname{gr}_{\mathbb{G}}A)_{i}\to(\operatorname{gr}_{\mathbb{F}}A)_ {i}\) by sending
(E2.8.1) \[\widetilde{x}:=x+G_{i+1}(A)\quad\mapsto\quad\overline{x}:=x+F_{i+1}(A)\]
for all \(x\in G_{i}(A)\). Since \(\mathbb{G}\) is a subfiltration of \(\mathbb{F}\), \(\phi\) is well-defined. It is clear that \(\phi\) is additive on homogeneous elements of the same degree. Let \(\widetilde{x}\in(\operatorname{gr}_{\mathbb{G}}A)_{i}\) and \(\widetilde{y}\in(\operatorname{gr}_{\mathbb{G}}A)_{j}\) for \(i,j\in\mathbb{Z}\). Then
\[\phi(\widetilde{x}\widetilde{y}) =\phi((x+G_{i+1}(A))(y+G_{j+1}(A)))=\phi(xy+G_{i+j+1}(A))\] \[=xy+F_{i+j+1}(A)=(x+F_{i+1}(A))(y+F_{j+1}(A))\] \[=\phi(\widetilde{x})\phi(\widetilde{y}).\]
Thus, \(\phi\) is a graded algebra homomorphism. Therefore, (E2.8.1) induces a graded algebra homomorphism as desired.
(2) One direction is clear. For the other direction, suppose that \(\mathbb{G}\neq\mathbb{F}\). Then there is some nonzero element \(f\in F_{j}(A)\setminus G_{j}(A)\) for some \(j\). Let \(i\) be the integer such that \(f\in G_{i}(A)\setminus G_{i+1}(A)\). Since \(f\not\in G_{j}(A)\) and \(f\in G_{i}(A)\), we have \(i<j\). Consider the algebra map \(\phi\) defined in part (1):
\[\phi:(\operatorname{gr}_{\mathbb{G}}(A))_{i}\to(\operatorname{gr}_{\mathbb{F} }(A))_{i}\]
which sends
\[\widetilde{f}:=f+G_{i+1}(A)\mapsto\overline{f}:=f+F_{i+1}(A).\]
Since \(i<j\) or \(i+1\leq j\), \(f\in F_{j}(A)\subseteq F_{i+1}(A)\). This means that \(\phi(\widetilde{f})=\overline{f}=0\). By the choice of \(i\), \(\widetilde{f}\neq 0\). Therefore, \(\phi\) is not injective as desired.
Now, we give a general setup for the rest of this section. Suppose that an algebra \(A\) is generated by elements \(\{x_{k}\}_{k}\) where \(k\) is in a fixed index set, and we are given a degree assignment on the generators and \(1\in A\) as follows:
(E2.8.2) \[\deg(1)=0\quad\text{and}\quad\deg(x_{k})=d_{k}\in\mathbb{Z}\quad\text{for all}\quad k.\]
Then, we can define a descending chain of vector spaces \(\mathbb{F}^{ind}:=\{F_{i}^{ind}\}_{i\in\mathbb{Z}}\) of \(A\) by
(E2.8.3) \[F_{i}^{ind}:=\text{the span of all monomials}\prod_{k}x_{k}^{n_{k}}\text{ where }\sum_{k}n_{k}d_{k}\geq i\]
with only finitely many \(n_{k}\neq 0\). It is easy to see that \(\mathbb{F}^{ind}\) satisfies Definition 2.2(b,d) and the first part of (a). It is not clear whether Definition 2.2(c) and the condition that \(1\in F_{0}^{ind}\setminus F_{1}^{ind}\) hold. However, this paper mainly considers when \(\mathbb{F}^{ind}\) is a subfiltration of another filtration \(\mathbb{F}\) of \(A\). Hence Definition 2.2(a) holds and
\[\bigcap_{i}F_{i}^{ind}(A)\subseteq\bigcap_{i}F_{i}(A)=0.\]
Therefore Definition 2.2(c) also holds, and whence \(\mathbb{F}^{ind}\) is a filtration of \(A\). In this case, \(\mathbb{F}^{ind}\) is called an _induced_ filtration determined by the degree assignment given in (E2.8.2). Note that the condition in Definition 2.2(c) holds automatically when \(A\) is a domain and \(\operatorname{GKdim}\operatorname{gr}_{\mathbb{F}^{ind}}A=\operatorname{GKdim }A<\infty\). It is clear that \(\mathbb{F}^{ind}\) is uniquely determined by the degree assignment given in (E2.8.2). Whenever we say \(\mathbb{F}^{ind}\) is an induced filtration, we always assume that it is a filtration satisfying Definition 2.2(a,b,c,d).
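To make the degree function of \(\mathbb{F}^{ind}\) concrete: since the filtration is descending, the induced degree of \(f\) is the largest \(i\) with \(f\in F_{i}^{ind}\), i.e. the minimum of \(\sum_{k}n_{k}d_{k}\) over the monomials of \(f\). Below is a minimal sketch (assuming \(A=\mathbb{Q}[x,y,z]\) with the hypothetical assignment \(d_{x}=d_{y}=d_{z}=1\), neither of which is fixed by the text):

```python
# A minimal sketch of the induced degree from (E2.8.3); the degree
# assignment d = (1, 1, 1) on A = Q[x, y, z] is a hypothetical example.
import sympy as sp

x, y, z = sp.symbols('x y z')
gens = (x, y, z)
d = {x: 1, y: 1, z: 1}

def induced_deg(f):
    # deg(f) = min over the monomials of f of sum_k n_k * d_k, and deg(0) = oo
    if f == 0:
        return sp.oo
    poly = sp.Poly(sp.expand(f), *gens)
    return min(sum(n*d[g] for n, g in zip(mon, gens)) for mon in poly.monoms())

assert induced_deg(x**2*y + z) == 1     # its monomials have degrees 3 and 1
assert induced_deg(x*y*z) == 3
assert induced_deg(sp.Integer(5)) == 0  # constants lie in F_0 \ F_1
print("induced degrees computed")
```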
**Lemma 2.9**.: _Suppose that \(A\) is generated by a set \(\{x_{k}\}_{k}\) and that we are given a degree assignment \(\deg\) as in (E2.8.2). Write \(\nu:=\deg\)._
1. _Let_ \(\mathbb{F}^{ind}\) _be the induced filtration defined by (_E2.8.3_). Then_ \(\operatorname{gr}_{\mathbb{F}^{ind}}A\) _is generated by_ \(\{\widetilde{x_{k}}\}_{k}\) _where_ \(\widetilde{x_{k}}:=x_{k}+F_{\nu(x_{k})+1}^{ind}(A)\)_._
_In parts (2, 3), let \(\mathbb{F}\) be a filtration of \(A\)._
2. _Suppose that_ \(\deg\) _is the degree function corresponding to_ \(\mathbb{F}\) _via (_E2.3.1_) and that_ \(\mathbb{F}^{ind}\) _is defined by using (_E2.8.2_)-(_E2.8.3_) based on the given generating set_ \(\{x_{k}\}_{k}\)_. Then, there is a natural graded algebra homomorphism_ (E2.9.1) \[\phi^{ind}:\operatorname{gr}_{\mathbb{F}^{ind}}A\to\operatorname{gr}_{\mathbb{ F}}A\] _determined by_ \(\phi^{ind}(\widetilde{x_{k}})=\overline{x_{k}}\) _for all_ \(k\)_, where_ \(\overline{x_{k}}:=x_{k}+F_{\nu(x_{k})+1}(A)\)_._
3. _Suppose that_ \(A\) _is a Poisson algebra and that_ \(\mathbb{F}^{ind}\) _is the induced filtration defined as (_E2.8.3_). Then_ \(\mathbb{F}^{ind}\) _is a_ \(w\)_-filtration if and only if_ (E2.9.2) \[\deg(\{x_{k},x_{l}\})\geq\deg(x_{k})+\deg(x_{l})-w\] _for all_ \(k,l\)_._
Proof.: (1) Let \(\widetilde{f}\) be any nonzero element in \((\operatorname{gr}_{\mathbb{F}^{ind}}A)_{i}\). We may assume that \(f\in F_{i}^{ind}\setminus F_{i+1}^{ind}\). Modulo \(F_{i+1}^{ind}\), we may assume that \(f\) is a linear combination of monomials \(\prod_{k}x_{k}^{n_{k}}\) with \(\sum_{k}n_{k}d_{k}=i\). Hence \(\widetilde{f}\) is a linear combination of \(\widetilde{\prod_{k}x_{k}^{n_{k}}}\) with \(\sum_{k}n_{k}d_{k}=i\) and only finitely many \(n_{k}\neq 0\). Since \(\widetilde{\prod_{k}x_{k}^{n_{k}}}=\prod_{k}\widetilde{x_{k}}^{n_{k}}\), the assertion follows.
(2) It follows from the definition that \(\mathbb{F}^{ind}\) is a subfiltration of \(\mathbb{F}\). By Lemma 2.8(1), there is a natural graded algebra homomorphism \(\phi^{ind}\) as given in (E2.9.1). By part (1), \(\operatorname{gr}_{\mathbb{F}^{ind}}\) is generated by \(\{\widetilde{x_{k}}\}_{k}\). Therefore, \(\phi^{ind}\) is completely determined by \(\phi^{ind}(\widetilde{x_{k}})=\overline{x_{k}}\) for all \(k\).
(3) One direction is clear. For the other direction, we assume that (E2.9.2) holds. Now let \(f\in A\) with degree \(a\) and \(g\in A\) with degree \(b\). Write \(f=\sum_{\mathbf{i}}c_{\mathbf{i}}x_{i_{1}}\cdots x_{i_{m}}\) where \(\sum_{s=1}^{m}\deg(x_{i_{s}})\geq a\) and \(g=\sum_{\mathbf{j}}d_{\mathbf{j}}x_{j_{1}}\cdots x_{j_{n}}\) where \(\sum_{s=1}^{n}\deg(x_{j_{s}})\geq b\). By (E2.9.2) and the Poisson structure,
\[\deg(\{x_{i_{1}}\cdots x_{i_{m}}, x_{j_{1}}\cdots x_{j_{n}}\})\] \[\geq\min_{\alpha,\beta}\{\deg(x_{i_{1}}\cdots\widehat{x_{i_{\alpha }}}\cdots x_{i_{m}}\cdot x_{j_{1}}\cdots\widehat{x_{j_{\beta}}}\cdots x_{j_{n }}\{x_{i_{\alpha}},x_{j_{\beta}}\})\}\] \[\geq\deg(x_{i_{1}}\cdots x_{i_{m}})+\deg(x_{j_{1}}\cdots x_{j_{n }})-w\] \[\geq a+b-w.\]
Then
\[\deg(\{f,g\}) =\deg(\{\sum_{\mathbf{i}}c_{\mathbf{i}}x_{i_{1}}\cdots x_{i_{m}}, \sum_{\mathbf{j}}d_{\mathbf{j}}x_{j_{1}}\cdots x_{j_{n}}\})\] \[\geq\min_{\mathbf{i},\mathbf{j}}\{\deg(\{x_{i_{1}}\cdots x_{i_{m}},x_{j_{1}}\cdots x_{j_{n}}\})\}\] \[\geq a+b-w=\deg(f)+\deg(g)-w\]
as required.
Lemma 2.9(3) offers a straightforward approach to verify whether a filtration is a \(w\)-filtration. The following is one of the well-known Poisson fields.
**Example 2.10**.: Let \(W\) be the Weyl Poisson polynomial ring \(\Bbbk[x,y]\) whose Poisson structure is determined by
(E2.10.1) \[\{x,y\}=1.\]
Let \(K_{Weyl}\) denote the Weyl Poisson field which is \(Q(W)\), see Example 0.5.
Let \(\deg(x)=1\) and \(\deg(y)=(w-1)\). Then, the induced filtration \(\mathbb{F}^{ind}\) is good. Since \(\Bbbk[x,y]\) is generated by \(x\) and \(y\) and
\[\deg(\{x,y\})=\deg(1)=0=1+(w-1)-w=\deg(x)+\deg(y)-w,\]
Lemma 2.9(3) implies that \(\mathbb{F}^{ind}\) is a good \(w\)-filtration. One can also check that \(\operatorname{gr}_{\mathbb{F}^{ind}}W\cong W\). Hence \(\mathcal{V}_{ncw}(K_{Weyl})\neq\emptyset\) and \(\alpha_{ncw}(K_{Weyl})\neq 0\) for all \(w\in\mathbb{Z}\). (In fact \(\mathcal{V}_{fw}(K_{Weyl})\neq\emptyset\) for all \(w\), see Definition 3.1(1)). As a consequence, \(\alpha_{w}(K_{Weyl})\neq 0\) for all \(w\in\mathbb{Z}\).
Next, we construct infinitely many different nontrivial nonclassical \(w\)-valuations on \(K_{Weyl}\) when \(w\leq 0\). For any \(\xi\in\Bbbk\), set \(f_{\xi}=x+y+\xi y^{2}\). Then \(\{f_{\xi},y\}=1\). Define \(\deg(f_{\xi})=1\) and \(\deg(y)=w-1\leq-1\). By the above paragraph, the induced filtration is a \(w\)-filtration, and hence the associated \(w\)-valuation, denoted by \(\nu_{\xi}\), is nontrivial and nonclassical. We claim that \(\nu_{\xi}\neq\nu_{\xi^{\prime}}\) whenever \(\xi\neq\xi^{\prime}\). This follows from the fact \(\nu_{\xi}(f_{\xi})=1\) and
\[\nu_{\xi^{\prime}}(f_{\xi})=\nu_{\xi^{\prime}}(f_{\xi^{\prime}}+(\xi-\xi^{ \prime})y^{2})=\min\{\nu_{\xi^{\prime}}(f_{\xi^{\prime}}),\nu_{\xi^{\prime}}(y ^{2})\}=\min\{1,2(w-1)\}=2(w-1).\]
If \(\Bbbk\) is uncountable, then there exist uncountably many nontrivial nonclassical \(w\)-valuations for \(K_{Weyl}\) for each fixed \(w\leq 0\), namely \(\{\nu_{\xi}\}_{\xi\in\Bbbk}\) as described in the previous paragraph.
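The two computations underlying this example are easy to verify by machine. The sketch below (assuming sympy, with \(\xi,\xi'\) symbolic) checks \(\{f_{\xi},y\}=1\), which is exactly the input to Lemma 2.9(3) for the degree assignment above, and the identity \(f_{\xi}=f_{\xi'}+(\xi-\xi')y^{2}\) that separates \(\nu_{\xi}\) from \(\nu_{\xi'}\):

```python
# A quick sketch of the computations in Example 2.10, assuming the Weyl
# bracket {x, y} = 1, i.e. {f, g} = f_x g_y - f_y g_x.
import sympy as sp

x, y, xi, xi2 = sp.symbols('x y xi xi2')

def weyl(f, g):
    return sp.expand(sp.diff(f, x)*sp.diff(g, y) - sp.diff(f, y)*sp.diff(g, x))

f_xi = x + y + xi*y**2
f_xi2 = x + y + xi2*y**2

assert weyl(x, y) == 1     # the defining relation (E2.10.1)
assert weyl(f_xi, y) == 1  # so deg({f_xi, y}) = 0 = deg(f_xi) + deg(y) - w
assert sp.expand(f_xi - f_xi2 - (xi - xi2)*y**2) == 0  # f_xi = f_xi' + (xi - xi')y^2
print("Example 2.10 computations verified")
```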
Next, we proceed to two crucial lemmas, which will have numerous applications.
**Lemma 2.11**.: _Let \(P\) be a domain of finite GK-dimension, say \(d\), and generated by \(\{x_{k}\}_{k\in S}\) for an index set \(S\). Let \(\nu\) be a valuation on \(P\) and \(\mathbb{F}\) be the filtration associated to \(\nu\). In parts (1,2) let \(\mathbb{F}^{ind}\) be the induced filtration determined by_ (E2.8.3) _and the degree assignment \(\deg(x_{k}):=\nu(x_{k})\) for all \(k\in S\). Let \(I\) be the image of \(\phi^{ind}\), or the subalgebra of \(\operatorname{gr}_{\mathbb{F}}(P)\) generated by \(\{\overline{x_{k}}\}_{k}\) where \(\overline{x_{k}}\) is \(x_{k}+F_{\nu(x_{k})+1}(P)\) for every \(k\)._
1. _Suppose (a) \(\operatorname{GKdim}I=d\) and (b) \(\operatorname{gr}_{\mathbb{F}^{ind}}(P)\) is a domain. Then \(\mathbb{F}\) agrees with the filtration \(\mathbb{F}^{ind}\). As a consequence, there are natural isomorphisms of graded algebras \(\operatorname{gr}_{\mathbb{F}^{ind}}(P)\cong I\cong\operatorname{gr}_{\mathbb{F}}(P)\)._
2. _Suppose (a) all \(\{\nu(x_{k})\}_{k}\) are zero and (b) \(\operatorname{GKdim}I=d\). Then \(\nu\) is trivial._
3. _Suppose (a) all \(\{\nu(x_{k})\}_{k}\) are nonnegative, (b) \(\omega\) is a nonzero element in \(P\) such that the factor ring \(P/(\omega)\) is a domain of GK-dimension \(d-1\), and (c) \(\operatorname{GKdim}F_{0}(P)/F_{1}(P)=d-1\) and \(\omega\in F_{1}(P)\). Then \(\mathbb{F}\) agrees with the new induced filtration \(\mathbb{F}^{new}\) determined by the degree assignment on the generating set \(\{x_{k}\}_{k\in S}\cup\{\omega\}\):_ \[\{\deg_{new}(x_{k}):=\nu(x_{k})\}_{k\in S}\cup\{\deg_{new}(\omega):=\nu(\omega)>0\}\] _and_ \(\operatorname{gr}_{\mathbb{F}}P\cong(P/(\omega))[\overline{\omega}]\)_._
Proof.: (1) By definition, \(\mathbb{F}^{ind}\) is a subfiltration of \(\mathbb{F}\). By Lemma 2.9(2), there is an algebra homomorphism \(\phi^{ind}:\operatorname{gr}_{\mathbb{F}^{ind}}P\to\operatorname{gr}_{ \mathbb{F}}P\). By Lemma 2.8(2) it suffices to show that \(\phi^{ind}\) is injective. By definition, \(I\) is the image of the map \(\phi^{ind}\), so we have a surjective graded algebra homomorphism
\[\phi^{ind}:\operatorname{gr}_{\mathbb{F}^{ind}}P\to I.\]
Suppose to the contrary that \(\phi^{ind}\) is not injective. Since \(\operatorname{gr}_{\mathbb{F}^{ind}}P\) is a domain, we obtain that
\[\operatorname{GKdim}I<\operatorname{GKdim}\operatorname{gr}_{\mathbb{F}^{ind}}P \leq\operatorname{GKdim}P=d\]
which contradicts (1a). Therefore \(\phi^{ind}\) is injective. The consequence is clear.
(2) Since all \(\nu(x_{k})=0\), \(F_{0}(P)=P\) and \(I\) is a subalgebra of \((\operatorname{gr}_{\mathbb{F}}P)_{0}=F_{0}(P)/F_{1}(P)\). Since \(\operatorname{GKdim}I=d=\operatorname{GKdim}P=\operatorname{GKdim}F_{0}(P)\), we have \(F_{1}(P)=0\). Thus, the assertion follows.
(3) We claim that there is a positive integer \(h\) such that
\[F_{i}(P)=\begin{cases}P&i\leq 0,\\ \omega^{\lceil i/h\rceil}P&i>0.\end{cases}\]
Since \(\nu(x_{k})\geq 0\) for all \(k\), \(F_{0}(P)=P\). Consequently, \(F_{i}(P)=P\) for all \(i\leq 0\). For \(i>0\), we use induction. By (3c), \(F_{0}(P)/F_{1}(P)\) is a homomorphic image of \(P/(\omega)\). Since \(\operatorname{GKdim}F_{0}(P)/F_{1}(P)=d-1=\operatorname{GKdim}P/(\omega)\) and \(P/(\omega)\) is a domain, the map \(P/(\omega)\to F_{0}(P)/F_{1}(P)\) is an isomorphism. Thus \(F_{1}(P)=(\omega)=\omega P\). Hence, the claim holds for \(i=1\).
Now let \(h\) be the largest positive integer such that \(F_{h}(P)=F_{1}(P)\). Then, the claim holds for all \(1\leq i\leq h\). Next, we consider \(i=h+1\). By definition \(F_{h}(P)\supsetneq F_{h+1}(P)\) and \(\omega\not\in F_{h+1}(P)\). Consequently, \(\nu(\omega)=h\). For every element \(f\in F_{h+j}(P)\) where \(j\geq 1\), we can write it as \(\omega g\) since \(F_{h+j}(P)\subseteq F_{h}(P)=\omega P\). Then
\[\nu(g)=\nu(\omega g)-\nu(\omega)\geq h+j-h=j\]
or equivalently \(g\in F_{j}(P)\). Conversely, if \(g\in F_{j}(P)\), then it is clear that \(\omega g\in F_{h+j}(P)\). This means that \(F_{h+j}(P)=\omega F_{j}(P)\) for all \(j>0\). In particular \(F_{h+1}(P)=\omega F_{1}(P)=\omega^{2}P\). So the claim holds for \(i=h+1\). The claim for general \(i=h+j\) with \(j>0\) follows from the induction and the equation \(F_{h+j}(P)=\omega F_{j}(P)\). The claim easily implies the main assertion in part (3).
**Lemma 2.12**.: _Let \(P\) be a Poisson domain generated by \(\{x_{k}\}_{k\in S}\) for an index set \(S\). Let \(\nu\) be a \(w\)-valuation on \(P\) and \(\mathbb{F}\) be the filtration associated to \(\nu\)._
1. _Let_ \(\mathbb{F}^{ind}\) _be the induced filtration of_ \(P\) _determined by_ \(\deg(x_{k}):=\nu(x_{k})\) _for all_ \(k\)_. If_ \(\mathbb{F}^{ind}\) _is a_ \(w\)_-filtration, then_ \(\phi^{ind}:\operatorname{gr}_{\mathbb{F}^{ind}}P\to\operatorname{gr}_{\mathbb{F }}P\) _is a Poisson algebra homomorphism._
2. _Suppose that_ \(P\) _becomes a Poisson_ \(w\)_-graded algebra generated by homogeneous elements_ \(\{x_{k}\}_{k\in S}\) _with the degree assignment_ \(\deg(x_{k}):=\nu(x_{k})\) _for all_ \(k\in S\)_. Then_ \(\phi^{ind}:P\to\operatorname{gr}_{\mathbb{F}}P\) _is a Poisson algebra homomorphism._
Proof.: (1) Let \(\phi\) be \(\phi^{ind}\). By Lemma 2.9(2), \(\phi\) is a graded algebra homomorphism. Since both \(\operatorname{gr}_{\mathbb{F}^{ind}}P\) and \(\operatorname{gr}_{\mathbb{F}}P\) are Poisson \(w\)-graded algebras by Lemma 2.3, it remains to prove that \(\phi\) preserves the Poisson bracket. Since \(\operatorname{gr}_{\mathbb{F}^{ind}}P\) is generated by \(\widetilde{x_{k}}\) where \(\widetilde{x_{k}}=x_{k}+F^{ind}_{\nu(x_{k})+1}\) [Lemma 2.9(1)], it suffices to show that
(E2.12.1) \[\phi(\{\widetilde{x_{k}},\widetilde{x_{l}}\})=\{\phi(\widetilde{x_{k}}),\phi( \widetilde{x_{l}})\}\]
for all \(k,l\). Write \(n_{k}=\nu(x_{k})=\deg(x_{k})\) for all \(k\). Then \(\phi(\widetilde{x_{k}})=\overline{x_{k}}=x_{k}+F_{n_{k}+1}\) by definition. Suppose that, for all \(k,l\in S\),
\[\{x_{k},x_{l}\}=\sum_{\mathbf{i}}c_{\mathbf{i}}^{k,l}x_{i_{1}}\cdots x_{i_{s}}+\sum_{\mathbf{j}}d_{\mathbf{j}}^{k,l}x_{j_{1}}\cdots x_{j_{t}}\]
where \(c_{\mathbf{i}}^{k,l}\neq 0\) only when \(\sum_{\alpha=1}^{s}\deg(x_{i_{\alpha}})=\deg(x_{k})+\deg(x_{l})-w\) and where \(\sum_{\beta=1}^{t}\deg(x_{j_{\beta}})>\deg(x_{k})+\deg(x_{l})-w\). As a consequence,
\[\widetilde{\{x_{k},x_{l}\}}=\widetilde{\sum c_{\mathbf{i}}^{k,l}x_{i_{1}} \cdots x_{i_{s}}}=\sum c_{\mathbf{i}}^{k,l}\widetilde{x_{i_{1}}}\cdots\widetilde {x_{i_{s}}}.\]
Now we calculate to get
\[\{\phi(\widetilde{x_{k}}),\phi(\widetilde{x_{l}})\} =\{x_{k}+F_{n_{k}+1},x_{l}+F_{n_{l}+1}\}=\{x_{k},x_{l}\}+F_{n_{k} +n_{l}-w+1}\] \[=\sum c_{\mathbf{i}}^{k,l}x_{i_{1}}\cdots x_{i_{s}}+\sum d_{ \mathbf{j}}^{k,l}x_{j_{1}}\cdots x_{j_{t}}+F_{n_{k}+n_{l}-w+1}\] \[=\sum c_{\mathbf{i}}^{k,l}x_{i_{1}}\cdots x_{i_{s}}+F_{n_{k}+n_{l }-w+1}\] \[=\sum c_{\mathbf{i}}^{k,l}(x_{i_{1}}+F_{\nu(x_{i_{1}})+1})\cdots( x_{i_{s}}+F_{\nu(x_{i_{s}})+1})\] \[=\sum c_{\mathbf{i}}^{k,l}\phi(\widetilde{x_{i_{1}}})\cdots\phi( \widetilde{x_{i_{s}}})\] \[=\phi(\sum c_{\mathbf{i}}^{k,l}\widetilde{x_{i_{1}}}\cdots \widetilde{x_{i_{s}}})\] \[=\phi(\widetilde{\{x_{k},x_{l}\}})=\phi(\{\widetilde{x_{k}}, \widetilde{x_{l}}\})\]
as required.
(2) This follows from part (1) since \(P\) is canonically isomorphic to \(\operatorname{gr}_{\mathbb{F}^{ind}}P\) under the hypotheses.
## 3. Faithful valuations and examples
In this section, we will work out the complete set of \(0\)-valuations for several Poisson fields. Some of these \(0\)-valuations are unique in the following sense.
**Definition 3.1**.: Let \(K\) be a Poisson field.
1. A \(w\)-valuation \(\nu\) on \(K\) is called a _faithful \(w\)-valuation_ if the following hold: (a) the image of \(\nu\) is \(\mathbb{Z}\cup\{\infty\}\); (b) \(\nu\) is nondegenerate; (c) \(\nu\) is nonclassical.
2. Let \(\mathcal{V}_{fw}(K)\) be the set of all faithful \(w\)-valuations on \(K\).
3. A \(w\)-valuation \(\nu\) on a Poisson domain \(A\) is called _faithful_ if it becomes a faithful \(w\)-valuation when extended to the Poisson fraction field \(Q(A)\).
Note that condition (b) in Definition 3.1(1) says the transcendence degree of the residue field of \(\nu\) is one less than the transcendence degree of \(K\), and condition (c) means the induced Poisson bracket on the associated graded algebra \(\operatorname{gr}_{\nu}(K)\) is nonzero. In particular, a _faithful valuation_ introduced in Definition 0.2 can be understood as a faithful \(0\)-valuation.
Let \(P\) be a Poisson \(w\)-graded domain. We fix this \(\mathbb{Z}\)-graded structure on \(P\) and call it an _Adams grading_. We define the _Adams\({}^{Id}\) filtration_ of \(P\), denoted by \(\mathbb{F}^{Id}\), by
(E3.1.1) \[F_{i}^{Id}(P):=\oplus_{n\geq i}P_{n}\quad\text{for all }i\in\mathbb{Z}.\]
It is clear that \(\mathbb{F}^{Id}\) is a Poisson \(w\)-filtration such that the associated graded ring \(\operatorname{gr}_{\mathbb{F}^{Id}}P\) is canonically isomorphic to \(P\) as Poisson \(w\)-graded algebras. The _Adams\({}^{-Id}\) filtration_ of \(P\), denoted by \(\mathbb{F}^{-Id}\), is defined by
(E3.1.2) \[F_{i}^{-Id}(P):=\oplus_{n\leq-i}P_{n}\quad\text{for all }i\in\mathbb{Z}.\]
It is clear that \(\mathbb{F}^{-Id}\) is a Poisson \((-w)\)-filtration such that the associated graded ring \(\operatorname{gr}_{\mathbb{F}^{-Id}}P\) is isomorphic to \(P\) by flipping the grading \((i\leftrightarrow-i)\). Let \(\nu^{Id}\) (resp. \(\nu^{-Id}\)) denote the \(w\)-valuation (resp. \((-w)\)-valuation) of \(P\) associated to \(\mathbb{F}^{Id}\) (resp. \(\mathbb{F}^{-Id}\)). We call both \(\nu^{Id}\) and \(\nu^{-Id}\) the _Adams valuations_ of \(P\). The following lemma is easy.
**Lemma 3.2**.: _Let \(P\) be a \(\mathbb{Z}\)-graded domain with \(P\neq P_{0}\)._
1. \(P\) _has at least two good filtrations:_ \(\mathbb{F}^{Id}\) _and_ \(\mathbb{F}^{-Id}\)_. In both cases, the associated graded rings are isomorphic to_ \(P\)_. If_ \(P\) _is a Poisson_ \(w\)_-graded domain, then the corresponding_ \(\nu^{Id}\)__\((\)_resp._ \(\nu^{-Id})\) _is a_ \(w\)_-valuation_ \((\)_resp._ \((-w)\)_-valuation_\()\)_._
2. _If_ \(P\) _is a Poisson_ \(0\)_-graded algebra, then the Poisson field_ \(Q(P)\) _has at least two distinct_ \(0\)_-valuations_ \(\nu^{Id}\) _and_ \(\nu^{-Id}\)_._
3. _Suppose an algebra_ \(A\) _has a filtration_ \(\mathbb{F}^{c}\) _such that_ \(\operatorname{gr}_{\mathbb{F}^{c}}A\) _is canonically isomorphic to_ \(P\) _as_ \(\mathbb{Z}\)_-graded algebras. Then_ \(A\) _has a valuation associated to_ \(\mathbb{F}^{c}\)_, denoted by_ \(\nu^{c}\)_. If_ \(A\) _is further a Poisson algebra and_ \(\mathbb{F}^{c}\) _is a_ \(w\)_-filtration, then_ \(\nu^{c}\) _is a_ \(w\)_-valuation._
Proof.: The proof, which is straightforward, has been omitted.
We will see that the Weyl Poisson field \(K_{Weyl}\) [Example 0.5] plays a special role in the study of \(2\)-valuations. To achieve our objectives, we introduce a few temporary concepts.
**Definition 3.3**.: Let \(K\) be a Poisson field and \(\nu\) a \(w\)-valuation on \(K\).
1. We say \(\nu\) is _quasi-Adams_ if \(Q(\operatorname{gr}_{\nu}K)\cong K\).
2. \(K\) is called \(w\)_-quasi-Adams_ if (a) \(K\) admits a nontrivial \(w\)-valuation and (b) every \(w\)-valuation is quasi-Adams.
3. We say \(\nu\) is _Weyl_ if \(Q(\operatorname{gr}_{\nu}K)\cong K_{Weyl}\).
4. We say \(K\) is \(w\)_-Weyl_ if (a) there is a faithful \(w\)-valuation of \(K\) and (b) every faithful \(w\)-valuation on \(K\) is Weyl.
We recall some notations introduced in Construction 0.6 together with a review of Adams grading (internal grading). Let \(\Bbbk[x,y,z]\) be a polynomial ring that has an internal grading, called _Adams grading_. The generators \(x\), \(y\), \(z\) have Adams grading \(|x|\), \(|y|\) and \(|z|\) being integers. If \(|x|=|y|=|z|=1\), we say \(\Bbbk[x,y,z]\) has _standard Adams grading_. Let \(\Omega\) be a homogeneous element of \(\Bbbk[x,y,z]\) with respect to the Adams grading. The Poisson structure on \(A_{\Omega}:=\Bbbk[x,y,z]\) is determined by \(\{x,y\}=\frac{\partial\Omega}{\partial z}\), \(\{y,z\}=\frac{\partial\Omega}{\partial x}\), and \(\{z,x\}=\frac{\partial\Omega}{\partial y}\). The above is equivalent to (E0.6.1). In this case, \(\Omega\) is called the _potential_ of \(A_{\Omega}\). Since \(\Omega\) is in the Poisson center of \(A_{\Omega}\), we have a Poisson factor ring \(P_{\Omega-\xi}:=A_{\Omega}/(\Omega-\xi)\) for all \(\xi\in\Bbbk\). A special case is when \(|\Omega|=|x|+|y|+|z|\), where both \(A_{\Omega}\) and \(P_{\Omega}\) are Poisson \(0\)-graded algebras.
**Definition 3.4**.: A homogeneous element \(\Omega\in A:=\Bbbk[x,y,z]\) is said to _have an isolated singularity at the origin_ (or simply _have an isolated singularity_) if \(A_{sing}:=A/(\Omega_{x},\Omega_{y},\Omega_{z})\) is finite-dimensional over \(\Bbbk\). In this case, we say \(\Omega\) is an _i.s. potential_.
A generic homogeneous element \(\Omega\) of degree \(n>1\) has an isolated singularity. For instance, each \(\Omega=x^{n}+y^{n}+z^{n}\)\((n>1)\) is an i.s. potential.
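Whether a given \(\Omega\) is an i.s. potential can be tested with a Gröbner basis: \(A_{sing}\) is finite-dimensional exactly when the Jacobian ideal \((\Omega_{x},\Omega_{y},\Omega_{z})\) is zero-dimensional. Here is a sketch over \(\Bbbk=\mathbb{Q}\) (the sample values below, including the degenerate case \(\lambda=-3\) with \(\lambda^{3}=-3^{3}\), are illustrative choices):

```python
# A sketch of Definition 3.4 over k = Q: Omega is an i.s. potential iff
# the Jacobian ideal (Omega_x, Omega_y, Omega_z) is zero-dimensional.
import sympy as sp

x, y, z = sp.symbols('x y z')

def is_is_potential(Omega):
    J = [sp.diff(Omega, v) for v in (x, y, z)]
    return sp.groebner(J, x, y, z, order='grevlex').is_zero_dimensional

assert is_is_potential(x**5 + y**5 + z**5)                # Fermat potential, deg 5
assert is_is_potential(x**3 + y**3 + z**3 + x*y*z)        # lambda = 1, 1 != -27
assert not is_is_potential(x**3 + y**3 + z**3 - 3*x*y*z)  # lambda = -3: singular line x=y=z
print("isolated-singularity checks pass")
```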
Note that the definition of having isolated singularities uses the graded generating set \(\{x,y,z\}\). However, it is well-known that \(A_{sing}\) is independent of the choices
of generating sets \(\{x,y,z\}\). Consequently, the concept of \(\Omega\) having isolated singularities is independent of the choices of generating sets \(\{x,y,z\}\). We will focus on potentials with isolated singularities for the rest of the paper.
Henceforth, for the sake of simplicity, we will impose
**Hypothesis 3.5**.: _The base field \(\Bbbk\) is of characteristic zero._
Two filtrations are _equivalent_ if their associated valuations are equivalent, see Definition 1.1(5). In the next lemma, we use \(\{x_{1},x_{2},x_{3}\}\) for any graded generating set of \(A:=\Bbbk[x,y,z]\), which is not necessarily \(\{x,y,z\}\).
**Lemma 3.6**.: _Let \(P\) be \(P_{\Omega}\) as defined in Construction 0.6. Let \(\{x_{1},x_{2},x_{3}\}\) be a graded generating set of \(A\)\((\)resp. \(P)\). Suppose that \(\nu\) is a \(w\)-valuation on \(P\) and that \(\mathbb{F}\) is the associated \(w\)-filtration of \(P\). Let \(I\) be the subalgebra of \(\operatorname{gr}_{\mathbb{F}}P\) generated by \(\overline{x_{1}},\overline{x_{2}},\overline{x_{3}}\)._
1. _Suppose that_ \(\Omega\) _is an i.s. potential of degree_ \(|x_{1}|+|x_{2}|+|x_{3}|\) _and_ \(w=0\)_. If_ \(F_{0}(P)=P\)_, then either_ \(\mathbb{F}\) _is a trivial filtration or_ \(F_{0}/F_{1}\cong\Bbbk\) _and_ \(\nu(x_{1}),\nu(x_{2}),\nu(x_{3})\) _are all positive. As a consequence, if_ \(\nu(x_{1})=\nu(x_{2})=\nu(x_{3})=0\)_, then_ \(\mathbb{F}\) _is trivial._
2. _Suppose_ \((\nu(x_{1}),\nu(x_{2}),\nu(x_{3}))=d(|x_{1}|,|x_{2}|,|x_{3}|)\) _for some positive integer_ \(d\)_. If_ \(\operatorname{GKdim}I=2\)_, then_ \(\mathbb{F}\) _is equivalent to the Adams_\({}^{Id}\) _filtration_ \(\mathbb{F}^{Id}\)_._
3. _Suppose_ \((\nu(x_{1}),\nu(x_{2}),\nu(x_{3}))=d(|x_{1}|,|x_{2}|,|x_{3}|)\) _for some negative integer_ \(d\)_. If_ \(\operatorname{GKdim}I=2\)_, then_ \(\mathbb{F}\) _is equivalent to the Adams_\({}^{-Id}\) _filtration_ \(\mathbb{F}^{-Id}\)_. As a consequence,_ \(F_{0}(P)=\Bbbk\)_._
Proof.: (1) If \(\mathbb{F}\) is nontrivial, then \(F_{1}(P)\neq 0\). Since \(\mathbb{F}\) is a Poisson \(0\)-filtration, \(F_{1}(P)\) is a Poisson prime ideal of \(P\). Hence, \(P/F_{1}(P)\) is an affine Poisson domain of GK-dimension \(\leq 1\); any two of its elements are algebraically dependent, so Lemma 1.6(1) implies that \(P/F_{1}(P)\) has zero Poisson bracket. Then \(\{x,y\}=\Omega_{z}\), \(\{y,z\}=\Omega_{x}\), \(\{z,x\}=\Omega_{y}\in\{P,P\}\subseteq F_{1}(P)\). As a consequence, \(P/F_{1}(P)\) is a quotient of \(A_{sing}\). Since \(\Omega\) has an isolated singularity, \(A_{sing}\) is a finite-dimensional graded algebra. Note that \(P/F_{1}(P)\) is a domain. So \(P/F_{1}(P)=\Bbbk\) and \(F_{1}(P)\) is generated by \(x,y,z\) (as well as by \(x_{1},x_{2},x_{3}\)). The main assertion follows.
Now we assume that \(\nu(x_{1})=\nu(x_{2})=\nu(x_{3})=0\). By the valuation axiom, \(F_{0}(P)=P\). The consequence follows from the main assertion.
(2) Let \(\mathbb{F}^{ind}\) be the induced filtration defined in (E2.8.3) determined by the degree assignment \((\deg(x_{1}),\deg(x_{2}),\deg(x_{3}))=d(|x_{1}|,|x_{2}|,|x_{3}|)\) where \(d>0\). Then \(\mathbb{F}^{ind}\) is equivalent to \(\mathbb{F}^{Id}\) and \(\operatorname{gr}_{\mathbb{F}^{ind}}P=P\) since \(P\) is graded to start with. The assertion follows from Lemma 2.11(1).
(3) Let \(\mathbb{F}^{ind}\) be the induced filtration defined in (E2.8.3) determined by the degree assignment \((\deg(x_{1}),\deg(x_{2}),\deg(x_{3}))=d(|x_{1}|,|x_{2}|,|x_{3}|)\) where \(d<0\). Then \(\mathbb{F}^{ind}\) is equivalent to \(\mathbb{F}^{-Id}\) and \(\operatorname{gr}_{\mathbb{F}^{ind}}P=P\) since \(P\) is graded to start with. The assertion follows from Lemma 2.11(1). The consequence is clear.
The rest of this section is devoted to gaining a better understanding of the faithful valuations on the Poisson fraction fields \(Q(P_{\Omega-\xi})\) of Construction 0.6 when \(\Omega=x^{3}+y^{3}+z^{3}+\lambda xyz\). Note that such an \(\Omega\) has an isolated singularity if and only if \(\lambda^{3}\neq-3^{3}\).
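The threshold \(\lambda^{3}\neq-3^{3}\) can be checked in the same computational spirit (again a sympy sketch of ours): for \(\lambda=-3\) all three partial derivatives of \(\Omega\) vanish along the line \(x=y=z\), so the singular locus is positive-dimensional, while for a value such as \(\lambda=1\) the Jacobian ideal is zero-dimensional.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

def jac(lam):
    Omega = x**3 + y**3 + z**3 + lam*x*y*z
    return [sp.diff(Omega, v) for v in (x, y, z)]

# lambda = -3 (lambda^3 = -27): the partials vanish on the whole line x=y=z,
# so the singularity at the origin is not isolated
assert all(g.subs({x: t, y: t, z: t}) == 0 for g in jac(-3))

# lambda = 1: for each variable, some leading monomial of a Groebner basis of
# the Jacobian ideal is a pure power of that variable, so A_sing is
# finite-dimensional and Omega is an i.s. potential
G = sp.groebner(jac(1), x, y, z, order='grevlex')
lead = [p.monoms(order='grevlex')[0] for p in G.polys]
assert all(any(l[i] == sum(l) > 0 for l in lead) for i in range(3))
```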
**Lemma 3.7**.: _Let \(K\) be a Poisson field containing nonzero elements \(x,y,z\) such that \(\{x,y\}=3z^{2}+\lambda xy\), \(\{y,z\}=3x^{2}+\lambda yz\), and \(\{z,x\}=3y^{2}+\lambda xz\) for some \(\lambda\in\Bbbk\)._
1. _Let_ \(\nu\) _be a_ \(0\)_-valuation on_ \(K\)_. Then_ \(\nu(x)=\nu(y)=\nu(z)\)_._
2. _If_ \(\lambda=0\)_, then there is no_ \(w\)_-valuation on_ \(K\) _for any_ \(w<0\)_._
Proof.: (1) Let \(\nu\) be a \(0\)-valuation on \(K\). Set \(a=\nu(x)\), \(b=\nu(y)\), and \(c=\nu(z)\). Since \(\{x,y\}=3z^{2}+\lambda xy\), we have
\[2c=\nu(3z^{2})=\nu(\{x,y\}-\lambda xy)\geq\min\{\nu(\{x,y\}),\nu(xy)\}\geq\nu(x )+\nu(y)=a+b.\]
Similarly \(2a\geq b+c\) and \(2b\geq a+c\). Adding any two of these inequalities forces equality in the third (for instance, \(2a\geq b+c\) and \(2b\geq a+c\) give \(a+b\geq 2c\), so \(2c=a+b\)); hence \(2a=b+c\), \(2b=a+c\), \(2c=a+b\), and therefore \(a=b=c\).
(2) Let \(\nu\) be a \(w\)-valuation on \(K\). Set \(a=\nu(x)\), \(b=\nu(y)\), and \(c=\nu(z)\). Since \(\{x,y\}=3z^{2}\) (when \(\lambda=0\)), we have
\[2c=\nu(3z^{2})=\nu(\{x,y\})\geq\nu(x)+\nu(y)-w=a+b-w.\]
Similarly \(2a\geq b+c-w\) and \(2b\geq a+c-w\). Adding these three inequalities, we obtain that \(2(a+b+c)\geq 2(a+b+c)-3w\) which implies that \(w\geq 0\). The assertion follows.
A Poisson \(w\)-graded algebra \(P\) is called _projectively simple_ if (a) \(P\) is infinite-dimensional and (b) every nonzero graded Poisson ideal of \(P\) is co-finite-dimensional. When \(\Omega\) has an isolated singularity, it follows from Lemma 1.6 that \(P_{\Omega}\) is projectively simple. We are ready to work out all faithful \(0\)-valuations on some Poisson fields \(Q(P_{\Omega})\).
**Theorem 3.8**.: _Let \(P\) be \(P_{\Omega}\), where \(\Omega=x^{3}+y^{3}+z^{3}+\lambda xyz\) with \(\lambda^{3}\neq-3^{3}\). Let \(K\) be the Poisson fraction field \(Q(P)\)._
1. \((\mathcal{V}_{0}(K)/\sim)=\{\nu^{Id},\nu^{-Id}\}=\mathcal{V}_{f0}(K)\)_. Consequently, every nontrivial_ \(0\)_-valuation of_ \(K\) _is equivalent to a faithful_ \(0\)_-valuation. Further,_ \(\alpha_{0}(K)=\alpha_{nc0}(K)=2\) _and_ \(\alpha_{c0}(K)=\alpha_{-1}(K)=0\)_._
2. _Let_ \(\nu\) _be in_ \(\mathcal{V}_{0}(K)\)_. Then_ \(Q(\operatorname{gr}_{\nu}(K))\cong K\)_._
3. \(\mathbf{d}(K)=0\)_,_ \(\mathbf{w}(K)=1\) _and_ \(K\) _is quasi-Adams._
Proof.: Recall that the Poisson algebra \(P_{\Omega}=A_{\Omega}/(\Omega)\) has Poisson bracket given by \(\{x,y\}=3z^{2}+\lambda xy\), \(\{y,z\}=3x^{2}+\lambda yz\), and \(\{z,x\}=3y^{2}+\lambda xz\).
(1) Let \(\nu\) be a nontrivial \(0\)-valuation on \(K\) and set \(a=\nu(x)\), \(b=\nu(y)\), and \(c=\nu(z)\). By Lemma 3.7(1), \(a=b=c\). By Lemma 3.6(1), \(a\neq 0\). So we have two cases to consider.
Case 1: \(a>0\). Let \(I\) be the subalgebra generated by \(\overline{x},\overline{y},\overline{z}\). Since \(P\) is a Poisson \(0\)-graded algebra with \(|x|=|y|=|z|=1\), \(\mathbb{F}^{ind}\) is equivalent to \(\mathbb{F}^{Id}\). Then \(\phi^{ind}:P\to I\subseteq\operatorname{gr}_{\mathbb{F}}P\) is a Poisson algebra homomorphism [Lemma 2.12(2)]. Since \(\operatorname{GKdim}I\geq 1\), \(\phi^{ind}\) is injective as \(P\) is projectively simple. As a consequence, \(\operatorname{GKdim}I=2\). By Lemma 2.11(1), \(\mathbb{F}^{ind}=\mathbb{F}\). Consequently, \(\mathbb{F}\) is equivalent to \(\mathbb{F}^{Id}\) or \(\nu\) is equivalent to \(\nu^{Id}\).
Case 2: \(a<0\). Let \(I\) be the subalgebra generated by \(\overline{x},\overline{y},\overline{z}\). Since \(P\) is a Poisson \(0\)-graded algebra with \(|x|=|y|=|z|=1\), \(\mathbb{F}^{ind}\) is equivalent to \(\mathbb{F}^{-Id}\). Then \(\phi^{ind}:P\to I\subseteq\operatorname{gr}_{\mathbb{F}}P\) is a Poisson algebra homomorphism [Lemma 2.12(2)]. Since \(\operatorname{GKdim}I\geq 1\), \(\phi^{ind}\) is injective as \(P\) is projectively simple. As a consequence, \(\operatorname{GKdim}I=2\). By Lemma 2.11(1), \(\mathbb{F}^{ind}=\mathbb{F}\). Consequently, \(\mathbb{F}\) is equivalent to \(\mathbb{F}^{-Id}\) or \(\nu\) is equivalent to \(\nu^{-Id}\).
Therefore, up to equivalence \(\sim\), there are only two \(0\)-valuations, namely \(\nu^{Id}\) and \(\nu^{-Id}\). It is easy to see that both \(\nu^{Id}\) and \(\nu^{-Id}\) are faithful \(0\)-valuations (consequently, nonclassical and nondegenerate, since the residue field associated with \(\nu^{\pm Id}\) is the function field of the smooth elliptic curve \(\Omega=0\) in \(\mathbb{P}^{2}\) and hence has transcendence degree \(1\)). The assertions follow.
(2) Clear from the proof of part (1) and the facts that \(\operatorname{gr}_{\nu^{Id}}P\cong P\) and \(\operatorname{gr}_{\nu^{-Id}}P\cong P\), see discussion before Lemma 3.2.
(3) It is an easy consequence of parts (1) and (2).
Next, we turn to the other case \(P_{\Omega-\xi}\) in Construction 0.6 for \(\xi\in\Bbbk^{\times}\). We will use the following very nice result of Umirbaev-Zhelyabin [UZ].
**Lemma 3.9**.: _Suppose the \((\)weighted\()\) homogeneous element \(\Omega\in\Bbbk[x,y,z]\) has an isolated singularity. Let \(\xi\in\Bbbk^{\times}\)._
1. [UZ, Theorem 1 and Corollary 2]__\(P_{\Omega-\xi}\) _is a simple Poisson algebra._
2. \(\operatorname{gldim}P_{\Omega-\xi}=2\)_._
Part (2) of the above lemma follows quickly from the Jacobian criterion for hypersurfaces.
**Lemma 3.10**.: _Suppose that the Adams degrees \(|x|\), \(|y|\), and \(|z|\) in the weighted polynomial ring \(\Bbbk[x,y,z]\) are all positive and that the potential \(\Omega\) has degree \(|x|+|y|+|z|+w\) for some integer \(w\). If \(\Omega\) is irreducible, then \(P:=P_{\Omega-\xi}\) has at least one \(w\)-valuation \(\nu^{c}\) as given in Lemma 3.2(3)._
Proof.: Note that \(P\) has a filtration determined by (E2.8.3) and the degree assignment \(\deg(x)=-|x|\), \(\deg(y)=-|y|\) and \(\deg(z)=-|z|\). Or equivalently, we define a filtration \(\mathbb{F}:=\{F_{i}\}\) of \(P\) by
\[F_{-i}(P)=\{\sum_{j}a_{j}f_{j}\in P\mid a_{j}\in\Bbbk,f_{j}\text{ are monomials of Adams degree}\leq i\}\]
for all \(i\). Then the associated graded ring is \(\operatorname{gr}_{\mathbb{F}}P\cong P_{\Omega}\). Since \(\Omega\) is irreducible, both \(P:=P_{\Omega-\xi}\) and \(P_{\Omega}\) are domains. Since \(|\Omega|=|x|+|y|+|z|+w\), we have \(\deg(\{x,y\})=-|\frac{\partial\Omega}{\partial z}|=-|x|-|y|-w=\deg(x)+\deg(y)-w\). Similarly, \(\deg(\{y,z\})=\deg(y)+\deg(z)-w\) and \(\deg(\{x,z\})=\deg(x)+\deg(z)-w\). These equalities together with (E0.6.1) imply that the filtration is a \(w\)-filtration [Lemma 2.9(3)]. The assertion follows.
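On the level of \(A_{\Omega}\) the degree bookkeeping in this proof is easy to test (a sympy sketch of ours, in the standard-weight case \(|x|=|y|=|z|=1\), with \(\nu(f)=-\deg f\) the filtration of the proof): since each entry of the Jacobian matrix lowers degree by one, \(\deg\{f,g\}\leq\deg f+\deg g+(\deg\Omega-3)\), i.e. \(\nu(\{f,g\})\geq\nu(f)+\nu(g)-w\).

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
gens = (x, y, z)
n = 2
Omega = x**(3+n) + y**(3+n) + z**(3+n)   # degree 3 + n, so w = n

def bracket(f, g):
    # Jacobian bracket with potential Omega (as in Construction 0.6)
    M = sp.Matrix([[sp.diff(h, v) for v in gens] for h in (f, g, Omega)])
    return sp.expand(M.det())

def nu(f):
    # nu(f) = -deg(f) for the filtration F_{-i} in the proof of Lemma 3.10
    p = sp.Poly(sp.expand(f), x, y, z)
    return sp.oo if p.is_zero else -p.total_degree()

f, g = x*y + z**3, x**2 + y*z
assert nu(bracket(f, g)) >= nu(f) + nu(g) - n  # the w-filtration inequality
```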
The following result can be viewed as a twin of Theorem 3.8.
**Theorem 3.11**.: _Let \(P\) be \(P_{\Omega-1}\), where \(\Omega=x^{3}+y^{3}+z^{3}+\lambda xyz\) with \(\lambda^{3}\neq-3^{3}\), with standard Adams grading on \(x,y,z\). Let \(K\) be the Poisson fraction field \(Q(P)\)._
1. _There is a unique nontrivial_ \(0\)_-valuation up to equivalence. As a consequence,_ \(\mathcal{V}_{f0}(K)\) _is a singleton. Further_ \(\alpha_{0}(K)=\alpha_{nc0}(K)=1\) _and_ \(\alpha_{c0}(K)=\alpha_{-1}(K)=0\)_._
2. _Let_ \(\nu\) _be a nontrivial_ \(0\)_-valuation on_ \(K\) _with associated filtration_ \(\mathbb{F}\)_. Then,_ \(\nu\) _is determined by the grading of_ \(P\) _as given in Lemma_ 3.10_. As a consequence,_ \(Q(\operatorname{gr}_{\nu}(K))\cong Q(P_{\Omega})\)_; equivalently,_ \(Q(P)\to_{\nu}Q(P_{\Omega})\)_._
3. \(\mathbf{d}(K)=1\) _and_ \(\mathbf{w}(K)=1\)_._
Proof.: (1,2) By Lemma 3.10, there is a \(0\)-valuation \(\nu\) such that \(Q(\operatorname{gr}_{\nu}(K))\cong Q(P_{\Omega})\). It remains to show every nontrivial \(0\)-valuation is equivalent to the one given in the proof of Lemma 3.10. The rest of the proof is similar to the one of Theorem 3.8.
Recall that \(P\) is the Poisson algebra \(P_{\Omega-1}:=\Bbbk[x,y,z]/(\Omega-1)\) with \(\{x,y\}=3z^{2}+\lambda xy\), \(\{y,z\}=3x^{2}+\lambda yz\), and \(\{z,x\}=3y^{2}+\lambda xz\).
Let \(\nu\) be a nontrivial \(0\)-valuation on \(K\) and set \(a=\nu(x)\), \(b=\nu(y)\), and \(c=\nu(z)\). By Lemma 3.7(1), \(a=b=c\). So, we have three cases to consider.
Case 1: \(a=0\). Then \(F_{0}(P)=P\). Since \(P\) is Poisson simple [Lemma 3.9(1)], \(F_{1}(P)=0\). Thus, \(\nu\) is trivial, yielding a contradiction.
Case 2: \(a>0\). This contradicts the fact that \(\Omega=1\) in \(P\).
Case 3: \(a<0\). Let \(I\) be the subalgebra of \(\operatorname{gr}_{\nu}(K)\) generated by \(\overline{x},\overline{y}\), and \(\overline{z}\). Let \(\mathbb{F}^{ind}\) be the induced filtration determined by \(\deg(x)=\deg(y)=\deg(z)=a<0\). Then \(\mathbb{F}^{ind}\) is a \(0\)-filtration with \(\operatorname{gr}_{\mathbb{F}^{ind}}P\cong P_{\Omega}\). By Lemma 2.12(1),
\[\phi^{ind}:\operatorname{gr}_{\mathbb{F}^{ind}}P\to I\subseteq\operatorname{ gr}_{\mathbb{F}}P\]
is a Poisson algebra morphism. Note that \(\operatorname{GKdim}I\geq 1\). Since \(\operatorname{gr}_{\mathbb{F}^{ind}}P(\cong P_{\Omega})\) is projectively simple, \(\phi^{ind}\) is injective. As a consequence, \(\operatorname{GKdim}I=2\). By Lemmas 2.11(1) and 3.10, \(\mathbb{F}=\mathbb{F}^{ind}\). So, we obtain a unique valuation as given in Lemma 3.10. The assertion is proved.
(3) Let \(\nu\) be the unique \(0\)-valuation on \(K\) given in part (2). By parts (1,2), \(Q(P_{\Omega-1})\to_{\nu}Q(P_{\Omega})\) is the only possible arrow. Hence \(\mathbf{w}(K)=1\). By Theorem 3.8, there is no arrow \(Q(P_{\Omega})\to_{\nu}K\) with \(K\not\cong Q(P_{\Omega})\). Thus \(\mathbf{d}(K)=1\).
## 4. \(\gamma\)-type invariants
An invariant is said to be _of \(\gamma\)-type_ if it is defined by using the filtrations \(\mathbb{F}^{\nu}\) (E2.4.1) associated to Poisson valuations \(\nu\). This section aims to introduce several \(\gamma\)-type invariants of Poisson fields and utilize them to classify certain valuations of Poisson algebras/fields defined in Construction 0.6. Hypothesis 3.5 will be imposed throughout the rest of the paper, although it is not necessary for most of the definitions.
**Definition 4.1**.: Let \(K\) be a Poisson field.
1. The _\({}^{w}\Gamma\)-cap_ of \(K\) is defined to be \[{}^{w}\Gamma(K):=\bigcap_{\nu}F_{0}^{\nu}(K)\] where \(\nu\) runs over all \(w\)-valuations on \(K\). Since \(F_{0}^{\nu}(K)=K\) if \(\nu\) is a trivial \(w\)-valuation, it is clear that (E4.1.1) \[{}^{w}\Gamma(K)=\bigcap_{\nu\in\mathcal{V}_{w}(K)}F_{0}^{\nu}(K).\]
2. We say \(K\) is _\({}^{w}\Gamma\)-normal_ if \(C:=\,^{w}\Gamma(K)\) is a Poisson subalgebra of \(K\) and \(C\) is an affine normal domain with \(K=Q(C)\).
3. Let \(C\) be a subalgebra of \(K\). The _\(C\)-interval_ is defined to be \[i(C):=\{w\in\mathbb{Z}\mid{}^{w}\Gamma(K)=C\}.\] By Lemma 4.2(2) below, if \(a,b\in i(C)\), then \(c\in i(C)\) for any integer \(c\) between \(a\) and \(b\).
4. The _\({}^{*}\Gamma\)-subalgebra collection_ of \(K\) is defined to be \[{}^{*}\Gamma(K):=\{^{w}\Gamma(K)\mid w\in\mathbb{Z}\}.\]
Note that the \({}^{w}\Gamma\)-cap of \(K\) will be used in later sections. It is clear that
(E4.1.2) \[{}^{w}\Gamma(K)=\{a\in K\mid\nu(a)\geq 0,\forall\;\nu\in\mathcal{V}_{w}(K)\}.\]
The following lemma is straightforward, and its proof is omitted.
**Lemma 4.2**.: _Let \(f:K\to Q\) be a Poisson algebra homomorphism between two Poisson fields._
1. \(f\) _maps_ \({}^{w}\Gamma(K)\) _into_ \({}^{w}\Gamma(Q)\) _via restriction for each_ \(w\)_._
2. \({}^{w}\Gamma(K)\supseteq\,^{w+1}\Gamma(K)\supseteq\Bbbk\) _for each_ \(w\)_._
Despite its technicality, the following theorem is a highly significant general result pertaining to \(1\)-valuations.
**Theorem 4.3**.: _Let \(A\) be a Poisson domain of GK-dimension at least 2 and \(K\) be the Poisson fraction field \(Q(A)\)._
1. _Let_ \(\mathfrak{p}\) _be a prime ideal of_ \(A\) _of height one such that_ \(A_{\mathfrak{p}}\) _is regular. Then there is a unique nontrivial_ \(1\)_-valuation on_ \(K\)_, denoted by_ \(\nu^{1,\mathfrak{p}}\)_, such that_ \(F_{i}(A_{\mathfrak{p}})=\mathfrak{p}^{i}A_{\mathfrak{p}}\) _for_ \(i\geq 0\)_. In this case,_ \(\text{im}(\nu^{1,\mathfrak{p}})=\mathbb{Z}\cup\{\infty\}\)_._
2. (_Controlling theorem_) _If_ \(A\) _is a noetherian normal domain, then_ \({}^{1}\Gamma(K)\subseteq A\)_._
3. _Suppose_ \(K\) _is finitely generated as a field. Then there are infinitely many nontrivial_ \(1\)_-valuations, namely,_ \(\alpha_{1}(K)=\infty\)_._
Proof.: (1) Let \(B\) be the localization \(A_{\mathfrak{p}}\). Then, \(B\) is a regular local ring of Krull dimension 1. Hence \(B\) is a DVR with its maximal ideal generated by an element \(\omega\in B\). Note that \(B\) is a Poisson algebra. Since \(B\) is a DVR, the factor ring \(B/(\omega)\) is a domain. Define a filtration \(\mathbb{F}^{1,\mathfrak{p}}\) on \(B\) as follows:
(E4.3.1) \[F_{i}^{1,\mathfrak{p}}:=\begin{cases}B&i<0,\\ \omega^{i}B(=\mathfrak{p}^{i}A_{\mathfrak{p}})&i\geq 0.\end{cases}\]
It is easy to check that (a) \(\text{gr}_{\mathbb{F}^{1,\mathfrak{p}}}\,B\cong(B/(\omega))[\overline{\omega}]\) with \(\nu(\omega)=1\) and (b) for all \(i,j\), \(\{F_{i}^{1,\mathfrak{p}},F_{j}^{1,\mathfrak{p}}\}\subseteq F_{i+j-1}^{1,\mathfrak{p}}\). Hence \(\mathbb{F}^{1,\mathfrak{p}}\) is a good \(1\)-filtration which produces a \(1\)-valuation \(\nu^{1,\mathfrak{p}}\) on \(B\) and whence on \(K\) via localization. It is also easy to check that \(\text{im}(\nu^{1,\mathfrak{p}})=\mathbb{Z}\cup\{\infty\}\). The uniqueness follows from an idea in the proof of Lemma 2.11(3).
(2) Continue the proof of part (1); we have \(\nu^{1,\mathfrak{p}}(\omega^{-1})=-1\). Since \(K=\cup_{i\in\mathbb{Z}}\omega^{i}A_{\mathfrak{p}}\), one sees that \(F_{i}^{1,\mathfrak{p}}(K)=\omega^{i}A_{\mathfrak{p}}\) for all \(i\in\mathbb{Z}\). Since \(A\) is noetherian normal, we have
\[A=\bigcap_{\text{height one primes }\mathfrak{p}}A_{\mathfrak{p}}=\bigcap_{ \text{height one primes }\mathfrak{p}}F_{0}^{1,\mathfrak{p}}(K)\supseteq\bigcap_{\nu\in\mathcal{V}_{ 1}(K)}F_{0}^{\nu}(K)=\,^{1}\Gamma(K).\]
(3) Let \(V:=\{v_{1},\cdots,v_{n}\}\) be a finite set of generators of the field \(K\) and let \(C\) be the subalgebra of \(K\) generated by \(V\). Then \(Q(C)=K\). Write \(\{v_{i},v_{j}\}=g_{ij}z^{-1}\) where \(z,g_{ij}\in A\) for all \(i,j\). Adding \(z^{-1}\) to the set \(V\), we may assume that \(C\) is a Poisson subalgebra of \(K\). Localizing further, we may assume that \(C\) is regular.
Now \(C\) is an affine Poisson domain that is regular as a commutative ring. Let \(\mathfrak{p}\) be a prime ideal of height 1. By part (1), there is a \(1\)-valuation \(\nu^{1,\mathfrak{p}}\) on \(K=Q(C)\). By prime avoidance and the principal ideal theorem, there are infinitely many such \(\mathfrak{p}\). Therefore, we have infinitely many distinct nontrivial \(1\)-valuations on \(K\).
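To make \(\nu^{1,\mathfrak{p}}\) concrete, here is a toy illustration of ours (assuming sympy, with the Weyl bracket \(\{x,y\}=1\) on \(A=\Bbbk[x,y]\) and \(\mathfrak{p}=(x)\)): \(\nu^{1,\mathfrak{p}}(f)\) is the order of vanishing of \(f\) along \(x=0\), and the defining inequality \(\nu(\{f,g\})\geq\nu(f)+\nu(g)-1\) of a \(1\)-valuation can be spot-checked.

```python
import sympy as sp

x, y = sp.symbols('x y')

def bracket(f, g):
    # Weyl Poisson bracket on k[x, y]: {x, y} = 1
    return sp.expand(sp.diff(f, x)*sp.diff(g, y) - sp.diff(f, y)*sp.diff(g, x))

def nu(f):
    # nu^{1,p} for p = (x): the order of vanishing of f along x = 0
    f = sp.expand(f)
    if f == 0:
        return sp.oo
    k = 0
    while f.subs(x, 0) == 0:
        f = sp.cancel(f / x)
        k += 1
    return k

f, g = x**3*y + x**2, x*y**2 + x**4
assert nu(bracket(f, g)) >= nu(f) + nu(g) - 1  # the 1-valuation inequality
print(nu(f), nu(g), nu(bracket(f, g)))         # 2 1 2
```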
**Remark 4.4**.:
1. By the above theorem, \(\alpha_{1}(K)\) is infinite. However, when \(w<1\), \(\alpha_{w}(K)\) could be finite. For example, in Theorem 3.11, \(\alpha_{0}(K)=1\).
2. If \(\mathcal{V}_{w}(K)\neq\emptyset\) for some \(w<0\), then Lemma 1.4(3) says that \(w(K)=-\infty\). Otherwise, \(w(K)\geq 0\). If \(w(K)>0\), then by Theorem 4.3(3), \(w(K)=1\). In summary, \(w(K)\) can only be \(-\infty\), \(0\) or \(1\).
Next, we will compute various \(\gamma\)-type invariants for a large class of Poisson fields. For the rest of this section, we assume
**Hypothesis 4.5**.: _Let \(A=\Bbbk[x,y,z]\) with Adams grading \(|x|=|y|=|z|=1\). Let \(\Omega\in A\) denote a homogeneous element of degree \(3+n\) where \(n\geq 1\)._
Recall that \(A_{1}:=\Bbbk x+\Bbbk y+\Bbbk z\) is the degree \(1\) part of \(A\).
**Lemma 4.6**.: _Let \(\nu\) be a valuation of \(A\). Then there are \(x_{1},x_{2},x_{3}\in A_{1}\) generating \(A\) as an algebra while satisfying \(\nu(x_{1})=\nu(x_{2})=\nu(x_{3})\)._
Proof.: Let \(w\in A_{1}\) be a nonzero element such that \(\nu(w)=\min\{\nu(x),\nu(y),\nu(z)\}\). By valuation axioms, \(\nu(w)=\min\{\nu(f)\mid f\in A_{1}\}\). Let \(x_{1}=x+\lambda w\), \(x_{2}=y+\lambda w\) and \(x_{3}=z+\lambda w\) for some \(\lambda\in\Bbbk\). For a generic \(\lambda\), \(\nu(x_{1})=\nu(x_{2})=\nu(x_{3})=\nu(w)\) and \(\{x_{1},x_{2},x_{3}\}\) are linearly independent. The assertion follows.
The following lemma is straightforward.
**Lemma 4.7**.: _Let \(\{x_{1},x_{2},x_{3}\}\) be as in Lemma 4.6._
1. _The_ \(\Bbbk\)_-span of_ \(\{x_{i},x_{j}\}\) _for_ \(1\leq i,j\leq 3\) _is equal to the_ \(\Bbbk\)_-span of_ \(\{x,y\},\{y,z\}\)_, and_ \(\{z,x\}\)_._
2. _Let_ \(A_{\Omega}\) _be defined as Construction 0.6. Then_ \(A_{sing}:=A/(\Omega_{x},\Omega_{y},\Omega_{z})\) _is equal to_ \(A/(\{x_{i},x_{j}\},1\leq i,j\leq 3)\)_._
**Lemma 4.8**.: _Let \(\Omega\) be a homogeneous element of degree \(3+n\) for some \(n\geq 0\). Suppose \(\Omega\) has an isolated singularity. Let \(B\) be either \(A_{\Omega}\), \(P_{\Omega}\), or \(P_{\Omega-\xi}\) for some \(\xi\in\Bbbk^{\times}\). Let \(\nu\) be a \(w\)-valuation of \(K:=Q(B)\) and \(d=\min\{\nu(x),\nu(y),\nu(z)\}\)._
1. _Suppose_ \(w<n\)_. Then_ \(d\geq 0\)_. As a consequence,_ \({}^{w}\Gamma(K)\supseteq B\)_._
2. _Suppose that_ \(B=P_{\Omega-\xi}\) _with_ \(\xi\in\Bbbk^{\times}\) _and_ \(n=0\)_. Then there is no_ \((-1)\)_-valuation._
3. _Suppose that_ \(B=P_{\Omega-\xi}\) _with_ \(\xi\in\Bbbk^{\times}\) _and_ \(n>0\)_. Then, there is no nontrivial_ \(0\)_-valuation._
4. _Let_ \(B\) _be_ \(P_{\Omega}\) _or_ \(P_{\Omega-\xi}\)_. If_ \(n=w>0\)_, then either_ \(d\geq 0\) _or_ \(d=-1\) _with_ \(\mathbb{F}=\mathbb{F}^{-Id}\) _when_ \(B=P_{\Omega}\) _and_ \(\mathbb{F}=\mathbb{F}^{c}\) _when_ \(B=P_{\Omega-\xi}\)_._
5. _Let_ \(B\) _be_ \(P_{\Omega}\) _with_ \(n>0\)_. Then there is no faithful_ \(0\)_-valuation on_ \(K\)_._
Proof.: (1) Let \(\nu\) be any valuation of \(K\). By Lemma 4.6, there are linearly independent elements \(x_{1},x_{2},x_{3}\in A_{1}\) such that \(d=\nu(x_{1})=\nu(x_{2})=\nu(x_{3})\). Let \(\Omega_{k}=\{x_{i},x_{j}\}\) where \((i,j,k)\) is either \((1,2,3)\), \((2,3,1)\), or \((3,1,2)\). Then the Adams degree of \(\Omega_{k}\) is \(2+n\). By the choice of \(d\) and valuation axioms, we have \(\nu(\Omega_{k})\geq(2+n)d\) for \(k=1,2,3\).
Let \(I\) be the subalgebra of \(\operatorname{gr}_{\nu}K\) generated by \(\overline{x_{1}},\overline{x_{2}}\), and \(\overline{x_{3}}\).
Case 1: Suppose \(\nu(\Omega_{k})>(2+n)d\) for all \(k=1,2,3\). Then \(\overline{\Omega_{k}}=0\) in \(I\). Or equivalently, \(\Omega_{k}(\overline{x_{1}},\overline{x_{2}},\overline{x_{3}})=0\) in \(I\). This means that \(I\) is a factor ring of \(A_{sing}\) by Lemma 4.7(2). This implies that \(I=\Bbbk\) and \(d=0\).
Case 2: Suppose \(\nu(\Omega_{k})=(2+n)d\) for some \(k\). Then
\[(2+n)d=\nu(\Omega_{k})=\nu(\{x_{i},x_{j}\})\geq\nu(x_{i})+\nu(x_{j})-w=2d-w\]
which implies that \(nd\geq-w\), or equivalently, \(n(-d)\leq w\). If \(n=0\), then \(w\geq 0=n\), yielding a contradiction. Otherwise \(n>0\) and whence \(-d\leq w/n<1\) where the last \(<\) follows from the fact \(w<n\). Since \(d\) is an integer, \(-d\leq 0\), or \(d\geq 0\).
The consequence is clear.
(2) Suppose to the contrary that \(\nu\) is a \((-1)\)-valuation on \(K\). By the proof of part (1), only Case 1 can happen. As a consequence, \(I=\Bbbk\) and \(d=0\). In this case \(F_{0}^{\nu}(B)=B\) and \(F_{0}^{\nu}(B)/F_{1}^{\nu}(B)=\Bbbk\), so the Poisson ideal \(F_{1}^{\nu}(B)\) is nonzero and proper, which contradicts the Poisson simplicity of \(B\) [Lemma 3.9(1)].
(3) Let \(\nu\) be a \(0\)-valuation on \(K\). By part (1), \({}^{0}\Gamma(K)\supseteq B\). Consequently, \(F_{0}^{\nu}(B)=B\). Since \(B\) is Poisson simple [Lemma 3.9(1)], the Poisson ideal \(F_{1}^{\nu}(B)\) is zero. Thus, \(\nu\) is trivial.
(4) Retain the notation introduced in the proof of part (1).
Case 1: Suppose \(\nu(\Omega_{k})>(2+n)d\) for all \(k=1,2,3\). The same argument as in part (1) shows that \(I=\Bbbk\) and that \(d=0\).
Case 2: Suppose \(\nu(\Omega_{k})=(2+n)d\) for some \(k\). Then
\[(2+n)d=\nu(\Omega_{k})=\nu(\{x_{i},x_{j}\})\geq\nu(x_{i})+\nu(x_{j})-w=2d-n\]
which implies that \(nd\geq-n\). Since \(n>0\), we have \(d\geq-1\). If \(d\geq 0\), we are done. Otherwise, let \(d=-1\). Then \(\overline{\Omega_{k}}\neq 0\), and consequently, \(\{\overline{x_{i}},\overline{x_{j}}\}=\overline{\Omega_{k}}\neq 0\). This shows that \(\overline{x_{i}}\) and \(\overline{x_{j}}\) are algebraically independent by Lemma 1.6(1). So \(\operatorname{GKdim}I\geq 2\). By Lemma 2.11(1), \(\mathbb{F}=\mathbb{F}^{ind}\). Since \(d=-1\), we further have \(\mathbb{F}=\mathbb{F}^{-Id}\) when \(B=P_{\Omega}\) and \(\mathbb{F}=\mathbb{F}^{c}\) when \(B=P_{\Omega-\xi}\).
(5) Suppose to the contrary that \(\nu\) is a faithful \(0\)-valuation on \(B\). By part (1), \(d\geq 0\).
Case 1: Suppose \(d>0\). Let \(x_{1},x_{2},x_{3}\) be as in the proof of part (1). Then \(\deg\overline{x_{i}}=\nu(x_{i})=d>0\). Thus \(I\) is not \(\Bbbk\) and whence \(\operatorname{GKdim}I\geq 1\). So, Case 1 of the proof of part (1) is impossible, leaving only Case 2 as a possibility. Then \(\{\overline{x_{i}},\overline{x_{j}}\}=\overline{\Omega_{k}}\neq 0\) for some \(k\). So \(\overline{x_{i}}\) and \(\overline{x_{j}}\) are algebraically independent by Lemma 1.6(1). Then \(\operatorname{GKdim}I=2\). By Lemma 2.11(1), \(\mathbb{F}=\mathbb{F}^{ind}\) since \(B\) is graded to start with.
Since \(d>0\), for all \((i,j,k)=(1,2,3)\) or its cyclic permutations,
\[\nu(\{x_{i},x_{j}\})=\nu(\Omega_{k})\geq(2+n)d>2d=\nu(x_{i})+\nu(x_{j}).\]
By Lemma 2.9(3), \(\mathbb{F}^{ind}\) is a \((-1)\)-filtration, so \(\nu\) is a \((-1)\)-valuation. Hence \(\nu\) is not a faithful \(0\)-valuation, yielding a contradiction.
Case 2: Suppose \(d=0\). Since \(\nu\) is a faithful \(0\)-valuation, \(F_{0}^{\nu}(B)/F_{1}^{\nu}(B)=B/F_{1}^{\nu}(B)\) is an affine Poisson domain of \(\operatorname{GK}\)-dimension \(\leq 1\). So Lemma 1.6(1) implies that \(B/F_{1}^{\nu}(B)\) has zero Poisson bracket. Hence for all \((i,j,k)=(1,2,3)\) or its cyclic permutations
\[\{\overline{x_{i}},\overline{x_{j}}\}=\overline{\Omega_{k}}=0,\]
or equivalently \(\Omega_{k}\in F_{1}^{\nu}(B)\) for all \(k\). Note that \(\Omega\) is an i.s. potential and \(A_{sing}\) is local with residue field \(\Bbbk\). Thus, \(B/F_{1}^{\nu}(B)=\Bbbk\), which implies that \(c_{i}:=\overline{x_{i}}\in\Bbbk^{\times}\) such that
\[\overline{\Omega_{k}(x_{1},x_{2},x_{3})}=\Omega_{k}(\overline{x_{1}}, \overline{x_{2}},\overline{x_{3}})=\Omega_{k}(c_{1},c_{2},c_{3})=0\]
for all \(k\). Since \(\Omega\) has an isolated singularity at the origin, the only common zero of \(\Omega_{x},\Omega_{y},\Omega_{z}\) is the origin; hence \(c_{1}=c_{2}=c_{3}=0\), which contradicts \(c_{i}\in\Bbbk^{\times}\).
We are now able to present the primary application of \(\gamma\)-invariants.
**Theorem 4.9**.: _Let \(\Omega\) be a homogeneous element of degree \(3+n\) for some \(n\geq 2\). Suppose \(\Omega\) has an isolated singularity. Let \(B\) be either \(A_{\Omega}\), \(P_{\Omega}\) or \(P_{\Omega-\xi}\) for some \(\xi\in\Bbbk^{\times}\). Then, for every \(w\) between \(1\) and \(n-1\), \({}^{w}\Gamma(Q(B))=B\)._
Proof.: Let \(K:=Q(B)\). By Lemma 4.8(1), \({}^{w}\Gamma(K)\supseteq B\). We can check that \(B\) is always normal (due to the fact that \(A_{\Omega}\) and \(P_{\Omega-\xi}\) are regular and \(P_{\Omega}\) is the homogeneous coordinate ring of a smooth irreducible curve). By Theorem 4.3(2), \({}^{1}\Gamma(K)\subseteq B\). Hence
\[B\supseteq\,^{1}\Gamma(K)\supseteq\,^{2}\Gamma(K)\supseteq\cdots\supseteq\,^{n- 1}\Gamma(K)\supseteq B.\]
The assertion follows.
The above theorem will be essential in establishing the Dixmier property of the corresponding Poisson fields \(K\).
**Theorem 4.10**.: _Let \(\Omega\) be a homogeneous element of degree \(3+n\) for some \(n\geq 1\). Suppose \(\Omega\) has an isolated singularity. Let \(B\) be \(P_{\Omega-\xi}\) for some \(\xi\in\Bbbk^{\times}\) and let \(K=Q(B)\). Then there is no nontrivial \(0\)-valuation of \(K\) and_
\[{}^{w}\Gamma(K)=\begin{cases}K&w\leq 0,\\ B&1\leq w\leq n-1,\\ \Bbbk&w\geq n.\end{cases}\]
_As a consequence, \({}^{*}\Gamma(K)=\{K,B,\Bbbk\}\) if \(n\geq 2\) and \({}^{*}\Gamma(K)=\{K,\Bbbk\}\) if \(n=1\). Further, \(i(B)=\{1,\cdots,n-1\}\)._
Proof.: By Lemma 4.8(3), there is no nontrivial \(0\)-valuation on \(K\). Hence \({}^{0}\Gamma(K)=K\). It immediately follows from Lemma 4.2(2) that \({}^{w}\Gamma(K)=K\) if \(w\leq 0\). If \(1\leq w\leq n-1\) (if \(n=1\), then there is no such \(w\)), the assertion follows from Theorem 4.9. Now let \(w=n\). By Lemma 3.2(3), there is an \(n\)-valuation \(\nu:=\nu^{c}\) such that \(\nu(x)=\nu(y)=\nu(z)=-1\). Consequently, \(F_{0}^{\nu}(B)=\Bbbk\). As a result, we have
\[{}^{n}\Gamma(K)\subseteq\,^{1}\Gamma(K)\cap F_{0}^{\nu}(K)\subseteq B\cap F_{0 }^{\nu}(K)=F_{0}^{\nu}(B)=\Bbbk.\]
By Lemma 4.2(2), \({}^{w}\Gamma(K)=\Bbbk\) for all \(w>n\).
The consequences are clear.
## 5. \(\beta\)-type invariants and \(w\)-valuations for small \(w\)
In this section, we introduce/recall \(\beta\)-type invariants and calculate \(w\)-valuations on some Poisson fields for \(0\leq w\leq 2\). The previous section provided a structure theorem for certain \(1\)-valuations in Theorem 4.3. We will present a similar theorem for specific \(2\)-valuations in this section.
**Definition 5.1**.: Let \(K\) and \(Q\) be two Poisson fields and \(w\) an integer.
1. We say \(K\)_\(w\)-controls_\(Q\) if there is a \(w\)-valuation \(\nu\) on \(K\) such that \[Q(\operatorname{gr}_{\nu}(K))\cong Q.\] We write \(K\to_{\nu}Q\) in this case.
2. Let \(\mathcal{PF}_{d,w}\) be the quiver with vertices being Poisson fields of GK-dimension \(d\) and arrows being \(\to_{\nu}\) from \(K\) to \(Q\) when \(K\)\(w\)-controls \(Q\).
As noted, a \(\beta\)-type invariant is defined using arrows \(\to_{\nu}\). For example, \(\mathcal{PF}_{d,w}\) is a \(\beta\)-type invariant. One of the projects in [10] is trying to understand the quiver \(\mathcal{PF}_{2,0}\).
**Definition 5.2**.: Let \(K\) be a Poisson field and \(w\) an integer.
1. The \(w\)_-depth_ of \(K\) is defined to be \[\mathbf{d}_{w}(K):=\sup\left\{n\mid\{K:=K_{0}\to_{\nu_{1}}K_{1}\to_{\nu_{2}}K_{ 2}\to_{\nu_{3}}\cdots\to_{\nu_{n}}K_{n}\}\right\}\] where \(K_{i}\not\cong K_{j}\) for all \(0\leq i\neq j\leq n\) and each \(\nu_{i}\) is a faithful \(w\)-valuation.
2. The \(w\)_-width_ of \(K\) is defined to be \[\mathbf{w}_{w}(K):=\#\left[\{Q\mid K\to_{\nu}Q\text{ where $\nu$ is a faithful $w$- valuation}\}/\cong\right].\]
When \(w=0\), we will omit the subscript \(w\) in the above definition as in previous sections. For example, \(\mathbf{d}_{0}(K)=\mathbf{d}(K)\) (resp. \(\mathbf{w}_{0}(K)=\mathbf{w}(K)\)) as in Definition 5.2(1) (resp. Definition 5.2(2)).
The following lemma gives some explicit examples of faithful \(1\)-valuations, which are special cases of Theorem 4.3(1).
**Lemma 5.3**.: _Let \(\Omega=x^{3+n}+y^{3+n}+z^{3+n}\) where \(n\geq 1\) and let \(P\) be \(P_{\Omega-\xi}\) where \(\xi\in\Bbbk^{\times}\)._
1. _The induced filtration_ \(\mathbb{F}^{ind}\) _determined by_ \(\deg(x)=\deg(y)=0\) _and_ \(\deg(z)=1\) _is a good filtration._
2. \(\mathbb{F}^{ind}\) _is a faithful_ \(1\)_-filtration. As a consequence,_ \(\mathcal{V}_{f1}(Q(P))\neq\emptyset\)_._
Proof.: (1) Note that \(z\) is a nonzero divisor and \(P/(z)\cong\Bbbk[x,y]/(x^{3+n}+y^{3+n}-\xi)\). Since \(x^{3+n}+y^{3+n}-\xi\) is irreducible, \(P/(z)\) is a domain. By definition, \(\mathbb{F}^{ind}\) is defined by
\[F_{i}(P)=\begin{cases}P&i\leq 0,\\ z^{i}P&i>0.\end{cases}\]
So \(\operatorname{gr}_{\mathbb{F}^{ind}}P\cong(P/(z))[\overline{z}]\) which is a domain. The assertion follows.
(2) By definition, \(\{x,y\}=(3+n)z^{2+n}\in F_{0+0-1}\), \(\{y,z\}=(3+n)x^{2+n}\in F_{0+1-1}\) and \(\{z,x\}=(3+n)y^{2+n}\in F_{0+1-1}\). Since \(P\) is generated by \(x,y\) and \(z\), by Lemma 2.9(3), the filtration in part (1) is a \(1\)-filtration. The faithfulness of \(\nu\) is easy to check.
The subsequent lemma presents an interesting observation regarding \(1\)-valuations.
**Lemma 5.4**.: _Let \(P\) be a Poisson noetherian normal domain. Let \(\nu\) be a valuation on \(K:=Q(P)\) such that_ (a)_\(F_{0}(P)=P\) and that_ (b) \(\operatorname{GKdim}F_{0}(P)/F_{1}(P)=\operatorname{GKdim}P-1\). Then, \(\nu\) is equivalent to a \(1\)-valuation._
Proof.: By definition \(F_{1}(P)\) is a prime ideal of \(P(=F_{0}(P))\). Let \(S=F_{0}(P)\setminus F_{1}(P)\). Then, every element in \(S\) has \(\nu\)-value \(0\). Then the \(\nu\)-value of every element in \(B:=PS^{-1}\) is nonnegative. As a consequence, \(F_{0}(B)=B\). Since \(F_{1}(P)\subset P\) is prime of height \(1\) and \(P\) is normal, \(B\) is a regular local ring of Krull dimension \(1\). This means that \(B\) is a DVR with a generator \(\omega\) of the maximal ideal of \(B\).
By the proof of Lemma 2.11(3), we have that there is a positive integer \(h\) such that
\[F_{i}(B)=\begin{cases}B&i\leq 0,\\ \omega^{\lceil i/h\rceil}B&i>0.\end{cases}\]
Then \(\nu\) is equivalent to another valuation, denoted by \(\nu^{\prime}\), with its filtration defined by
\[F^{\prime}_{i}(B)=\begin{cases}B&i\leq 0,\\ \omega^{i}B&i>0.\end{cases}\]
By the proof of Theorem 4.3(1), \(\nu^{\prime}\) is a \(1\)-valuation. The assertion follows.
Recall that the _geometric genus_ of an affine domain \(A\) of GK-dimension \(1\), denoted by \(\mathfrak{g}(A)\), is the genus of the normalization of a projective closure of \(\operatorname{Spec}A\). It is well-known that the geometric genus (simply called the _genus_) is a birational invariant of \(A\). Part (1) of the following lemma is due to Sandor Kovacs. We thank him for allowing us to include his result in our paper.
**Lemma 5.5**.: _Let \(B\) be an affine domain of GK-dimension 2._
1. _For each positive integer_ \(N\)_, there exist infinitely many integers_ \(g>N\) _such that there is a height one prime ideal_ \(\mathfrak{p}\) _of_ \(B\) _with_ \(\mathfrak{g}(B/\mathfrak{p})=g\)_._
2. _Suppose_ \(B\) _has a nonzero Poisson algebra structure. Let_ \(\mathbb{F}^{ind}\) _be the induced filtration of_ \(B_{\mathfrak{p}}\) _defined by_ \[F_{i}:=\begin{cases}B_{\mathfrak{p}}&i\leq 0,\\ \mathfrak{p}^{i}B_{\mathfrak{p}}&i\geq 0.\end{cases}\] _Then_ \(\mathbb{F}^{ind}\) _is a faithful_ \(1\)_-filtration for infinitely many_ \(\mathfrak{p}\) _given in part_ (1)_._
Proof.: (1) We prove it for any affine domain of GK-dimension \(d\geq 2\) with the condition: \(\mathfrak{p}\) is a height \(d-1\) prime ideal of \(B\).
First, we prove the statement for the polynomial ring \(\Bbbk[x_{1},x_{2}]\) in place of \(B\). Indeed, choosing a generic irreducible polynomial \(f\in\Bbbk[x_{1},x_{2}]\) of degree \(>N\) provides a quotient \(\Bbbk[x_{1},x_{2}]/(f)\) with genus at least \(N\). By Bertini's theorem we may even assume that \(\Bbbk[x_{1},x_{2}]/(f)\) is smooth.
This easily implies the statement also for the polynomial ring \(A:=\Bbbk[x_{1},\ldots,x_{d}]\) in place of \(B\). Indeed, take an \(f\in\Bbbk[x_{1},x_{2}]\) as above and if \(d>2\), then take \(\mathfrak{p}:=(f,x_{3},\ldots,x_{d})\).
Finally, we will prove this for an arbitrary \(B\). By the Noether normalization theorem, there exists an embedding of the polynomial ring as above \(A\hookrightarrow B\) such that \(B\) is integral over \(A\). Choose a prime ideal \(\mathfrak{p}\subset A\) satisfying the desired condition. As the embedding is an integral extension, there exists a prime ideal \(\mathfrak{q}\subset B\) that lies over \(\mathfrak{p}\), i.e., \(A\cap\mathfrak{q}=\mathfrak{p}\), and such that \(\operatorname{ht}\mathfrak{q}=\operatorname{ht}\mathfrak{p}=d-1\). It follows that there is an induced injective homomorphism \(A/\mathfrak{p}\hookrightarrow B/\mathfrak{q}\). (Consider the composite homomorphism \(A\to B\to B/\mathfrak{q}\). The kernel of this homomorphism is \(A\cap\mathfrak{q}=\mathfrak{p}\).) This injective homomorphism corresponds to a morphism of curves \(\operatorname{Spec}(B/\mathfrak{q})\to\operatorname{Spec}(A/\mathfrak{p})\). By Hurwitz's theorem, the genus of the former is at least as large as the genus of the latter. This proves the statement.
(2) We claim \(B\) has only finitely many Poisson prime ideals of height \(1\). Let \(S\) be the set of all Poisson prime ideals of height \(1\). Consider the Poisson ideal \(I=\cap_{\mathfrak{p}\in S}\mathfrak{p}\). If \(I\neq 0\), there are only finitely many primes minimal over \(I\), each of which is Poisson prime of height one by Lemma 1.7. This implies that \(S\) is finite. So it remains to show that \(I\neq 0\). Suppose \(I=0\). Let \(\mathfrak{p}\in S\). Since \(B/\mathfrak{p}\) is an affine Poisson domain of GK-dimension \(1\), Lemma 1.6(1) implies that \(B/\mathfrak{p}\) has zero Poisson bracket. So \(\{B,B\}\subseteq\mathfrak{p}\) and \(\{B,B\}\subseteq I=0\). This contradicts the fact that \(B\) has a nonzero Poisson bracket. So, our claim is proven.
Therefore, after removing those finitely many \(\mathfrak{p}\), we can assume that every \(\mathfrak{p}\) is not a Poisson ideal. By construction, \(F_{1}(B)=B\cap F_{1}(B_{\mathfrak{p}})=\mathfrak{p}\). Since \(\mathfrak{p}\) is not a Poisson ideal, \(\{B,\mathfrak{p}\}\not\subseteq\mathfrak{p}\). Therefore \(\{F_{0}(B),F_{1}(B)\}\not\subseteq F_{1}(B)\). This means that \(\mathbb{F}^{ind}\) is not classical. Since \(\operatorname{GKdim}(B/\mathfrak{p})=1\), the residue field of \(\nu\) has GK-dimension \(1\), which implies that \(\nu\) is nondegenerate. It is clear that the image of \(\nu\) is \(\mathbb{Z}\cup\{\infty\}\). So \(\nu\) is a faithful \(1\)-valuation.
As a consequence, we have
**Proposition 5.6**.: _Let \(B\) be an affine Poisson domain of GK-dimension 2 with nonzero Poisson bracket and let \(K=Q(B)\). Then \(\mathbf{w}_{1}(K)=\mathbf{d}_{1}(K)=\infty\)._
Proof.: Without loss of generality, we may assume \(B\) is normal after localization and \(K=Q(A[t^{\pm 1}])\) for some affine domain \(A\) of GK-dimension \(1\). Let \(N\) be an integer \(>\mathfrak{g}(A)\). Applying Lemma 5.5(2) to \(B\), there are infinitely many \((g,\mathfrak{p})\) such that (i) \(g\geq N\), (ii) \(\mathfrak{g}(B/\mathfrak{p})=g\), and (iii) \(\mathbb{F}^{ind}\) in Lemma 5.5(2) is a faithful \(1\)-filtration. As a consequence, \(\operatorname{gr}_{\mathbb{F}^{ind}}B_{\mathfrak{p}}\cong(B_{\mathfrak{p}}/\mathfrak{p}B_{\mathfrak{p}})[t]\cong Q(B/\mathfrak{p})[t]\). So, \(Q(\operatorname{gr}_{\mathbb{F}^{ind}}(B))=Q((B/\mathfrak{p})[t^{\pm 1}])\not\cong Q(A[t^{\pm 1}])\) by the cancellation theorem [De, Theorem 2]. Since there are infinitely many \(g\), there are infinitely many pairwise non-isomorphic Poisson fields \(Q((B/\mathfrak{p})[t^{\pm 1}])\) with nonzero Poisson bracket. This implies that there are infinitely many faithful \(1\)-valuations \(\nu\) on \(K\) such that \(Q(\operatorname{gr}_{\nu}(K))=Q(\operatorname{gr}_{\nu}(B))=Q((B/\mathfrak{p})[t^{\pm 1}])\) are non-isomorphic. This means that \(\mathbf{w}_{1}(K)=\infty\).
Next we prove that \(\mathbf{d}_{1}(K)=\infty\). Let \(A_{0}:=A\) with genus \(N_{0}\) and let \(B_{0}=B\). By Lemma 5.5(2), there is a height one prime, but non-Poisson, ideal \(\mathfrak{p}_{0}\) of \(B_{0}\) such that the genus of \(A_{1}:=B_{0}/\mathfrak{p}_{0}\) is at least one larger than \(N_{0}\). Let \(N_{1}\) be the genus of \(A_{1}\). So \(N_{1}>N_{0}\). By induction, for \(i\geq 1\), let \(B_{i}=\operatorname{gr}_{\nu_{i-1}}B_{i-1}\) where \(\nu_{i-1}\) is the faithful \(1\)-valuation on \(Q(B_{i-1})=Q(A_{i-1}[t^{\pm 1}])\) associated with \(\mathfrak{p}_{i-1}\). We may still assume \(B_{i}\) is normal after localization. We can define \(A_{i+1}:=B_{i}/\mathfrak{p}_{i}\) such that the genus of \(A_{i+1}\) is at least one larger than \(N_{i}\). Let \(N_{i+1}\) be the genus of \(A_{i+1}\). Then \(N_{i+1}>N_{i}\) by definition. By construction, we obtain a sequence of arrows \(Q(A_{i}[t^{\pm 1}])\to_{\nu_{i}}Q(A_{i+1}[t^{\pm 1}])\) with \(A_{i}\not\cong A_{j}\) for all \(i\neq j\). By [De, Theorem 2], \(Q(A_{i}[t^{\pm 1}])\not\cong Q(A_{j}[t^{\pm 1}])\) for all \(i\neq j\). Thus \(\mathbf{d}_{1}(K)=\infty\).
Next, we provide a structure theorem for certain \(2\)-valuations of some Poisson fields. Let \(\operatorname{Reg}A\) denote the regular locus of \(\operatorname{Spec}A\). If \(A\) is affine, then \(\operatorname{Reg}A\) is an open subvariety of \(\operatorname{Spec}A\). If \(A\) is a Poisson algebra, let \(A_{cl}\) be the factor ring \(A/(\{A,A\})\).
**Theorem 5.7**.: _Let \(P\) be an affine Poisson domain of GK-dimension 2 and let \(K\) be the Poisson fraction field \(Q(P)\). Let \(\mathfrak{p}\) be a maximal ideal in \(\operatorname{Reg}P\)._
1. _There is a unique_ \(2\)_-valuation, denoted by_ \(\nu^{2,\mathfrak{p}}\)_, of_ \(K\) _such that_ \(F_{i}(P)=\mathfrak{p}^{i}\) _for all_ \(i\geq 0\)_._
2. \(\nu^{2,\mathfrak{p}}\) _is classical if and only if_ \(\mathfrak{p}\in\operatorname{Spec}P_{cl}\)_._
3. \(\nu^{2,\mathfrak{p}}\) _is either classical or Weyl, see Definition_ 3.3_(3)._
In fact, in parts (1) and (2) of the above theorem, we don't need to assume that \(\operatorname{GKdim}P=2\).
Proof of Theorem 5.7.: (1) Let \(\mathfrak{p}\) be a maximal ideal in \(\operatorname{Reg}P\). Then the localization \(B:=P_{\mathfrak{p}}\) is a regular local ring with maximal ideal, denoted by \(\mathfrak{m}\). Define a filtration \(\mathbb{F}^{2,\mathfrak{p}}:=\{F_{i}\mid i\in\mathbb{Z}\}\) of \(B\) by
\[F_{i}^{2,\mathfrak{p}}=\begin{cases}B&i<0,\\ \mathfrak{m}^{i}&i\geq 0.\end{cases}\]
Since \(B\) is regular local, \(\operatorname{gr}_{\mathbb{F}^{2,\mathfrak{p}}}B\) is isomorphic to a polynomial ring \(B_{0}[t_{1},\cdots,t_{d}]\) where \(B_{0}\) is the residue field and \(d=\operatorname{gldim}B\) (when \(\operatorname{GKdim}P=2\), then \(d=2\)). Hence \(\mathbb{F}^{2,\mathfrak{p}}\) is a good filtration. It is well-known in commutative algebra that \(\operatorname{rk}_{P/\mathfrak{p}}(\mathfrak{p}/\mathfrak{p}^{2})=\operatorname{rk}_{B/\mathfrak{m}}(\mathfrak{m}/\mathfrak{m}^{2})=\operatorname{gldim}B\). When restricted to \(P\), we have \(F_{0}^{2,\mathfrak{p}}(P)=P\) and, for \(i>0\), \(F_{i}^{2,\mathfrak{p}}(P)=P\cap\mathfrak{m}^{i}=\mathfrak{p}^{i}\) as required.
Next, we show that the valuation associated with \(\mathbb{F}^{2,\mathfrak{p}}\) is a \(2\)-valuation. Let \(S:=\{s_{1},\cdots,s_{d}\}\) be a regular system of parameters for the regular local ring \(B=P_{\mathfrak{p}}\). Then \(\mathfrak{m}=\sum_{\alpha=1}^{d}s_{\alpha}B\). Consequently, for all \(i\geq 0\), \(F_{i}^{2,\mathfrak{p}}=\sum_{\alpha_{1},\cdots,\alpha_{i}}s_{\alpha_{1}}\cdots s_{\alpha_{i}}B\).
Using the fact that \(\{s_{\alpha},s_{\beta}\}\in B\) and \(\{B,B\}\subseteq B\), we obtain that \(\{F_{i}^{2,\mathfrak{p}},F_{j}^{2,\mathfrak{p}}\}\subseteq F_{i+j-2}^{2, \mathfrak{p}}\) for all \(i,j\). Therefore \(\mathbb{F}^{2,\mathfrak{p}}\) is a good \(2\)-filtration. We use \(\nu^{2,\mathfrak{p}}\) for the associated \(2\)-valuation.
(2) Since \(\operatorname{gr}_{\mathbb{F}}P\) is generated in degree \(1\), we have
\[\nu^{2,\mathfrak{p}}\text{ is classical} \Leftrightarrow \{F_{1}(P),F_{1}(P)\}\subseteq F_{1+1-2+1}(P)\] \[\Leftrightarrow \{\mathfrak{p},\mathfrak{p}\}\subseteq\mathfrak{p}\Leftrightarrow \ \{\Bbbk+\mathfrak{p},\Bbbk+\mathfrak{p}\}\subseteq\mathfrak{p}\] \[\Leftrightarrow \{P,P\}\subseteq\mathfrak{p}\Leftrightarrow\ \mathfrak{p}\in\operatorname{Spec}P_{cl}.\]
(3) Since \(P\) has GK-dimension two, \(\operatorname{gr}_{\mathbb{F}}P\) is isomorphic to \(\Bbbk[t_{1},t_{2}]\) where \(t_{1}\) and \(t_{2}\) are in degree \(1\). If \(\nu^{2,\mathfrak{p}}\) is not classical, then \(\{t_{1},t_{2}\}\neq 0\). Since \(\operatorname{gr}_{\mathbb{F}}P\) is Poisson \(2\)-graded, \(\{t_{1},t_{2}\}\in\Bbbk^{\times}\). In this case \(\nu^{2,\mathfrak{p}}\) is Weyl.
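As a toy illustration of Theorem 5.7 (our own example, assuming sympy): take the Weyl structure \(\{x,y\}=1\) on \(P=\Bbbk[x,y]\) and the maximal ideal \(\mathfrak{p}=(x,y)\). Then \(\nu^{2,\mathfrak{p}}(f)\) is the lowest total degree of \(f\) at the origin, and since \(\{x,y\}=1\notin\mathfrak{p}\), part (2) says \(\nu^{2,\mathfrak{p}}\) is nonclassical, hence Weyl by part (3).

```python
import sympy as sp

x, y = sp.symbols('x y')

def bracket(f, g):
    # Weyl Poisson bracket on k[x, y]: {x, y} = 1
    return sp.expand(sp.diff(f, x)*sp.diff(g, y) - sp.diff(f, y)*sp.diff(g, x))

def nu2(f):
    # nu^{2,p} for the maximal ideal p = (x, y): lowest total degree at 0
    p = sp.Poly(sp.expand(f), x, y)
    return sp.oo if p.is_zero else min(sum(m) for m in p.monoms())

f, g = x**2*y - y**3, x*y + x**3
assert nu2(bracket(f, g)) >= nu2(f) + nu2(g) - 2  # good 2-filtration inequality
print(nu2(f), nu2(g), nu2(bracket(f, g)))         # 3 2 3
```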
By the proof of Theorem 5.7(1), there is a one-to-one correspondence between \(\operatorname{MaxReg}P\) and the set of Poisson \(2\)-valuations \(\{\nu^{2,\mathfrak{p}}\mid\mathfrak{p}\in\operatorname{MaxReg}P\}\). If we can add an appropriate algebraic structure to \(\mathcal{V}_{2}(K)\), then it makes sense to state that \(\operatorname{MaxReg}P\) parameterizes the set of Poisson \(2\)-valuations \(\{\nu^{2,\mathfrak{p}}\mid\mathfrak{p}\in\operatorname{MaxReg}P\}\).
By Lemma 5.4, if \(\mathfrak{p}\) has height one, then \(\nu^{2,\mathfrak{p}}\) is equivalent to a \(1\)-valuation. Therefore, it is a classical \(2\)-valuation. When \(\operatorname{GKdim}P=2\), then \(\nu^{2,\mathfrak{p}}\) is a faithful \(2\)-valuation only when \(\mathfrak{p}\) is a maximal ideal not in \(\operatorname{Spec}P_{cl}\). In this case, Theorem 5.7(3) implies that \(\nu^{2,\mathfrak{p}}\) is Weyl. Therefore, the Weyl Poisson field \(K_{Weyl}\) is crucial in studying \(2\)-valuations.
## 6. Embedding and isomorphism Problems
In this section, we will discuss the embedding problem of determining whether one Poisson field can be embedded into another and the isomorphism problem of deciding whether two given Poisson fields are isomorphic for certain families of Poisson fields.
First, let us make a simple observation about the embedding problem.
**Lemma 6.1**.: _Let \(\Omega\) and \(\Omega^{\prime}\) be two i.s. potentials of degrees at least 4. Let \(A_{\Omega}\), \(P_{\Omega^{\prime}}\), \(P_{\Omega-\xi}\), and \(P_{\Omega^{\prime}-\xi}\), for \(\xi\in\Bbbk^{\times}\), be defined as in Construction 0.6._
1. \(Q(A_{\Omega})\) _cannot be embedded into_ \(Q(P_{\Omega^{\prime}})\) _and_ \(Q(P_{\Omega^{\prime}-\xi})\)_._
2. \(Q(P_{\Omega-\xi})\) _cannot be embedded into_ \(Q(P_{\Omega^{\prime}})\)_._
Proof.: (1) This is clear since
\[\operatorname{GKdim}Q(A_{\Omega})=3>2=\operatorname{GKdim}Q(P_{\Omega^{\prime} })=\operatorname{GKdim}Q(P_{\Omega^{\prime}-\xi}).\]
(2) Suppose to the contrary that \(Q(P_{\Omega-\xi})\) is a Poisson subfield of \(Q(P_{\Omega^{\prime}})\). Since \(P_{\Omega^{\prime}}\) is a \(w^{\prime}\)-graded Poisson domain where \(w^{\prime}=\deg\Omega^{\prime}-3\), by Lemma 3.2(1), \(Q(P_{\Omega^{\prime}})\) has a nontrivial \((-w^{\prime})\)-valuation \(\nu\). By Lemma 1.4(2), \(\nu\) is also a nontrivial \(0\)-valuation on \(Q(P_{\Omega^{\prime}})\). By Lemma 1.4(1), \(\nu\) restricts to a nontrivial \(0\)-valuation on \(Q(P_{\Omega-\xi})\). This contradicts Lemma 4.8(3).
It is not immediately obvious to us whether \(Q(P_{\Omega})\) (resp. \(Q(P_{\Omega-\xi})\)) can be embedded into \(Q(A_{\Omega^{\prime}})\).
We will make more statements about embedding and isomorphism problems that help us to distinguish Poisson fields. Let \(m\) be an integer at least \(2\). Let \(T(m,p_{ij})\) denote a Poisson torus \(\Bbbk[x_{1}^{\pm 1},\cdots,x_{m}^{\pm 1}]\) with Poisson bracket determined by \(\{x_{i},x_{j}\}=p_{ij}x_{i}x_{j}\) for \(p_{ij}\in\Bbbk\) for all \(1\leq i<j\leq m\).
**Theorem 6.2**.: _Let \(T\) be the Poisson torus \(T(m,p_{ij})\) defined above and assume that \(T\) is Poisson simple. Let \(K\) be the Poisson fraction field of \(T\)._
1. _There is a one-to-one correspondence between the set of_ \(0\)_-valuations and the set_ \(\mathbb{Z}^{m}\)_. Further, for every_ \(0\)_-valuation_ \(\nu\)_,_ \(Q(\operatorname{gr}_{\nu}(K))\cong K\)_. As a consequence,_ \(\alpha_{0}(K)=\infty\)_,_ \(\mathbf{d}_{0}(K)=0\)_,_ \(\mathbf{w}_{0}(K)=1\)_, and_ \(K\) _is_ \(0\)_-quasi-Adams._
2. _There is a one-to-one correspondence between the set_ \(\mathcal{V}_{f0}(K)\) _and the set_ \(\{(v_{1},\cdots,v_{m})\in\mathbb{Z}^{m}\mid\gcd(v_{1},\cdots,v_{m})=1\}\)_._
3. _There is no_ \(w\)_-valuation for all_ \(w<0\)_._
Proof.: (1,2) First we prove that there is a one-to-one correspondence between the set of \(0\)-valuations and the set \(\mathbb{Z}^{m}\). Let \(\nu\) be a \(0\)-valuation on \(K\). Let \(v_{i}=\nu(x_{i})\). Then \((v_{1},\cdots,v_{m})\in\mathbb{Z}^{m}\). Conversely, given \((v_{1},\cdots,v_{m})\in\mathbb{Z}^{m}\) we claim that there is a unique \(0\)-valuation \(\nu\) such that \(\nu(x_{i})=v_{i}\) for all \(i\). Let \(\mathbb{F}^{ind}\) be the induced filtration determined by (E2.8.3) with the degree assignment \(\deg(x_{i})=v_{i}\) and \(\deg(x_{i}^{-1})=-v_{i}\) for all \(i\). By the definition of \(T\), it is easy to see that \(T\) is in fact \(\mathbb{Z}\)-graded with degree assignment \(\deg(x_{i}^{\pm 1})=\pm v_{i}\) for all \(i\). By Lemma 3.2(1), \(\mathbb{F}^{ind}\) agrees with the Adams\({}^{Id}\) filtration \(\mathbb{F}^{Id}\). As a consequence, \(\mathbb{F}^{ind}\) is a good filtration which provides a \(0\)-valuation \(\nu\) such that \(\nu(x_{i})=v_{i}\) for all \(i\). It remains to show that such \(\nu\) is unique.
Now let \(\nu\) be (another) \(0\)-valuation on \(K\) such that \(\nu(x_{i})=v_{i}\) for all \(i\) and let \(\mathbb{F}\) be the associated filtration. By the valuation axioms, \(\nu(x_{i}^{-1})=-v_{i}\) for all \(i\). By the argument in the previous paragraph and Lemma 2.9(2), we have a sequence of algebra homomorphisms
\[T\cong\operatorname{gr}_{\mathbb{F}^{Id}}T\cong\operatorname{gr}_{\mathbb{F }^{ind}}T\to I\subseteq\operatorname{gr}_{\mathbb{F}}T.\]
It is clear that \(T\) is Poisson \(0\)-graded. By Lemma 2.12, \(\phi^{ind}:\operatorname{gr}_{\mathbb{F}^{ind}}T\to\operatorname{gr}_{\mathbb{F}}T\) is a Poisson \(0\)-graded algebra homomorphism. Since \(T\) is Poisson simple, \(\phi^{ind}\) is injective. Thus \(\operatorname{GKdim}I\geq\operatorname{GKdim}T=m\). By Lemma 2.11(1), \(\mathbb{F}\) agrees with \(\mathbb{F}^{ind}(=\mathbb{F}^{Id})\). Therefore \(\nu\) agrees with \(\nu^{Id}\). This proves the uniqueness.
Note that \(\nu^{Id}\) is a faithful \(0\)-valuation if and only if \(\gcd\{\nu(x_{1}),\cdots,\nu(x_{m})\}=1\). Assertion (2) follows. It is clear that there are infinitely many \((c_{1},\cdots,c_{m})\in\mathbb{Z}^{m}\) such that \(\gcd(c_{1},\cdots,c_{m})=1\). Hence \(\alpha_{0}(K)=\infty\). By the above proof, \(\operatorname{gr}_{\mathbb{F}}T\cong T\) for any \(0\)-valuation \(\nu\). By Lemma 2.7(2), \(Q(\operatorname{gr}_{\mathbb{F}}K)\cong K\). Or equivalently, \(K\to_{\nu}K\) for all \(0\)-valuations \(\nu\). By definition, we have \(\mathbf{d}_{0}(K)=0\), \(\mathbf{w}_{0}(K)=1\), and \(K\) is \(0\)-quasi-Adams.
(3) By the proof of part (1,2), every \(0\)-valuation is nonclassical. The assertion follows from Lemma 1.4(2).
**Example 6.3**.: For every \(q\in\Bbbk^{\times}\), the \(q\)-skew Poisson field, denoted by \(K_{q}\), is defined to be \(Q(T(2,p_{12}=q))\). Similar to Example 2.10, we can write
\[K_{q}=\Bbbk(x,y\mid\{x,y\}=qxy).\]
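For instance (a sympy sketch with our own helper names), the bracket of \(K_{q}\) acts on monomials by \(\{x^{a}y^{b},x^{c}y^{d}\}=(ad-bc)q\,x^{a+c}y^{b+d}\), so any weight vector \((v_{1},v_{2})\in\mathbb{Z}^{2}\) satisfies the \(0\)-valuation inequality \(\nu(\{f,g\})\geq\nu(f)+\nu(g)\), in line with Theorem 6.2(1).

```python
import sympy as sp

x, y, q = sp.symbols('x y q')
v = (3, 5)  # an arbitrary weight vector in Z^2

def bracket(f, g):
    # the bracket of K_q: {x, y} = q x y
    return sp.expand(q*x*y*(sp.diff(f, x)*sp.diff(g, y)
                            - sp.diff(f, y)*sp.diff(g, x)))

def nu(f):
    # weighted order: minimum of v1*a + v2*b over monomials x^a y^b of f
    p = sp.Poly(sp.expand(f), x, y)
    return sp.oo if p.is_zero else min(v[0]*a + v[1]*b for a, b in p.monoms())

f, g = x**2*y + x**5, x*y**3 + y
assert nu(bracket(f, g)) >= nu(f) + nu(g)  # the 0-valuation inequality
```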
One of the key results regarding the isomorphism problem is presented here.
**Corollary 6.4**.: _The following Poisson fields are nonisomorphic._
1. _The_ \(q\)_-skew Poisson field_ \(K_{q}\)_;_
2. _The Weyl Poisson field_ \(K_{Weyl}\)_;_
3. \(Q_{1}:=Q(P_{\Omega})\) _where_ \(\Omega=x^{3}+y^{3}+z^{3}+\lambda xyz\) _with_ \(\lambda^{3}\neq-3^{3}\)_;_
4. \(Q_{2}:=Q(P_{\Omega-1})\) _where_ \(\Omega=x^{3}+y^{3}+z^{3}+\lambda xyz\) _with_ \(\lambda^{3}\neq-3^{3}\)_;_
5. \(Q(P_{\Omega})\) _where_ \(\Omega\) _is an i.s. potential of degree_ \(\geq 4\)_;_
6. \(Q(P_{\Omega-\xi})\) _where_ \(\xi\in\Bbbk^{\times}\) _and_ \(\Omega\) _is an i.s. potential of degree_ \(\geq 4\)_._
Proof.: We have
1. \(\alpha_{-1}(K_{q})=0\) [Theorem 6.2(3)] and \(\alpha_{0}(K_{q})=\infty\) [Theorem 6.2(1)].
2. \(\alpha_{-1}(K_{Weyl})\neq 0\) [Example 2.10].
3. \(\alpha_{-1}(Q_{1})=0\) and \(\alpha_{0}(Q_{1})=2\) [Theorem 3.8(1)].
4. \(\alpha_{-1}(Q_{2})=0\) and \(\alpha_{0}(Q_{2})=1\) [Theorem 3.11(1)].
Hence, these four Poisson fields are pairwise nonisomorphic.
In cases (1-4), there is a nontrivial \(0\)-valuation. By Lemma 4.8(3), (6) is not isomorphic to (1-4). By Lemma 6.1(2), (5) and (6) are nonisomorphic. Finally, (5) is not isomorphic to (1-4) since the Poisson fields in (5) have no faithful \(0\)-valuation by Lemma 4.8(5), but each Poisson field in (1-4) has at least one faithful \(0\)-valuation.
In the next section, we will continue to work on the isomorphism problem within classes (5) and (6). Note that Corollary 6.4 and its proof are closely related to [Ar, Proposition 5.3].
For the rest of this section, we provide further detailed information about Poisson fields listed in Corollary 6.4(1-4). Hopefully, these results are also helpful in understanding other aspects of Poisson fields in Corollary 6.4(1-4).
**Lemma 6.5**.: _Let \(K\) be a Poisson field containing \(K_{q}\) where \(q\in\Bbbk^{\times}\)._
1. _Every_ \(0\)_-valuation on_ \(K\) _is nonclassical. As a consequence,_ \(\mathcal{V}_{-1}(K)=\emptyset\) _and_ \(ncw(K)\geq w(K)\geq 0\)_._
2. \(K_{q}\) _cannot be embedded into_ \(Q(P_{\Omega})\) _for_ \(\deg\Omega\neq|x|+|y|+|z|\)_._
3. _For every_ \(0\)_-valuation_ \(\nu\) _on_ \(K\)_,_ \(Q(\operatorname{gr}_{\nu}(K))\) _contains_ \(K_{q}\) _as a Poisson subfield._
Proof.: (1) Let \(\nu\) be a \(0\)-valuation on \(K\) and \(\mu\) be the restriction of \(\nu\) on \(K_{q}\). Then \(\operatorname{gr}_{\mu}K_{q}\subseteq\operatorname{gr}_{\nu}K\) as Poisson algebras. By Theorem 6.2(1), \(Q(\operatorname{gr}_{\mu}K_{q})\cong K_{q}\). So \(\mu\) is nonclassical. As a consequence, \(\nu\) is nonclassical. Therefore \(\mathcal{V}_{0}(K)=\mathcal{V}_{nc0}(K)\). By Lemma 1.4(2), \(\mathcal{V}_{-1}(K)=\emptyset\). Hence \(ncw(K)\geq w(K)\geq 0\).
(2) This follows from part (1) and the fact that \(\mathcal{V}_{-1}(Q(P_{\Omega}))\neq\emptyset\) when \(\deg\Omega\neq|x|+|y|+|z|\).
(3) By the proof of part (1), \(K_{q}\cong Q(\operatorname{gr}_{\mu}K_{q})\subseteq Q(\operatorname{gr}_{\nu}K)\) as Poisson algebras.
Recall from Definition 3.3(3) that a valuation \(\nu\) is called _Weyl_ if \(Q(\operatorname{gr}_{\nu}(K))\) is isomorphic to the Weyl Poisson field \(K_{Weyl}\). Next we describe \(1\)-valuations and \(2\)-valuations on \(K_{q}\). The following lemma is clear.
**Lemma 6.6**.: _Let \(T\) be \(T(2,p_{12}=q)\), where \(q\in\Bbbk^{\times}\), generated by \(x:=x_{1}\) and \(y:=x_{2}\). Let \(\nu\) be any valuation on \(T\). Then, after a base change_
\[x\mapsto x^{a}y^{b},\quad y\mapsto x^{c}y^{d}\]
_for some \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in SL_{2}(\mathbb{Z})\), we may assume that \(\nu(x)=0\)._
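The effect of the base change can be verified directly (a quick sympy check of ours): for integers with \(ad-bc=1\), \(\{x^{a}y^{b},x^{c}y^{d}\}=(ad-bc)q\,x^{a+c}y^{b+d}\), so the new generators satisfy the same defining relation of \(T(2,p_{12}=q)\).

```python
import sympy as sp

x, y, q = sp.symbols('x y q', positive=True)

def bracket(f, g):
    # {x, y} = q x y on the Poisson torus T(2, q)
    return sp.simplify(q*x*y*(sp.diff(f, x)*sp.diff(g, y)
                              - sp.diff(f, y)*sp.diff(g, x)))

a, b, c, d = 2, 3, 1, 2          # det = 2*2 - 3*1 = 1, so in SL_2(Z)
u, w = x**a*y**b, x**c*y**d      # the base-changed generators
assert sp.simplify(bracket(u, w) - q*u*w) == 0  # {u, w} = q u w again
```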
By definition, \(K_{q}=Q(T(2,p_{12}=q))\).
**Proposition 6.7**.: _Assume that the base field \(\Bbbk\) is algebraically closed. Let \(K_{q}\) be the \(q\)-skew Poisson field where \(q\neq 0\)._
1. _Every nontrivial nonclassical_ \(1\)_-valuation on_ \(K_{q}\) _is either Weyl or_ \(\nu^{1,\mathfrak{p}}\) _for some height one prime_ \(\mathfrak{p}\in\operatorname{Spec}T\) _with_ \(T=T(2,p_{12}=q)\) _as in Theorem_ 4.3(1)_._
2. _Every faithful_ \(2\)_-valuation on_ \(K_{q}\) _is Weyl._
Proof.: Let \(T\) be \(T(2,p_{12}=q)\) and \(P\) be the Poisson polynomial ring \(\Bbbk[x,y]\) with \(\{x,y\}=qxy\). Then \(P\subseteq T\subseteq K_{q}\) and \(K_{q}=Q(P)=Q(T)\).
(1) Let \(\nu\) be a nontrivial nonclassical \(1\)-valuation on \(K_{q}\) with associated filtration \(\mathbb{F}\). We consider the following cases.
Case 1: \(F_{0}(T)=T\). If \(\operatorname{GKdim}F_{0}(T)/F_{1}(T)=0\), then \(T/F_{1}(T)=\Bbbk\). So \(u:=x-c_{1}\) and \(v:=y-c_{2}\) have positive \(\nu\)-values for some \(c_{1},c_{2}\in\Bbbk^{\times}\). Then
\[0=\nu(qxy)=\nu(\{x,y\})=\nu(\{u,v\})\geq\nu(u)+\nu(v)-1,\]
yielding a contradiction. If \(\operatorname{GKdim}F_{0}(T)/F_{1}(T)=1\), then \(\mathfrak{p}:=F_{1}(T)\) is a prime ideal of \(T\) of height one. Let \(B\) be the localization \(T_{\mathfrak{p}}\) which is a DVR. By Theorem 4.3(1), \(\nu=\nu^{1,\mathfrak{p}}\). If \(\operatorname{GKdim}F_{0}(T)/F_{1}(T)=2\), then \(F_{1}(T)=0\) and whence \(\nu\) is trivial. In this case, we only obtain valuations of the form \(\nu^{1,\mathfrak{p}}\).
Case 2: \(F_{0}(T)\neq T\). By Lemma 6.6, we may assume that \(\nu(x)=0\), and consequently, \(a:=\nu(y)\neq 0\) (if \(a=0\), then \(F_{0}(T)=T\)). We may assume \(a<0\) (after replacing \(y\) by \(y^{-1}\) if necessary). Let \(I\) be the subalgebra of \(\operatorname{gr}_{\mathbb{F}}P\) generated by \(\overline{x}\) and \(\overline{y}\). If \(\operatorname{GKdim}I=2\), then by Lemma 2.11(1), \(\mathbb{F}=\mathbb{F}^{ind}\). In this case, \(\nu\) is classical by Lemma 2.9(3). If \(\operatorname{GKdim}I=1\), then \(I\subseteq\Bbbk[s]\) with \(\deg s<0\) [Lemma 1.3(3)], whence \(u:=x-c\) has positive \(\nu\)-value for some \(c\in\Bbbk^{\times}\). Then
\[a=\nu(qxy)=\nu(\{x,y\})=\nu(\{u,y\})\geq\nu(u)+\nu(y)-1=\nu(u)+a-1.\]
This implies that \(\nu(u)=1\). In \((\operatorname{gr}_{\mathbb{F}}T)_{a}\), we have
(E6.7.1) \[\{\overline{u},\overline{y}\}=\overline{\{u,y\}}=\overline{\{x,y\}}=\overline {qxy}=cq\overline{y}\neq 0.\]
Let \(\widetilde{I}\) be the subalgebra of \(\operatorname{gr}_{\mathbb{F}}P\) generated by \(\overline{u},\overline{y}\) (note that \(P\) is the subalgebra of \(K_{q}\) generated by \(u\) and \(y\)). Then, by Lemma 1.6(2), \(\operatorname{GKdim}\widetilde{I}=2\). Then, by Lemma 2.11(1), \(\mathbb{F}(P)=\mathbb{F}^{ind}(P)\). By (E6.7.1), \(Q(\operatorname{gr}_{\mathbb{F}^{ind}}(P))\) is \(K_{Weyl}\). Hence, \(\nu\) is Weyl. In this case, we only obtain Weyl valuations.
The assertion follows by combining the above two cases.
(2) Let \(\nu\) be a faithful \(2\)-valuation on \(K_{q}\) with associated filtration \(\mathbb{F}\). We consider the following cases.
Case 1: \(F_{0}(T)=T\). If \(\operatorname{GKdim}F_{0}(T)/F_{1}(T)=0\), then \(F_{0}(T)/F_{1}(T)=\Bbbk\). Thus \(u:=x-c_{1}\) and \(v:=y-c_{2}\) possess positive \(\nu\)-values for some \(c_{1},c_{2}\in\Bbbk^{\times}\). Then
\[0=\nu(qxy)=\nu(\{x,y\})=\nu(\{u,v\})\geq\nu(u)+\nu(v)-2.\]
This implies that \(\nu(u)=\nu(v)=1\). Further, \(\{\overline{u},\overline{v}\}=qc_{1}c_{2}\neq 0\) in \((\operatorname{gr}_{\nu}T)_{0}\). Let \(I\) be the subalgebra of \(\operatorname{gr}_{\mathbb{F}}P\) generated by \(\overline{u}\) and \(\overline{v}\). By Lemma 1.6(1), \(\operatorname{GKdim}I=2\). It follows from Lemma 2.11(1) that \(\mathbb{F}=\mathbb{F}^{ind}\), where \(\mathbb{F}^{ind}\) is the induced filtration of \(P\) determined by \(\deg(u)=\deg(v)=1\) (which is a good filtration). It is clear that \(Q(\operatorname{gr}_{\mathbb{F}^{ind}}P)\cong K_{Weyl}\). So \(\nu\) is Weyl. If \(\operatorname{GKdim}F_{0}(T)/F_{1}(T)=1\), then \(\mathfrak{p}:=F_{1}(T)\) is a prime ideal of \(T\) of height one. Let \(B\) be the localization \(T_{\mathfrak{p}}\). Then \(B\) is a DVR. By Theorem 4.3(1), \(\nu=\nu^{1,\mathfrak{p}}\). This is classical considered as a \(2\)-valuation. If \(\operatorname{GKdim}F_{0}(T)/F_{1}(T)=2\), then \(F_{1}(T)=0\), whence \(\nu\) is trivial.
Case 2: \(F_{0}(T)\neq T\). By Lemma 6.6, we may assume that \(\nu(x)=0\), and by the proof of part (1) Case 1, \(a:=\nu(y)<0\). Let \(I\) be the subalgebra of \(\operatorname{gr}_{\mathbb{F}}P\) generated
by \(\overline{x}\) and \(\overline{y}\). If \(\operatorname{GKdim}I=2\), then by Lemma 2.11(1), \(\mathbb{F}=\mathbb{F}^{ind}\). In this case, \(\nu\) is classical by Lemma 2.9(3), yielding a contradiction.
If \(\operatorname{GKdim}I=1\), then \(u:=x-c\) has positive \(\nu\)-value for some \(c\in\Bbbk^{\times}\). Then
\[a=\nu(qxy)=\nu(\{x,y\})=\nu(\{u,y\})\geq\nu(u)+\nu(y)-2=\nu(u)+a-2.\]
This implies that either \(\nu(u)=1\) or \(\nu(u)=2\). If \(\nu(u)=2\), then \(\{\overline{u},\overline{y}\}=qc\overline{y}\neq 0\) in \((\operatorname{gr}_{\mathbb{F}}P)_{a}\). An argument similar to Case 2 of the proof of part (1) shows that \(\nu\) is Weyl. Next, we consider the case when \(\nu(u)=1\). Let \(\widetilde{I}\) be the subalgebra generated by \(\overline{u}\) and \(\overline{y}\). If \(\operatorname{GKdim}\widetilde{I}=2\), similar to the argument at the beginning of the proof of Case 2, we have that \(\nu\) is classical. If \(\operatorname{GKdim}\widetilde{I}=1\), then \(\overline{y}=c^{\prime}\overline{u}^{a}\) for some \(c^{\prime}\in\Bbbk^{\times}\) [Lemma 1.3(2)]. Let \(v:=y-c^{\prime}u^{a}\). Then \(\nu(v)\geq\nu(y)\). Now
\[a=\nu(qxy)=\nu(\{x,y\})=\nu(\{u,y\})=\nu(\{u,v\})\geq\nu(u)+\nu(v)-2=\nu(v)-1.\]
This implies that \(\nu(v)=a+1\) and \(\{\overline{u},\overline{v}\}=qcc^{\prime}\overline{u}^{a}\neq 0\) in \((\operatorname{gr}_{\mathbb{F}}K_{q})_{a}\). By Lemma 1.6(1), the subalgebra generated by \(\overline{u}\) and \(\overline{v}\), denoted by \(I^{\prime}\), has GK-dimension 2. Let \(A\) be the subalgebra of \(K_{q}\) generated by \(u,u^{-1},v\). It is clear that \(Q(A)=K_{q}\). Let \(\mathbb{F}^{ind}\) be the induced filtration determined by \(\deg(u^{\pm 1})=\pm 1\) and \(\deg(v)=a+1\). By Lemma 2.11(1), \(\mathbb{F}=\mathbb{F}^{ind}\). In this case, \(\nu\) is Weyl since \(\{\overline{u},\overline{v}/\overline{u}^{a}\}=qcc^{\prime}\).
The assertion follows by combining the above two cases.
Proposition 6.7 provides useful information about the 1- and 2-valuations on \(K_{q}\). Such a detailed description can rarely be obtained for a general Poisson field; working out the complete set of 1- and 2-valuations is usually challenging.
The Sklyanin Poisson field \(Skly_{3}\) is defined to be \(Q(P_{\Omega-1})\) where \(\Omega=x^{3}+y^{3}+z^{3}+\lambda xyz\) with \(\lambda^{3}\neq-3^{3}\), as in Corollary 6.4.
**Corollary 6.8**.: _Let \(Skly_{3}\) be the Sklyanin Poisson field and \(K\) be a Poisson field of GK-dimension two._
1. _If_ \(\mathcal{V}_{-1}(K)\neq\emptyset\)_, then there is no Poisson algebra homomorphism from_ \(Skly_{3}\) _to_ \(K\)_._
2. _If_ \(K\) _is either the Weyl Poisson field or_ \(Q(P_{\Omega})\) _where_ \(|\Omega|\neq|x|+|y|+|z|\)_, then there is no Poisson algebra homomorphism from_ \(Skly_{3}\) _to_ \(K\)_._
Proof.: (1) Suppose there is an embedding \(Skly_{3}\hookrightarrow K\). Let \(\nu\) be an element in \(\mathcal{V}_{-1}(K)\) and let \(\mu\) be the restriction of \(\nu\) to \(Skly_{3}\). Then \(\mu\in\mathcal{V}_{-1}(Skly_{3})\), which contradicts Theorem 3.8(1).
(2) The assertion follows because \(\mathcal{V}_{-1}(K)\neq\emptyset\) for all Poisson fields \(K\) in part (2).
**Corollary 6.9**.: _Suppose \(\Omega\) is an i.s. potential of degree \(\geq 4\). Let \(Q\) be \(Q(P_{\Omega-\xi})\) where \(\xi\in\Bbbk^{\times}\) and \(K\) be a Poisson field of GK-dimension two._
1. _If_ \(\mathcal{V}_{0}(K)\neq\emptyset\)_, then there is no Poisson algebra homomorphism from_ \(Q\) _to_ \(K\)_._
2. _If_ \(K\) _is either_ \(K_{Weyl}\)_, or_ \(K_{q}\)_, or_ \(Q(P_{\Omega^{\prime}})\) _where_ \(\Omega^{\prime}\) _is any homogeneous potential, then there is no Poisson algebra homomorphism from_ \(Q\) _to_ \(K\)_._
Proof.: (1) Suppose there is an embedding \(Q\hookrightarrow K\). Let \(\nu\) be an element in \(\mathcal{V}_{0}(K)\) and let \(\mu\) be the restriction of \(\nu\) to \(Q\). By Lemma 1.4(1), \(\mu\) is nontrivial. This contradicts Theorem 4.8(3).
(2) In all cases, one has that \(\mathcal{V}_{0}(K)\neq\emptyset\). The assertion follows from part (1).
Without using valuations, it would probably not be easy to prove any of the non-embedding results above.
## 7. \(\epsilon\)-morphisms
To solve embedding and isomorphism problems within the class \(Q(P_{\Omega-\xi})\) (as well as the automorphism problem in the next section), it is convenient to consider \(\epsilon\)-morphisms, which we introduce in this section.
Let \(A\) be a Poisson algebra with a Poisson bracket \(\{-,-\}\) and let \(e\) be in \(\Bbbk^{\times}\). Then we can define a new Poisson structure \(\{-,-\}_{e}\) on \(A\) by
\[\{f,g\}_{e}:=e\{f,g\}\quad\text{for all $f,g\in A$}.\]
This new Poisson algebra is denoted by \(A(e)\).
**Definition 7.1**.: Let \(A\) and \(B\) be two Poisson algebras. A \(\Bbbk\)-linear map \(\phi:A\to B\) is called an \(e\)_-morphism_ if it is a Poisson algebra homomorphism from \(A\) to \(B(e)\), where \(e\in\Bbbk^{\times}\). If \(e\) is not specified, we say \(\phi\) is an \(\epsilon\)_-morphism_.
Similarly, one can define \(e\)-\((\epsilon)\)-versions of Poisson endomorphism, Poisson isomorphism, Poisson automorphism, etc.
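To illustrate the definition, let \(W=\Bbbk[x,y]\) be the Weyl Poisson polynomial ring (as in Example 2.10; for this illustration we assume the standard bracket \(\{x,y\}=1\)) and let \(\phi(x)=\lambda x\), \(\phi(y)=y\) for some \(\lambda\in\Bbbk^{\times}\). Then

\[\phi(\{x,y\})=\phi(1)=1\quad\text{and}\quad\{\phi(x),\phi(y)\}_{e}=e\lambda\{x,y\}=e\lambda,\]

so \(\phi\) is an \(e\)-automorphism exactly when \(e=\lambda^{-1}\). In particular, every such scaling is an \(\epsilon\)-automorphism, while it is a Poisson automorphism only when \(\lambda=1\).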
The following lemma is easy.
**Lemma 7.2**.: _Let \(A\) be a Poisson algebra and \(e\in\Bbbk^{\times}\). The following hold._
1. \(\nu\) _is a_ \(w\)_-valuation on_ \(A\) _if and only if it is a_ \(w\)_-valuation on_ \(A(e)\)_._
2. \(\operatorname{Aut}_{Poi}(A)=\operatorname{Aut}_{Poi}(A(e))\)_._
By the above lemma, we can replace Poisson homomorphisms by \(\epsilon\)-morphisms where \(w\)-valuations are concerned.
**Definition 7.3**.: Let \(A\) be a Poisson algebra with a nontrivial Poisson bracket. If \(\phi\) is an \(\epsilon\)-automorphism of \(A\), the _Poisson determinant_ of \(\phi\) is defined to be the scalar \(e\) such that \(\phi\) is an \(e\)-automorphism of \(A\). We write \(\operatorname{Pdet}(\phi)=e\) in this case.
The homological determinant of a graded algebra automorphism \(\sigma\) of an Artin-Schelter regular algebra \(A\) was introduced in [JZ]. In the study of Artin-Schelter regular algebras and superpotentials, Smith and Mori gave a nice interpretation of the homological determinant in terms of the following formula [MS, Theorem 1.2]
\[\sigma^{\otimes l}(w)=\operatorname{hdet}(\sigma)w\]
where \(w\in(A_{1})^{\otimes l}\) is the superpotential associated with an \(m\)-Koszul Artin-Schelter regular algebra \(A=\bigoplus_{i\geq 0}A_{i}\) of AS-index \(l\). Motivated by this, we propose the following definition.
**Definition 7.4**.: Let \(A_{\Omega}=\Bbbk[x,y,z]\) be the Poisson polynomial algebra given in Construction 0.6. Suppose that \(\Omega\) is an i.s. potential. Let \(\phi\) be an \(\epsilon\)-automorphism of \(A_{\Omega}\). The _homological determinant_ of \(\phi\) is defined to be the scalar \(c\in\Bbbk^{\times}\) such that \(\phi(\Omega)=c\,\Omega\).
For three polynomials \(f,g,h\) in \(\Bbbk[x,y,z]\) (with given graded generators \((x,y,z)\)), the Jacobian determinant of \((f,g,h)\) is defined to be
(E7.4.1) \[J(f,g,h):=\det\begin{pmatrix}f_{x}&f_{y}&f_{z}\\ g_{x}&g_{y}&g_{z}\\ h_{x}&h_{y}&h_{z}\end{pmatrix}=:\det\left(\frac{\partial(f,g,h)}{\partial(x,y, z)}\right)\]
which was used in Construction 0.6.
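For instance, if \(\phi\) is linear, say \(\phi(x)=a_{1}x+a_{2}y+a_{3}z\), \(\phi(y)=b_{1}x+b_{2}y+b_{3}z\), and \(\phi(z)=c_{1}x+c_{2}y+c_{3}z\), then the Jacobian matrix is constant and

\[J(\phi)=\det\begin{pmatrix}a_{1}&a_{2}&a_{3}\\ b_{1}&b_{2}&b_{3}\\ c_{1}&c_{2}&c_{3}\end{pmatrix},\]

the determinant of the coefficient matrix; this is the case relevant to the graded automorphisms appearing below.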
As noted at the beginning of this section, we are trying to understand embedding and isomorphism problems within the classes in (5) and (6) in Corollary 6.4. For the rest of this section, we assume that \(\Omega\) and \(\Omega^{\prime}\) are two i.s. potentials of degrees at least five unless stated otherwise. Let \(\xi,\xi^{\prime}\in\Bbbk\), which could be zero.
**Lemma 7.5**.: _Let \(\phi\) be an \(\epsilon\)-morphism from \(Q(P_{\Omega-\xi})\) to \(Q(P_{\Omega^{\prime}-\xi^{\prime}})\)._
1. _Then_ \(\phi\) _restricts to an injective_ \(\epsilon\)_-morphism_ \(P_{\Omega-\xi}\to P_{\Omega^{\prime}-\xi^{\prime}}\)_._
2. \(\deg\Omega\leq\deg\Omega^{\prime}\)_._
Proof.: (1) By Theorem 4.9, \(\,{}^{1}\Gamma(Q(P_{\Omega-\xi}))=P_{\Omega-\xi}\) and \(\,{}^{1}\Gamma(Q(P_{\Omega^{\prime}-\xi^{\prime}}))=P_{\Omega^{\prime}-\xi^{ \prime}}\). The assertion follows from Lemma 4.2(1).
(2) Suppose to the contrary that \(n+3:=\deg\Omega>\deg\Omega^{\prime}=:n^{\prime}+3\). Let \(w=n^{\prime}<n\). By Lemma 4.2(1), Theorem 4.9, and Theorem 4.10,
\[P_{\Omega-\xi}=^{w}\Gamma(Q(P_{\Omega-\xi}))\subseteq^{w}\Gamma(Q(P_{\Omega^ {\prime}-\xi^{\prime}}))=\Bbbk,\]
yielding a contradiction.
Next, we consider the case when \(\deg\Omega=\deg\Omega^{\prime}\). Let \(\phi:P_{\Omega-\xi}\to P_{\Omega^{\prime}-\xi^{\prime}}\) be an algebra homomorphism. We say \(\phi\) is _linear_ if \(\phi(x),\phi(y),\phi(z)\in\Bbbk+\Bbbk x+\Bbbk y+\Bbbk z\). When \(\xi=\xi^{\prime}=0\), we say \(\phi\) is _graded_ if \(\phi(x),\phi(y),\phi(z)\in\Bbbk x+\Bbbk y+\Bbbk z\).
**Lemma 7.6**.: _Suppose \(\deg\Omega=\deg\Omega^{\prime}\geq 4\). Let \(\phi\) be an injective \(\epsilon\)-morphism from \(P_{\Omega-\xi}\to P_{\Omega^{\prime}-\xi^{\prime}}\)._
1. \(\phi\) _is linear. Furthermore,_ \(\phi(x),\phi(y),\phi(z)\in\Bbbk x+\Bbbk y+\Bbbk z\)_._
2. \(\phi\) _is bijective._
3. \(\xi=0\) _if and only if_ \(\xi^{\prime}=0\)_._
4. _If_ \(\xi=\xi^{\prime}=0\)_, then_ \(\phi\) _is graded._
Proof.: We use \(x,y,z\) for the generators of both \(P_{\Omega-\xi}\) and \(P_{\Omega^{\prime}-\xi^{\prime}}\). Let \(w=\deg\Omega-3=\deg\Omega^{\prime}-3\geq 1\). We employ the notations introduced in Lemma 4.8. For example, \(d\) and \(d^{\prime}\) denote the minimum value of a valuation on the generators \(x,y,z\) of \((P_{\Omega-\xi})_{1}\) and \((P_{\Omega^{\prime}-\xi^{\prime}})_{1}\), respectively.
(1,2) By Lemma 4.8(4), there is a unique \(w\)-valuation on \(P_{\Omega^{\prime}-\xi^{\prime}}\) with \(d^{\prime}=-1\), denoted by \(\nu^{\prime}\). In this case, \(\nu^{\prime}(x)=\nu^{\prime}(y)=\nu^{\prime}(z)=-1\) when applied to \(x,y,z\in Q(P_{\Omega^{\prime}-\xi^{\prime}})\) and \(F_{0}^{\nu^{\prime}}(P_{\Omega^{\prime}-\xi^{\prime}})=\Bbbk\).
Let \(\nu\) be the pullback valuation on \(P_{\Omega-\xi}\) via \(\phi\). Since \(P_{\Omega-\xi}\) is a subalgebra of \(P_{\Omega^{\prime}-\xi^{\prime}}\) via \(\phi\), we have that \(F_{0}^{\nu}(P_{\Omega-\xi})=\Bbbk\). So \(d<0\). As a consequence of Lemma 4.8(4), \(d=-1\), whence \(\nu\) is the unique valuation on \(P_{\Omega-\xi}\) whose filtration is either \(\mathbb{F}^{-Id}\) or \(\mathbb{F}^{c}\). As a consequence, \(\nu(x)=\nu(y)=\nu(z)=-1\). This means that \(\phi\) maps \(\Bbbk+\Bbbk x+\Bbbk y+\Bbbk z\) in \(P_{\Omega-\xi}\) to \(\Bbbk+\Bbbk x+\Bbbk y+\Bbbk z\) in \(P_{\Omega^{\prime}-\xi^{\prime}}\). Thus, \(\phi\) is surjective. So \(\phi\) is bijective and linear.
Now assume that
\[\phi(x) =x_{1}+a,\] \[\phi(y) =y_{1}+b,\] \[\phi(z) =z_{1}+c,\]
where \(x_{1},y_{1},z_{1}\in\Bbbk x+\Bbbk y+\Bbbk z\) and \(a,b,c\in\Bbbk\). We compute
\[0 =\phi(\Omega-\xi)=\Omega(\phi(x),\phi(y),\phi(z))-\xi\] \[=\Omega(x_{1}+a,y_{1}+b,z_{1}+c)-\xi\] \[=\Omega(a,b,c)-\xi+\Omega_{x}(a,b,c)x_{1}+\Omega_{y}(a,b,c)y_{1} +\Omega_{z}(a,b,c)z_{1}+hdt\in P_{\Omega^{\prime}-\xi^{\prime}}\]
where \(hdt\) is a linear combination of higher Adams degree terms. Hence \(\Omega_{x}(a,b,c)=\Omega_{y}(a,b,c)=\Omega_{z}(a,b,c)=0\). As \(\Omega\) has an isolated singularity at the origin, \(a=b=c=0\) as desired.
(3) Note that \(P_{\Omega-\xi}\) has finite global dimension if and only if \(\xi\neq 0\). Since \(\phi\) is an algebra isomorphism from \(P_{\Omega-\xi}\) to \(P_{\Omega^{\prime}-\xi^{\prime}}\), the assertion follows.
(4) When \(\xi=\xi^{\prime}=0\), then \(P_{\Omega-\xi}\) and \(P_{\Omega^{\prime}-\xi^{\prime}}\) are connected Adams graded. The assertion follows from part (1).
Combining Lemma 7.5(1) with Lemma 7.6, we deduce
**Corollary 7.7**.: _Suppose \(\deg\Omega=\deg\Omega^{\prime}\geq 5\). Let \(\phi\) be an \(\epsilon\)-morphism from \(Q(P_{\Omega-\xi})\) to \(Q(P_{\Omega^{\prime}-\xi^{\prime}})\). Let \(\phi\) also denote the restriction \(P_{\Omega-\xi}\to P_{\Omega^{\prime}-\xi^{\prime}}\) as in Lemma 7.5(1). Then (1)-(4) of Lemma 7.6 hold._
If \(\phi\) is an automorphism of the commutative algebra \(\Bbbk[x,y,z]\), we write \(J(\phi):=J(\phi(x),\phi(y),\phi(z))\), which is \(\det\left(\frac{\partial(\phi(x),\phi(y),\phi(z))}{\partial(x,y,z)}\right)\) by definition.
**Lemma 7.8**.: _Let \(\phi:A_{\Omega}\to A_{\Omega^{\prime}}\) be an algebra isomorphism without considering their Poisson structures._
1. _Then_ \(\phi\) _is an_ \(e\)_-isomorphism if and only if_ \(\phi(\Omega)=a\Omega^{\prime}\) _where_ \(a=eJ(\phi)\in\Bbbk^{\times}\)_._
2. _Then_ \(\phi\) _is a Poisson algebra isomorphism if and only if_ \(\phi(\Omega)=J(\phi)\,\Omega^{\prime}\)_._
3. _Then_ \(\phi\) _is a Poisson algebra automorphism of_ \(A_{\Omega}\) _if and only if_ \(\phi(\Omega)=J(\phi)\,\Omega\)_._
4. _If_ \(\phi\) _is a Poisson_ \(\epsilon\)_-automorphism of_ \(A_{\Omega}\)_, then_ \(\operatorname{Pdet}(\phi)=aJ(\phi)^{-1}\) _where_ \(\phi(\Omega)=a\Omega\) _for some_ \(a\in\Bbbk^{\times}\)_._
5. _If_ \(\phi\) _is a Poisson_ \(\epsilon\)_-automorphism of_ \(A_{\Omega}\)_, then the homological determinant of_ \(\phi\) _is_ \(\operatorname{Pdet}(\phi)J(\phi)\)_._
Proof.: (1) "\(\Rightarrow\)" Since the Poisson center of \(A_{\Omega}\) is \(\Bbbk[\Omega]\), \(\phi(\Omega)=a\Omega^{\prime}+b\) for some \(a\in\Bbbk^{\times}\) and \(b\in\Bbbk\). If \(b\neq 0\), then we obtain an \(e\)-isomorphism from \(P_{\Omega}\to P_{a\Omega^{\prime}+b}\), which contradicts Lemma 7.6(3). So \(b=0\).
Let \(\{-,-\}_{\Omega}\) (resp. \(\{-,-\}_{\Omega^{\prime}}\)) be the Poisson bracket of \(A_{\Omega}\) (resp. \(A_{\Omega^{\prime}}\)). Now let \(f,g\in A_{\Omega}\). We compute by (E0.6.1)
\[e\{\phi(f),\phi(g)\}_{\Omega^{\prime}}=\phi(\{f,g\}_{\Omega})=\phi \left(\det\frac{\partial(f,g,\Omega)}{\partial(x,y,z)}\right)\]
\[=\det\frac{\partial(\phi(f),\phi(g),\phi(\Omega))}{\partial(\phi(x),\phi(y), \phi(z))}\]
\[=\det\frac{\partial(\phi(f),\phi(g),\phi(\Omega))}{\partial(x,y,z)}J(\phi)^{-1}\]
\[=aJ(\phi)^{-1}\{\phi(f),\phi(g)\}_{\Omega^{\prime}},\]
which implies that \(e=aJ(\phi)^{-1}\). The assertion follows.
"\(\Leftarrow\)" Partially reverse the above proof.
(2,3,4,5) These are easy consequences of part (1).
Note that the isomorphism problem for the class \(\{A_{\Omega}\mid\Omega\text{ i.s. potentials}\}\) is solved by Lemma 7.8(2).
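As a quick illustration of Lemma 7.8(4,5), consider the scaling map \(\phi:(x,y,z)\mapsto(\alpha x,\alpha y,\alpha z)\) for \(\alpha\in\Bbbk^{\times}\), where \(\Omega\) is homogeneous of degree \(d\). Then

\[\phi(\Omega)=\alpha^{d}\Omega,\qquad J(\phi)=\alpha^{3},\qquad\operatorname{Pdet}(\phi)=\alpha^{d}J(\phi)^{-1}=\alpha^{d-3},\]

and the homological determinant of \(\phi\) is \(\operatorname{Pdet}(\phi)J(\phi)=\alpha^{d}\), in agreement with Definition 7.4.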
**Lemma 7.9**.: _Let \(\Omega\) and \(\Omega^{\prime}\) be i.s. potentials of degree at least \(5\). Let \(\phi:P_{\Omega-\xi}\to P_{\Omega^{\prime}-\xi^{\prime}}\) be an \(e\)-isomorphism for some \(\xi,\xi^{\prime}\in\Bbbk\). Then \(\phi\) lifts uniquely to an \(e\)-isomorphism from \(A_{\Omega}\) to \(A_{\Omega^{\prime}}\), still denoted by \(\phi\). Furthermore, the following hold._
1. _In this case,_ \(\phi(\Omega)=a\Omega^{\prime}\) _where_ \(a=eJ(\phi)\in\Bbbk^{\times}\) _and_ \(\xi=a\xi^{\prime}\) _at the level of_ \(\phi:A_{\Omega}\to A_{\Omega^{\prime}}\)_._
2. _If_ \(\phi\) _is a Poisson algebra isomorphism, then_ \(\phi(\Omega)=a\Omega^{\prime}\) _where_ \(a=J(\phi)\in\Bbbk^{\times}\) _and_ \(\xi=a\xi^{\prime}\) _at the level of_ \(\phi:A_{\Omega}\to A_{\Omega^{\prime}}\)_._
3. _If_ \(\phi\) _is a Poisson algebra automorphism of_ \(A_{\Omega}\)_, then_ \(\phi(\Omega)=a\Omega\) _where_ \(a=J(\phi)\in\Bbbk^{\times}\) _and_ \(\xi=a\xi\) _at the level of_ \(\phi:A_{\Omega}\to A_{\Omega}\)_._
Proof.: First, by applying Lemma 7.5(2) to both \(\phi\) and \(\phi^{-1}\) extended to the Poisson fraction fields, we have \(\deg\Omega=\deg\Omega^{\prime}\). Secondly, we claim that \(\phi\) lifts to a graded algebra isomorphism from \(A_{\Omega}\to A_{\Omega^{\prime}}\). By Lemma 7.6(1), \(\phi\) is a \(\Bbbk\)-linear automorphism of \(\Bbbk x+\Bbbk y+\Bbbk z\). So \(\phi\) lifts to a graded algebra isomorphism from \(A_{\Omega}\) to \(A_{\Omega^{\prime}}\), still denoted by \(\phi\). It remains to show that \(\phi\) preserves the Poisson structure (or \(e\)-morphism structure). This means that \(\phi\) maps a Poisson relation such as \(\{x,y\}=\Omega_{z}\) to
(E7.9.1) \[e\{\phi(x),\phi(y)\}=\phi(\Omega_{z}).\]
Note that (E7.9.1) holds in \(P_{\Omega^{\prime}-\xi^{\prime}}\) and has degree less than \(\deg\Omega\). Since each element in \((\Omega^{\prime}-\xi^{\prime})A_{\Omega^{\prime}}\) has degree at least \(\deg\Omega\), (E7.9.1) holds in \(A_{\Omega^{\prime}}\). Therefore, we have proved the claim.
(1) By Lemma 7.8(1) (or since \(\phi\) is graded), \(\phi(\Omega)=a\Omega^{\prime}\) where \(a=eJ(\phi)\). Since \(\phi\) is an algebra isomorphism, it maps \(\Omega-\xi\) to some scalar multiple of \(\Omega^{\prime}-\xi^{\prime}\). Thus \(\phi(\Omega-\xi)=a(\Omega^{\prime}-\xi^{\prime})\). Consequently, \(\xi=a\xi^{\prime}\). The proofs of (2,3) are similar.
To summarize, the isomorphism problem (resp. embedding problem) for the class
\[\{P_{\Omega}\mid\Omega\text{ i.s. potentials of degree }\geq 5\}\]
is solved by Lemma 7.9(2) (resp. by Lemmas 7.6(2) and 7.9(2)). The isomorphism problem (resp. embedding problem) for the class
\[\{Q(P_{\Omega})\mid\Omega\text{ i.s. potentials of degree }\geq 5\}\]
is solved by Corollary 7.7 and Lemma 7.9(2) (resp. by Corollary 7.7 and Lemmas 7.6(2) and 7.9(2)).
## 8. Applications
In the following subsections, we present more applications of Poisson valuations. First, we examine the automorphism problem.
### Automorphism problem
In this paper, the automorphism problem is to compute the automorphism group of Poisson fields/algebras.
Throughout this subsection, we consider a general i.s. potential of degree \(\geq 5\).
**Theorem 8.1**.: _Let \(\Omega\) be an i.s. potential of degree \(\geq 5\)._
1. _Let_ \(P_{\Omega}\) _be as defined in Construction_ 0.6_. Then_ \[\operatorname{Aut}_{Poi}(Q(P_{\Omega}))=\operatorname{Aut}_{Poi}(P_{\Omega}).\] _Furthermore, every automorphism of_ \(P_{\Omega}\) _is Adams graded._
2. _Let_ \(P_{\Omega-\xi}\) _be as defined in Construction_ 0.6 _where_ \(\xi\in\Bbbk^{\times}\)_. Then_ \[\operatorname{Aut}_{Poi}(Q(P_{\Omega-\xi}))=\operatorname{Aut}_{Poi}(P_{ \Omega-\xi}).\]
Proof.: (1) Assertions follow from Lemmas 7.5(1) and 7.6(2, 4).
(2) It follows from Lemmas 7.5(1) and 7.6(2).
**Theorem 8.2**.: _Let \(\Omega\) be an i.s. potential of degree \(\geq 5\). Then_
\[\operatorname{Aut}_{Poi}(Q(A_{\Omega}))=\operatorname{Aut}_{Poi}(A_{\Omega})= \operatorname{Aut}_{Poi}(P_{\Omega})\]
_and each automorphism of \(A_{\Omega}\) is Adams graded. Consequently, for any \(\xi\neq 0\), there is an exact sequence of automorphism groups_
\[1\to\operatorname{Aut}_{Poi}(Q(P_{\Omega-\xi}))\to\operatorname{Aut}_{Poi}(Q( A_{\Omega}))\xrightarrow{J(-)}GL_{1}(\Bbbk)\]
_where \(\operatorname{Aut}_{Poi}(Q(A_{\Omega}))\) is a finite subgroup of \(GL_{3}(\Bbbk)\) of order bounded above by \(42(\deg\Omega)(\deg\Omega-3)^{2}\)._
Proof.: It is clear that \(\operatorname{Aut}_{Poi}(A_{\Omega})\subseteq\operatorname{Aut}_{Poi}(Q(A_{ \Omega}))\). By Theorem 4.9, we have \({}^{1}\Gamma(Q(A_{\Omega}))=A_{\Omega}\), which implies that \(\operatorname{Aut}_{Poi}(Q(A_{\Omega}))\subseteq\operatorname{Aut}_{Poi}(A_{ \Omega})\) by restriction. Therefore \(\operatorname{Aut}_{Poi}(Q(A_{\Omega}))=\operatorname{Aut}_{Poi}(A_{\Omega})\).
Next we prove that \(\operatorname{Aut}_{Poi}(A_{\Omega})=\operatorname{Aut}_{Poi}(P_{\Omega})\) and every Poisson automorphism of \(A_{\Omega}\) is Adams graded. For any Poisson automorphism \(\phi\) of \(A_{\Omega}\), let \(\phi^{\prime}\) denote the induced Poisson automorphism of \(P_{\Omega}\) by Lemma 7.8(1). By Lemma 7.6(4), \(\phi^{\prime}\) preserves the grading and we can uniquely lift \(\phi^{\prime}\) to an Adams graded Poisson automorphism of \(A_{\Omega}\), denoted by \(\sigma\), by Lemma 7.9(3). Clearly \(\sigma^{\prime}=\phi^{\prime}\). Let \(\varphi=\phi\circ\sigma^{-1}\). Then \(\varphi^{\prime}=Id_{P_{\Omega}}\). Since \(\sigma\) preserves the Adams grading, it suffices to show that \(\varphi\) is the identity. Since \(\varphi^{\prime}\) is the identity, we have
\[\varphi(x) =x+\Omega q_{1},\] \[\varphi(y) =y+\Omega q_{2},\] \[\varphi(z) =z+\Omega q_{3}\]
for some \(q_{1},q_{2},q_{3}\in A_{\Omega}\). Since \(\Omega\) does not have a linear term, a computation will show that \(\varphi(\Omega)=\Omega+\Omega\alpha(q_{1},q_{2},q_{3})\) where \(\alpha(q_{1},q_{2},q_{3})\in(A_{\Omega})_{\geq 1}\). Since \(\varphi\) preserves the Poisson center \(\Bbbk[\Omega]\) of \(A_{\Omega}\) [UZ, Proposition 1], \(\varphi(\Omega)=\Omega+\Phi\) where \(\Phi\in\Omega\Bbbk[\Omega]\). Since \(\varphi\) is an algebra automorphism of \(\Bbbk[\Omega]\), \(\Phi=0\) and \(\varphi(\Omega)=\Omega\).
Let \(\mathbb{B}:=\{1,x,y,z\}\cup\{b_{s}\}\) be a fixed \(\Bbbk\)-linear basis of \(P_{\Omega}\) consisting of a set of monomials \(x^{s_{1}}y^{s_{2}}z^{s_{3}}\). For example, if \(\Omega=x^{n+3}+y^{n+3}+z^{n+3}\), then we can choose
\[\mathbb{B}=\{x^{s_{1}}y^{s_{2}}z^{s_{3}}\mid s_{i}\geq 0\text{ for }i=1,2\text{ and }0 \leq s_{3}\leq n+2\}.\]
We consider \(\mathbb{B}\) as a fixed subset of monomial elements in \(A_{\Omega}\) in a canonical way. By induction on the degree of elements, every element \(f\) in \(A_{\Omega}\) is of the form
\[f=1f^{1}(\Omega)+xf^{x}(\Omega)+yf^{y}(\Omega)+zf^{z}(\Omega)+\sum_{b_{s}}b_{s} f^{b_{s}}(\Omega)\]
where each \(f^{*}(\Omega)\) is in \(\Bbbk[\Omega]\). Reusing the letters, we write
\[\varphi(x)=1f^{1}(\Omega)+xf^{x}(\Omega)+yf^{y}(\Omega)+zf^{z}(\Omega)+\sum_{b_ {s}}b_{s}f^{b_{s}}(\Omega)\]
for some \(f^{*}(\Omega)\in\Bbbk[\Omega]\).
For each \(\xi\neq 0\), let \(\pi_{\xi}\) be the canonical quotient map from \(A_{\Omega}\to P_{\Omega-\xi}\). It is clear that the image of \(\mathbb{B}\) is a \(\Bbbk\)-linear basis of \(P_{\Omega-\xi}\). For simplicity, we continue to use \(b_{s}\) etc for \(\Bbbk\)-linear basis elements in \(P_{\Omega-\xi}\).
Let \(\varphi^{\prime}_{\xi}\) be the induced automorphism of \(P_{\Omega-\xi}\). Then \(\varphi^{\prime}_{\xi}\) is a Poisson algebra automorphism of \(P_{\Omega-\xi}\) and
\[\varphi^{\prime}_{\xi}(x)=1f^{1}(\xi)+xf^{x}(\xi)+yf^{y}(\xi)+zf^{z}(\xi)+ \sum_{b_{s}}b_{s}f^{b_{s}}(\xi).\]
By Lemma 7.6(1), \(\varphi^{\prime}_{\xi}\) is linear. Thus \(f^{b_{s}}(\xi)=0\) for all \(\xi\neq 0\). Hence \(f^{b_{s}}(\Omega)=0\); consequently, \(\varphi(x)=1f^{1}(\Omega)+xf^{x}(\Omega)+yf^{y}(\Omega)+zf^{z}(\Omega)\). Since \(\varphi(x)=x+\Omega q_{1}\), \(\varphi(x)=x+x\Omega w_{11}(\Omega)+y\Omega w_{12}(\Omega)+z\Omega w_{13}(\Omega)\). Similarly, one has \(\varphi(y)=y+x\Omega w_{21}(\Omega)+y\Omega w_{22}(\Omega)+z\Omega w_{23}(\Omega)\) and \(\varphi(z)=z+x\Omega w_{31}(\Omega)+y\Omega w_{32}(\Omega)+z\Omega w_{33}(\Omega)\) for some polynomials \(w_{ij}(t)\in\Bbbk[t]\). Now by Lemma 8.3 below, \(\varphi\) is the identity as required.
Finally, we show the exact sequence of automorphism groups and the order bound. By Lemmas 7.6(1) and 7.9(3) and Theorem 8.1(2), we have the following natural identifications
\[\operatorname{Aut}_{Poi}(Q(P_{\Omega-\xi})) =\operatorname{Aut}_{Poi}(P_{\Omega-\xi})\] \[=\{\phi\in\operatorname{Aut}_{gr.alg}(\Bbbk[x,y,z])\mid\phi( \Omega)=\Omega,\ J(\phi)=1\}\] \[\subset\{\phi\in\operatorname{Aut}_{gr.alg}(\Bbbk[x,y,z])\mid \phi(\Omega)=J(\phi)\,\Omega\}\] \[=\operatorname{Aut}_{Poi}(P_{\Omega})\] \[=\operatorname{Aut}_{Poi}(Q(A_{\Omega})).\]
Hence, we can view \(\operatorname{Aut}_{Poi}(Q(P_{\Omega-\xi}))\) as a normal subgroup of \(\operatorname{Aut}_{Poi}(Q(A_{\Omega}))\) consisting of those Poisson automorphisms whose Jacobian determinant is trivial. So, the exact sequence follows. Now, for the order bound, after replacing \(\Bbbk\) by its algebraic closure, we may assume that \(\Bbbk\) is algebraically closed. Since every Poisson automorphism of \(P_{\Omega}\) is graded, \(G:=\operatorname{Aut}_{Poi}(P_{\Omega})\) is a subgroup of \(GL_{3}(\Bbbk)\). It remains to show that \(|G|\leq 42d(d-3)^{2}\) where \(d:=\deg\Omega\). Let \(B\) be the graded commutative algebra \(P_{\Omega}\) which is \(A/(\Omega)\) and let \(X\) be the corresponding smooth curve \(\operatorname{Proj}B\). Then the genus \(g\) of \(X\) is \(\frac{1}{2}(d-1)(d-2)\) by the genus-degree formula. By Hurwitz's automorphism theorem [Hu], the order of the group \(\operatorname{Aut}(X)\) is bounded by \(84(g-1)\). There is a natural group homomorphism \(\pi:\operatorname{Aut}_{gr.alg}(B)\to\operatorname{Aut}(X)\) whose kernel consists of automorphisms of the form \(\eta_{\xi}:B\to B,b\mapsto\xi^{\deg b}b\) for some \(\xi\in\Bbbk^{\times}\). Thus, we have an exact sequence
\[1\to\ker\pi\cap G\to G\to\operatorname{Aut}(X).\]
One can check that \(\ker\pi\cap G=\{\eta_{\xi}\mid\xi^{d-3}=1\}\). Thus \(|\ker\pi\cap G|=d-3\). Now
\[|G|\leq|\ker\pi\cap G||\operatorname{Aut}(X)|\leq(d-3)84(g-1)=42d(d-3)^{2},\]

where the last equality uses \(g-1=\frac{1}{2}(d-1)(d-2)-1=\frac{1}{2}d(d-3)\).
**Lemma 8.3**.: _Suppose \(\Omega\) is an i.s. potential of degree \(n\geq 3\)._
1. _Let_ \(f_{1},f_{2},f_{3}\) _be elements in_ \(\Bbbk x+\Bbbk y+\Bbbk z\) _such that_ \(f_{1}\Omega_{x}+f_{2}\Omega_{y}+f_{3}\Omega_{z}=0\)_. Then_ \(f_{i}=0\) _for all_ \(i=1,2,3\)_._
2. _Let_ \(\varphi\) _be a Poisson automorphism of_ \(A_{\Omega}\) _such that_ \(\varphi(\Omega)=\Omega\) _and_ \[\varphi(x) =x+x\Omega w_{11}(\Omega)+y\Omega w_{12}(\Omega)+z\Omega w_{13}(\Omega)\] \[\varphi(y) =y+x\Omega w_{21}(\Omega)+y\Omega w_{22}(\Omega)+z\Omega w_{23}(\Omega)\] \[\varphi(z) =z+x\Omega w_{31}(\Omega)+y\Omega w_{32}(\Omega)+z\Omega w_{33}(\Omega)\] _for some polynomials_ \(w_{ij}(t)\)_. Then_ \(\varphi=Id\)_._
Proof.: (1) In this case \((f_{1},f_{2},f_{3})\) is in the kernel of the map \(\overrightarrow{\nabla}\Omega\cdot\) in the Koszul complex
\[0\to A[-3n+3]\xrightarrow{\overrightarrow{\nabla}\Omega}A[-2n+2]^{\oplus 3}\xrightarrow{\overrightarrow{\nabla}\Omega\times}A[1-n]^{\oplus 3}\xrightarrow{\overrightarrow{\nabla}\Omega\cdot}A\to A/(\Omega_{x},\Omega_{y},\Omega_{z})\to 0.\]
Since \(\Omega\) has an isolated singularity at the origin, the above complex is exact by [Pi, Proposition 3.5]. Since \(f_{i}\) has degree \(1\) and \(\deg\Omega\geq 3\), \((f_{1},f_{2},f_{3})\) is not in the image of \(\overrightarrow{\nabla}\Omega\times\) if it is nonzero. Therefore \(f_{i}=0\) for all \(i=1,2,3\).
(2) We need to prove that \(w_{ij}(t)=0\) for all \(i,j\). Suppose to the contrary that some \(w_{ij}\neq 0\). Then we can write
\[\varphi(x) =x+f_{1}\Omega^{s}+hdt\] \[\varphi(y) =y+f_{2}\Omega^{s}+hdt\] \[\varphi(z) =z+f_{3}\Omega^{s}+hdt\]
where \(f_{1},f_{2},f_{3}\in\Bbbk x+\Bbbk y+\Bbbk z\) are not all zero, \(s\geq 1\), and \(hdt\) stands for a linear combination of higher Adams degree terms. By Taylor expansion,
\[\Omega =\varphi(\Omega)\] \[=\Omega(x+f_{1}\Omega^{s}+hdt,y+f_{2}\Omega^{s}+hdt,z+f_{3}\Omega^ {s}+hdt)\] \[=\Omega+\Omega_{x}(f_{1}\Omega^{s})+\Omega_{y}(f_{2}\Omega^{s})+ \Omega_{z}(f_{3}\Omega^{s})+hdt\]
which implies that \(\Omega_{x}f_{1}+\Omega_{y}f_{2}+\Omega_{z}f_{3}=0\). By part (1), \(f_{i}=0\) for all \(i=1,2,3\), which is a contradiction.
It is worth pointing out that the automorphism group of the Poisson field \(Q(P_{\Omega})\) is closely related to the automorphism group of the projective curve \(X=\operatorname{Proj}(A/(\Omega))\), and is usually computable. To illustrate this fact, we consider the particular case where \(\Omega=x^{d}+y^{d}+z^{d}\) for some \(d\geq 5\), so that \(\Omega=0\) defines the Fermat curve.
Let
\[G(0,d):=\{(a,b,c)\in(\Bbbk^{\times})^{3}\mid ab=c^{d-1},bc=a^{d-1},ac=b^{d-1}\}\]
and
\[G(1,d):=\{(a,b,c)\in(\Bbbk^{\times})^{3}\mid ab=c^{d-1},bc=a^{d-1},ac=b^{d-1 },abc=1\}.\]
Suppose \(\Bbbk\) is algebraically closed. Denote by \(C_{n}\) the cyclic group of order \(n\). One can check directly that
\[G(0,d)=\{(uv,\frac{v^{d-2}}{u},v)\in(\Bbbk^{\times})^{3}\mid u^{d}=1,v^{d^{2 }}=v^{3d}\}\cong C_{d}\times C_{d(d-3)}\]
and
\[G(1,d)=\{(u,\frac{1}{uv},v)\in(\Bbbk^{\times})^{3}\mid u^{d}=v^{d}=1\}\cong C _{d}\times C_{d},\]
are finite groups.
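For instance, membership of the displayed triples in \(G(1,d)\) is a direct check: writing \(a=u\), \(b=(uv)^{-1}\), \(c=v\) with \(u^{d}=v^{d}=1\), one has

\[ab=v^{-1}=v^{d-1}=c^{d-1},\quad bc=u^{-1}=u^{d-1}=a^{d-1},\quad ac=uv=(uv)^{1-d}=b^{d-1},\quad abc=1,\]

where the equality \(uv=(uv)^{1-d}\) uses \((uv)^{d}=u^{d}v^{d}=1\); the parametrization of \(G(0,d)\) is verified in the same way.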
**Proposition 8.4**.: _Suppose \(\Bbbk\) is algebraically closed. Let \(\Omega=x^{d}+y^{d}+z^{d}\) where \(d\geq 5\). Let \(P_{\Omega}\) and \(P_{\Omega-\xi}\) be defined as in Construction 0.6 where \(\xi\in\Bbbk^{\times}\)._
1. _There is a short exact sequence of groups_ \[1\to C_{d}\times C_{d(d-3)}\to\operatorname{Aut}_{Poi}(P_{\Omega})\to S_{3} \to 1.\] _Moreover,_ \(d\) _is even if and only if_ \(\operatorname{Aut}_{Poi}(P_{\Omega})\cong S_{3}\ltimes(C_{d}\times C_{d(d-3)})\)_._
2. _If_ \(d\) _is odd, there is a short exact sequence of groups_ \[1\to C_{d}\times C_{d}\to\operatorname{Aut}_{Poi}(P_{\Omega-\xi})\to C_{3}\to 1\] _and_ \(\operatorname{Aut}_{Poi}(P_{\Omega-\xi})\cong C_{3}\ltimes(C_{d}\times C_{d})\)_._
3. _If_ \(d\) _is even, there is a short exact sequence of groups_ \[1\to C_{d}\times C_{d}\to\operatorname{Aut}_{Poi}(P_{\Omega-\xi})\to S_{3}\to 1\] _and_ \(\operatorname{Aut}_{Poi}(P_{\Omega-\xi})\cong S_{3}\ltimes(C_{d}\times C_{d})\)_._
Proof.: (1) Let \(P=P_{\Omega}\). By Lemma 7.6(4), every Poisson automorphism of \(P\) is graded and hence \(\operatorname{Aut}_{Poi}(P)\) is a subgroup of \(GL_{3}(\Bbbk)\). Let \(X=\operatorname{Proj}(A/(\Omega))\). In the proof of Theorem 8.2, we have an exact sequence of groups
\[1\to\mu_{d-3}\to\operatorname{Aut}_{Poi}(P)\xrightarrow{\pi}\operatorname{Aut }(X)\]
where \(\mu_{d-3}\) is the multiplicative subgroup of \(\Bbbk^{\times}\) consisting of all the \((d-3)\)th roots of unity. For the rest of the proof of part (1), we write \((x,y,z)\) as \((x_{1},x_{2},x_{3})\). It is well-known (e.g., [ODR, Tz]) that \(\operatorname{Aut}(X)=\langle\phi_{1},\phi_{2},\phi_{3},\phi_{4}\rangle\cong S _{3}\ltimes(C_{d}\times C_{d})\) such that
\[\phi_{1}([x_{1}:x_{2}:x_{3}]) =[x_{2}:x_{1}:x_{3}],\] \[\phi_{2}([x_{1}:x_{2}:x_{3}]) =[x_{3}:x_{1}:x_{2}],\] \[\phi_{3}([x_{1}:x_{2}:x_{3}]) =[\omega\,x_{1}:x_{2}:x_{3}],\] \[\phi_{4}([x_{1}:x_{2}:x_{3}]) =[x_{1}:\omega\,x_{2}:x_{3}]\]
where \(\omega\) is a primitive \(d\)th root of unity. Let \(\phi\) be a Poisson algebra automorphism of \(P\). Since \(\pi(\phi)\) can be written in terms of \(\phi_{i}\) for \(1\leq i\leq 4\), it is clear that \(\phi\) preserves \(\Bbbk x_{1}\cup\Bbbk x_{2}\cup\Bbbk x_{3}\). Hence there are \(\sigma\in S_{3}\) and \((a_{1},a_{2},a_{3})\in(\Bbbk^{\times})^{3}\) such that
(E8.4.1) \[\phi(x_{1})=a_{1}x_{\sigma(1)},\quad\phi(x_{2})=a_{2}x_{\sigma(2)},\quad\phi( x_{3})=a_{3}x_{\sigma(3)}.\]
Applying \(\phi\) to the Poisson bracket \(\{x_{i},x_{j}\}=dx_{k}^{d-1}\) where \((i,j,k)=(1,2,3)\), or \((2,3,1)\) or \((3,1,2)\), we obtain that
(E8.4.2) \[a_{1}a_{2}=\operatorname{sgn}(\sigma)a_{3}^{d-1},\quad a_{2}a_{3}= \operatorname{sgn}(\sigma)a_{1}^{d-1},\quad a_{3}a_{1}=\operatorname{sgn}( \sigma)a_{2}^{d-1}.\]
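To make the sign explicit, take \((i,j,k)=(1,2,3)\): one checks from (E0.6.1) that \(\{x_{\sigma(1)},x_{\sigma(2)}\}=\operatorname{sgn}(\sigma)\,dx_{\sigma(3)}^{d-1}\), so applying \(\phi\) to the relation \(\{x_{1},x_{2}\}=dx_{3}^{d-1}\) gives

\[a_{1}a_{2}\operatorname{sgn}(\sigma)\,dx_{\sigma(3)}^{d-1}=\{\phi(x_{1}),\phi(x_{2})\}=\phi(dx_{3}^{d-1})=da_{3}^{d-1}x_{\sigma(3)}^{d-1},\]

and comparing coefficients yields the first equation of (E8.4.2); the other two follow in the same way.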
If \(\phi\) preserves \(\Bbbk x_{1}\), \(\Bbbk x_{2}\) and \(\Bbbk x_{3}\) individually, then \(\sigma\) is the identity. In this case (E8.4.2) is equivalent to \((a_{1},a_{2},a_{3})\in G(0,d)\). By an elementary computation and the hypothesis on \(\Bbbk\), for every \(\sigma\in S_{3}\), (E8.4.2) has a solution \((a_{1},a_{2},a_{3})\) when \(d\geq 5\). Combining these facts, \(G(0,d)\) is a normal subgroup of \(\operatorname{Aut}_{Poi}(P)\) and \(\operatorname{Aut}_{Poi}(P)/G(0,d)\cong S_{3}\). Therefore, the main assertion in part (1) is proved.
Suppose \(d\) is even. For every \(\sigma\in S_{3}\), let \(\sigma^{\prime}\) be the automorphism of \(P\) defined by \(\sigma^{\prime}(x_{i})=\operatorname{sgn}(\sigma)x_{\sigma(i)}\). One can check by (E8.4.2) that \(\sigma^{\prime}\) is a Poisson algebra automorphism of \(P\). Moreover, the subgroup of \(\operatorname{Aut}_{Poi}(P)\) generated by \(\{\sigma^{\prime}\,|\,\sigma\in S_{3}\}\) is isomorphic to \(S_{3}\). Therefore \(\operatorname{Aut}_{Poi}(P_{\Omega})\cong S_{3}\ltimes G(0,d)\).
Conversely, suppose \(\operatorname{Aut}_{Poi}(P)\cong S_{3}\ltimes G(0,d)\). This means that \(S_{3}\) is a subgroup of \(\operatorname{Aut}_{Poi}(P)\). Let \(\sigma=(12)\) (\(\operatorname{sgn}(\sigma)=-1\)) and suppose \(\varphi\in\operatorname{Aut}_{Poi}(P)\) corresponds
to \(\sigma\). Then there are \(a_{1},a_{2},a_{3}\) such that \(\varphi(x_{1})=a_{1}x_{2}\) and \(\varphi(x_{2})=a_{2}x_{1}\) and \(\varphi(x_{3})=a_{3}x_{3}\). Since \(\varphi^{2}=Id\), we have \(a_{1}a_{2}=1=a_{3}^{2}\). If \(d\) is odd, an elementary computation shows that there is no \((a_{1},a_{2},a_{3})\) such that both \(a_{1}a_{2}=1=a_{3}^{2}\) and (E8.4.2) hold. This yields a contradiction. Therefore, \(d\) must be even.
(2) By Theorem 8.2, \(\operatorname{Aut}_{Poi}(P_{\Omega-\xi})\) is a normal subgroup of \(\operatorname{Aut}_{Poi}(P_{\Omega})\) consisting of those automorphisms whose Jacobian determinant is trivial. Hence, its proof is similar to the proof of part (1), except that we need to show that there is no Poisson algebra automorphism \(\varphi\) such that \(\varphi(x)=ay\) and \(\varphi(y)=bx\) and \(\varphi(z)=cz\) for some \(a,b,c\in\Bbbk^{\times}\). Suppose to the contrary that such an automorphism exists. Then we have
\[-ab=c^{d-1},\ -bc=a^{d-1},\ -ac=b^{d-1},\ abc=-1.\]
Since \(d\) is odd, the above system of equations has no solution: multiplying the first three equations gives \(-(abc)^{2}=(abc)^{d-1}\), and substituting \(abc=-1\) yields \(-1=(-1)^{d-1}=1\), a contradiction. Therefore, the quotient group \(\operatorname{Aut}_{Poi}(P_{\Omega-\xi})/G(1,d)\) is isomorphic to \(C_{3}\) (not \(S_{3}\)). The rest is a routine verification.
(3) The proof is similar to the proof of part (2), whence it is omitted.
It is interesting to note that if \(\Omega=x^{d}+y^{d}+z^{d}\) where \(d\geq 5\) is even and \(\xi\neq 0\), then \(\operatorname{Aut}_{Poi}(P_{\Omega-\xi})\cong\operatorname{Aut}(X)\) where \(X=\operatorname{Proj}(A/(\Omega))\). We are curious about the following question:
**Question 8.5**.: For which i.s. potentials \(\Omega\) does \(\operatorname{Aut}_{Poi}(P_{\Omega-\xi})\cong\operatorname{Aut}(X)\) hold, where \(X=\operatorname{Proj}(A/(\Omega))\)?
In the remaining part of this section, we will explore other applications of Poisson valuations. Our goal is to provide essential ideas on how to use valuations to solve problems related to Poisson algebras. Hence, the presentation of each topic will be very brief.
### Dixmier property
**Definition 8.6**.: Let \(P\) be a Poisson algebra. We say \(P\) satisfies the _Dixmier property_ if every injective Poisson algebra morphism \(f:P\to P\) is bijective.
This property is related to the Dixmier conjecture [Di], which states that every endomorphism of the \(n\)-th Weyl algebra over a field of characteristic zero is bijective. The Dixmier Conjecture is stably equivalent to the Jacobian Conjecture [BK, BCW, Ts1, Ts2, vdEKC]. By [AvdE], these two conjectures are also equivalent to the Poisson conjecture, which states that every Poisson endomorphism of the \(n\)th canonical Poisson algebra \(W^{\otimes n}\) over a field of characteristic zero is bijective (where \(W\) is the Weyl Poisson polynomial ring given in Example 2.10). The Poisson conjecture asserts that \(W^{\otimes n}\) satisfies the Dixmier property. Proving the Dixmier property for certain Poisson algebras is undeniably challenging, but it is an invaluable pursuit.
Our main result of this subsection is
**Theorem 8.7**.: _Let \(\Omega\) be an i.s. potential of degree \(\geq 5\). Let \(P_{\Omega}\) and \(P_{\Omega-\xi}\) be defined as in Construction 0.6._
1. _Both_ \(P_{\Omega}\) _and_ \(Q(P_{\Omega})\) _satisfy the Dixmier property._
2. _Both_ \(P_{\Omega-\xi}\) _and_ \(Q(P_{\Omega-\xi})\)_, for_ \(\xi\in\Bbbk^{\times}\)_, satisfy the Dixmier property._
Proof.: We only give the proof for part (1). The proof of part (2) is similar and is omitted.
Let \(P=P_{\Omega}\). By Lemma 7.6(2), \(P\) has the Dixmier property.
Let \(K=Q(P_{\Omega})\) and let \(f:K\to K\) be an injective Poisson algebra map. By Theorem 4.9, \(\,{}^{1}\Gamma(K)=P\). By Lemma 4.2(1), the restriction of \(f\) to \(P\), denoted by \(f\mid_{P}\), is an injective Poisson algebra map \(P:=\,^{1}\Gamma(K)\to\,^{1}\Gamma(K)=P\). By Lemma 7.6(2), \(f\mid_{P}\) is bijective. Since \(K=Q(P)\), \(f\) is bijective. Thus, \(K\) has the Dixmier property.
**Remark 8.8**.: It is known that the Weyl Poisson field and the skew Poisson field do not possess the Dixmier property [GZ]. Therefore, \(Q(P_{\Omega})\) and \(Q(P_{\Omega-\xi})\) are among the first few Poisson fields that are proved to have the Dixmier property.
**Example 8.9**.: Let \(\Omega=x^{3}+y^{3}+z^{3}+\lambda xyz\). Then \(x\mapsto\Omega x\), \(y\mapsto\Omega y\), \(z\mapsto\Omega z\) defines an injective Poisson algebra endomorphism of \(A_{\Omega}\) that is not bijective. So \(A_{\Omega}\) does not possess the Dixmier property.
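To verify that this map respects the Poisson bracket, note that \(\Omega\) is Poisson central in \(A_{\Omega}\) and each partial derivative \(\Omega_{x},\Omega_{y},\Omega_{z}\) is homogeneous of degree \(2\); for instance,

\[\{\Omega x,\Omega y\}=\Omega^{2}\{x,y\}=\Omega^{2}\Omega_{z}=\Omega_{z}(\Omega x,\Omega y,\Omega z),\]

so the defining relation \(\{x,y\}=\Omega_{z}\) is preserved, and similarly for the other two relations. The map is not surjective since, for example, \(x\) is not in the image.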
### Rigidity of grading
In this subsection, we prove the following. Recall that we say a \(\mathbb{Z}\)-graded algebra \(A=\bigoplus_{i\in\mathbb{Z}}A_{i}\) is _connected graded_ if \(A_{i}=0\) for all \(i>0\) and \(A_{0}=\Bbbk\).
**Theorem 8.10**.: _Let \(\Omega\) be an i.s. potential of degree \(\geq 4\). Then \(P_{\Omega}\) has a unique connected grading such that it is Poisson \((\deg\Omega-3)\)-graded._
Proof.: Let \(P\) denote \(P_{\Omega}\) and let \(n=\deg\Omega-3\). By Construction 0.6, there is an Adams grading with \(\deg(x)=\deg(y)=\deg(z)=1\) such that \(P\) is Poisson \((-n)\)-graded. To make \(P\)\(n\)-graded, we need to set that \(\deg(x)=\deg(y)=\deg(z)=-1\) by our Definition 2.1. The associated valuation is denoted by \(\nu_{old}\) with \(\nu_{old}(f)=-1\) for \(f=x,y,z\).
Suppose \(P\) has a new connected grading making it Poisson \(n\)-graded, and let \(\mu\) be the \(n\)-valuation associated with this new grading as defined by (E3.1.1). In this case \(\mu(f)<0\) for all \(f\in P\setminus\Bbbk\). By Lemma 4.8(4), \(\mu=\nu_{old}\) as given in the previous paragraph. As a consequence, \(\mu(x)=\mu(y)=\mu(z)=-1\).
If \(x,y,\) and \(z\) are homogeneous with respect to the new grading, then the two gradings coincide. It remains to show that \(x,y,z\) are homogeneous in this new grading. Write \(x=x_{0}+a\), \(y=y_{0}+b\), and \(z=z_{0}+c\) where \(x_{0},y_{0},z_{0}\) are homogeneous of new degree \(-1\) and \(a,b,c\in\Bbbk\). Since every linear combination of \(x,y,z\) has \(\mu\)-value \(-1\) (as \(\mu=\nu_{old}\)), \(x_{0},y_{0},z_{0}\) are linearly independent. By Taylor expansion, working in \(P_{\Omega}\),
\[0 =\Omega(x,y,z)\] \[=\Omega(x_{0},y_{0},z_{0})+a\Omega_{x}(x_{0},y_{0},z_{0})+b \Omega_{y}(x_{0},y_{0},z_{0})+c\Omega_{z}(x_{0},y_{0},z_{0})+hdt\]
where \(hdt\) is a linear combination of higher degree terms with respect to the new grading. Since \(\Omega_{x}(x_{0},y_{0},z_{0})\), \(\Omega_{y}(x_{0},y_{0},z_{0})\), \(\Omega_{z}(x_{0},y_{0},z_{0})\) are linearly independent (as \(\Omega\) has an isolated singularity at the origin), \(a=b=c=0\). So \(x,y,z\) are homogeneous as required.
### Rigidity of filtration
In this subsection, we prove the following.
**Theorem 8.11**.: _Let \(\Omega\) be an i.s. potential of degree \(\geq 4\) and \(\xi\neq 0\). Then \(P_{\Omega-\xi}\) has a unique filtration \(\mathbb{F}\) such that the associated graded ring \(\operatorname{gr}_{\mathbb{F}}(P_{\Omega-\xi})\) is a connected graded Poisson \((\deg\Omega-3)\)-graded domain._
Proof.: Let \(P\) be \(P_{\Omega-\xi}\) and \(n=\deg\Omega-3\). Let \(\mathbb{F}^{c}\) be the original filtration by the construction and let \(\nu^{c}\) be the associated valuation given in Lemma 3.2(3). In particular, \(\nu^{c}(f)=-1\) for \(f=x,y,z\). Since \(\operatorname{gr}_{\nu^{c}}(P)\cong P_{\Omega}\) which is Poisson \(n\)-graded, \(\nu^{c}\) is an \(n\)-valuation of \(P\).
Suppose \(P\) has another filtration, say \(\mathbb{F}\), such that \(\operatorname{gr}_{\mathbb{F}}P\) is a connected graded Poisson \(n\)-graded domain. Let \(\nu\) be the associated \(n\)-valuation of \(P\). In this case \(\nu(f)<0\) for some \(f\in P\). By Lemma 4.8(4), \(\nu=\nu^{c}\). By Lemma 2.6, \(\mathbb{F}=\mathbb{F}^{c}\) as required.
### Proof of statements in Table 1
For the Weyl Poisson field \(K_{Weyl}\), the valuations constructed in Example 2.10, \(\{\nu_{\xi}\mid\xi\in\Bbbk\}\), are distinct faithful \(0\)-valuations (when we choose \(w=0\)). If \(\Bbbk\) is uncountable, we have obtained uncountably many faithful \(0\)-valuations.
For the skew Poisson field \(K_{q}\), the statement follows from Theorem 6.2. For \(Q(P_{\Omega})\) where \(\Omega=x^{3}+y^{3}+z^{3}+\lambda xyz\) and \(\lambda^{3}\neq-3^{3}\), the statement follows from Theorem 3.8(1). For \(Q(P_{\Omega-1})\) where \(\Omega=x^{3}+y^{3}+z^{3}+\lambda xyz\) and \(\lambda^{3}\neq-3^{3}\), the statement follows from Theorem 3.11(1).
For \(\Omega\) of degree \(\geq 4\) with an isolated singularity, the statement follows from Lemma 4.8(3,5).
### Acknowledgments
The authors thank Ken Goodearl and Milen Yakimov for many valuable conversations and correspondences on the subject and thank Sandor Kovacs for the proof of Lemma 5.5(1). Wang was partially supported by Simons collaboration grant #688403 and Air Force Office of Scientific Research grant FA9550-22-1-0272. Zhang was partially supported by the US National Science Foundation (No. DMS-2001015 and DMS-2302087). Part of this research work was done during the first three authors' visits to the Department of Mathematics at the University of Washington in June 2022 and January 2023. They are grateful for the fourth author's invitation and wish to thank the University of Washington for its hospitality.
|
2309.03160 | ResFields: Residual Neural Fields for Spatiotemporal Signals | Neural fields, a category of neural networks trained to represent
high-frequency signals, have gained significant attention in recent years due
to their impressive performance in modeling complex 3D data, such as signed
distance (SDFs) or radiance fields (NeRFs), via a single multi-layer perceptron
(MLP). However, despite the power and simplicity of representing signals with
an MLP, these methods still face challenges when modeling large and complex
temporal signals due to the limited capacity of MLPs. In this paper, we propose
an effective approach to address this limitation by incorporating temporal
residual layers into neural fields, dubbed ResFields. It is a novel class of
networks specifically designed to effectively represent complex temporal
signals. We conduct a comprehensive analysis of the properties of ResFields and
propose a matrix factorization technique to reduce the number of trainable
parameters and enhance generalization capabilities. Importantly, our
formulation seamlessly integrates with existing MLP-based neural fields and
consistently improves results across various challenging tasks: 2D video
approximation, dynamic shape modeling via temporal SDFs, and dynamic NeRF
reconstruction. Lastly, we demonstrate the practical utility of ResFields by
showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD
cameras of a lightweight capture system. | Marko Mihajlovic, Sergey Prokudin, Marc Pollefeys, Siyu Tang | 2023-09-06T16:59:36Z | http://arxiv.org/abs/2309.03160v5 | # ResFields: Residual Neural Fields for Spatiotemporal Signals
###### Abstract
Neural fields, a category of neural networks trained to represent high-frequency signals, have gained significant attention in recent years due to their impressive performance in modeling complex 3D data, such as signed distance (SDFs) or radiance fields (NeRFs), via a single multi-layer perceptron (MLP). However, despite the power and simplicity of representing signals with an MLP, these methods still face challenges when modeling large and complex temporal signals due to the limited capacity of MLPs. In this paper, we propose an effective approach to address this limitation by incorporating temporal residual layers into neural fields, dubbed ResFields. It is a novel class of networks specifically designed to effectively represent complex temporal signals. We conduct a comprehensive analysis of the properties of ResFields and propose a matrix factorization technique to reduce the number of trainable parameters and enhance generalization capabilities. Importantly, our formulation seamlessly integrates with existing MLP-based neural fields and consistently improves results across various challenging tasks: 2D video approximation, dynamic shape modeling via temporal SDFs, and dynamic NeRF reconstruction. Lastly, we demonstrate the practical utility of ResFields by showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD cameras of a lightweight capture system.
## 1 Introduction
The multi-layer perceptron (MLP) is a common neural network architecture used for representing continuous spatiotemporal signals, known as neural fields. Its popularity stems from its capacity to encode continuous signals across arbitrary dimensions (Kim & Adali, 2003). Additionally, inherent implicit regularization (Goodfellow et al., 2016; Neyshabur et al., 2014) and spectral bias (Rahaman et al., 2019) equip MLPs with excellent interpolation capabilities. Due to these remarkable properties, MLPs have achieved widespread success in many applications such as image synthesis, animation, texture generation, and novel view synthesis (Tewari et al., 2022; Xie et al., 2022).
However, the spectral bias of MLPs (Rahaman et al., 2019), which refers to the tendency of neural networks to learn functions with low frequencies, presents a challenge when it comes to accurately representing complex real-world signals and capturing fine-grained details. Previous efforts have aimed to address the spectral bias by utilizing techniques like positional encoding (Vaswani et al., 2017; Mildenhall et al., 2020; Zhong et al., 2019; Muller et al., 2022) or special activation functions (Sitzmann et al., 2020; Fathony et al., 2020). However, even with these methods, representing fine-grained details remains a challenge, particularly when dealing with large spatiotemporal signals such as long videos or dynamic 3D scenes.
A straightforward way of increasing the capacity of MLPs is to increase the network complexity in terms of the total number of neurons. However, such an approach would make the inference and optimization slower and more GPU memory expensive, as time and memory complexity scales with respect to the total number of parameters. Another possibility is to meta-learn MLP weights (Sitzmann et al., 2020) and maintain specialized independent parameters, but this imposes slow training that does not scale to photo-realistic reconstructions (Tancik et al., 2021). By far the most popular approach for increasing modeling capacity is to partition the spatiotemporal field and fit
separate/local neural fields (Reiser et al., 2021; Muller et al., 2022; Chen et al., 2022). However, these approaches hinder global reasoning and generalization due to local gradient updates of grid structures (Peng et al., 2023).
The challenge that we aim to address is how to increase the model capacity in a way that is agnostic to the design choices of MLP neural fields. This includes architecture, input encoding, and activation functions. At the same time, we must maintain the implicit regularization property of neural networks and retain compatibility with existing techniques developed for reducing the spectral bias (Mildenhall et al., 2020; Sitzmann et al., 2020b). Our key idea is to substitute MLP layers with time-dependent layers (see Fig. 2) whose weights are modeled as trainable residual parameters \(\mathbf{\mathcal{W}}_{i}(t)\) added to the existing layer weights \(\mathbf{W}_{i}\). We dub neural fields implemented in this way ResFields.
Increasing the model capacity in this way offers three key advantages. First, the underlying MLP does not increase in width and hence, maintains the inference and training speed. This property is crucial for most practical downstream applications of neural fields, including NeRF (Mildenhall et al., 2020) which aims to solve inverse volume rendering (Drebin et al., 1988) by querying neural fields billions of times. Second, this modeling retains the implicit regularization and generalization properties of MLPs, unlike other strategies focused on spatial partitioning (Reiser et al., 2021; Muller et al., 2022; Peng et al., 2023; Isik et al., 2023). Finally, ResFields are versatile, easily extendable, and compatible with most MLP-based methods for spatiotemporal signals.
However, the straightforward implementation of ResFields could lead to reduced interpolation properties due to a large number of unconstrained trainable parameters. To this end, inspired by well-explored low-rank factorized layers (Denil et al., 2013; Ioannou et al., 2015; Khodak et al., 2021), we propose to implement the residual parameters as a global low-rank spanning set and a set of time-dependent coefficients. As we show in the following sections, this modeling enhances the generalization properties and further reduces the memory footprint caused by maintaining additional network parameters.
Figure 1: **ResField** extends an MLP architecture to effectively represent complex temporal signals by replacing the conventional linear layers with Residual Field Layers. As such, ResField is versatile and straightforwardly compatible with most existing temporal neural fields. Here we demonstrate its applicability on three challenging tasks by extending Siren (Sitzmann et al., 2020b) and TNeRF (Li et al., 2022): _(a)_ learning temporal signed distance fields and _(b)_ neural radiance fields from four RGB views and _(c)_ from three time-synchronized RGBD views captured by our lightweight rig. The figure is best viewed in electronic format on a color screen; please zoom in to observe details.
Figure 2: **ResField MLP Architecture.**
In summary, our key contributions are:
* We propose an architecture-agnostic building block for modeling spatiotemporal fields that we dub ResFields.
* We systematically demonstrate that our method benefits a number of existing methods: Sitzmann et al. (2020b); Pumarola et al. (2021); Park et al. (2021a,b); Li et al. (2022); Cai et al. (2022); Cao & Johnson (2023); Fridovich-Keil et al. (2023).
* We validate ResFields on four challenging tasks and demonstrate state-of-the-art results (Fig. 1): 2D video approximation, temporal 3D shape modeling via signed distance functions, and neural-radiance field reconstruction of dynamic scenes from sparse calibrated RGB and RGBD cameras.
## 2 Related Work
**Neural field** is a field - a physical quantity that has a value for every point in time and space - that is parameterized fully or in part by a neural network (Xie et al., 2022), typically an MLP as the universal approximator (Kim & Adali, 2003). However, straightforward fitting of signals to regular MLPs yields poor reconstruction quality due to the spectral bias of learning low frequencies (Rahaman et al., 2019). Even though this issue has been alleviated through special input encodings (Mildenhall et al., 2020; Barron et al., 2021, 2022) or activation functions (Sitzmann et al., 2020b; Tancik et al., 2020; Fathony et al., 2020; Lindell et al., 2022; Shekarforoush et al., 2022), neural fields still cannot scale to long and complex temporal signals due to the limited capacity. A natural way of increasing the modeling capacity is to increase the network's size in terms of the number of parameters. However, this trivial solution does not scale with GPU and training time requirements.
**Hybrid neural fields** leverage explicit grid-based data structures with learnable feature vectors to improve the modeling capacity via spatial (Takikawa et al., 2021; Muller et al., 2022; Chen et al., 2022; Chan et al., 2022) and temporal (Shao et al., 2023; Fridovich-Keil et al., 2023; Cao & Johnson, 2023; Peng et al., 2023) partitioning techniques. However, these approaches sacrifice the desired global reasoning and implicit regularization (Neyshabur et al., 2014; Goodfellow et al., 2016) that is needed for generalization, especially for solving ill-posed problems like inverse rendering. In contrast, our solution, ResFields, focuses on improving pure neural network-based approaches that still hold state-of-the-art results across several important applications, as we will demonstrate later.
**Input-dependent MLP weights** is another common strategy for increasing the capacity of MLPs by directly regressing MLP weights, e.g. via a hypernetwork (Mehta et al., 2021; Wang et al., 2021b) or a convolutional (Peng et al., 2023) neural network. However, these approaches introduce an additional, much larger network that imposes a significant computational burden for optimizing neural fields. KiloNeRF (Reiser et al., 2021) proposes to speed up the inference of static neural radiance fields by distilling the learned radiance field into a grid of small independent MLPs. However, since a bigger MLP is still used during the first stage of the training, this model has the same scaling limitations as the original NeRF. Closest in spirit to our approach, the level-of-experts (LoE) model (Hao et al., 2022) introduces an input-dependent hierarchical composition of shared MLP weights at the expense of reduced representational capacity. Compared to LoE, ResFields demonstrate stronger generalization and higher representational power for modeling complex spatiotemporal signals.
**Temporal fields** are typically modeled by feeding the time-space coordinate pairs to neural fields. SIREN (Sitzmann et al., 2020b) was one of the first neural methods to faithfully reconstruct a 2D video signal. However, scaling this approach to 4D is infeasible and does not produce desired results as demonstrated in dynamic extensions of NeRF models (Pumarola et al., 2021; Li et al., 2022). Therefore, most of the existing solutions (Pumarola et al., 2021; Park et al., 2021a) decouple the learning problem into learning a static canonical neural field and a deformation neural network that transforms a query point from the observation to the canonical space where the field is queried. However, these methods tend to fail for more complex signals due to the difficulty of learning complex deformations via a neural network, as observed by Gao et al. (2022). To alleviate the problem, HyperNeRF (Park et al., 2021b) introduced an additional small MLP and per-frame learnable ambient codes to better capture topological variations, increase the modeling capacity, and simplify the learning of complex deformation. The recent NDR (Cai et al., 2022), a follow-up work of HyperNeRF, further improves the deformation field by leveraging invertible neural networks and more constrained SDF-based density formulation (Yariv et al., 2021). All of these methods are fully compatible with the introduced ResFields paradigm which consistently improves baseline results.
**Residual connections** have a long history in machine learning. They first appeared in Rosenblatt (1961) in the context of coupled perceptron networks. Rosenblatt's insight was that the residual connections increase the efficiency of responding to input signals. Since then, residual connections have been extensively studied and found a major practical utility as a solution to training deep neural networks by overcoming the vanishing gradient problem (Hochreiter, 1998; Srivastava et al., 2015; He et al., 2016) and became a de facto standard for modeling neural networks. Unlike these residual connections that are added to the output of MLP layers, our ResField layers model the residuals of the MLP weights, which in turn yields higher representation power of neural fields, making them more suitable for modeling complex real-world spatiotemporal signals. To the best of our knowledge, directly optimizing residual or multiplicative correctives of model parameters has been explored in the context of fine-tuning large language models (Karimi Mahabadi et al., 2021; Hu et al., 2021; Dettmers et al., 2023) or predicting model weights (Wang et al., 2021b), and has not been explored for directly training spatiotemporal neural fields.
## 3 ResFields: Residual Neural Fields for Spatiotemporal Signals
**Formulation.** Temporal neural fields encode continuous signals \(f:\mathbb{R}^{d}\times\mathbb{R}\mapsto\mathbb{R}^{c}\) via a neural network \(\Phi_{\theta}\), where the input is a time-space coordinate pair (\(t\in\mathbb{R}\), \(\mathbf{x}\in\mathbb{R}^{d}\)) and the output is a field quantity \(y\in\mathbb{R}^{c}\). More formally, the temporal neural field is defined as:
\[\Phi_{\theta}(t,\mathbf{x})=\sigma_{n}\big{(}\mathbf{W}_{n}(\phi_{n-1}\circ \phi_{n-2}\circ\cdots\circ\phi_{1})(t,\mathbf{x})+\mathbf{b}_{n}\big{)}, \tag{1}\]
\[\phi_{i}(t,\mathbf{x}_{i})=\sigma_{i}(\mathbf{W}_{i}\mathbf{x}_{i}+\mathbf{b} _{i}), \tag{2}\]
where \(\phi_{i}:\mathbb{R}^{M_{i}}\mapsto\mathbb{R}^{N_{i}}\) is the \(i\)th layer of the MLP, which consists of the linear transformation by the weight matrix \(\mathbf{W}_{i}\in\mathbb{R}^{N_{i}\times M_{i}}\) and the bias \(\mathbf{b}_{i}\in\mathbb{R}^{N_{i}}\) applied to the input \(\mathbf{x}_{i}\in\mathbb{R}^{M_{i}}\), followed by a non-linear activation function \(\sigma_{i}\). The network parameters \(\theta\) are optimized by minimizing a loss term \(\mathcal{L}\) directly w.r.t. a ground truth signal or indirectly by relating a field quantity to the sensory input, e.g. via the volume rendering equation for radiance field reconstruction.
**Limitations of MLPs.** To model complex and long signals, it is crucial for the underlying MLP to have a sufficient modeling capacity, which scales with the total number of parameters. However, as the MLP size increases, the training time of neural fields becomes slower while increasing the GPU memory requirements, ultimately leading to the bottleneck being the MLP's size. This is especially highlighted for dynamic radiance field reconstruction which requires solving an inverse rendering problem through billions of MLP queries. In the following, we introduce ResFields, an approach for alleviating the capacity bottleneck for modeling and reconstructing spatiotemporal signals.
**ResFields model.** We introduce residual field layers (Fig. 2) to effectively capture large and complex spatiotemporal signals. ResFields, an MLP that uses at least one residual field layer, alleviates the aforementioned capacity bottleneck without increasing the size of MLPs in terms of the number of layers and neurons. In particular, we replace a linear layer of an MLP \(\phi_{i}\) with our temporal time-conditioned residual layer defined as:
\[\phi_{i}(t,\mathbf{x}_{i})=\sigma_{i}((\mathbf{W}_{i}+\boldsymbol{\mathcal{W} }_{i}(t))\mathbf{x}_{i}+\mathbf{b}_{i})\,, \tag{3}\]
where \(\mathcal{W}_{i}(t):\mathbb{R}\mapsto\mathbb{R}^{N_{i}\times M_{i}}\) is time-dependent and models residuals of the network weights. This simple formulation increases the model capacity via additional trainable parameters without modifying the overall network architecture.
Figure 3: **Factorization of \(\boldsymbol{\mathcal{W}}_{i}\).**

**ResFields factorization.** However, naively implementing \(\boldsymbol{\mathcal{W}}_{i}(t)\in\mathbb{R}^{N_{i}\times M_{i}}\) as a dictionary of trainable weights would yield a vast amount of independent and unconstrained parameters. This would result in a partitioning of the spatiotemporal signal, akin to the space partitioning methods (Reiser et al., 2021; Muller et al., 2022; Shao et al., 2023), and hinder the global reasoning and implicit bias of MLPs, essential properties for solving under-constrained problems such as novel view synthesis from sparse setups. To this end, inspired by well-established low-rank factorized layers (Denil et al., 2013; Ioannou et al., 2015; Khodak et al., 2021), we directly optimize time-dependent coefficients and an \(R_{i}\)-dimensional spanning set of residual network weights that are shared across the entire spatiotemporal signal (see Fig. 3). In particular, the residuals of the network weights are defined as
\[\boldsymbol{\mathcal{W}}_{i}(t)=\sum\nolimits_{r=1}^{R_{i}}\mathbf{v}_{i}(t)[r] \cdot\mathbf{M}_{i}[r], \tag{4}\]
where the coefficients \(\mathbf{v}_{i}(t)\in\mathbb{R}^{R_{i}}\) and the spanning set \(\mathbf{M}_{i}\in\mathbb{R}^{R_{i}\times N_{i}\times M_{i}}\) are trainable parameters; square brackets denote element selection. To model continuous coefficients over the time dimension, we implement \(\mathbf{v}_{i}\in\mathbb{R}^{T_{i}\times R_{i}}\) as a matrix and linearly interpolate its rows. Such a formulation reduces the total number of trainable parameters and further prevents the undesired overfitting that is common for field partition methods, as we will demonstrate later (Sec. 4.4).
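A minimal PyTorch sketch of a residual field layer under our reading of Eqs. (3)-(4) follows; the class name, the frame-normalized time convention \(t\in[0,1]\), and the initialization scale are our assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResFieldLinear(nn.Linear):
    """Linear layer with a low-rank, time-interpolated weight residual."""
    def __init__(self, in_features, out_features, rank=10, capacity=100):
        super().__init__(in_features, out_features)
        # Coefficients v in R^{T x R} and spanning set M in R^{R x N x M} (Eq. 4).
        self.v = nn.Parameter(torch.zeros(capacity, rank))
        self.M = nn.Parameter(0.01 * torch.randn(rank, out_features, in_features))

    def forward(self, x, t):
        # Linearly interpolate the rows of v for a continuous t in [0, 1].
        T = self.v.shape[0]
        s = t.clamp(0.0, 1.0) * (T - 1)
        i0 = s.long().clamp(max=T - 2)
        frac = s - i0
        v_t = (1 - frac) * self.v[i0] + frac * self.v[i0 + 1]      # (R,)
        W_res = torch.einsum('r,rnm->nm', v_t, self.M)             # Eq. (4)
        return F.linear(x, self.weight + W_res, self.bias)         # Eq. (3)
```

The activation \(\sigma_{i}\) is applied outside the layer, exactly as for a standard linear layer.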
Please see the Sup. Mat. for further implementation details.
## 4 Experiments
To highlight the versatility of ResFields, we analyze our method on four challenging tasks: 2D video approximation via neural fields, learning of temporal signed distance functions, and volumetric reconstruction of dynamic scenes from calibrated RGB cameras and from RGBD cameras.
### 2D Video Approximation
Learning a mapping of pixel coordinates to the corresponding RGB colors is a popular benchmark for evaluating the model capacity of fitting complex signals (Muller et al., 2022; Sitzmann et al., 2020). For comparison, we use two videos (_bikes_ and _cat_ from Sitzmann et al. (2020)) that consist of 250 and 300 frames respectively (with resolutions of \(512\times 512\) and \(272\times 640\)) and fit neural representations by minimizing the mean squared error w.r.t. ground truth RGB values.
Unlike the setup proposed in Sitzmann et al. (2020), where the focus is purely on overfitting the image values, our goal is to also evaluate the interpolation behavior of the models. For this, we leave out 10% of randomly sampled pixels for validation and fit the video signal on the remaining ones. We compare our approach against Instant NGP, a popular grid-based approach to neural field modeling, with the best hyperparameter configuration for the task (see supplementary). We also compare against a five-layer Siren network with 1024 neurons (denoted as Siren-1024), as a pure MLP-based approach. For our model, we choose a five-layer Siren network with 512 neurons whose hidden layers are implemented as residual field layers with rank \(R_{i}=10\) (Siren-512+ResFields). We refer to the supplementary for more details and ablation studies on the number of factors, ranks, and layers for this experiment.
Figure 4: **2D video approximation. Comparison of different neural fields on fitting RGB videos. The training and test PSNR curves (left and right respectively) indicate the trade-off between the model’s capacity and generalization properties. Instant NGP offers good overfitting capabilities, however, it struggles to generalize to unseen pixels. A Siren MLP with 1024 neurons (Siren-1024), shows good generalization properties, however, it lacks representation power (low training and low test PSNR). A smaller Siren with 512 neurons implemented with ResFields (Siren-512+ResFields) demonstrates good generalization while offering higher model capacity. Besides the higher accuracy, our approach offers approximately 2.5 times faster convergence and 30% lower GPU memory requirements due to using a smaller MLP (Tab. 1). Results on the right provide a visual comparison of Siren with 256 neurons and Siren with 128 neurons implemented with ResField layers.**
**Insights.** We report training and test PSNR values averaged over the two videos in Fig. 4 and Tab. 1. Here, Instant-NGP offers extremely fast and good overfitting abilities. However, it demonstrates limited generalization to unseen pixels. Siren-1024 has good generalization properties, but clearly underfits the signal and suffers from blur artifacts. Unlike Siren-1024, Siren-512 with ResFields offers significantly higher reconstruction and generalization quality (36.37 vs 39.21 PSNR) while requiring 30% less GPU memory and being about \(2.5\) times faster to train.
This simple experiment serves as a proof of concept and highlights our ability to fit complex temporal signals with smaller MLP architectures, which has a significant impact on the practical downstream applications as we discuss in the following sections.
### Temporal Signed Distance Functions (SDF)
Signed-distance functions model the orthogonal distance of a given spatial coordinate \(\mathbf{x}\) to the surface of a shape, where the sign indicates whether the point is inside the shape. We model a temporal sequence of signed distance functions via a neural field network that maps a time-space coordinate pair (\(t\in\mathbb{R}\), \(\mathbf{x}\in\mathbb{R}^{3}\)) to a signed distance value (\(y\in\mathbb{R}\)).
We sample five sequences of different levels of difficulty (four from Deforming Things (Li et al., 2021) and one from ReSynth (Ma et al., 2021)) and convert the ground-truth meshes to SDF values. We supervise all methods by the MAPE loss following Muller et al. (2022). To benchmark the methods, we extract a sequence of meshes from the learned neural fields via marching cubes (Lorensen and Cline, 1987) and report L1 Chamfer distance (CD\(\downarrow\)) and normal consistency (NC\(\downarrow\)) w.r.t. the ground-truth meshes (scaled by \(10^{3}\) and \(10^{2}\) respectively). As our main baseline, we use the current state-of-the-art Siren network (five layers) and compare it against Siren implemented with our ResField layers, applied to the three middle layers. We empirically observe that using ResFields on the first and last layers has a marginal impact on performance, since those weight matrices are small and do not impose a bottleneck on modeling capacity.
**Insights.** Quantitative and qualitative results (Tab. 2, Fig. 1) demonstrate that ResFields consistently improve the reconstruction quality, with higher ranks increasingly improving results. Importantly, we observe that Siren with 128 neurons and ResFields (rank 40) performs better than the vanilla Siren with 256 neurons, making our method over two times faster while requiring less GPU memory due to using a much smaller MLP architecture. Alleviating this bottleneck is of utmost importance for reconstruction tasks that require solving inverse rendering by querying the neural field billions of times, as we demonstrate in the next experiment.
### Temporal Neural Radiance Fields (NeRF)
Temporal or dynamic NeRF represents geometry and texture as a neural field that models a function of color and density. The model is trained by minimizing a pixel-wise error metric between the images captured from known camera poses and those rendered via a differentiable ray marcher (Mildenhall et al., 2020). To better model geometry, we adopt the MLP architecture and signed distance field formulation from VolSDF (Yariv et al., 2021), which defines the density as Laplace's cumulative distribution function applied to the SDF. We refer readers to the supplementary for results with the NeRF backbone and further implementation details.
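For reference, a minimal sketch of this density mapping, assuming the standard Laplace-CDF form of VolSDF with magnitude \(\alpha\) and scale \(\beta\) (the default values here are placeholders):

```python
import torch

def sdf_to_density(sdf, alpha=1.0, beta=0.1):
    # VolSDF (Yariv et al., 2021): density = alpha * Psi_beta(-sdf), where
    # Psi_beta is the CDF of a zero-mean Laplace distribution with scale beta.
    s = -sdf  # positive inside the surface
    cdf = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    return alpha * cdf
```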
Following (Wang et al., 2021), all models are supervised by minimizing the difference between the rendered and the ground truth colors and further adopting the Eikonal (Gropp et al., 2020) and the mask loss terms for well-behaved surface reconstruction under the sparse capture setup:
\[\mathcal{L}=\mathcal{L}_{\text{color}}+\lambda_{1}\mathcal{L}_{\text{igr}}+ \lambda_{2}\mathcal{L}_{\text{mask}}. \tag{5}\]
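A hedged sketch of this objective follows; the rendered quantities (colors, SDF gradients, masks) and the loss weights \(\lambda_{1},\lambda_{2}\) are illustrative placeholders, not the exact training configuration.

```python
import torch
import torch.nn.functional as F

def total_loss(pred_rgb, gt_rgb, sdf_grad, pred_mask, gt_mask,
               lambda1=0.1, lambda2=0.1):
    l_color = F.l1_loss(pred_rgb, gt_rgb)
    # Eikonal term (Gropp et al., 2020): SDF gradients should have unit norm.
    l_igr = ((sdf_grad.norm(dim=-1) - 1.0) ** 2).mean()
    # Mask term for well-behaved surfaces under sparse capture.
    l_mask = F.binary_cross_entropy(pred_mask.clamp(1e-4, 1 - 1e-4), gt_mask)
    return l_color + lambda1 * l_igr + lambda2 * l_mask  # Eq. (5)
```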
We use four sequences from the Owlii (Xu et al., 2017) dataset to evaluate the methods. Compared to fully synthetic sequences previously utilized for the task (Pumarola et al., 2021), the dynamic
\begin{table}
\begin{tabular}{l c c c} \hline \hline & test PSNR\(\uparrow\) & it/s\(\uparrow\) & GPU\(\downarrow\) \\ \hline NGP & 34.52 & 131 & 1.6G \\ \hline Siren-1024 & 36.37 & 3.55 & 9.7G \\ Siren-512+ResFields & **39.21** & **9.78** & **6.5G** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Video approximation.**
Figure 5: **Temporal radiance field** reconstruction on the Owlii dataset. The reported metrics above the images are averaged across all test views of a sequence (see Tab. 3 for all comparisons).
Owlii sequences exhibit more rapid and complex high-frequency motions, making the task harder for MLP-based methods. At the same time, the availability of ground truth 3D scans allows us to evaluate both geometry and appearance reconstruction quality, unlike sequences with only RGB data available (Li et al., 2022; Shao et al., 2023). We render 400 RGB training images from four static camera views over 100 frames/time intervals and 100 test images from a rotating camera over 100 frames. We report L1 Chamfer distance (CD\(\downarrow\)) (scaled by \(10^{3}\)) and the standard image-based metrics (PSNR\(\uparrow\), SSIM\(\uparrow\)).
We benchmark recent state-of-the-art methods and their variations implemented with ResField layers of rank ten (\(R_{i}=10\)) - TNeRF (Pumarola et al., 2021; Li et al., 2022), DyNeRF (Li et al., 2022), DNeRF (Pumarola et al., 2021), Nerfies (Park et al., 2021a), HyperNeRF (Park et al., 2021b), NDR (Cai et al., 2022), and HexPlane (Cao and Johnson, 2023; Fridovich-Keil et al., 2023) - as well as a recent time-space partitioning method, Tensor4D (Shao et al., 2023) (with its default training configuration). Please see the Sup. Mat. for further details.
**Insights**. We report all quantitative and qualitative results in Tab. 3 and Fig. 5. The results demonstrate that our method consistently improves all baseline methods, achieving new state-of-the-art results for sparse multi-view reconstruction of dynamic scenes. We further observe that more ResField layers gradually improve results until a point of saturation (\(i=1,2,3\)). This experiment confirms that increasing the modeling capacity beyond what is strictly needed does not cause overfitting. Importantly, the simplest/cheapest baseline method, TNeRF, implemented with ResFields performs better than every other, more expensive baseline method in its original form. We believe that such speedups and lower memory requirements are of great benefit to the research community, as they enable the use of lower-end hardware for high
\begin{table}
\begin{tabular}{l|cc|cc|cc|cc|cc} \hline \hline & \multicolumn{2}{c|}{_Mean_} & \multicolumn{2}{c|}{Book} & \multicolumn{2}{c|}{Glasses} & \multicolumn{2}{c|}{Hand} & \multicolumn{2}{c}{Writing} \\ & LPIPS\(\downarrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & SSIM\(\uparrow\) \\ \hline TNeRF & 0.234 & 79.16 & 0.323 & 68.85 & 0.206 & 80.44 & 0.239 & 81.30 & 0.168 & 86.08 \\ +ResFields & **0.203** & **80.00** & **0.284** & **70.84** & **0.164** & **80.65** & **0.210** & **82.09** & **0.155** & **86.43** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Lightweight capture from three RGBD views.**
\begin{table}
\begin{tabular}{l c c c|c c c|c c c|c c c|c c} \hline \hline \multicolumn{2}{c}{_Mean_} & \multicolumn{3}{c}{BaseBaseball} & \multicolumn{3}{c}{Model} & \multicolumn{3}{c}{Dancer} & \multicolumn{3}{c}{Exercise} \\ & FPS\(\uparrow\) & CD\({}_{\text{SI}}\) & SSIM & PSNR\(\uparrow\) & CD\({}_{\text{SI}}\) & SSIM & PSNR\(\uparrow\) & CD\({}_{\text{SI}}\) & SSIM & PSNR\(\uparrow\) & CD\({}_{\text{SI}}\) & SSIM & PSNR\(\uparrow\) & CD\({}_{\text{SI}}\) & SSIM & PSNR\(\uparrow\) \\ \hline \hline Tensor4D (Shao et al., 2023) & 0.085 & 23.9 & 010.25 & 22.59 & 30.5 & 01.52 & 22.51 & 00.89 & 03.20 & 24.65 & 79.15 & 23.23 & 24.38 & 35.92 & 21.66 & 22.16 \\ HexPlane (Cao and Johnson, 2023) & 0.359 & 21.0 & 99.62 & 24.71 & 17.39 & 22.53 & 21.34 & 22.94 & 21.48 & 22.53 & 23.53 & 31.98 & 32.35 & 18.39 & 39.32 & 24.96 \\ + ResFields (\(i=1,2\)) & 0.357 & 17.8 & 93.51 & 25.61 & 14.9 & 03.96 & 25.91 & **21.3** & 92.58 & **26.19** & 17.97 & 93.16 & 24.36 & 16.3 & 94.30 & 25.33 \\ + ResFields (\(i=1,2\)) & 0.354 & 17.6 & 93.74 & 25.79 & **14.54** & **94.74** & **26.62** & 21.17 & **26.52** & **18.93** & **93.47** & **25.14** & **15.74** & **92.58** \\ \hline DyNeRF (Li et al., 2022) & 0.328 & 10.91 & 93.55 & 25.09 & 22.95 & 20.29 & 43.98 & 98.84 & 21.31 & 30.71 & 91.54 & 23.33 & 00.39 & 38.38 & 24.45 \\ + ResFields (\(i=1,2\)) & 0.327 & 20.83 & 96.99 & 25.57 & **17.45** & **26.54** & 21.92 & 24.56 & 21.93 & 25.26 & 16.49 & 33.55 & 25.20 & 17.94 & 95.25 & 25.17 \\ + ResFields (\(i=1,2\)) & 0.323 & 19.93 & 93.81 & 25.49 & 20.39 & 34.92 & 24.72 & 23.07 & 26.16 & **17.96** & **93.69** & 22.21 & **94.99** & **25.80** \\ + ResFields (\(i=1,\)) & 0.316 & 19.49 & 90.25 & 25.41 & 17.94 & 24.56 & 25.63 & 23.53 & **91.35** & 26.11 & 20.09 & 35.58 & **25.18** & **16.9** & 94.81 & 25.13 \\ \hline TNeRF (Li et al., 2022) & 0.359 & 17.9 & 14.98 & 26.18 & 18.14 & 93.47 & 26.33 & 20.32 & 29.31 & 26.52 & 19.39 & 35.35 & 25.09 & 14.91 & 53.36 & 26.77 \\ + ResFields (\(i=1\)) & 0.339 & 14.6 & 94.99 & 27.15 & **12.1** & **25.67** & 27.89 & 18.94 & 90.47 & 27.23 & 14.99 & 94.96 & 22.00 & 13.05 & 56.63 & 27.19 \\ + ResFields (\(i=1,2\)) & 0.33 & 0.344 & 14.92 & 25.71 & 24.12 & 25.84 & **27.98** & 18.3 & 94.33 & **27.81** & 13.49 & 28.75 & 12.95 & 95.82 & 27.40 \\ + ResFields (\(i=1,2\)) & 0.735 & 0.328 & 14.95 & 27.55 & 21.22 & **95.90** & 27.82 & **18.94** & **24.75** & **14.94** & **92.65** & **26.82** & **12.33** & **96.21** & **28.11** \\ \hline DNeRF (Pumarola et al., 2021) & 0.212 & 32.1 & 23.29 & 23.62 & 23.51 & 24.74 & **14.40** & 94.51 & 23.89 & 91.71 & 21.29 & 23.43 & 94.74 & 24.21 \\ + ResFields (\(i=1,2\)) & 0.214 & 14.9 & 15.76 & 23.13 & 29.58 & 28.26 & 11.15 & 27.03 & 14.91 & 26.66 & 24.12 & 12.88 & 95.95 & 27.79 \\ + ResFields (\(i=1,2\)) & 0.23 & 14.04 & 95.34 & 27.60 & 12.29 & 55.85 & 28.70 & 17.69 & 94.25 & **27.48** & **14.40** & 94.88 & 26.40 & 129.68 & 28.97 \\ + ResFields (\(i=1,\)) & 0.210 & **14.0** & **95.67** & **27.89** & **12.09** & **96.15** & **28.34** &
fidelity reconstructions. Given this observation, we set up a simple camera rig and captured longer and more complex sequences to better understand the limitations.
**Lightweight capture from three RGBD views.** We capture four sequences (150 frames) via synchronized Azure Kinects (three for reconstruction and one for validation) and compare TNeRF (w. depth supervision), a baseline with a good balance between computational complexity and accuracy, and its enhancement with ResFields applied to all middle layers. Quantitative evaluation in terms of mean SSIM\(\uparrow\) and LPIPS\(\downarrow\)(Zhang et al., 2018) reported in Tab. 4 demonstrates that ResFields consistently benefits the reconstruction (see visuals in Fig. 1 and the Sup. video). However, we observe that both methods struggle to capture thin and tiny surfaces such as the cord of sunglasses.
### Ablation study
**ResField modeling (Tab. 5).** Residual connections on the layer weights \((\mathbf{W}_{i}+\boldsymbol{\mathcal{W}}_{i}(t))\) are more powerful than modeling residuals on the layer output, as commonly used for conditional generation (Karras et al., 2020), than directly modulating the layer weights \((\mathbf{W}_{i}\odot\boldsymbol{\mathcal{W}}_{i}(t))\) (Mehta et al., 2021), and than using purely time-dependent weights \((\boldsymbol{\mathcal{W}}_{i}(t))\) as in LoE (Hao et al., 2022). Tab. 5 summarizes the results of these variations on the video approximation task from Sec. 4.1; the four variants are sketched below.
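The four time-conditioning variants, written as a runnable numpy sketch (activation omitted for brevity; shapes and values are toy assumptions):

```python
import numpy as np

N, M = 4, 3
W = np.random.randn(N, M)           # static layer weights W_i
Wt = 0.1 * np.random.randn(N, M)    # time-dependent term, evaluated at some t
vt = 0.1 * np.random.randn(N)       # output-space residual for the first variant
x, b = np.random.randn(M), np.zeros(N)

out_residual = W @ x + b + vt       # residual added to the layer output
modulated    = (W * Wt) @ x + b     # element-wise weight modulation
time_only    = Wt @ x + b           # purely time-dependent weights (LoE-style)
resfield     = (W + Wt) @ x + b     # ours: residual on the weights, Eq. (3)
```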
**Factorization techniques (Tab. 6).** We compare our factorization (Eq. 4) with alternative techniques: no factorization (Reiser et al., 2021), regressing network parameters (Ha et al., 2017), hierarchical Levels-of-Experts (LoE) (Hao et al., 2022), and the classic CP (Carroll and Chang, 1970) and Tucker (1966) decompositions. CP and Tucker with varying ranks demonstrate good generalization and overfitting results. No factorization achieves great training PSNR, but its generalization performance is sub-optimal, which is mitigated by the hierarchical formulation of LoE. The proposed factorization achieves the best generalization properties. The reported numbers in Tab. 6 are measured on the video approximation task for 30% of unseen pixels. See the Sup. Mat. for additional comparisons.
**Limitations.** Overall ResFields benefits spatiotemporal neural fields when the bottleneck lies in the modeling capacity rather than in solving unconstrained problems. Specifically, we do not observe an advantage on challenging ill-posed monocular reconstruction (Gao et al., 2022) when the main bottleneck is the lack of constraints rather than the network's capacity.
## 5 Discussion and Conclusion
We present a novel approach to overcome the limitations of spatiotemporal neural fields in effectively modeling long and complex temporal signals. Our key idea is to incorporate temporal residual layers into neural fields, dubbed ResFields. The advantage and utility of the method lie in its versatility and straightforward integration into existing works for modeling 2D and 3D temporal fields. ResFields increase the capacity of MLPs without expanding the network architecture in terms of the number of layers and neurons, which allows us to use smaller MLPs without sacrificing reconstruction quality while achieving faster inference and training with lower GPU memory requirements. We believe that progress towards using lower-cost hardware is key to democratizing research and making technology more accessible. We hope that our study contributes to the development of neural fields and provides valuable insights for modeling signals. This, in turn, can lead to advancements in various domains, including computer graphics, computer vision, and robotics.
\begin{table}
\begin{tabular}{l c|c|c c} \hline \hline \multicolumn{2}{c|}{Factorization} & \multicolumn{2}{c|}{\#params} & \multicolumn{2}{c}{\#Mean PSNR} \\ \cline{3-5} \multicolumn{1}{c|}{Sien} & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \hline \multirow{7}{*}{
\begin{tabular}{} \end{tabular} } & None & 236 & 38.52 & 48.46 \\ & Hoe et al. (2021) & & 10.6 & 88.60 & 39.56 \\ & Hoe et al. (2017) & & 0.8 & 35.04 & 35.56 \\ & CP & 20 & 0.9 & 33.14 & 33.47 \\ & Carroll \& Chang (1970) & 40 & 1.0 & 33.41 & 33.75 \\ & 80 & 1.1 & 33.72 & 34.08 \\ & 40.64 & 7.1 & 33.76 & 34.1 \\ & 40.64 & 6.1 & 5.367 & 35.10 \\ & Tucker (1966) & & 80.64 & 2.0 & 35.08 & 35.59 \\ & 10.26 & 26.26 & 3.6 & 36.13 & 36.60 \\ & 40.25 & 26.56 & 9.5 & 38.31 & 39.33 \\ & 80.26 & 26.26 & 1.7 & 39.04 & 40.39 \\ & -(7.4) & 5.7 & 56.04 & 57.97 \\ & LoE & (8,163,15) & 15.5 & 19.87 & 42.27 \\ & Hao et al. (2022) & (163,264) & 30.2 & 40.53 & 41.15 \\ & 32.61 & 28.5 & 59.40 & 62.6 & 46.35 \\ & 10 & 8.7 & 93.97 & 40.80 \\ & Ours & 20 & 16.5 & 40.87 & 42.45 \\ & Eq. 3 & 40 & 32.3 & 41.60 & 43.72 \\ & 80 & 63.8 & 41.51 & 44.39 \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Factorization techniques.**
\begin{table}
\begin{tabular}{l c c} \hline \hline \multicolumn{2}{c}{\begin{tabular}{c} _Mean_ PSNR\(\uparrow\) \\ test \\ \end{tabular} } \\ \hline \multirow{3}{*}{
\begin{tabular}{} \end{tabular} } & 31.89 & 32.13 \\ & residual weights (\(\boldsymbol{\mathcal{W}}_{i}\)) & \(\boldsymbol{\mathcal{W}}_{i}(t)\) \\ & \(\phi_{i}(\mathbf{t},\mathbf{x}_{i})=\sigma_{i}(\mathbf{W}_{i}\mathbf{x}_{i}+ \mathbf{b}_{i})\) \\ & \(\phi_{i}(\mathbf{t},\mathbf{x}_{i})=\sigma_{i}(\mathbf{W}_{i}\mathbf{x}_{i}+ \mathbf{b}_{i})+\mathcal{W}_{i}(t)\) \\ & \(\phi_{i}(\mathbf{t},\mathbf{x}_{i})=\sigma_{i}(\mathbf{W}_{i}\mathbf{x}_{i}+ \mathbf{b}_{i})\) \\ & \(\phi_{i}(\mathbf{t},\mathbf{x}_{i})=\sigma_{i}(\mathbf{W}_{i}\mathbf{x}_{i}+ \mathbf{b}_{i})+\mathcal{W}_{i}(t)\) \\ & \(\phi_{i}(\mathbf{t},\mathbf{x}_{i})=\sigma_{i}((\mathbf{W}_{i}\mathbf{x}_{i}+ \mathbf{b}_{i})\mathbf{x}_{i}+\mathbf{b}_{i})\) \\ \hline \hline \end{tabular} } \\ \hline \end{tabular}
\end{table}
Table 5: **ResField modeling.**
Acknowledgments and Disclosure of Funding. We thank Hongrui Cai and Ruizhi Shao for providing additional details about the baseline methods and Anpei Chen, Shaofei Wang, Songyou Peng, and Theodora Kontogianni for constructive feedback and proofreading the manuscript. This project has been supported by the Innosuisse Flagship project PROFICIENCY No. PFFS-21-19.
Ethics Statement. In our pursuit of advancing signal modeling and representation techniques, our work holds the potential to bring positive advancements to various domains within the entertainment and AI industries, benefiting both research and practical applications. However, it is crucial to acknowledge the indirect influence of our efforts on the field of deep fakes, as our methodology contributes to the enhancement of photorealistic reconstruction from images.
Reproducibility. In our commitment to promoting openness and transparency in research, we provide comprehensive resources for replicating our results: _1)_ Open source code: we will release the source code used in our experiments, which can be found in the supplementary material. This code includes detailed documentation and instructions to facilitate the replication of our main results. _2)_ Pre-trained models: we will release our trained models to improve verifiability. _3)_ Dataset: our captured dataset will be made publicly available. _4)_ Supplementary documentation: in addition to the code, we provide a supplementary document that offers a deeper insight into our experimental setups, training techniques, and other crucial details.
|
2309.13373 | Asca: less audio data is more insightful | Audio recognition in specialized areas such as birdsong and submarine
acoustics faces challenges in large-scale pre-training due to the limitations
in available samples imposed by sampling environments and specificity
requirements. While the Transformer model excels in audio recognition, its
dependence on vast amounts of data becomes restrictive in resource-limited
settings. Addressing this, we introduce the Audio Spectrogram Convolution
Attention (ASCA) based on CoAtNet, integrating a Transformer-convolution hybrid
architecture, novel network design, and attention techniques, further augmented
with data enhancement and regularization strategies. On the BirdCLEF2023 and
AudioSet(Balanced), ASCA achieved accuracies of 81.2% and 35.1%, respectively,
significantly outperforming competing methods. The unique structure of our
model enriches output, enabling generalization across various audio detection
tasks. Our code can be found at https://github.com/LeeCiang/ASCA. | Xiang Li, Junhao Chen, Chao Li, Hongwu Lv | 2023-09-23T13:24:06Z | http://arxiv.org/abs/2309.13373v1 | # ASCA: LESS AUDIO DATA IS MORE INSIGHTFUL
###### Abstract
Audio recognition in specialized areas such as birdsong and submarine acoustics faces challenges in large-scale pre-training due to the limitations in available samples imposed by sampling environments and specificity requirements. While the Transformer model excels in audio recognition, its dependence on vast amounts of data becomes restrictive in resource-limited settings. Addressing this, we introduce the Audio Spectrogram Convolution Attention (ASCA) based on CoAtNet, integrating a Transformer-convolution hybrid architecture, novel network design, and attention techniques, further augmented with data enhancement and regularization strategies. On the BirdCLEF2023 and AudioSet(Balanced), ASCA achieved accuracies of 81.2% and 35.1%, respectively, significantly outperforming competing methods. The unique structure of our model enriches output, enabling generalization across various audio detection tasks. Our code can be found at [https://github.com/LeeCiang/ASCA](https://github.com/LeeCiang/ASCA).
Xiang Li\({}^{1,2}\) Junhao Chen\({}^{1,2}\) Chao Li\({}^{1,2}\) Hongwu Lv\({}^{1,2}\)
\({}^{1}\) College of Computer Science and Technology, Harbin Engineering University, China
\({}^{2}\) Modeling and Emulation in E-Government National Engineering Laboratory, China
Index Terms: Audio Detection, Audio Classification, Small-scale Audio Data, Self-attention Mechanism, Deep Learning
## 1 Introduction
Audio detection is important in several applications, such as music style recognition [2], environmental sound detection [3, 4], and instrument classification [5]. Traditionally, audio classification has relied on manually designed features such as spectral features and rhythmic patterns [6], as well as statistically based methods such as Gaussian Mixture Models (GMM) [7]. However, with the rise of deep learning, end-to-end neural network models have begun to make significant progress in audio classification tasks [8, 9]. Among these models, Recurrent Neural Networks (RNNs) [10, 11] and Convolutional Neural Networks (CNNs) [27] have become standard components for dealing with time-series dependencies in audio data.
Recently, Transformer-based models [12, 13], in particular those using the self-attention mechanism, have shown potential advantages over traditional RNN models in audio classification. These models are able to capture long-range dependencies without the temporal constraints of RNNs. However, audio data often contains substantial extraneous noise, is strongly affected by differences in recording equipment, and has a rich hierarchical structure. While Transformers address the issue of efficiency in iterative model evaluation, they may generalize worse than Convolutional Neural Networks. Transformers are also very data-intensive [29] and often require pre-training on large datasets; the absence of such large-scale pre-training is very detrimental to their performance.
In this work, we present the Audio Spectrogram Convolution Attention (ASCA) model. It is derived from CoAtNet, which was designed for small-scale image datasets, and we apply it to small-scale audio datasets; to our knowledge, this is the first comprehensive study of this CoAtNet-based architecture in the field of audio processing. To thoroughly evaluate its performance, we compare it not only with AST [15] and EfficientNet [18], but also conduct an in-depth comparison with the MAST model [17]. ASCA is unique in its highly optimized architecture. On several datasets, such as AudioSet [20], BirdCLEF2023 [21], and VGGSound [22], ASCA achieves optimal performance on small-scale datasets.
## 2 Related Work
Feature extraction from audio signals has a long history, and convolutional architectures have often proved fruitful as backbones, especially on small-scale datasets. As deep learning has advanced, increasingly sophisticated network architectures have been used for audio classification, including convolutional neural networks [27]. Owing to the great success of the self-attention mechanism in NLP [12] and vision [25], convolution-attention networks [31, 32] as well as pure-attention networks [17] have become widely used in audio processing. To better capture global context over long distances, researchers have introduced self-attention mechanisms. AST [15], pre-trained on ImageNet [26], outperforms previous techniques on multiple audio classification benchmarks; Swin Transformer [14] devises a shifted-window strategy in an image transformer; and AudioCLIP [16] achieves new state-of-the-art results on environmental sound classification (ESC) tasks, with an accuracy of 90.07% on UrbanSound8K
and 97.15% on the ESC-50 dataset; MAST [17] uses hierarchical representation learning for efficient audio classification, applying one-dimensional (and two-dimensional) pooling operations along the temporal (and frequency) domains in different phases. However, these works do not explore how such methods perform, or how they can be modified, when trained from scratch on small amounts of data. The study by Gani et al. [23] (2022) extensively explored ViT training with low data volumes and achieved success on small datasets; their work learns self-supervised inductive biases from small datasets and fine-tunes them as a weight initialization scheme. Lee et al. [24] (2021) explore how ViTs can be modified to learn local inductive biases. We instead build on their work by exploring hybrid models for training under low audio data conditions and present a motivation for using hybrid models in low-data conditions, which differs from the above work.
## 3 Method
Figure 1 shows the architecture of the proposed Audio Spectrogram Convolution Attention (ASCA) model. With small data volumes, convolution extracts features better than a pure transformer, while self-attention captures the global context; since convolution also recognizes shifted patterns well, we combine the two. The input t-second audio waveform is converted into a sequence of 128-dimensional log-Mel filter bank (fbank) features computed every 10 ms using a 25 ms Hamming window. We input the spectrogram at a size of \(224\times 224\).
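A sketch of this input pipeline, assuming torchaudio's Kaldi-compatible fbank implementation (the file name and the bilinear resize to \(224\times 224\) are our assumptions):

```python
import torch
import torchaudio

waveform, sr = torchaudio.load("clip.wav")   # hypothetical input file
fbank = torchaudio.compliance.kaldi.fbank(
    waveform, sample_frequency=sr, num_mel_bins=128,
    frame_length=25.0, frame_shift=10.0, window_type="hamming",
)                                            # -> (num_frames, 128)
# Resize the spectrogram to the 224x224 input expected by the network.
spec = torch.nn.functional.interpolate(
    fbank.T[None, None], size=(224, 224), mode="bilinear", align_corners=False
)[0]                                         # -> (1, 224, 224)
```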
To construct a network using relative attention, we adopt an approach similar to CoAtNet, which uses five stages (S0, S1, S2, S3, S4), where S0 is a simple two-layer convolutional stem and S1 uses inverted residual blocks with Squeeze-Excitation [33]. In the study by Dai et al. [1], the C-C-C-T structure was ruled out because of its allegedly low model performance. However, in our experiments we find that the C-C-C-T design can achieve better performance in some cases. We attribute this variability to the different dataset sizes: the C-C-C-T structure is very well suited to small datasets. Convolution has the property of translation equivariance: if the input image is translated, the output feature map after convolution is translated accordingly. The weights of the convolution kernel are learned and are the same for all inputs, so convolution does not directly support input-adaptive weighting. Standard convolution has a localized receptive field, meaning each output feature is based on a small local region of the input; the receptive field can, however, be enlarged indirectly through multilayer convolution and/or large convolution kernels, giving a degree of global characterization. The self-attention mechanism, in contrast, is not inherently translation-equivariant, but when combined with positional encodings or other location information it can realize spatially aware functions. The attention mechanism assigns each input element a weight that is dynamically computed from the input content, thus implementing input-adaptive weighting, and it inherently has a global receptive field because every output position attends to all input positions.
In detail, the image first undergoes convolutional downsampling, centered on the MBConv module [33]. This is an "inverted bottleneck" design: the input channel size is first expanded by a factor of 4, the 4-fold wide hidden state is then projected back to the original channel size, and residual connections are applied between modules. The overall model formulation is as follows, where \(x_{i}\), \(y_{i}\) are the input and output at location \(i\), respectively, \(w_{i-j}\) denotes the depthwise convolution kernel, and \(\mathcal{G}\) denotes the global spatial space. Here, the attention weights \(A_{i,j}\) are determined by \(w_{i-j}\) and \(x_{i}^{\top}x_{j}\). The update of the attention weights is intuitive: it only requires adding the global static convolution kernel to the attention logits:
\[y_{i}=\sum_{j\in\mathcal{G}}\frac{\exp\left(x_{i}^{\top}x_{j}+w_{i-j}\right)}{ \sum_{k\in\mathcal{G}}\exp\left(x_{i}^{\top}x_{k}+w_{i-k}\right)}x_{j} \tag{1}\]
It is worth mentioning that the present model employs a relative self-attention mechanism [34], a major highlight compared to other audio processing models: a relative positional encoding is introduced that captures the positional relationship between a query and a key. As a result, the computation of the attention weights is modified as follows:
\[A_{i,j}=\frac{\exp\left(x_{i}^{\top}x_{j}\right)}{\sum_{k\in\mathcal{G}}\exp\left(x_{i}^{\top}x_{k}\right)}\quad\text{(standard self-attention)} \tag{2}\]
\[A_{i,j}=\frac{\exp\left(x_{i}^{\top}x_{j}+w_{i-j}\right)}{\sum_{k\in\mathcal{G}}\exp\left(x_{i}^{\top}x_{k}+w_{i-k}\right)}\quad\text{(relative self-attention)} \tag{3}\]
This mechanism has been widely used, especially in some variants of the Transformer architecture: for example, Transformer-XL utilizes relative attention to capture longer-range dependencies, thus improving the performance of long text processing.
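A minimal sketch of relative self-attention in the spirit of Eq. (3), assuming a 1D token sequence; the class name, the scalar-per-offset bias parameterization, and the scaling factor are our illustrative choices:

```python
import torch
import torch.nn as nn

class RelativeSelfAttention(nn.Module):
    def __init__(self, dim, max_len=196):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        # One learned scalar bias per relative offset in [-(L-1), L-1].
        self.rel_bias = nn.Parameter(torch.zeros(2 * max_len - 1))

    def forward(self, x):                                  # x: (B, L, dim)
        B, L, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) / D ** 0.5        # x_i^T x_k
        idx = torch.arange(L)
        offsets = idx[:, None] - idx[None, :] + L - 1      # map i-k to >= 0
        logits = logits + self.rel_bias[offsets]           # + w_{i-k}
        attn = logits.softmax(dim=-1)                      # Eq. (3) weights
        return attn @ v
```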
## 4 Experiments
In this section, we focus on evaluating the performance of ASCA on the BirdCLEF2023 audio dataset. We show our results on the main BirdCLEF2023 dataset and ablation experiments in Section 4.3 and Section 4.4, respectively. We then present our experimental results on the ESC-50, AudioSet, and Speech Commands V2 datasets in Section 4.3.
### Dataset and Training Details
BirdCLEF2023 consists of short recordings of specific birdsongs shared by users of the xeno-canto.org platform, which constitute the training data. These audios have been resampled to 32 kHz and converted to ogg format, covering 264 bird species with a total of 16,942 recordings, for which we predict only the primary labels.
We also performed related tests on the AudioSet, UCR, and VGG-Sound datasets. We use the balanced (small) AudioSet subset with a total of 56,246 clips, still covering 527 classes. To optimize the data, we used balanced sampling, data amplification and data enhancement techniques (e.g. mixup, masking and background noise). None of the models are pre-trained. We used the AdamW optimizer [35] and the BCE binary cross-entropy loss function to train the model with a batch size of 12. We tested on the standard balanced and complete datasets and evaluated on the BirdCLEF2023 evaluation dataset. During training, we set an initial learning rate of 5e-5 and performed 30 epochs of training while employing a cosine annealing schedule to adjust the learning rate.
### Model Optimization
To optimize model performance under low-data conditions, we adopt a series of augmentation and regularization strategies. The augmentation part combines Mixup [36], stochastic masking, and background noise (0.25) [37]. For regularization, we try a variety of strategies, including stochastic depth, batch normalization, and weight noise. It is worth noting that batch normalization outperforms other normalization methods on small audio datasets.
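A sketch of these augmentations is given below; the mixup Beta parameter, the mask widths, and the batch layout are assumptions, with only the background-noise strength (0.25) taken from above:

```python
import torch

def augment(spec, labels, noise_bank, alpha=0.4, noise_p=0.25):
    """spec: (B, F, T) log-Mel batch; labels: (B, C) multi-hot targets."""
    # Mixup: convex combination of two examples and their label vectors.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(spec.size(0))
    spec = lam * spec + (1 - lam) * spec[perm]
    labels = lam * labels + (1 - lam) * labels[perm]
    # Stochastic masking along time and frequency (SpecAugment-style).
    t0 = int(torch.randint(0, spec.size(-1) - 20, (1,)))
    spec[..., t0:t0 + 20] = 0.0
    f0 = int(torch.randint(0, spec.size(-2) - 16, (1,)))
    spec[..., f0:f0 + 16, :] = 0.0
    # Additive background noise, applied with probability noise_p.
    if torch.rand(()) < noise_p:
        idx = int(torch.randint(0, noise_bank.size(0), (1,)))
        spec = spec + 0.25 * noise_bank[idx]
    return spec, labels
```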
### Results
We conducted tests on the BirdCLEF2023, AudioSet (balanced), VGG-Sound and UCR datasets to evaluate the performance of various architectures on small-scale datasets, using mAP as the metric; the results are shown in Table 1.
### Ablation Study
We conducted three ablations, examining the impact on performance of the model architecture, of different pre-training data scales, and of different attention window scales.
#### 4.4.1 The impact of model architecture
Our experiments compared the C-C-T-T and C-C-C-T architectures, where C is a convolution module and T is an attention module.
We found that the C-C-C-T architecture handles small-scale datasets better: adding more self-attention modules does not necessarily bring better results. On small-scale datasets, a suitable combination of convolution and attention matters most.
#### 4.4.2 The impact of different pre-training scales
As shown in Figure 2, we trained the ASCA model on pre-training datasets of different sizes; on small-scale datasets, its performance is better than all current mainstream models.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Model & BirdCLEF (\%) & AudioSet (balanced) (\%) & UCR (\%) & VGG-Sound (\%) \\ \hline Baseline & 77.5 & 25.1 & 85.3 & - \\ PANNs & 79.1 & 27.5 & 90.9 & - \\ AST & 80.6 & 33.7 & 93.4 & 78.1 \\ MAST & 80.8 & 34.2 & 93.9 & 81.3 \\ PSLA & 79.2 & 32.7 & 89.2 & 77.8 \\ \hline
**ASCA(ours)** & **81.2** & **35.1** & **94.2** & **82.0** \\ \hline \end{tabular}
\end{table}
Table 1: Experimental Results on Different Datasets
Figure 1: ASCA architecture
\begin{table}
\begin{tabular}{|c|c|} \hline Architecture & Result (\%) \\ \hline C-C-T-T & 75.3 \\ C-C-C-C & 79.4 \\ \hline
**C-C-C-T(USED)** & **81.2** \\ \hline \end{tabular}
\end{table}
Table 2: Model Architecture
#### 4.4.3 Multi-scale QKV Spectrogram Feature Extraction
We investigated the effect of different ViT window divisions on the results. Since the input image is 224\(\times\)224, we tested the following window sizes: \(7\times 7\), \(14\times 14\), \(16\times 16\), and \(32\times 32\), and observed the effect of these different division sizes.
Across the different division windows, we find that \(16\times 16\) is in most cases by far the best choice, and with \(16\times 16\) divisions, ASCA remains the best at capturing global information. The results are shown in Figure 3.
## 5 Conclusion
In this work, we presented the evaluation of ASCA on several audio datasets, with a primary focus on the BirdCLEF2023 dataset. Our findings revealed that the BirdCLEF2023 dataset, derived from user-shared birdsong audios on the xeno-canto.org platform, serves as a robust foundation for assessing audio classification models. To optimize dataset performance, we employed various data enhancement techniques, including mixup, masking, and background noise. With the AdamW optimizer [35] and the BCE binary cross-entropy loss function, our training achieved the desired outcomes.
In the journey towards model optimization, we integrated several enhancement and regularization strategies. Notably, batch normalization emerged as a superior approach, especially for smaller audio datasets. Our experiments on various datasets, as documented in Table 1, highlight the strength of our proposed ASCA model. This model not only outperformed others on the BirdCLEF2023 dataset but also showcased exemplary results on the other datasets.
In short, we believe these models perform so well on low-data tasks for the following reasons: augmentation and regularization are very important, especially on smaller datasets, and the hybrid transformer-convolution modeling approach is highly generalizable and does not suffer from highly unstable training.
The ablation study emphasized the significance of pre-training, especially when leveraging the ImageNet dataset. Our results further demonstrate that the ASCA architecture provides a better training foundation for small-scale datasets compared to MAST. In addition, the C-C-C-T architecture proved advantageous in handling small-scale datasets. Our exploration of ViT window divisions for spectrogram feature extraction identified \(16\times 16\) as the optimal choice, especially in scenarios where global information capture is paramount. The ASCA architecture still outshone the others, emphasizing its ability to process and interpret audio data effectively.
In conclusion, the ASCA model, combined with the appropriate dataset optimization and enhancement techniques, offers promising results in audio classification tasks. Our experiments on various datasets emphasize the model's versatility and effectiveness. The insights drawn from our research can serve as a guidepost for future studies in audio classification and model optimization.
Figure 3: The impact of different partition window sizes on the performance of the attention model
Figure 2: Performance of different models at different pretraining scales. |
2306.17790 | Theoretical Analysis of Heterodyne Rydberg Atomic Receiver Sensitivity
Based on Transit Relaxation Effect and Frequency Detuning | We conduct a theoretical investigation into the impacts of local microwave
electric field frequency detuning, laser frequency detuning, and transit
relaxation rate on enhancing heterodyne Rydberg atomic receiver sensitivity. To
optimize the output signal amplitude given the input microwave signal, we
derive the steady-state solutions of the atomic density matrix. Numerical
results show that laser frequency detuning and local microwave electric field
frequency detuning can improve the system detection sensitivity, which can help
the system achieve extra sensitivity gain. It also shows that the heterodyne
Rydberg atomic receiver can detect weak microwave signals continuously over a
wide frequency range with the same sensitivity or even more sensitivity than
the resonance case. To evaluate the transit relaxation effect, a modified
Liouville equation is used. We find that the transition relaxation rate
increases the time it takes to reach steady state and decreases the sensitivity
of the system detection. | Shanchi Wu, Chen Gong, Shangbin Li, Rui Ni, Jinkang Zhu | 2023-06-30T16:44:48Z | http://arxiv.org/abs/2306.17790v1 | Theoretical Analysis of Heterodyne Rydberg Atomic Receiver Sensitivity Based on Transit Relaxation Effect and Frequency Detuning
###### Abstract
We conduct a theoretical investigation into the impacts of local microwave electric field frequency detuning, laser frequency detuning, and transit relaxation rate on enhancing heterodyne Rydberg atomic receiver sensitivity. To optimize the output signal's amplitude given the input microwave signal, we derive the steady-state solutions of the atomic density matrix. Numerical results show that laser frequency detuning and local microwave electric field frequency detuning can improve the system detection sensitivity, which can help the system achieve extra sensitivity gain. It also shows that the heterodyne Rydberg atomic receiver can detect weak microwave signals continuously over a wide frequency range with the same sensitivity or even more sensitivity than the resonance case. To evaluate the transit relaxation effect, a modified Liouville equation is used. We find that the transition relaxation rate increases the time it takes to reach steady state and decreases the sensitivity of the system detection.
Rydberg atom, frequency detuning, sensitivity optimization, transit relaxation.
## I Introduction
Rydberg atoms show extremely strong microwave transition electric dipole moments, which are sensitive to external electromagnetic fields. At room temperature, electromagnetic fields can be measured with high sensitivity and precision using atomic quantum coherence effects.
Utilizing electromagnetically induced transparency (EIT) spectroscopy, a Rydberg atomic sensor with a sensitivity of 30 \(\mu\)V\(\cdot\)cm\({}^{-1}\)Hz\({}^{-1/2}\) and a minimum detectable electric field intensity of \(8\) \(\mu\)V/cm has been demonstrated [1]. Rydberg atomic sensors' sensitivity can be raised to \(2\) \(\mu\)V\(\cdot\)cm\({}^{-1}\)Hz\({}^{-1/2}\) using balanced homodyne detection and frequency modulation methods based on optical interferometers [2, 3]. Combining the Rydberg atomic sensor with the traditional superheterodyne approach results in a significant increase in sensitivity. Such a method introduces a local microwave signal, and phase and frequency measurement of microwaves can be realized by the Rydberg atomic sensor, improving the sensitivity of microwave electric field detection to \(55\) nV\(\cdot\)cm\({}^{-1}\)Hz\({}^{-1/2}\)[4]. The setting of the laser parameters also affects the electric field detection sensitivity of the Rydberg atomic sensor. Numerical simulations and experimental results show that the sensitivity of the Rydberg atomic sensor is related to the amplitudes of the two lasers and the microwave electric field. The microwave electric field detection sensitivity can be improved to \(12.5\) nV\(\cdot\)cm\({}^{-1}\)Hz\({}^{-1/2}\) by detuning the coupling laser frequency, which is the highest sensitivity achieved in experiments so far [5]. For applications in communication systems, predicting the Rydberg atomic sensor performance under different parameter settings through theoretical analysis and numerical simulation is significant.
Rydberg atomic sensors' detection sensitivity is fundamentally constrained by quantum noise, which is orders of magnitude lower than thermal noise. In real systems, however, a variety of factors can decrease the coherence time of the atoms, lowering the system's detection sensitivity. The main relaxation mechanisms that affect a Rydberg atomic sensor's sensitivity include collision broadening, transit time broadening [6], power broadening [7] and laser linewidth [8, 9]. Atomic collisions with the cell walls as well as with one another cause the collision broadening effect, which can be efficiently reduced by low atomic density. The atoms' motion causes the transit time broadening effect, which depends on the temperature, atomic mass, and laser beam size. Power broadening originates from the instability of the laser power, which can be suppressed by a power stabilization module in experiment. The laser linewidth is an inherent property of lasers, and narrow-linewidth lasers can improve the performance of Rydberg atomic sensors. Finding technical methods to eliminate or reduce the influence of these factors is crucial for future applications in communication systems.
Recent experimental progress shows that reducing the laser linewidth and power fluctuations are feasible improvements. Experimental techniques with ultrastable cavities can achieve extremely narrow laser linewidths and high power stabilization, which can push the system sensitivity beyond the \(100\) nV\(\cdot\)cm\({}^{-1}\)Hz\({}^{-1/2}\) level. The optical readout noise is then the dominant factor limiting the detection sensitivity [4, 5]. Apart from improving the apparatus performance, new detection and readout techniques are the remaining options. From the perspective of readout techniques, experiments based on optical interferometers provide ideas for reducing readout noise. Methods that combine a circulating cavity with squeezed light have the potential to further reduce the readout noise limit of the system. Existing experimental schemes adopt different
detection techniques depending on the atomic response regime, such as the AT splitting effect at atomic resonance [1], the AC Stark effect under off-resonant driving [10], and the heterodyne readout technique with tuning capability [11]. A continuously tunable electric field measurement based on the far off-resonant AC Stark effect in a Rydberg atomic vapor cell shows detection sensitivity comparable to that of a resonant microwave-dressed Rydberg heterodyne receiver using the same system [12]. Further research on electric field detuning can enable smooth transitions among these operating regimes, especially for modulated signals. Frequency modulation spectroscopy [3], lower-noise photodetectors, and the choice of operating point in the experiment are all sensitivity optimization methods worth trying.
In our preliminary work, we evaluated the optimal value of the local microwave frequency detuning in the high transmittance case [13]. In this work, we further characterize the effects of local microwave, probing laser and coupling laser frequency detuning on the sensitivity of Rydberg atomic sensors. In addition, we investigate the impact of transit relaxation on the system sensitivity. We assume that the heterodyne scheme can detect the microwave signal amplitude, phase, and frequency, and that the system can be tuned by changing the local microwave signal frequency. We aim to build on this heterodyne reception framework.
The remainder of this paper is organized as follows. In Section II, we outline the basic detection principles of the heterodyne scheme. In Section III, we optimize the frequency parameters based on the atomic four-level model and theoretically investigate the detection response to the local microwave, coupling laser and probing laser frequency detuning. In Section IV, we consider the transit relaxation effect caused by atomic motion and numerically characterize the system sensitivity variation. Finally, in Section V, we conclude this work.
## II Theoretical Model
A schematic diagram of the detection system based on the Rydberg atomic sensor is shown in Fig. 1. The probing laser and coupling laser simultaneously excite the atoms in the atomic vapor cell, and the photodetector receives the transmitted probing laser. The probing laser and coupling laser are locked on the EIT transmission resonance. When the microwave signal radiates across the atomic vapor cell, the resonant microwave electric field can improve the transmission performance, and the AC Stark shift caused by an off-resonant microwave electric field also changes the probing laser transmission, thereby changing the signal intensity received by the photodetector. Because the system response is sensitive to the direction of polarization, we assume that the system lies in the \(xoz\) plane, and the probing laser and coupling laser have the same polarization in the \(yoz\) plane. The incident microwave electric field has \(y\)-axis polarization and \(\beta\) is the angle difference. It has been demonstrated that the microwave electric field polarization can also be obtained by the atomic system [14]. In this work, we assume that the lasers and microwave signal have the same polarization, i.e., the polarization angle \(\beta\) between the lasers and the microwave is zero.
The diagram of a typical atomic four-level structure is shown in Fig. 2. The probing laser and coupling laser are tuned with detunings \(\Delta_{p}\) and \(\Delta_{c}\), respectively, and the local microwave electric field is tuned with detuning \(\Delta_{L}\). Assume that the frequency and phase differences between the local microwave electric field and the signal microwave electric field are \(\delta_{s}\) and \(\phi_{s}\), respectively. The corresponding Hamiltonian in the rotating frame is given by
\[H=\frac{\hbar}{2}\begin{pmatrix}0&\Omega_{p}&0&0\\ \Omega_{p}&2\Delta_{p}&\Omega_{c}&0\\ 0&\Omega_{c}&2\left(\Delta_{p}+\Delta_{c}\right)&\Omega_{L}+\Omega_{s}e^{-iS \left(t\right)}\\ 0&0&\Omega_{L}+\Omega_{s}e^{iS\left(t\right)}&2\left(\Delta_{p}+\Delta_{c}+ \Delta_{L}\right)\end{pmatrix}, \tag{1}\]
where \(\Omega_{p}\) and \(\Omega_{c}\) are the corresponding Rabi frequencies for transitions \(\left|1\right\rangle\rightarrow\left|2\right\rangle\) and \(\left|2\right\rangle\rightarrow\left|3\right\rangle\), respectively. The local microwave electric field and signal microwave electric field couple with Rydberg transition between state \(\left|3\right\rangle\) and state \(\left|4\right\rangle\), with Rabi frequencies \(\Omega_{L}\) and \(\Omega_{s}\), respectively. Let \(\Omega=\left|\Omega_{L}+\Omega_{s}e^{-iS\left(t\right)}\right|\), where \(S\left(t\right)=2\pi\delta_{s}t+\phi_{s}\) is the cumulative phase difference of the signal microwave relative to the local microwave.
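As a minimal numerical sketch, the Hamiltonian of Eq. (1) can be assembled as follows, assuming \(\hbar=1\) and all Rabi frequencies and detunings expressed in common angular-frequency units:

```python
import numpy as np

def hamiltonian(Op, Oc, OL, Os, Dp, Dc, DL, S):
    # Rotating-frame Hamiltonian of Eq. (1) with hbar = 1.
    Om = OL + Os * np.exp(-1j * S)   # combined local + signal microwave drive
    H = np.array([
        [0.0, Op,     0.0,           0.0],
        [Op,  2 * Dp, Oc,            0.0],
        [0.0, Oc,     2 * (Dp + Dc), Om],
        [0.0, 0.0,    np.conj(Om),   2 * (Dp + Dc + DL)],
    ], dtype=complex)
    return 0.5 * H
```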
Due to the relaxation effect caused by the spontaneous radiation, collision of atoms and transit relaxation effect, as well as the atoms repopulation, the complete Liouville equation for the rotating-frame density matrix of the system can be written as [15]
\[i\hbar\frac{d}{dt}\rho=\left[H,\ \rho\right]-i\hbar\frac{1}{2}\left(\Gamma \rho+\rho\Gamma\right)+i\hbar\Lambda, \tag{2}\]
where \(\rho\) is the density matrix. Relaxation matrix \(\Gamma\), which represents the relaxation rate of each state, is given by
\[\Gamma=\begin{pmatrix}\gamma&0&0&0\\ 0&\gamma+\gamma_{2}&0&0\\ 0&0&\gamma+\gamma_{3}+\gamma_{c}&0\\ 0&0&0&\gamma+\gamma_{4}\end{pmatrix}, \tag{3}\]
where \(\gamma_{2}\), \(\gamma_{3}\) and \(\gamma_{4}\) represent the spontaneous decay rates of atoms on the three high levels; \(\gamma_{c}\) and \(\gamma\) are the relaxation rates of the atoms collision and transit effect, respectively. We assume that each level undergoes the same relaxation rate \(\gamma\) due to the exit of atoms from the laser beam.
In addition, the atoms that spontaneously decay from the upper states also repopulate the lower states. In this work, we only consider the decay paths shown in Fig. 2. The repopulation matrix \(\Lambda\) is given by
\[\Lambda=\begin{pmatrix}\gamma+\gamma_{2}\rho_{22}+\gamma_{4}\rho_{44}&0&0&0 \\ 0&\gamma_{3}\rho_{33}&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}. \tag{4}\]
The dynamics equation of the atomic system can be expressed as
\[\frac{d}{dt}\rho=-\frac{i}{\hbar}\left[H,\rho\right]+L, \tag{5}\]
where \(L=-\frac{1}{2}\left(\Gamma\rho+\rho\Gamma\right)+\Lambda\) represents the Lindbladian operator that defines the relaxation process in the system.
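A minimal numerical sketch of solving \(d\rho/dt=0\) for Eq. (5) by vectorizing the Liouvillian; the rate values are illustrative placeholders, and the transit feed \(\gamma\) is written as \(\gamma\,\mathrm{Tr}(\rho)\) (which equals \(\gamma\) since \(\mathrm{Tr}\,\rho=1\)) so that the map stays linear:

```python
import numpy as np

# Illustrative placeholder rates, in the same angular-frequency units as H.
g, g2, g3, g4 = 0.1, 5.2, 1.0, 1.0

def lindblad_rhs(rho, H, Gamma):
    # Right-hand side of Eq. (5) with the repopulation matrix of Eq. (4).
    Lam = np.zeros_like(rho)
    Lam[0, 0] = g * np.trace(rho) + g2 * rho[1, 1] + g4 * rho[3, 3]
    Lam[1, 1] = g3 * rho[2, 2]
    return -1j * (H @ rho - rho @ H) - 0.5 * (Gamma @ rho + rho @ Gamma) + Lam

def steady_state(H, Gamma):
    n = H.shape[0]
    Lsup = np.zeros((n * n, n * n), dtype=complex)
    for k in range(n * n):          # build the Liouvillian superoperator
        e = np.zeros(n * n, dtype=complex)
        e[k] = 1.0
        Lsup[:, k] = lindblad_rhs(e.reshape(n, n), H, Gamma).ravel()
    # Solve Lsup vec(rho) = 0 subject to the trace constraint Tr(rho) = 1.
    A = np.vstack([Lsup, np.eye(n, dtype=complex).ravel()[None, :]])
    b = np.zeros(n * n + 1, dtype=complex)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0].reshape(n, n)
```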
Under resonant conditions, i.e., \(\Delta_{c}=\Delta_{p}=\Delta_{L}=0\), and ignoring the atomic collision and transit time broadening effects, we have \(\gamma=\gamma_{c}=0\). The steady-state solution \(\rho_{21}(t)\) is given by
\[\rho_{21}(t)=-\frac{i\gamma_{2}\Omega_{p}\Omega^{2}}{\gamma_{2}^{2}\Omega^{2}+2 \Omega_{c}^{2}\Omega_{p}^{2}+2\Omega_{p}^{4}+2\Omega_{p}^{2}\Omega^{2}}. \tag{6}\]
The linear susceptibility can be written as
\[\chi\left(t\right)=-\frac{2N_{0}\mu_{12}^{2}}{\hbar\epsilon_{0}\Omega_{p}} \rho_{21}\left(t\right), \tag{7}\]
where \(N_{0}\) is the total density of atoms, \(\mu_{12}\) is the dipole moment of transition \(\left|1\right\rangle\rightarrow\left|2\right\rangle\) and \(\epsilon_{0}\) is the permittivity of vacuum. Assuming \(\Omega_{s}\ll\Omega_{L}\), it has been proved that an optimal local microwave field amplitude can improve the system sensitivity. The optimal Rabi frequency \(\Omega_{L}\) is given by [4]
\[\Omega_{L}=\Omega_{p}\sqrt{\frac{2\left(\Omega_{c}^{2}+\Omega_{p}^{2}\right)}{ 3\left(2\Omega_{p}^{2}+\gamma_{2}^{2}\right)}}. \tag{8}\]
In this work, based on the steady-state solution of the system Liouville equation, we derive expressions for the linear susceptibility and the output optical power, and then find the optimal values. In general, we expect the output optical power to have the following form,
\[P\left(t\right)=\bar{P}_{0}+\kappa\Omega_{s}\cos\left(2\pi\delta_{s}t+\phi_{s }\right). \tag{9}\]
We aim to maximize the absolute value of conversion coefficient \(\kappa\), which represents the detection sensitivity.
## III Parameters Optimization
### _Methodology_
In experimental systems, the atomic density and laser power are usually limited in order to reduce the interaction and collision rates between atoms, although increasing the laser power can theoretically increase the signal-to-noise ratio of the output signal and improve the sensitivity. Therefore, in current experimental setups, the probing laser power is usually kept weak, and the coupling laser power is also limited. For Rydberg atom sensors using EIT readout, the optimal Rabi frequencies for transitions \(\left|1\right\rangle\rightarrow\left|2\right\rangle\) and \(\left|2\right\rangle\rightarrow\left|3\right\rangle\) depend on the decay rate of each state and the relaxation rate caused by atomic collisions [16]. We analyze the influence of the laser frequencies and the local microwave electric field frequency detuning on the system sensitivity.
To gain an intuitive understanding, we begin with analytical derivations. We set a detuning parameter \(\Delta\) (which can be \(\Delta_{p}\), \(\Delta_{c}\), or \(\Delta_{L}\)). Assuming \(\gamma_{3}=\gamma_{4}=\gamma_{c}=\gamma=0\), the steady-state solution \(\rho_{21}\left(\Delta\right)\) can be obtained based on Eq. (5). The corresponding linear susceptibility of the atomic system can be written as
\[\Im\left(\chi\left(t,\ \Delta\right)\right)=\chi_{0}\left(\Delta\right)+\chi_{1}\left(\Delta\right)\Omega_{s}\cos\left(2\pi\delta_{s}t+\phi_{s}\right), \tag{10}\]
which has a time-invariant part \(\chi_{0}\left(\Delta\right)\) and a time-variant part with coefficient \(\chi_{1}\left(\Delta\right)\). The first part contributes a constant signal component at the photodetector, and the second part carries the information transmitted by the signal microwave field.
The probing laser transmission is associated with the imaginary component of the susceptibility as
\[\begin{split} P\left(t\right)&=P_{i}e^{-kL\Im \left(\chi\left(t,\ \Delta\right)\right)}\\ &=e^{-kL\chi_{0}\left(\Delta\right)}P_{i}e^{-kL\chi_{1}\left( \Delta\right)\Omega_{s}\cos\left(2\pi\delta_{s}t+\phi_{s}\right)},\end{split} \tag{11}\]
where \(k=2\pi/\lambda_{p}\) is the wavevector of the probing laser, and \(L\) is the length of the atomic vapor cell along the laser beam propagation direction.

Fig. 1: The diagram of the atomic heterodyne detection system.

Fig. 2: The diagram of the four-level configuration.

The peak-to-peak value of the output signal is
\[P_{pp}\left(\Delta\right)=e^{-kL\chi_{0}\left(\Delta\right)}P_{i}\left(e^{kL\chi _{1}\left(\Delta\right)\Omega_{s}}-e^{-kL\chi_{1}\left(\Delta\right)\Omega_{s}} \right). \tag{12}\]
Since the amplitude of the microwave signal is linearly dependent on its corresponding Rabi frequency \(\Omega_{s}\), we can optimize the coefficients \(\chi_{0}\left(\Delta\right)\) and \(\chi_{1}\left(\Delta\right)\) in Eq. (12) to maximize \(P_{pp}\left(\Delta\right)\). Two cases are considered in the following.
**General case:** In the weak signal regime, \(\left|kL\chi_{1}\left(\Delta\right)\Omega_{s}\right|\) is much smaller than one. The output laser power can be approximated as
\[\begin{split} P\left(t\right)&\approx e^{-kL\chi _{0}\left(\Delta\right)}P_{i}\left(1-kL\chi_{1}\left(\Delta\right)\Omega_{s} \cos\left(2\pi\delta_{s}t+\phi_{s}\right)\right)\\ &=\bar{P}_{0}\left(\Delta\right)+\kappa\left(\Delta\right)P_{i} \Omega_{s}\cos\left(2\pi\delta_{s}t+\phi_{s}\right)\end{split}, \tag{13}\]
where \(\bar{P}_{0}\left(\Delta\right)=P_{i}e^{-kL\chi_{0}\left(\Delta\right)}\) is the DC component of the output optical power and \(\kappa\left(\Delta\right)=-e^{-kL\chi_{0}\left(\Delta\right)}kL\chi_{1}\left( \Delta\right)\) represents the conversion coefficient to signal. The DC component is associated with the photodetector operation point. Our goal is to maximize the conversion coefficient \(\left|\kappa\left(\Delta\right)\right|\) through parameter optimization.
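The structure of the general-case problem can be captured in a few lines of code. The sketch below is illustrative only: the placeholder Lorentzian profiles `chi0` and `chi1` stand in for the actual susceptibility coefficients, e.g. those of Eqs. (18) and (19).

```python
import numpy as np

def kappa(delta, chi0, chi1, k, L):
    """Conversion coefficient of Eq. (13): kappa = -exp(-k*L*chi0) * k*L * chi1."""
    return -np.exp(-k * L * chi0(delta)) * k * L * chi1(delta)

# Placeholder susceptibility profiles (illustrative, not Eqs. (18)-(19)).
chi0 = lambda d: 1e-7 / (1.0 + (d / 1e7) ** 2)
chi1 = lambda d: 1e-14 * d / (1.0 + (d / 1e7) ** 2) ** 2

k, L = 2 * np.pi / 852e-9, 0.01          # Cs D2 probe wavevector and a 1 cm cell
deltas = np.linspace(-5e7, 5e7, 100001)  # detuning grid, rad/s
best = deltas[np.argmax(np.abs(kappa(deltas, chi0, chi1, k, L)))]
print(f"grid-search optimum of |kappa| at Delta = {best:.3e} rad/s")
```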
**High transmittance case:** If \(\left|kL\chi_{0}\left(\Delta\right)\right|\) is much smaller than one, \(e^{-kL\chi_{0}\left(\Delta\right)}\) can be approximated as a constant. The output laser power can be approximated as
\[\begin{split} P\left(t\right)&\approx P_{i}\left(1 -kL\chi_{1}\left(\Delta\right)\Omega_{s}\cos\left(2\pi\delta_{s}t+\phi_{s} \right)\right)\\ &=\bar{P^{{}^{\prime}}_{0}}\left(\Delta\right)+\kappa^{\prime} \left(\Delta\right)P_{i}\Omega_{s}\cos\left(2\pi\delta_{s}t+\phi_{s}\right), \end{split} \tag{14}\]
where \(\bar{P^{{}^{\prime}}_{0}}\left(\Delta\right)\approx P_{i}\) is the DC component of the output optical power and \(\kappa^{\prime}\left(\Delta\right)=-kL\chi_{1}\left(\Delta\right)\) represents the conversion coefficient. In the high transmittance case, the optimization problem of maximizing the conversion coefficient \(\left|\kappa^{\prime}\left(\Delta\right)\right|\) is equivalent to maximizing \(\left|\chi_{1}\right|\).
### _Local Microwave Detuning_
Assuming that \(\Delta_{c}=\Delta_{p}=\gamma_{3}=\gamma_{4}=0\), the steady-state solution \(\rho_{21}(\Delta_{L})\) can be obtained according to Eq. (5) as Eq. (15). In the weak signal regime, \(\Omega_{s}\ll\Omega_{L}\), the first-order approximation in \(\Omega_{s}/\Omega_{L}\) is given by Eq. (16). The corresponding linear susceptibility of the atomic system can be written as
\[\Im\left(\chi\left(t,\ \Delta_{L}\right)\right)=\chi_{0}\left(\Delta_{L} \right)+\chi_{1}\left(\Delta_{L}\right)\Omega_{s}\cos S\left(t\right), \tag{17}\]
with \(S(t)=2\pi\delta_{s}t+\phi_{s}\); it has a time-invariant part given by Eq. (18) and a time-variant coefficient given by Eq. (19).
In the general case, the sensitivity optimization problem is formulated as follows.
**(P1): Sensitivity Maximization in General Case via Local Microwave Detuning**
\[\Delta_{L}^{*}=\underset{\Delta_{L}}{\text{argmax}}\ \kappa(\Delta_{L})= \underset{\Delta_{L}}{\text{argmax}}\ e^{-kL\chi_{0}\left(\Delta_{L}\right)} \chi_{1}\left(\Delta_{L}\right). \tag{20}\]
Adopting the following variable substitution,
\[\frac{\Omega_{p}^{2}}{\gamma_{2}^{2}}\to x,\ \frac{\Omega_{c}^{2}}{\gamma_{2}^{2}} \to y,\ \frac{\Omega_{L}^{2}}{\gamma_{2}^{2}}\to z,\ \frac{\Delta_{L}^{2}}{\gamma_{2}^{2}}\to w, \tag{21}\]
we obtain the equivalent optimization problem of Eq. (22), where \(C=2kLN_{0}\mu_{12}^{2}/(\hbar\epsilon_{0}\gamma_{2})\) is a constant. Thus, the optimal value \(w^{*}\) is the maximum point of the function \(g(x,y,z,w)\) of Eq. (23).
Letting \(u=x(x+y)\), and \(v=z(2x+1)\), the optimal value for \(w\) is
\[w^{*}=\frac{x^{2}z\left(Cz-2u+\sqrt{4\left(u+v\right)^{2}+\left(Cz\right)^{2}} \right)}{8u^{2}}. \tag{24}\]
The corresponding microwave field detuning is obtained as
\[\Delta_{L}^{*}=\frac{\Omega_{p}^{2}\Omega_{L}}{2U}\sqrt{\frac{C\gamma_{2}^{2} \Omega_{L}^{2}-2U+\sqrt{4\left(U+V\right)^{2}+\left(C\gamma_{2}^{2}\Omega_{L}^{ 2}\right)^{2}}}{2}}, \tag{25}\]
where \(U=\Omega_{p}^{2}\left(\Omega_{p}^{2}+\Omega_{c}^{2}\right)\), and \(V=\Omega_{L}^{2}\left(2\Omega_{p}^{2}+\gamma_{2}^{2}\right)\).
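Eq. (25) admits a direct transcription into code, as sketched below; the constant \(C\) depends on the cell length and atomic density and is treated here as a given input.

```python
import numpy as np

def delta_L_star(omega_p, omega_c, omega_L, gamma2, C):
    """Optimal local microwave detuning of Eq. (25).

    C = 2*k*L*N0*mu12^2 / (hbar*eps0*gamma2), as defined below Eq. (21)."""
    U = omega_p**2 * (omega_p**2 + omega_c**2)
    V = omega_L**2 * (2 * omega_p**2 + gamma2**2)
    c_term = C * gamma2**2 * omega_L**2
    inner = c_term - 2 * U + np.sqrt(4 * (U + V)**2 + c_term**2)
    return omega_p**2 * omega_L / (2 * U) * np.sqrt(inner / 2)
```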
In the high transmittance case, the optimization problem of Eq. (20) is simplified as follows.
**(P2): Sensitivity Maximization in High Transmittance Case via Local Microwave Detuning**
\[\Delta_{L}^{**}=\underset{\Delta_{L}}{\text{argmax}}\ \chi_{1}\left(\Delta_{L} \right). \tag{26}\]
According to Eq. (19) and Eq. (21), we can get the following equivalent optimization problem,
\[w^{**}=\underset{w}{\text{argmax}}\ \frac{z^{3/2}(x+y)(xz+4w(x+y))}{\left[z^{2}+4w(x+y)^{2}+2xz(x+y+z)\right]^{2}}. \tag{27}\]
\[\chi_{0}\left(\Delta_{L}\right)=\frac{2N_{0}\mu_{12}^{2}}{\hbar\epsilon_{0}} \frac{\gamma_{2}\Omega_{L}^{4}}{\gamma_{2}^{2}\Omega_{L}^{4}+4\Delta_{L}^{2} \left(\Omega_{c}^{2}+\Omega_{p}^{2}\right)^{2}+2\Omega_{L}^{2}\Omega_{p}^{2} \left(\Omega_{p}^{2}+\Omega_{c}^{2}+\Omega_{L}^{2}\right)}, \tag{18}\]
\[\chi_{1}\left(\Delta_{L}\right)=\frac{2N_{0}\mu_{12}^{2}}{\hbar\epsilon_{0}} \frac{4\gamma_{2}\Omega_{L}^{3}\left(\Omega_{p}^{2}+\Omega_{c}^{2}\right) \left(\Omega_{L}^{2}\Omega_{p}^{2}+4\Delta_{L}^{2}\left(\Omega_{p}^{2}+\Omega_ {c}^{2}\right)\right)}{\left[\gamma_{2}^{2}\Omega_{L}^{4}+4\Delta_{L}^{2} \left(\Omega_{c}^{2}+\Omega_{p}^{2}\right)^{2}+2\Omega_{L}^{2}\Omega_{p}^{2} \left(\Omega_{c}^{2}+\Omega_{p}^{2}+\Omega_{L}^{2}\right)\right]^{2}}. \tag{19}\]
\[w^{*}=\underset{w}{\text{arg}\max}\left\{\begin{array}{c}\exp[-\frac{Cz^{2} }{z^{2}+4w\left(x+y\right)^{2}+2xz\left(x+y+z\right)}]\cdot\frac{z^{3/2}\left( x+y\right)\left(xz+4w\left(x+y\right)\right)}{\left[z^{2}+4w\left(x+y\right)^{2}+2 xz\left(x+y+z\right)\right]^{2}}\end{array}\right\}, \tag{22}\]
\[g\left(x,y,z,w\right)=\exp[-\frac{Cz^{2}}{z^{2}+4w\left(x+y\right)^{2}+2xz \left(x+y+z\right)}]\cdot\frac{z^{3/2}\left(x+y\right)\left(xz+4w\left(x+y \right)\right)}{\left[z^{2}+4w\left(x+y\right)^{2}+2xz\left(x+y+z\right) \right]^{2}}. \tag{23}\]
Thus, the optimal value is the maximum point, with respect to \(w\), of the function
\[h\left(x,y,z,w\right)=\frac{z^{3/2}\left(x+y\right)\left(xz+4w\left(x+y\right) \right)}{\left[z^{2}+4w\left(x+y\right)^{2}+2xz\left(x+y+z\right)\right]^{2}}. \tag{28}\]
Differentiating \(h(x,y,z,w)\) with respect to \(w\), we have
\[\frac{\partial h}{\partial w}=\frac{4z^{3/2}\left(x+y\right)^{2}\left(\left(2 x+1\right)z^{2}-4w\left(x+y\right)^{2}\right)}{\left(4w\left(x+y\right)^{2}+z \left(2x\left(x+y+z\right)+z\right)\right)^{3}}. \tag{29}\]
Since \(x,y,z>0\), the optimal value for \(w\) is
\[w^{**}=\frac{\left(2x+1\right)z^{2}}{4\left(x+y\right)^{2}}, \tag{30}\]
and the corresponding local microwave frequency detuning is
\[\Delta_{L}^{**}=\frac{\Omega_{L}^{2}}{2\left(\Omega_{p}^{2}+\Omega_{c}^{2} \right)}\sqrt{2\Omega_{p}^{2}+\gamma_{2}^{2}}. \tag{31}\]
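Eq. (31) is equally simple to implement, as shown below; per the numerical results of Section IV, its value is close to the general-case optimum of Eq. (25). The value of \(\Omega_{L}\) used here is illustrative.

```python
import numpy as np

def delta_L_star_star(omega_p, omega_c, omega_L, gamma2):
    """High-transmittance optimal detuning of Eq. (31)."""
    return (omega_L**2 / (2 * (omega_p**2 + omega_c**2))
            * np.sqrt(2 * omega_p**2 + gamma2**2))

# Parameters of Section IV; Omega_L chosen for illustration.
gamma2, omega_p, omega_c = (2 * np.pi * f for f in (5.2e6, 5.7e6, 0.97e6))
omega_L = 2 * np.pi * 4e6
print(f"Delta_L**/2pi = "
      f"{delta_L_star_star(omega_p, omega_c, omega_L, gamma2) / (2 * np.pi * 1e6):.2f} MHz")
```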
### _Probing Laser Detuning_
Assuming that \(\Delta_{c}=\Delta_{L}=\gamma_{3}=\gamma_{4}=\gamma_{c}=\gamma=0\), the steady-state solution \(\rho_{21}(\Delta_{p})\) can be obtained according to Eq. (5) as Eq. (32). In the weak signal regime, \(\Omega_{s}\ll\Omega_{L}\), the first-order approximation in \(\Omega_{s}/\Omega_{L}\) is given by Eq. (33). The corresponding imaginary component of the linear susceptibility \(\chi\left(t,\Delta_{p}\right)\) of the atomic system can be written as
\[\Im\left(\chi\left(t,\Delta_{p}\right)\right)=\chi_{0}(\Delta_{p})+\chi_{1}( \Delta_{p})\Omega_{s}\cos S(t), \tag{34}\]
which has a time-invariant part
\[\chi_{0}(\Delta_{p})=-\frac{2N_{0}\mu_{12}^{2}}{\epsilon_{0}\hbar\Omega_{p}} \frac{\gamma_{2}\Omega_{p}\left(4\Delta_{p}^{2}-\Omega_{L}^{2}\right)^{2}}{D_{p }(\Omega_{L})}, \tag{35}\]
and a time variant part
\[\chi_{1}(\Delta_{p})=\frac{2N_{0}\mu_{12}^{2}}{\epsilon_{0}\hbar\Omega_{p}} \frac{4\gamma_{2}\Omega_{p}\Omega_{L}A_{p}}{D_{p}^{2}(\Omega_{L})}. \tag{36}\]
The sensitivity optimization problem in the general case is formulated as follows.
**(P3): Sensitivity Maximization in General Case via Probing Laser Detuning**

\[\Delta_{p}^{*}=\underset{\Delta_{p}}{\text{argmax}}\ \kappa(\Delta_{p})= \underset{\Delta_{p}}{\text{argmax}}\ e^{-kL\chi_{0}(\Delta_{p})}\chi_{1}\left( \Delta_{p}\right). \tag{37}\]

We also numerically characterize the probing laser detuning effect on the weak signal response in Section IV.
### _Coupling Laser Detuning_
Assuming that \(\Delta_{p}=\Delta_{L}=\gamma_{3}=\gamma_{4}=\gamma_{c}=\gamma=0\), the steady-state solution \(\rho_{21}(\Delta_{c})\) can be obtained according to Eq. (5) as Eq. (38). In the weak signal regime, \(\Omega_{s}\ll\Omega_{L}\), the first-order approximation in \(\Omega_{s}/\Omega_{L}\) is given by Eq. (39). The corresponding imaginary component of the linear susceptibility \(\chi\left(t,\Delta_{c}\right)\) of the atomic system can be written as
\[\Im\left(\chi\left(t,\Delta_{c}\right)\right)=\chi_{0}(\Delta_{c})+\chi_{1}( \Delta_{c})\Omega_{s}\cos S(t), \tag{40}\]
which has a time-invariant part
\[\chi_{0}(\Delta_{c})=-\frac{2N_{0}\mu_{12}^{2}}{\epsilon_{0}\hbar\Omega_{p}} \frac{\gamma_{2}\Omega_{p}\left(4\Delta_{c}^{2}-\Omega_{L}^{2}\right)^{2}}{D_{c }(\Omega_{L})}, \tag{41}\]
and a time variant part
\[\chi_{1}(\Delta_{c})=-\frac{2N_{0}\mu_{12}^{2}}{\epsilon_{0}\hbar\Omega_{p}} \frac{4\gamma_{2}\Omega_{p}\Omega_{L}A_{c}}{D_{c}^{2}(\Omega_{L})}. \tag{42}\]
The sensitivity optimization problem in the general case is formulated as follows.
**(P4): Sensitivity Maximization in General Case via Coupling Laser Detuning**
\[\Delta_{c}^{*}=\underset{\Delta_{c}}{\text{argmax}}\ \kappa(\Delta_{c})= \underset{\Delta_{c}}{\text{argmax}}\ e^{-kL\chi_{0}(\Delta_{c})}\chi_{1}\left( \Delta_{c}\right). \tag{43}\]
We also numerically characterize the coupling laser detuning effect on the weak signal response in Section IV.
\[\begin{split}\rho_{21}(\Delta_{p})&=\frac{\Omega_{p}\left(4 \Delta_{p}^{2}-\Omega^{2}\right)\left(i\gamma_{2}\left(4\Delta_{p}^{2}-\Omega^{ 2}\right)+2\Delta_{p}\left(4\Delta_{p}^{2}-\Omega^{2}-\Omega_{c}^{2}\right)\right) }{D_{p}(\Omega)},\\ D_{p}(\Omega)&=64\Delta_{p}^{6}+\gamma_{2}^{2}\left( \Omega^{2}-4\Delta_{p}^{2}\right)^{2}+4\Delta_{p}^{2}\left(\left(\Omega^{2}+ \Omega_{c}^{2}\right)^{2}+2\Omega_{p}^{2}\left(\Omega_{p}^{2}+\Omega_{c}^{2}-2 \Omega^{2}\right)\right)\\ &\quad-32\Delta_{p}^{4}\left(\Omega^{2}+\Omega_{c}^{2}-\Omega_{p}^{2} \right)+2\Omega_{p}^{2}\Omega^{2}\left(\Omega_{p}^{2}+\Omega_{c}^{2}+\Omega^{ 2}\right).\end{split} \tag{32}\]
\[\begin{split}\Im(\rho_{21}(\Delta_{p}))&\approx \frac{\gamma_{2}\Omega_{p}\left(4\Delta_{p}^{2}-\Omega_{L}^{2}\right)^{2}}{D_{ p}(\Omega_{L})}-\frac{4\gamma_{2}\Omega_{p}\Omega_{L}A_{p}\Omega_{s}\cos S \left(t\right)}{D_{p}^{2}(\Omega_{L})},\\ A_{p}&=\left(4\Delta_{p}^{2}-\Omega_{L}^{2}\right) \left(\Omega_{L}^{2}\Omega_{p}^{2}\left(\Omega_{c}^{2}+\Omega_{p}^{2}\right)- 16\Delta_{p}^{4}\Omega_{c}^{2}+4\Delta_{p}^{2}\Omega_{c}^{2}\left( \Omega_{c}^{2}+\Omega_{L}^{2}\right)\right.\\ &\quad\left.+3\Omega_{p}^{2}\left(4\Delta_{p}^{2}-\Omega_{L}^{2}\right) \left(\Omega_{p}^{2}+\Omega_{c}^{2}\right)\right).\end{split} \tag{33}\]
\[\begin{split}\rho_{21}(\Delta_{c})&=\frac{i\gamma_{2 }\Omega_{p}\left(4\Delta_{c}^{2}-\Omega^{2}\right)^{2}-2\Omega_{p}\Omega_{c}^{ 2}\Delta_{c}\left(4\Delta_{c}^{2}-\Omega^{2}\right)}{D_{c}(\Omega)},\\ D_{c}(\Omega)&=32\Delta_{c}^{4}\Omega_{p}^{2}+\gamma _{2}^{2}\left(\Omega^{2}-4\Delta_{c}^{2}\right)^{2}+2\Omega_{p}^{2}\Omega^{2} \left(\Omega_{p}^{2}+\Omega_{c}^{2}+\Omega^{2}\right)\\ &+4\Delta_{c}^{2}\left(\left(\Omega_{p}^{2}+\Omega_{c}^{2}\right) ^{2}+\Omega_{p}^{2}\left(\Omega_{p}^{2}-4\Omega^{2}\right)\right).\end{split} \tag{38}\]
### _Transit Relaxation Effect_
In our approach, every atom is treated as non-interacting and independent, so that each atom participates as an identical and stable atomic sensor of the microwave electric field. Besides spontaneous emission and atomic collisions, transit relaxation is an important relaxation mechanism for thermal-vapor Rydberg atomic sensors. Due to the random motion of the atoms, they continuously enter and leave the area covered by the laser beams. We assume that ground-state atoms enter the interaction region from outside the beam at a rate \(\gamma\), while atoms in all states leave the interaction region at the same rate. When the atoms leave, the excited-state population and the coherence between the different states are lost. The characteristic time of this process is the average time of flight through the cross section of the laser beams, which defines the effective relaxation rate [16]
\[\gamma=\sqrt{\frac{8k_{B}T}{\pi m}}\frac{1}{w\sqrt{2\ln 2}}, \tag{44}\]
where \(w\) is the \(1/e^{2}\) beam waist, \(m\) is the atom mass, \(T\) the ensemble temperature, and \(k_{B}\) is Boltzmann's constant. Fig. 3 shows the relationship between the \(1/e^{2}\) beam waist and transit relaxation rate for Cs atoms at \(T=300\) K.
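A short script reproducing the trend of Fig. 3 is given below; the caesium mass is the only physical constant needed besides \(k_{B}\).

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
m_Cs = 2.20695e-25  # mass of a Cs-133 atom, kg

def transit_rate(w, T=300.0, m=m_Cs):
    """Transit relaxation rate of Eq. (44); w is the 1/e^2 beam waist in metres."""
    return np.sqrt(8 * kB * T / (np.pi * m)) / (w * np.sqrt(2 * np.log(2)))

for w_mm in (0.1, 0.5, 1.0, 2.0):
    g = transit_rate(w_mm * 1e-3)
    print(f"w = {w_mm:>4} mm  ->  gamma/2pi = {g / (2 * np.pi * 1e3):6.1f} kHz")
```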
For non-zero transit relaxation rate \(\gamma\), we assume that \(\Delta_{p}=\Delta_{c}=\Delta_{L}=\gamma_{3}=\gamma_{4}=\gamma_{c}=0\). We also expect the corresponding imaginary component of the linear susceptibility to have a time-invariant part and a time-variant part,
\[\Im\left(\chi\left(t,\gamma\right)\right)=\chi_{0}(\gamma)+\chi_{1}(\gamma) \Omega_{s}\cos\left(S(t)\right). \tag{45}\]
The sensitivity optimization problem is formulated as follows.
**(P5): Sensitivity Maximization in General Case via Transit Relaxation Rate**
\[\gamma^{*}=\underset{\gamma}{\text{argmax}}\ \kappa(\gamma)=\underset{\gamma}{\text{argmax}}\ e^{-kL\chi_{0}(\gamma)}\chi_{1}\left(\gamma\right). \tag{46}\]
We numerically characterize the transit relaxation effect on the weak signal response in Section IV.
## IV Numerical Results
Fig. 3: Transit relaxation rate versus the \(1/e^{2}\) beam waist at \(T=300\) K.

We adopt some parameters from the experimental setups in [4]. Cs atoms were held in a vapour cell at room temperature. The cell contained ground-state atoms at a total density of
\(N_{0}=4.89\times 10^{10}\) cm\({}^{-3}\). For Rydberg atom EIT, the residence time in the Rydberg state is small. Under typical conditions, the population in Rydberg states is approximately \(0.0001\). Only a small fraction of the atomic velocity classes in the vapor cell is selected by the EIT lasers, roughly \(1/400\) for the scheme shown in Fig. 2 [17]. The effective atomic density for the atomic vapor is thus approximated as \(N_{0,\text{eff}}\approx 10^{8}\) cm\({}^{-3}\). The four-level configuration in Fig. 2 uses four states of a Cs atom: \(|1\rangle\to 6S_{1/2},F=4\); \(|2\rangle\to 6P_{3/2},F=5\); \(|3\rangle\to 47D_{5/2}\); and \(|4\rangle\to 48P_{3/2}\). States \(|2\rangle\), \(|3\rangle\), and \(|4\rangle\) have the inverse lifetimes \(\gamma_{2}=2\pi\times 5.2\) MHz, \(\gamma_{3}=2\pi\times 3.9\) kHz, and \(\gamma_{4}=2\pi\times 1.7\) kHz. In this section, we ignore the atomic collision effect, \(\gamma_{c}=0\); the effective Rabi frequencies for the transitions \(|1\rangle\rightarrow|2\rangle\) and \(|2\rangle\rightarrow|3\rangle\) are \(\Omega_{p}=2\pi\times 5.7\) MHz and \(\Omega_{c}=2\pi\times 0.97\) MHz, respectively. The frequency intervals between the hyperfine states of \(6P_{3/2}\) are \(151.2\) MHz (\(F=2\to F=3\)), \(201.2\) MHz (\(F=3\to F=4\)), and \(251.1\) MHz (\(F=4\to F=5\)) [18]. In the case of resonance, the system is locked to a specific hyperfine state. We consider a detuning range from \(-50\) MHz to \(50\) MHz in the numerical results.
In Section III, we assumed that \(|kL\chi_{1}(\Delta)\Omega_{s}|\ll 1\) and solved the optimization problem **(P1)** to maximize the conversion coefficient \(|\kappa(\Delta)|\). In the high transmittance case, the optimization problem simplifies to **(P2)**, maximizing \(|\kappa^{\prime}(\Delta)|\). According to Eq. (6), the valid range for the general case is estimated as \(\Omega_{s}<10\) kHz. The high transmittance case corresponds to a thin medium with \(L\leq 1\) cm. For \(L=1\) cm and \(\Omega_{s}=2\pi\times 1\) kHz, the numerical values of \(kL\chi_{0}(\Delta_{L})\) and \(kL\chi_{1}(\Delta_{L})\Omega_{s}\) are shown in Fig. 4, confirming that the assumptions of both the general case and the high transmittance case are satisfied.
For local microwave frequency detuning, the conversion coefficient \(|\kappa(\Delta_{L})|\) in the general case is shown in Fig. 5, and the conversion coefficient \(|\kappa^{\prime}(\Delta_{L})|\) in the high transmittance case is shown in Fig. 6. According to Eq. (25) and Eq. (31), the theoretical optimal values are marked as dashed red lines. The solution in the high transmittance case is numerically close to the one in the general case. In the general case, the sensitivity gains at \(\Omega_{L}/2\pi=2\) MHz, 4 MHz, and 6 MHz obtained by local microwave frequency detuning are 0.06 dB, 0.71 dB, and 1.85 dB, respectively. In the high transmittance case, the corresponding gains are 0.09 dB, 0.77 dB, and 1.96 dB. Stronger local microwave power is thus expected to yield greater sensitivity gains.
For probing and coupling laser frequency detunings, the conversion coefficients \(|\kappa(\Delta_{p})|\) and \(|\kappa(\Delta_{c})|\) in the general case are shown in Fig. 7 and Fig. 8, respectively. The ideal optimization problem is to maximize the absolute values of \(\kappa(\Delta_{p})\) and \(\kappa(\Delta_{c})\), whereas the analytical optima are derived for the signed quantities; hence a second local maximum of the absolute value exists, corresponding to the minimal value of \(\kappa(\Delta_{p})\) and \(\kappa(\Delta_{c})\). In the general case, the sensitivity gains at \(\Omega_{L}/2\pi=2\) MHz, 4 MHz, and 6 MHz by probing laser frequency detuning are 0.85 dB, 1.27 dB, and 2.96 dB, respectively, and the gains by coupling laser frequency detuning are 1.05 dB, 1.59 dB, and 3.41 dB, respectively.
Fig. 5: Conversion coefficient \(|\kappa(\Delta_{L})|\) with respect to local microwave detuning \(\Delta_{L}\) in the general case **(P1)**.
Fig. 6: Conversion coefficient \(|\kappa^{\prime}(\Delta_{L})|\) with respect to local microwave detuning \(\Delta_{L}\) in high transmittance case **(P2)**.
As the transit relaxation rate increases, the conversion coefficient decreases, as shown in Fig. 9. The influence of the transit relaxation effect can be reduced by reducing the size of the atomic vapor cell and the atomic number density; correspondingly, however, the number of effective atoms involved in the interaction decreases, and the detection sensitivity is reduced.
## V Conclusion
Based on the atomic four-level model, we have derived the steady-state solutions of the density matrix of a Rydberg atomic system operating with frequency detuning. Given the input microwave signal, we have formulated an optimization problem on the linear susceptibility of the atomic medium to maximize the amplitude of the output signal. We have obtained the optimal value of the local microwave electric field frequency detuning analytically, and numerically analyzed the effects of the probing and coupling laser frequency detunings on the optimization target. In addition, taking the atomic transit relaxation effect into account, we have modified the system Liouville equation and derived the influence of the transit relaxation rate on the detection sensitivity and on the time to reach the steady state. These results show that setting an appropriate frequency detuning can improve the detection sensitivity, yielding an extra gain of several dB when the device performance cannot be improved further. Considering the relationship between the local microwave electric field and the microwave signal in the heterodyne scheme, the results also imply that the heterodyne Rydberg atomic receiver can provide the same or better sensitivity as in the resonant case for continuous weak microwave signal detection over a large frequency range [19]. The relaxation effect reduces the sensitivity of the system and should be eliminated or reduced in experiments.
|
2309.07306 | A Cancellation Law for Probabilistic Processes | We show a cancellation property for probabilistic choice. If distributions mu
+ rho and nu + rho are branching probabilistic bisimilar, then distributions mu
and nu are also branching probabilistic bisimilar. We do this in the setting of
a basic process language involving non-deterministic and probabilistic choice
and define branching probabilistic bisimilarity on distributions. Despite the
fact that the cancellation property is very elegant and concise, we failed to
provide a short and natural combinatorial proof. Instead we provide a proof
using metric topology. Our major lemma is that every distribution can be
unfolded into an equivalent stable distribution, where the topological
arguments are required to deal with uncountable branching. | Rob van Glabbeek, Jan Friso Groote, Erik de Vink | 2023-09-13T20:51:11Z | http://arxiv.org/abs/2309.07306v1 | # A Cancellation Law for Probabilistic Processes
###### Abstract
We show a cancellation property for probabilistic choice. If \(\mu\oplus\rho\) and \(\nu\oplus\rho\) are branching probabilistic bisimilar, then \(\mu\) and \(\nu\) are also branching probabilistic bisimilar. We do this in the setting of a basic process language involving non-deterministic and probabilistic choice and define branching probabilistic bisimilarity on distributions. Despite the fact that the cancellation property is very elegant and concise, we failed to provide a short and natural combinatorial proof. Instead we provide a proof using metric topology. Our major lemma is that every distribution can be unfolded into an equivalent stable distribution, where the topological arguments are required to deal with uncountable branching.
## 1 Introduction
A familiar property of the real numbers \(\mathbb{R}\) is the additive cancellation law: if \(x+z=y+z\) then \(x=y\). Switching to the Boolean setting, and interpreting \(+\) by \(\vee\) and \(=\) by \(\Leftrightarrow\), the property becomes \((x\lor z)\Leftrightarrow(y\lor z)\) implies \(x\Leftrightarrow y\). This is not generally valid. Namely, if \(z\) is true, nothing can be derived regarding the truth values of \(x\) and \(y\). Algebraically speaking, the reals provide an 'additive inverse', and the Booleans do not have a 'disjunctive' version of it.
A similar situation holds for strong bisimilarity in the pure non-deterministic setting vs. strong bisimilarity in the mixed non-deterministic and probabilistic setting. When we have \(E+G\xleftrightarrow{}F+G\) for the non-deterministic processes \(E+G\) and \(F+G\), it may or may not be the case that \(E\xleftrightarrow{}F\). However, if \(P_{\,1/2}\oplus R\xleftrightarrow{}Q_{\,1/2}\oplus R\) for the probabilistic processes \(P_{\,1/2}\oplus R\) and \(Q_{\,1/2}\oplus R\), with probabilistic choice \({}_{\,1/2}\oplus\), we can exploit a semantic characterization of bisimilarity as the starting point of a calculation. The characterization reads
\[P\xleftrightarrow{}Q\quad\text{iff}\quad\forall C\in\mathcal{E}/\!\!\xleftrightarrow{}\,\colon\ [\![P]\!][C]=[\![Q]\!][C], \tag{1}\]

where \([\![P]\!][C]\) denotes the probability that the distribution associated with \(P\) assigns to the equivalence class \(C\). From \(P_{\,1/2}\oplus R\xleftrightarrow{}Q_{\,1/2}\oplus R\) one obtains \(\tfrac{1}{2}[\![P]\!][C]+\tfrac{1}{2}[\![R]\!][C]=\tfrac{1}{2}[\![Q]\!][C]+\tfrac{1}{2}[\![R]\!][C]\) for every class \(C\), hence \([\![P]\!][C]=[\![Q]\!][C]\) and thus \(P\xleftrightarrow{}Q\). The question arises whether such a cancellation law also holds for branching probabilistic bisimilarity, which abstracts from internal activity.
We find that it does, but the proof is involved. A number of initial attempts were directed towards finding a straightforward combinatorial proof, but all failed. A proof in a topological setting, employing the notion of sequential compactness to deal with potentially infinite sequences of transitions, is reported in this paper. We leave the existence of a shorter, combinatorial proof as an open question.
Our strategy to prove the above cancellation law for probabilistic processes and branching probabilistic bisimilarity is based on two intermediate results: (i) every probabilistic process unfolds into a so-called _stable_ probabilistic process, and (ii) for stable probabilistic processes a characterization of the form (1) does hold. Intuitively, a stable process is a process that cannot do an internal move without leaving its equivalence class.
In order to make the above more concrete, let us consider an example. For the ease of presentation we use distributions directly, rather than probabilistic processes. Let the distributions \(\mu\) and \(\nu\) be given by
\[\begin{array}{rl}\mu&=\ \frac{1}{2}\delta(a\cdot\partial(\mathbf{0}))\oplus \frac{1}{2}\delta(b\cdot\partial(\mathbf{0}))\\ \nu&=\ \frac{1}{3}\delta(\tau\cdot(\partial(a\cdot\partial(\mathbf{0}))\ _{\frac{1}{2}}\oplus\ \partial(b\cdot\partial(\mathbf{0}))))\oplus\frac{1}{3}\delta(a\cdot \partial(\mathbf{0}))\oplus\frac{1}{3}\delta(b\cdot\partial(\mathbf{0}))\end{array}\]
with \(a\) and \(b\) two different actions. The distribution \(\mu\) assigns probability \(0.5\) to \(a\cdot\partial(\mathbf{0})\), meaning an \(a\)-action followed by a deadlock with probability \(1\), and probability \(0.5\) to \(b\cdot\partial(\mathbf{0})\), i.e. a \(b\)-action followed by deadlock with probability \(1\). The distribution \(\nu\) assigns both these non-deterministic processes probability \(\frac{1}{3}\) and assigns the remaining probability \(\frac{1}{3}\) to \(\tau\cdot(\partial(a\cdot\partial(\mathbf{0}))\ _{\frac{1}{2}}\oplus\ \partial(b\cdot \partial(\mathbf{0})))\), where a \(\tau\)-action precedes a 50-50 percent choice between the processes mentioned earlier. Below, we show that \(\mu\) and \(\nu\) are branching probabilistic bisimilar, i.e. \(\mu\xleftrightarrow{}_{b}\nu\). However, if \(C_{1}\), \(C_{2}\) and \(C_{3}\) are the three different equivalence classes of \(\tau\cdot(\partial(a\cdot\partial(\mathbf{0}))\ _{\frac{1}{2}}\oplus\ \partial(b\cdot \partial(\mathbf{0})))\), \(a\cdot\partial(\mathbf{0})\) and \(b\cdot\partial(\mathbf{0})\), respectively, we have
\[\mu[C_{1}]=0\neq\frac{1}{3}=\nu[C_{1}],\ \mu[C_{2}]=\frac{1}{2}\neq\frac{1}{ 3}=\nu[C_{2}],\ \text{and}\ \mu[C_{3}]=\frac{1}{2}\neq\frac{1}{3}=\nu[C_{3}].\]
Thus, although \(\mu\xleftrightarrow{}_{b}\nu\), it does not hold that \(\mu[C]=\nu[C]\) for every equivalence class \(C\). Note that the distribution \(\nu\) is not stable, in the sense that it allows an internal transition to the branching equivalent distribution \(\mu\).
As indicated, we establish in this paper a cancellation law for branching probabilistic bisimilarity in the context of mixed non-deterministic and probabilistic choice, exploiting the process language of [7], while dealing with distributions of finite support over non-deterministic processes for its semantics. We propose the notion of a stable distribution and show that every distribution can be unfolded into a stable distribution by chasing its (partial) \(\tau\)-transitions. Our framework, including the notion of branching probabilistic bisimulation, builds on that of [20, 17].
Another trait of the current paper, as in [20, 17], is that distributions are taken as semantic foundation for bisimilarity, rather than seeing bisimilarity primarily as an equivalence relation on non-deterministic processes, which is subsequently lifted to an equivalence relation on distributions, as is the case for the notion of branching probabilistic bisimilarity of [28, 27] and also of [3, 2]. The idea to consider distributions as first-class citizens for probabilistic bisimilarity stems from [12]. In the systematic overview of the spectrum [4], also Baier et al. argue that a behavioral relation on distributions is needed to properly deal with silent moves.
Metric spaces and complete metric spaces, as well as their associated categories, have various uses in concurrency theory. In the setting of semantics of probabilistic systems, metric topology has been advocated as underlying denotational domain, for example in [6, 22, 26]. For quantitative comparison of Markov systems, metrics and pseudo-metric have been proposed for a quantitative notion of behavior equivalence, see e.g. [11, 14, 8]. The specific use of metric topology in this paper to derive an existential property of a transition system seems new.
The remainder of the paper is organized as follows. Section 2 collects some definitions from metric topology and establishes some auxiliary results. A simple process language with non-deterministic and probabilistic choice is introduced in Section 3, together with examples and basic properties of the operational semantics. Our definition of branching probabilistic bisimilarity is given in Section 4, followed by a congruence result with respect to probabilistic composition and a confluence property. The main contribution of the paper is presented in Sections 5 and 6. Section 5 shows in a series of continuity lemmas that the set of branching probabilistic bisimilar descendants is a (sequentially) compact set. Section 6 exploits these results to argue that unfolding of a distribution by inert \(\tau\)-transitions has a stable end point, meaning that a stable branchingly equivalent distribution can be reached. With that result in place, a cancellation law for branching probabilistic bisimilarity is established. Finally, Section 7 wraps up with concluding remarks and a discussion of future work.
## 2 Preliminaries
For a non-empty set \(X\), we define \(\mathit{Distr}(X)\) as the set of all probability distributions over \(X\) of finite support, i.e., \(\mathit{Distr}(X)=\{\,\mu\colon X\to[0,1]\mid\sum_{x\in X}\mu(x)=1\), \(\mu(x)>0\) for finitely many \(x\in X\)\(\}\). We use \(\mathit{spt}(\mu)\) to denote the finite set \(\{\,x\in X\mid\mu(x)>0\,\}\). Often, we write \(\mu=\bigoplus_{i\in I}p_{i}\cdot x_{i}\) for an index set \(I\), \(p_{i}\geqslant 0\) and \(x_{i}\in X\) for \(i\in I\), where \(p_{i}>0\) for finitely many \(i\in I\). Implicitly, we assume \(\sum_{i\in I}p_{i}=1\). We also write \(r\mu\oplus(1-r)\nu\) and, equivalently, \(\mu\ _{r}\oplus\ \nu\) for \(\mu,\nu\in\mathit{Distr}(X)\) and \(0\leqslant r\leqslant 1\). As expected, we have that \((r\mu\oplus(1-r)\nu)(x)=(\mu_{r}\oplus\nu)(x)=r\mu(x)+(1-r)\nu(x)\) for \(x\in X\). The _Dirac distribution_ on \(x\), the unique distribution with support \(\{x\}\), is denoted \(\delta(x)\).
The set \(\mathit{Distr}(X)\) becomes a complete1 metric space when endowed with the sup-norm [15], given by \(d(\mu,\nu)=\sup_{x\in X}\mid\mu(x)-\nu(x)\rvert\). This distance is also known as the distance of uniform convergence or Chebyshev distance.
Footnote 1: A _Cauchy sequence_ is a sequence of points in a metric space whose elements become arbitrarily close to each other as the sequence progresses. The space is _complete_ if every such sequence has a limit within the space.
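For concreteness, the sup-norm on finite-support distributions is easy to realize in code; the following sketch (our illustration, not part of the formal development) represents a distribution as a dictionary from elements of \(X\) to probabilities.

```python
def chebyshev(mu, nu):
    """Sup-norm distance d(mu, nu) = sup_x |mu(x) - nu(x)| for
    finite-support distributions given as dicts."""
    support = set(mu) | set(nu)
    return max(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) for x in support)

mu = {"E1": 0.5, "E2": 0.5}
nu = {"E1": 1/3, "E2": 1/3, "E3": 1/3}
print(chebyshev(mu, nu))  # 1/3: attained at E3, which mu misses entirely
```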
**Theorem 1**.: _If \(Y\subseteq X\) is finite, then \(\mathit{Distr}(Y)\) is a sequentially compact subspace of \(\mathit{Distr}(X)\). This means that every sequence in \(\mathit{Distr}(Y)\) has a convergent subsequence with a limit in \(\mathit{Distr}(Y)\)._
Proof.: \(\mathit{Distr}(Y)\) is a bounded subset of \(\mathbb{R}^{\,n}\), where \(n:=|Y|\) is the size of \(Y\). It also is closed. For \(\mathbb{R}^{\,n}\) equipped with the Euclidean metric, the sequential compactness of closed and bounded subsets is known as the Bolzano-Weierstrass theorem [24]. When using the Chebyshev metric, the same proof applies.
In Section 5 we use the topological structure of the set of distributions over non-deterministic processes to study unfolding of partial \(\tau\)-transitions. There we make use of the following representation property.
**Lemma 2**.: _Suppose the sequence of distributions \((\mu_{i})_{i=0}^{\infty}\) converges to the distribution \(\mu\) in \(\mathit{Distr}(X)\). Then a sequence of distributions \((\mu_{i}^{\prime})_{i=0}^{\infty}\) in \(\mathit{Distr}(X)\) and a sequence of probabilities \((r_{i})_{i=0}^{\infty}\) in \([0,1]\) exist such that \(\mu_{i}=(1-r_{i})\,\mu\oplus r_{i}\mu_{i}^{\prime}\) for \(i\in\mathbb{N}\) and \(\lim_{i\to\infty}r_{i}=0\)._
Proof.: Let \(i\in\mathbb{N}\). For \(x\in\mathit{spt}(\mu)\), the quotient \(\mu_{i}(x)/\mu(x)\) is non-negative, but may exceed \(1\). However, \(0\leqslant\min\{\,\frac{\mu_{i}(x)}{\mu(x)}\mid x\in\mathit{spt}(\mu)\,\}\leqslant 1\), since the numerator cannot strictly exceed the denominator for all \(x\in\mathit{spt}(\mu)\). Let \(r_{i}=1-\min\{\,\frac{\mu_{i}(x)}{\mu(x)}\mid x\in\mathit{spt}(\mu)\,\}\) for \(i\in\mathbb{N}\). Then we have \(0\leqslant r_{i}\leqslant 1\).
For \(i\in\mathbb{N}\), define \(\mu^{\prime}_{i}\in\mathit{Distr}(X)\) as follows. If \(r_{i}>0\) then \(\mu^{\prime}_{i}(x)=1/r_{i}\cdot\big{[}\mu_{i}(x)-(1-r_{i})\mu(x)\big{]}\) for \(x\in X\); if \(r_{i}=0\) then \(\mu^{\prime}_{i}=\mu\). We verify for \(r_{i}>0\) that \(\mu^{\prime}_{i}\) is indeed a distribution: (i) For \(x\notin\mathit{spt}(\mu)\) it holds that \(\mu(x)=0\), and therefore \(\mu^{\prime}_{i}(x)=1/r_{i}\cdot\mu_{i}(x)\geqslant 0\). For \(x\in\mathit{spt}(\mu)\),
\[\mu^{\prime}_{i}(x)=1/r_{i}\cdot\big{[}\mu_{i}(x)-(1-r_{i})\mu(x)\big{]}=\mu(x )/r_{i}\cdot\big{[}\frac{\mu_{i}(x)}{\mu(x)}-\frac{\mu_{i}(x_{min})}{\mu(x_{ min})}\big{]}\geqslant 0\]
for \(x_{min}\in\mathit{spt}(\mu)\) such that \(\mu_{i}(x_{min})/\mu(x_{min})\) is minimal. (ii) In addition,
\[\sum\{\,\mu^{\prime}_{i}(x)\mid x\in X\,\}=1/r_{i}\cdot\sum\{\,\mu _{i}(x)\mid x\notin\mathit{spt}(\mu)\,\}+1/r_{i}\cdot\sum\{\,\mu_{i}(x)-(1-r_{i })\mu(x)\mid x\in\mathit{spt}(\mu)\,\}=\] \[1/r_{i}\cdot\sum\{\,\mu_{i}(x)\mid x\in X\,\}-(1-r_{i})/r_{i} \cdot\sum\{\,\mu(x)\mid x\in\mathit{spt}(\mu)\,\}=1/r_{i}-(1-r_{i})/r_{i}=r_{i }/r_{i}=1.\]
Therefore, \(0\leqslant\mu^{\prime}_{i}(x)\leqslant 1\) and \(\sum\{\,\mu^{\prime}_{i}(x)\mid x\in X\,\}=1\).
Now we prove that \(\mu_{i}=(1-r_{i})\mu\oplus r_{i}\mu^{\prime}_{i}\). If \(r_{i}=0\), then \(\mu_{i}=\mu\), \(\mu^{\prime}_{i}=\mu\), and \(\mu_{i}=(1-r_{i})\mu\oplus r_{i}\mu^{\prime}_{i}\). If \(r_{i}>0\), then \(\mu_{i}(x)=(1-r_{i})\mu(x)\oplus r_{i}\mu^{\prime}_{i}(x)\) by definition of \(\mu^{\prime}_{i}(x)\) for all \(x\in X\). Thus, also \(\mu_{i}=(1-r_{i})\mu\oplus r_{i}\mu^{\prime}_{i}\) in this case.
Finally, we show that \(\lim_{i\to\infty}r_{i}=0\). Let \(x^{\prime}_{min}\in\mathit{spt}(\mu)\) be such that \(\mu(x^{\prime}_{min})\) is minimal. Then we have
\[r_{i}=1-\min\{\,\frac{\mu_{i}(x)}{\mu(x)}\mid x\in\mathit{spt}(\mu)\,\}=\max\{ \,\frac{\mu(x)-\mu_{i}(x)}{\mu(x)}\mid x\in\mathit{spt}(\mu),\,\mu(x)\geqslant \mu_{i}(x)\,\}\leqslant\frac{d(\mu,\mu_{i})}{\mu(x^{\prime}_{min})}\]
By assumption, \(\lim_{i\to\infty}d(\mu,\mu_{i})=0\). Hence also \(\lim_{i\to\infty}r_{i}=0\), as was to be shown.
The following combinatorial result is helpful in the sequel.
**Lemma 3**.: _Let \(I\) and \(J\) be finite index sets, \(p_{i},q_{j}\in[0,1]\) and \(\mu_{i},\nu_{j}\in\mathit{Distr}(X)\), for \(i\in I\) and \(j\in J\), such that \(\bigoplus_{i\in I}p_{i}\mu_{i}=\bigoplus_{j\in J}q_{j}\nu_{j}\). Then \(r_{ij}\geqslant 0\) and \(\rho_{ij}\in\mathit{Distr}(X)\) exist such that \(\sum_{j\in J}r_{ij}=p_{i}\) and \(p_{i}\cdot\mu_{i}=\bigoplus_{j\in J}r_{ij}\cdot\rho_{ij}\) for all \(i\in I\), and \(\sum_{i\in I}r_{ij}=q_{j}\) and \(q_{j}\cdot\nu_{j}=\bigoplus_{i\in I}r_{ij}\cdot\rho_{ij}\) for all \(j\in J\)._
Proof.: Let \(\xi=\bigoplus_{i\in I}p_{i}\cdot\mu_{i}=\bigoplus_{j\in J}q_{j}\cdot\nu_{j}\). We define \(r_{ij}=\sum_{x\in\mathit{spt}(\xi)}\frac{p_{i}\mu_{i}(x)\cdot q_{j}\,\nu_{j}(x) }{\xi(x)}\) for all \(i\in I\) and \(j\in J\). In case \(r_{ij}=0\), choose \(\rho_{ij}\in\mathit{Distr}(X)\) arbitrarily. In case \(r_{ij}\neq 0\), define \(\rho_{ij}\in\mathit{Distr}(X)\), for \(i\in I\) and \(j\in J\), by
\[\rho_{ij}(x)=\left\{\begin{array}{cl}\frac{p_{i}\mu_{i}(x)\cdot q_{j}\,\nu_{j} (x)}{r_{ij}\xi(x)}&\text{if }\xi(x)>0,\\ 0&\text{otherwise}\end{array}\right.\]
for all \(x\in X\). By definition of \(r_{ij}\) and \(\rho_{ij}\) it holds that \(\sum\{\,\rho_{ij}(x)\mid x\in X\,\}=1\). So, \(\rho_{ij}\in\mathit{Distr}(X)\) indeed.
We verify \(\sum_{j\in J}r_{ij}=p_{i}\) and \(p_{i}\cdot\mu_{i}=\bigoplus_{j\in J}r_{ij}\cdot\rho_{ij}\) for \(i\in I\).
\[\sum_{j\in J}r_{ij} =\sum_{j\in J}\sum_{x\in\mathit{spt}(\xi)}\,p_{i}\mu_{i}(x)\cdot q_{ j}\,\nu_{j}(x)/\xi(x)\] \[=\sum_{x\in\mathit{spt}(\xi)}\,p_{i}\mu_{i}(x)\cdot\sum_{j\in J}q_ {j}\,\nu_{j}(x)/\xi(x)\] \[=\sum_{x\in\mathit{spt}(\xi)}\,p_{i}\mu_{i}(x) (\text{since }\xi=\bigoplus_{j\in J}q_{j}\cdot\nu_{j})\] \[=p_{i}\sum_{x\in\mathit{spt}(\xi)}\,\mu_{i}(x)\] \[=p_{i}\,.\]
Next, pick \(y\in X\) and \(i\in I\). If \(\xi(y)=0\), then \(p_{i}\mu_{i}(y)=0\), since \(\xi(y)=\sum_{i\in I}p_{i}\mu_{i}(y)\), and \(r_{ij}=0\) or \(\rho_{ij}(y)=0\) for all \(j\in J\), by the various definitions, thus \(\sum_{j\in J}r_{ij}\rho_{ij}(y)=0\) as well.
Suppose \(\xi(y)>0\). Put \(J_{i}=\{\,j\in J\mid r_{ij}>0\,\}\). If \(j\in J\setminus J_{i}\), i.e. if \(r_{ij}=0\), then \(p_{i}\mu_{i}(y)q_{j}\nu_{j}(y)/\xi(y)=0\) by definition of \(r_{ij}\). Therefore we have
\[\begin{array}{rcl}\sum_{j\in J}r_{ij}\rho_{ij}(y)&=&\sum_{j\in J_{i}}r_{ij} \rho_{ij}(y)\\ &=&\sum_{j\in J_{i}}r_{ij}p_{i}\mu_{i}(y)\cdot q_{j}\nu_{j}(y)/(r_{ij}\xi(y)) \\ &=&\sum_{j\in J_{i}}p_{i}\mu_{i}(y)\cdot q_{j}\nu_{j}(y)/\xi(y)\\ &=&\sum_{j\in J}p_{i}\mu_{i}(y)\cdot q_{j}\nu_{j}(y)/\xi(y)\\ &=&p_{i}\mu_{i}(y)/\xi(y)\cdot\sum_{j\in J}q_{j}\nu_{j}(y)\\ &=&p_{i}\mu_{i}(y)\end{array}\qquad\text{ (summand zero for $j\in J\setminus J_{i}$)}\]
The statements \(\sum_{i\in I}r_{ij}=q_{j}\) and \(q_{j}\cdot\nu_{j}=\bigoplus_{i\in I}r_{ij}\cdot\rho_{ij}\) for \(j\in J\) follow by symmetry.
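The weights \(r_{ij}\) and distributions \(\rho_{ij}\) in the proof of Lemma 3 are given by explicit formulas, so the coupling can be computed mechanically, as in the following sketch (again ours, assuming strictly positive probabilities in the supports).

```python
def couple(ps, mus, qs, nus):
    """Weights r[i,j] and distributions rho[i,j] of Lemma 3, for two
    decompositions (+)_i p_i mu_i = (+)_j q_j nu_j of the same distribution."""
    xi = {}
    for p, mu in zip(ps, mus):                  # the common distribution xi
        for x, v in mu.items():
            xi[x] = xi.get(x, 0.0) + p * v
    r, rho = {}, {}
    for i, (p, mu) in enumerate(zip(ps, mus)):
        for j, (q, nu) in enumerate(zip(qs, nus)):
            r[i, j] = sum(p * mu.get(x, 0.0) * q * nu.get(x, 0.0) / xi[x]
                          for x in xi)
            if r[i, j] > 0.0:
                rho[i, j] = {x: p * mu.get(x, 0.0) * q * nu.get(x, 0.0)
                                / (r[i, j] * xi[x]) for x in xi}
    return r, rho

# 1/2 a + 1/2 b decomposed in two different ways:
r, rho = couple([0.5, 0.5], [{"a": 1.0}, {"b": 1.0}],
                [1.0], [{"a": 0.5, "b": 0.5}])
print(r)  # {(0, 0): 0.5, (1, 0): 0.5}
```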
## 3 An elementary processes language
In this section we define a syntax and transition system semantics for non-deterministic and probabilistic processes. Depending on the top operator, following [7], a process is either a non-deterministic process \(E\in\mathcal{E}\), with constant \(\mathbf{0}\), prefix operators \(\alpha\cdot\) and non-deterministic choice \(+\), or a probabilistic process \(P\in\mathcal{P}\), with the Dirac operator \(\partial\) and probabilistic choices \({}_{r}\oplus\).
**Definition 4** (Syntax).: _The classes \(\mathcal{E}\) and \(\mathcal{P}\) of non-deterministic and probabilistic processes, respectively, over the set of actions \(\mathcal{A}\), are given by_
\[E::=\mathbf{0}\mid\alpha\cdot P\mid E+E\qquad\qquad P::=\partial(E)\mid P_{r}\oplus P\]
_with actions \(\alpha\) from \(\mathcal{A}\) and where \(0\leqslant r\leqslant 1\)._
We use \(E,F,\dots\) to range over \(\mathcal{E}\) and \(P,Q,\dots\) to range over \(\mathcal{P}\). The probabilistic process \(P_{1}\ _{r}\oplus P_{2}\) behaves as \(P_{1}\) with probability \(r\) and behaves as \(P_{2}\) with probability \(1-r\).
We introduce a complexity measure \(c:\mathcal{E}\cup\mathcal{P}\rightarrow\mathbb{N}\) for non-deterministic and probabilistic processes based on the size of a process. It is given by \(c(\mathbf{0})=0\), \(c(\alpha\cdot P)=c(P)+1\), \(c(E+F)=c(E)+c(F)\), and \(c(\partial(E))=c(E)+1\), \(c(P_{r}\oplus Q)=c(P)+c(Q)\).
**Examples** As illustration, we provide the following pairs of non-deterministic processes, which are branching probabilistic bisimilar in the sense of Definition 9.
1. \(\mathbf{H_{1}}=a\cdot\big{(}P_{\,\frac{1}{4}}\oplus\,(P_{\,\frac{1}{3}}\oplus \,Q)\big{)}\) and \(\mathbf{H_{2}}=a\cdot\big{(}P_{\,\frac{1}{2}}\oplus\,(Q_{\,\frac{1}{2}}\oplus \,Q)\big{)}\)
2. \(\mathbf{G_{1}}=a\cdot(P_{\,\frac{1}{2}}\oplus\,Q)\) and \(\mathbf{G_{2}}=a\cdot\big{(}\partial\big{(}\tau\cdot(P_{\,\frac{1}{2}}\oplus \,Q)\big{)}_{\,\frac{1}{3}}\oplus\,(P_{\,\frac{1}{2}}\oplus\,Q)\big{)}\)
3. \(\mathbf{I_{1}}=a\cdot\partial(b\cdot P+\tau\cdot Q)\) and \(\mathbf{I_{2}}=a\cdot\partial(\tau\cdot\partial(b\cdot P+\tau\cdot Q)+b\cdot P +\tau\cdot Q)\)
The examples \(\mathbf{H_{1}}\) and \(\mathbf{H_{2}}\) are taken from [23], and \(\mathbf{G_{1}}\) and \(\mathbf{G_{2}}\) are taken from [17]. The processes \(\mathbf{G_{2}}\) and \(\mathbf{I_{2}}\) contain a so-called inert \(\tau\)-transition.
As usual, the SOS semantics for \(\mathcal{E}\) and \(\mathcal{P}\) makes use of two types of transition relations [21, 7].
**Definition 5** (Operational semantics).:
1. _The transition relations_ \(\rightarrow\subseteq\mathcal{E}\times\mathcal{A}\times\mathit{Distr}(\mathcal{E})\) _and_ \(\mapsto\subseteq\mathcal{P}\times\mathit{Distr}(\mathcal{E})\) _are given by_ \[\frac{P\mapsto\mu}{\alpha\cdot P\xrightarrow{\alpha}\mu}\ \text{(pref)}\qquad\frac{E_{1}\xrightarrow{\alpha}\mu_{1}}{E_{1}+E_{2}\xrightarrow{\alpha}\mu_{1}}\ \text{(nd-choice 1)}\qquad\frac{E_{2}\xrightarrow{\alpha}\mu_{2}}{E_{1}+E_{2}\xrightarrow{\alpha}\mu_{2}}\ \text{(nd-choice 2)}\] \[\frac{\phantom{P\mapsto\mu}}{\partial(E)\mapsto\delta(E)}\ \text{(Dirac)}\qquad\frac{P_{1}\mapsto\mu_{1}\quad P_{2}\mapsto\mu_{2}}{P_{1}\,_{r}\oplus P_{2}\mapsto\mu_{1}\,_{r}\oplus\,\mu_{2}}\ \text{(p-choice)}\]
2. _The transition relation_ \(\rightarrow\subseteq\mathit{Distr}(\mathcal{E})\times\mathcal{A}\times \mathit{Distr}(\mathcal{E})\) _is such that_ \(\mu\xrightarrow{\alpha}\mu^{\prime}\) _whenever_ \(\mu=\bigoplus_{i\in I}p_{i}\cdot E_{i}\)_,_ \(\mu^{\prime}=\bigoplus_{i\in I}p_{i}\cdot\mu^{\prime}_{i}\)_, and_ \(E_{i}\xrightarrow{\alpha}\mu^{\prime}_{i}\) _for all_ \(i\in I\)_._
In rule (Dirac) of the relation \(\mapsto\) we have that the syntactic Dirac process \(\partial(E)\) is coupled to the semantic Dirac distribution \(\delta(E)\). Similarly, in (p-choice), the syntactic probabilistic operator \({}_{r}\oplus\) in \(P_{1}\,_{r}\oplus P_{2}\) is replaced by semantic probabilistic composition in \(\mu_{1}\,_{r}\oplus\mu_{2}\). Thus, with each probabilistic process \(P\in\mathcal{P}\) we associate a distribution \([\![P]\!]\in\mathit{Distr}(\mathcal{E})\) as follows: \([\![\partial(E)]\!]=\delta(E)\) and \([\![P\!_{r}\!\oplus Q]\!]=[\![P]\!]\,_{r}\oplus[\![Q]\!]\), which is the distribution \(r[\![P]\!]\oplus(1-r)[\![Q]\!]\).
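To make the syntax and the map \([\![\cdot]\!]\) concrete, here is a small Python rendering (an illustration of ours, with ASCII stand-ins for the operators, not code from [7]): probabilistic processes are flattened to finite-support distributions over non-deterministic processes, exactly as prescribed by \([\![\partial(E)]\!]=\delta(E)\) and \([\![P\,_{r}\oplus Q]\!]=[\![P]\!]\,_{r}\oplus[\![Q]\!]\).

```python
from dataclasses import dataclass

# E ::= 0 | alpha . P | E + E          P ::= d(E) | P r(+) P
@dataclass(frozen=True)
class Nil: pass                        # the constant 0

@dataclass(frozen=True)
class Prefix:                          # alpha . P
    alpha: str
    P: object

@dataclass(frozen=True)
class Choice:                          # E1 + E2
    E1: object
    E2: object

@dataclass(frozen=True)
class Dirac:                           # d(E)
    E: object

@dataclass(frozen=True)
class PChoice:                         # P1 r(+) P2
    P1: object
    r: float
    P2: object

def denote(P):
    """[[P]]: the distribution over E associated with P, as a dict."""
    if isinstance(P, Dirac):
        return {P.E: 1.0}
    mu = {E: P.r * p for E, p in denote(P.P1).items()}
    for E, p in denote(P.P2).items():
        mu[E] = mu.get(E, 0.0) + (1.0 - P.r) * p
    return mu

# [[d(a.d(0)) 1/2(+) d(b.d(0))]] assigns 1/2 to a.d(0) and 1/2 to b.d(0).
print(denote(PChoice(Dirac(Prefix("a", Dirac(Nil()))), 0.5,
                     Dirac(Prefix("b", Dirac(Nil()))))))
```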
The relation \(\rightarrow\) for non-deterministic processes is finitely branching, but the relation \(\rightarrow\) on distributions is not. Following [27, 26], the transition relation \(\rightarrow\) on distributions as given by Definition 5 allows for a probabilistic combination of non-deterministic alternatives, resulting in a so-called combined transition. For example, for the process \(E=a\cdot(P_{\,\frac{1}{3}}\oplus\,Q)+a\cdot(P_{\,\frac{2}{3}}\oplus\,Q)\) of [6], we have that the Dirac process \(\partial(E)=\partial(a\cdot(P_{\,\frac{1}{3}}\oplus\,Q)+a\cdot(P_{\,\frac{2}{3}}\oplus\,Q))\) provides an \(a\)-transition to \([\![P_{\,\frac{1}{3}}\oplus\,Q]\!]\) as well as an \(a\)-transition to \([\![P_{\,\frac{2}{3}}\oplus\,Q]\!]\). So, since we can represent the distribution \(\delta(E)\) by \(\delta(E)=\frac{1}{2}\delta(E)\oplus\frac{1}{2}\delta(E)\), the distribution \(\delta(E)\) also has a combined transition

\[\delta(E)=\tfrac{1}{2}\delta(E)\oplus\tfrac{1}{2}\delta(E)\xrightarrow{a} \tfrac{1}{2}[\![P_{\,\frac{1}{3}}\oplus\,Q]\!]\oplus\tfrac{1}{2}[\![P_{\,\frac{2}{3}}\oplus\,Q]\!]=[\![P_{\,\frac{1}{2}}\oplus\,Q]\!].\]
As noted in [28], the ability to combine transitions is crucial for obtaining transitivity of probabilistic process equivalences that take internal actions into account.
**Example** Referring to the examples of processes above, we have, e.g.,
\[\begin{array}{ll}\mathbf{H_{1}}\!:&\delta(a\cdot(P_{\,\frac{1}{4}}\oplus\,(P_{\,\frac{1}{3}}\oplus\,Q)))\xrightarrow{a}[\![P_{\,\frac{1}{4}}\oplus\,(P_{\,\frac{1}{3}}\oplus\,Q)]\!]=\tfrac{1}{2}[\![P]\!]\oplus\tfrac{1}{2}[\![Q]\!]\\ \mathbf{H_{2}}\!:&\delta(a\cdot(P_{\,\frac{1}{2}}\oplus\,(Q_{\,\frac{1}{2}}\oplus\,Q)))\xrightarrow{a}[\![P_{\,\frac{1}{2}}\oplus\,(Q_{\,\frac{1}{2}}\oplus\,Q)]\!]=\tfrac{1}{2}[\![P]\!]\oplus\tfrac{1}{2}[\![Q]\!]\\ \mathbf{G_{2}}\!:&\delta(a\cdot(\partial(\tau\cdot(P_{\,\frac{1}{2}}\oplus\,Q))\,_{\frac{1}{3}}\oplus\,(P_{\,\frac{1}{2}}\oplus\,Q)))\xrightarrow{a}\delta(\tau\cdot(P_{\,\frac{1}{2}}\oplus\,Q))\,_{\frac{1}{3}}\oplus\,[\![P_{\,\frac{1}{2}}\oplus\,Q]\!]\,.\end{array}\]
Because a transition of a probabilistic process yields a distribution, the \(a\)-transitions of \(\mathbf{H_{1}}\) and \(\mathbf{H_{2}}\) have the same target. It is noted that \(\mathbf{G_{2}}\) doesn't provide a further transition unless both its components \(P\) and \(Q\) do so, to match the transition of \(\tau\cdot(P_{\,\frac{1}{2}}\oplus\,Q)\).
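Continuing the sketch above (and reusing its datatypes), the strong transitions of a non-deterministic process can be enumerated directly from the rules (pref) and (nd-choice); a combined transition then arises by mixing the targets of equally labelled alternatives, as in the example from [6]. This is our illustration, not code from the paper.

```python
def transitions(E):
    """All transitions E --alpha--> mu licensed by (pref) and (nd-choice)."""
    if isinstance(E, Prefix):
        return [(E.alpha, denote(E.P))]
    if isinstance(E, Choice):
        return transitions(E.E1) + transitions(E.E2)
    return []                                    # Nil has no transitions

def mix(weighted):
    """Combined transition target: a convex combination of distributions."""
    out = {}
    for w, mu in weighted:
        for x, p in mu.items():
            out[x] = out.get(x, 0.0) + w * p
    return out

P, Q = Dirac(Prefix("p", Dirac(Nil()))), Dirac(Prefix("q", Dirac(Nil())))
E = Choice(Prefix("a", PChoice(P, 1/3, Q)), Prefix("a", PChoice(P, 2/3, Q)))
targets = [mu for alpha, mu in transitions(E) if alpha == "a"]
print(mix([(0.5, targets[0]), (0.5, targets[1])]))  # equals [[P 1/2(+) Q]]
```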
In preparation for the definition of the notion of branching probabilistic bisimilarity in Section 4, we introduce some notation.
**Definition 6**.: _For \(\mu,\mu^{\prime}\!\in\!\mathit{Distr}(\mathcal{E})\) and \(\alpha\!\in\!\mathcal{A}\) we write \(\mu\xrightarrow{(\alpha)}\mu^{\prime}\) iff (i) \(\mu\xrightarrow{\alpha}\mu^{\prime}\), or (ii) \(\alpha=\tau\) and \(\mu^{\prime}=\mu\), or (iii) \(\alpha=\tau\) and there exist \(\mu_{1},\mu_{2},\mu^{\prime}_{1},\mu^{\prime}_{2}\in\mathit{Distr}(\mathcal{E})\) such that \(\mu=\mu_{1}\,_{r}\oplus\,\mu_{2}\), \(\mu^{\prime}=\mu^{\prime}_{1}\,_{r}\oplus\,\mu^{\prime}_{2}\), \(\mu_{1}\xrightarrow{\tau}\mu^{\prime}_{1}\) and \(\mu_{2}=\mu^{\prime}_{2}\) for some \(r\in(0,1)\)._
Cases (i) and (ii) in the definition above correspond with the limits \(r=1\) and \(r=0\) of case (iii). We use \(\Rightarrow\) to denote the reflexive transitive closure of \(\xrightarrow{(\tau)}\). A transition \(\mu\xrightarrow{(\tau)}\mu^{\prime}\) is called a partial transition, and a transition \(\mu\Rightarrow\mu^{\prime}\) is called a weak transition.
**Example**:
1. According to Definition 6 we have \[\tfrac{1}{3}\delta(\tau\cdot(P_{\frac{1}{4}}\oplus Q))\oplus\tfrac{2}{3}[\![P_{\frac{1}{4}}\oplus Q]\!]\xrightarrow{(\tau)}\tfrac{1}{3}[\![P_{\frac{1}{4}}\oplus Q]\!]\oplus\tfrac{2}{3}[\![P_{\frac{1}{4}}\oplus Q]\!]=[\![P_{\frac{1}{4}}\oplus Q]\!].\]
2. There are typically multiple ways to construct a weak transition \(\Rightarrow\). Consider the weak transition \(\tfrac{1}{2}\delta(\tau\cdot\partial(\tau\cdot P))\oplus\tfrac{1}{3}\delta(\tau\cdot P)\oplus\tfrac{1}{6}[\![P]\!]\Rightarrow[\![P]\!]\), which can be obtained, among uncountably many other possibilities, via \[\tfrac{1}{2}\delta(\tau\cdot\partial(\tau\cdot P))\oplus\tfrac{1}{3}\delta(\tau\cdot P)\oplus\tfrac{1}{6}[\![P]\!]\xrightarrow{(\tau)}\tfrac{1}{2}\delta(\tau\cdot P)\oplus\tfrac{1}{3}\delta(\tau\cdot P)\oplus\tfrac{1}{6}[\![P]\!]\xrightarrow{(\tau)}\tfrac{1}{2}[\![P]\!]\oplus\tfrac{1}{3}[\![P]\!]\oplus\tfrac{1}{6}[\![P]\!]=[\![P]\!],\] or via \[\tfrac{1}{2}\delta(\tau\cdot\partial(\tau\cdot P))\oplus\tfrac{1}{3}\delta(\tau\cdot P)\oplus\tfrac{1}{6}[\![P]\!]\xrightarrow{(\tau)}\tfrac{1}{2}\delta(\tau\cdot\partial(\tau\cdot P))\oplus\tfrac{1}{3}[\![P]\!]\oplus\tfrac{1}{6}[\![P]\!]=\tfrac{1}{2}\delta(\tau\cdot\partial(\tau\cdot P))\oplus\tfrac{1}{2}[\![P]\!]\xrightarrow{(\tau)}\tfrac{1}{2}\delta(\tau\cdot P)\oplus\tfrac{1}{2}[\![P]\!]\xrightarrow{(\tau)}\tfrac{1}{2}[\![P]\!]\oplus\tfrac{1}{2}[\![P]\!]=[\![P]\!].\]
3. The distribution \(\tfrac{1}{2}\delta(\tau\cdot\partial(a\cdot\partial(\mathbf{0})+b\cdot\partial(\mathbf{0})))\oplus\tfrac{1}{2}\delta(a\cdot\partial(c\cdot\partial(\mathbf{0})))\) admits neither a \(\tau\)-transition nor an \(a\)-transition. However, we have \[\tfrac{1}{2}\delta(\tau\cdot\partial(a\cdot\partial(\mathbf{0})+b\cdot\partial(\mathbf{0})))\oplus\tfrac{1}{2}\delta(a\cdot\partial(c\cdot\partial(\mathbf{0})))\xrightarrow{(\tau)}\tfrac{1}{2}\delta(a\cdot\partial(\mathbf{0})+b\cdot\partial(\mathbf{0}))\oplus\tfrac{1}{2}\delta(a\cdot\partial(c\cdot\partial(\mathbf{0})))\xrightarrow{a}\tfrac{1}{2}\delta(\mathbf{0})\oplus\tfrac{1}{2}\delta(c\cdot\partial(\mathbf{0})).\]
The following lemma states that the transitions \(\xrightarrow{\alpha}\), \(\xrightarrow{(\alpha)}\), and \(\Rightarrow\) of Definitions 5 and 6 can be probabilistically composed.
**Lemma 7**.: _Let, for a finite index set \(I\), \(\mu_{i},\mu^{\prime}_{i}\in\operatorname{{Distr}}(\mathcal{E})\) and \(p_{i}\geqslant 0\) such that \(\sum_{i\in I}p_{i}=1\)._
1. _If_ \(\mu_{i}\xrightarrow{\alpha}\mu^{\prime}_{i}\) _for all_ \(i\in I\)_, then_ \(\bigoplus_{i\in I}p_{i}\cdot\mu_{i}\xrightarrow{\alpha}\bigoplus_{i\in I}p_{ i}\cdot\mu^{\prime}_{i}\)_._
2. _If_ \(\mu_{i}\xrightarrow{(\tau)}\mu^{\prime}_{i}\) _for all_ \(i\in I\)_, then_ \(\bigoplus_{i\in I}p_{i}\cdot\mu_{i}\xrightarrow{(\tau)}\bigoplus_{i\in I}p_{ i}\cdot\mu^{\prime}_{i}\)_._
3. _If_ \(\mu_{i}\Rightarrow\mu^{\prime}_{i}\) _for all_ \(i\in I\)_, then_ \(\bigoplus_{i\in I}p_{i}\cdot\mu_{i}\Rightarrow\ \bigoplus_{i\in I}p_{i}\cdot\mu^{\prime}_{i}\)_._
Proof.: Let \(\mu=\bigoplus_{i\in I}p_{i}\cdot\mu_{i}\) and \(\mu^{\prime}=\bigoplus_{i\in I}p_{i}\cdot\mu^{\prime}_{i}\). Without loss of generality, we may assume that \(p_{i}>0\) for all \(i\in I\).
(a) Suppose \(\mu_{i}\xrightarrow{\alpha}\mu^{\prime}_{i}\) for all \(i\in I\). Then, by Definition 5, \(\mu_{i}=\bigoplus_{j\in J_{i}}p_{ij}\cdot E_{ij}\), \(\mu^{\prime}_{i}=\bigoplus_{j\in J_{i}}p_{ij}\cdot\eta_{ij}\), and \(E_{ij}\xrightarrow{\alpha}\eta_{ij}\) for \(j\in J_{i}\) for a suitable index set \(J_{i}\), \(p_{ij}>0\) and \(\eta_{ij}\in\operatorname{{Distr}}(\mathcal{E})\). Define the index set \(K\) and probabilities \(q_{k}\) for \(k\in K\) by \(K=\{\,(i,j)\mid i\in I,\,j\in J_{i}\,\}\) and \(q_{(i,j)}=p_{i}p_{ij}\) for \((i,j)\in K\), so that \(\sum_{k\in K}q_{k}=1\). Then we have \(\mu=\bigoplus_{k\in K}q_{k}\cdot E_{ij}\) and \(\mu^{\prime}=\bigoplus_{k\in K}q_{k}\cdot\eta_{ij}\). Therefore, by Definition 5, it follows that \(\mu\xrightarrow{\alpha}\mu^{\prime}\).
(b) Let \(\mu_{i}\xrightarrow{(\tau)}\mu^{\prime}_{i}\) for all \(i\in I\). Then, for all \(i\in I\), by Definition 6, there exist \(r_{i}\in[0,1]\) and \(\mu^{\text{go}}_{i},\mu^{\text{stay}}_{i},\mu^{\prime\prime}_{i}\in\mathit{Distr}(\mathcal{E})\) such that \(\mu_{i}=\mu^{\text{go}}_{i}\,{}_{r_{i}}\oplus\,\mu^{\text{stay}}_{i}\), \(\mu^{\text{go}}_{i}\xrightarrow{\tau}\mu^{\prime\prime}_{i}\) and \(\mu^{\prime}_{i}=\mu^{\prime\prime}_{i}\,{}_{r_{i}}\oplus\,\mu^{\text{stay}}_{i}\); here cases (i) and (ii) of Definition 6 correspond to \(r_{i}=1\) and \(r_{i}=0\), where \(\mu^{\text{stay}}_{i}\) resp. \(\mu^{\text{go}}_{i}\) may be chosen arbitrarily. Let \(r=\sum_{i\in I}p_{i}r_{i}\). If \(r=0\), then \(\mu^{\prime}=\mu\) and \(\mu\xrightarrow{(\tau)}\mu^{\prime}\) holds trivially. If \(r=1\), then \(r_{i}=1\) for all \(i\in I\), so \(\mu=\bigoplus_{i\in I}p_{i}\cdot\mu^{\text{go}}_{i}\xrightarrow{\tau}\bigoplus_{i\in I}p_{i}\cdot\mu^{\prime\prime}_{i}=\mu^{\prime}\) by part (a). Otherwise, \(0<r<1\) and \[\mu=\Big{(}\bigoplus_{i\in I}\tfrac{p_{i}r_{i}}{r}\cdot\mu^{\text{go}}_{i}\Big{)}\,{}_{r}\oplus\,\Big{(}\bigoplus_{i\in I}\tfrac{p_{i}(1-r_{i})}{1-r}\cdot\mu^{\text{stay}}_{i}\Big{)},\] where summands with weight zero can be omitted. By part (a), \(\bigoplus_{i\in I}\frac{p_{i}r_{i}}{r}\cdot\mu^{\text{go}}_{i}\xrightarrow{\tau}\bigoplus_{i\in I}\frac{p_{i}r_{i}}{r}\cdot\mu^{\prime\prime}_{i}\), so by Definition 6(iii) \[\mu\xrightarrow{(\tau)}\Big{(}\bigoplus_{i\in I}\tfrac{p_{i}r_{i}}{r}\cdot\mu^{\prime\prime}_{i}\Big{)}\,{}_{r}\oplus\,\Big{(}\bigoplus_{i\in I}\tfrac{p_{i}(1-r_{i})}{1-r}\cdot\mu^{\text{stay}}_{i}\Big{)}=\bigoplus_{i\in I}p_{i}\cdot(\mu^{\prime\prime}_{i}\,{}_{r_{i}}\oplus\,\mu^{\text{stay}}_{i})=\mu^{\prime}.\]

(c) Since \(\Rightarrow\) is the reflexive transitive closure of \(\xrightarrow{(\tau)}\), each transition \(\mu_{i}\Rightarrow\mu^{\prime}_{i}\) is a finite sequence of \(\xrightarrow{(\tau)}\)-steps. As \(\xrightarrow{(\tau)}\) is reflexive, we may pad the shorter sequences with steps \(\mu\xrightarrow{(\tau)}\mu\) so that all sequences have the same length, and the claim follows by applying part (b) stepwise.
Likewise, the next lemma allows _probabilistic decomposition_ of transitions \(\xrightarrow{\alpha}\), \(\xrightarrow{(\alpha)}\) and \(\Rightarrow\).
**Lemma 8**.: _Let \(\mu,\mu^{\prime}\in\mathit{Distr}(\mathcal{E})\) and \(\mu=\bigoplus_{i\in I}p_{i}\cdot\mu_{i}\) with \(p_{i}>0\) for \(i\in I\)._
1. _If_ \(\mu\xrightarrow{\alpha}\mu^{\prime}\)_, then there are_ \(\mu^{\prime}_{i}\) _for_ \(i\in I\) _such that_ \(\mu_{i}\xrightarrow{\alpha}\mu^{\prime}_{i}\) _for_ \(i\in I\) _and_ \(\mu^{\prime}=\bigoplus_{i\in I}p_{i}\cdot\mu^{\prime}_{i}\)_._
2. _If_ \(\mu\xrightarrow{(\tau)}\mu^{\prime}\)_, then there are_ \(\mu^{\prime}_{i}\) _for_ \(i\in I\) _such that_ \(\mu_{i}\xrightarrow{(\tau)}\mu^{\prime}_{i}\) _for_ \(i\in I\) _and_ \(\mu^{\prime}=\bigoplus_{i\in I}p_{i}\cdot\mu^{\prime}_{i}\)_._
3. _If_ \(\mu\Rightarrow\mu^{\prime}\)_, then there are_ \(\mu^{\prime}_{i}\) _for_ \(i\in I\) _such that_ \(\mu_{i}\Rightarrow\mu^{\prime}_{i}\) _for_ \(i\in I\) _and_ \(\mu^{\prime}=\bigoplus_{i\in I}p_{i}\cdot\mu^{\prime}_{i}\)_._
Proof.: (a) Suppose \(\mu\xrightarrow{\alpha}\mu^{\prime}\). By Definition 5, \(\mu=\bigoplus_{j\in J}q_{j}\cdot E_{j}\), \(\mu^{\prime}=\bigoplus_{j\in J}q_{j}\cdot\eta_{j}\), and \(E_{j}\xrightarrow{\alpha}\eta_{j}\) for all \(j\in J\), for a suitable index set \(J\), \(q_{j}>0\), \(E_{j}\in\mathcal{E}\), and \(\eta_{j}\in\mathit{Distr}(\mathcal{E})\). By Lemma 3 there are \(r_{ij}\geqslant 0\) and \(\rho_{ij}\in\mathit{Distr}(\mathcal{E})\) such that \(\sum_{j\in J}r_{ij}=p_{i}\) and \(p_{i}\mu_{i}=\bigoplus_{j\in J}r_{ij}\rho_{ij}\) for \(i\in I\), and \(\sum_{i\in I}r_{ij}=q_{j}\) and \(q_{j}\cdot\delta(E_{j})=\bigoplus_{i\in I}r_{ij}\rho_{ij}\) for all \(j\in J\). Hence, \(\rho_{ij}=\delta(E_{j})\) for \(i\in I\), \(j\in J\).
For all \(i\in I\), let \(\mu^{\prime}_{i}=\bigoplus_{j\in J}\left(r_{ij}/p_{i}\right)\eta_{j}\). Then \(\mu_{i}\xrightarrow{\alpha}\mu^{\prime}_{i}\), for all \(i\in I\), by Lemma 7(a). Moreover, it holds that \(\bigoplus_{i\in I}p_{i}\mu^{\prime}_{i}=\bigoplus_{i\in I}p_{i}\cdot\bigoplus_ {j\in J}\left(r_{ij}/p_{i}\right)\eta_{j}=\bigoplus_{j\in J}\bigoplus_{i\in I }r_{ij}\cdot\eta_{j}=\bigoplus_{j\in J}q_{j}\cdot\eta_{j}=\mu^{\prime}\).
(b) Suppose \(\mu\xrightarrow{(\tau)}\mu^{\prime}\). By Definition 6, either (i) \(\mu\xrightarrow{\tau}\mu^{\prime}\), or (ii) \(\mu^{\prime}=\mu\), or (iii) there exist \(\nu_{1},\nu_{2},\nu^{\prime}_{1},\nu^{\prime}_{2}\in\mathit{Distr}(\mathcal{E})\) such that \(\mu=\nu_{1}{}_{r}\oplus\nu_{2}\), \(\mu^{\prime}=\nu^{\prime}_{1}{}_{r}\oplus\nu^{\prime}_{2}\), \(\nu_{1}\xrightarrow{\tau}\nu^{\prime}_{1}\) and \(\nu_{2}=\nu^{\prime}_{2}\) for some \(r\in(0,1)\). In case (i), the required \(\mu^{\prime}_{i}\) exist by the first statement of this lemma. In case (ii) one can simply take \(\mu^{\prime}_{i}:=\mu_{i}\) for all \(i\in I\). Hence assume that case (iii) applies. Let \(J:=\{1,2\}\), \(q_{1}:=r\) and \(q_{2}:=1-r\). By Lemma 3 there are \(r_{ij}\in[0,1]\) and \(\rho_{ij}\in\mathit{Distr}(\mathcal{E})\) with \(\sum_{j\in J}r_{ij}=p_{i}\) and \(\mu_{i}=\bigoplus_{j\in J}\frac{r_{ij}}{p_{i}}\cdot\rho_{ij}\) for all \(i\in I\), and \(\sum_{i\in I}r_{ij}=q_{j}\) and \(\nu_{j}=\bigoplus_{i\in I}\frac{r_{ij}}{q_{j}}\cdot\rho_{ij}\) for all \(j\in J\).
Let \(I^{\prime}:=\{i\in I\mid r_{i1}>0\}\). Since \(\nu_{1}=\bigoplus_{i\in I^{\prime}}\frac{r_{i1}}{r}\cdot\rho_{i1}\xrightarrow{\tau}\nu^{\prime}_{1}\), by the first statement of the lemma, for all \(i\in I^{\prime}\) there are \(\rho^{\prime}_{i1}\) such that \(\rho_{i1}\xrightarrow{\tau}\rho^{\prime}_{i1}\) and \(\nu^{\prime}_{1}=\bigoplus_{i\in I^{\prime}}\frac{r_{i1}}{r}\cdot\rho^{\prime}_{i1}\). For all \(i\in I\backslash I^{\prime}\) pick \(\rho^{\prime}_{i1}\in\mathit{Distr}(\mathcal{E})\) arbitrarily. It follows that \(\mu_{i}=\rho_{i1}\,{}_{r_{i1}/p_{i}}\oplus\,\rho_{i2}\xrightarrow{(\tau)}\rho^{\prime}_{i1}\,{}_{r_{i1}/p_{i}}\oplus\,\rho_{i2}=:\mu^{\prime}_{i}\) for all \(i\in I\). Moreover, \(\bigoplus_{i\in I}p_{i}\cdot\mu^{\prime}_{i}=\bigoplus_{i\in I}p_{i}\cdot(\rho^{\prime}_{i1}\,{}_{r_{i1}/p_{i}}\oplus\,\rho_{i2})=(\bigoplus_{i\in I}\frac{r_{i1}}{r}\cdot\rho^{\prime}_{i1})\,{}_{r}\oplus\,(\bigoplus_{i\in I}\frac{r_{i2}}{1-r}\cdot\rho_{i2})=\nu^{\prime}_{1}\,{}_{r}\oplus\,\nu_{2}=\mu^{\prime}\).
(c) The last statement follows by transitivity from the second one.
## 4 Branching probabilistic bisimilarity
In this section we recall the notion of branching probabilistic bisimilarity [17]. The notion is based on a decomposability property due to [10] and a transfer property.
**Definition 9** (Branching probabilistic bisimilarity).:
1. _A relation_ \(\mathcal{R}\subseteq\mathit{Distr}(\mathcal{E})\times\mathit{Distr}(\mathcal{E})\) _is called weakly decomposable iff it is symmetric and for all_ \(\mu,\nu\in\mathit{Distr}(\mathcal{E})\) _such that_ \(\mu\,\mathcal{R}\,\nu\) _and_ \(\mu=\bigoplus_{i\in I}p_{i}\cdot\mu_{i}\) _there are_ \(\bar{\nu},\nu_{i}\in\mathit{Distr}(\mathcal{E})\)_, for_ \(i\in I\)_, such that_ \[\nu\Rightarrow\bar{\nu},\ \mu\,\mathcal{R}\,\bar{\nu},\ \bar{\nu}=\bigoplus_{i\in I}p_{i}\cdot\nu_{i},\text{ and }\ \mu_{i}\,\mathcal{R}\,\nu_{i}\text{ for all }i\in I.\]
2. _A relation_ \(\mathcal{R}\subseteq\mathit{Distr}(\mathcal{E})\times\mathit{Distr}(\mathcal{E})\) _is called a_ branching _probabilistic bisimulation relation iff it is weakly decomposable and for all_ \(\mu,\nu\in\mathit{Distr}(\mathcal{E})\) _with_ \(\mu\,\mathcal{R}\,\nu\) _and_ \(\mu\xrightarrow{\alpha}\mu^{\prime}\)_, there are_ \(\bar{\nu},\nu^{\prime}\in\mathit{Distr}(\mathcal{E})\) _such that_ \[\nu\Rightarrow\bar{\nu},\ \bar{\nu}\xrightarrow{(\alpha)}\nu^{\prime},\ \mu\,\mathcal{R}\,\bar{\nu},\text{ and }\ \mu^{\prime}\, \mathcal{R}\,\nu^{\prime}.\]
3. _Branching probabilistic bisimilarity_ \(\xleftrightarrow{}_{b}\subseteq\mathit{Distr}(\mathcal{E})\times \mathit{Distr}(\mathcal{E})\) _is defined as the largest branching probabilistic bisimulation relation on_ \(\mathit{Distr}(\mathcal{E})\)_._
Note that branching probabilistic bisimilarity is well-defined following the usual argument that any union of branching probabilistic bisimulation relations is again a branching probabilistic bisimulation relation. In particular, (weak) decomposability is preserved under arbitrary unions. As observed in [16], branching probabilistic bisimilarity is an equivalence relation.
Two non-deterministic processes are considered to be branching probabilistic bisimilar iff their Dirac distributions are, i.e., for \(E,F\in\mathcal{E}\) we have \(E\xleftrightarrow{}_{b}F\) iff \(\delta(E)\xleftrightarrow{}_{b}\delta(F)\). Two probabilistic processes are considered to be branching probabilistic bisimilar iff their associated distributions over \(\mathcal{E}\) are, i.e., for \(P,Q\in\mathcal{P}\) we have \(P\xleftrightarrow{}_{b}Q\) iff \(\llbracket P\rrbracket\xleftrightarrow{}_{b}\llbracket Q\rrbracket\).
For a set \(M\subseteq\mathit{Distr}(\mathcal{E})\), the convex closure \(cc(M)\) is defined by
\[cc(M)=\{\bigoplus_{i\in I}p_{i}\mu_{i}\mid\sum_{i\in I}p_{i}=1,\ \mu_{i}\in M,\ I \text{ a finite index set}\}.\]
For a relation \(\mathcal{R}\subseteq\mathit{Distr}(\mathcal{E})\times\mathit{Distr}(\mathcal{E})\) the convex closure of \(\mathcal{R}\) is defined by
\[cc(\mathcal{R})=\{\,\langle\bigoplus_{i\in I}p_{i}\mu_{i},\bigoplus_{i\in I}p_ {i}\nu_{i}\rangle\mid\mu_{i}\mathcal{R}\nu_{i},\ \sum_{i\in I}p_{i}=1,\ I\text{ a finite index set}\,\}.\]
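As a small worked instance of these definitions: for a two-element set, the convex closure consists precisely of the binary probabilistic compositions, i.e., for probabilistic processes \(P\) and \(Q\),

\[cc(\{\llbracket P\rrbracket,\llbracket Q\rrbracket\})=\{\,\llbracket P\rrbracket\,_{p}\oplus\,\llbracket Q\rrbracket\mid p\in[0,1]\,\}.\]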
The notion of weak decomposability has been adopted from [23, 25]. The underlying idea stems from [10]. Weak decomposability provides convenient flexibility for dealing with combined transitions as well as with sub-distributions. For example, regarding sub-distributions, to distinguish the probabilistic process \(\frac{1}{2}\partial(a\cdot\partial(\mathbf{0}))\oplus\frac{1}{2}\partial(b\cdot\partial(\mathbf{0}))\) from \(\partial(\mathbf{0})\), a branching probabilistic bisimulation relation relating \(\frac{1}{2}\delta(a\cdot\partial(\mathbf{0}))\oplus\frac{1}{2}\delta(b\cdot\partial(\mathbf{0}))\) and \(\delta(\mathbf{0})\) is by weak decomposability also required to relate \(\delta(a\cdot\partial(\mathbf{0}))\) and \(\delta(b\cdot\partial(\mathbf{0}))\) to subdistributions of a weak descendant of \(\delta(\mathbf{0})\), which can only be \(\delta(\mathbf{0})\) itself. Since \(\delta(a\cdot\partial(\mathbf{0}))\) has an \(a\)-transition while \(\delta(\mathbf{0})\) does not, and similarly for a \(b\)-transition of \(\delta(b\cdot\partial(\mathbf{0}))\), it follows that \(\frac{1}{2}\partial(a\cdot\partial(\mathbf{0}))\oplus\frac{1}{2}\partial(b\cdot\partial(\mathbf{0}))\) and \(\partial(\mathbf{0})\) are not branching probabilistic bisimilar.
By comparison, on finite processes, as used in this paper, the notion of branching probabilistic bisimilarity of Segala & Lynch [28] can be defined in our framework exactly as in items 2 and 3 of Definition 9, but taking a decomposable instead of a weakly decomposable relation, i.e. if \(\mu\,\mathcal{R}\,\nu\) and \(\mu=\bigoplus_{i\in I}p_{i}\mu_{i}\) then there are \(\nu_{i}\) for \(i\in I\) such that \(\nu=\bigoplus_{i\in I}p_{i}\nu_{i}\) and \(\mu_{i}\,\mathcal{R}\,\nu_{i}\) for \(i\in I\). This yields a strictly finer equivalence.
**Example**
1. The distributions \(\delta(\mathbf{G_{1}})=\delta(a\cdot(P\,_{\frac{1}{2}}\oplus\,Q))\) and \(\delta(\mathbf{G_{2}})=\delta(a\cdot(\partial(\tau\cdot(P\,_{\frac{1}{2}}\oplus\,Q))\,_{\frac{1}{3}}\oplus\,(P\,_{\frac{1}{2}}\oplus\,Q)))\) both admit at the top level an \(a\)-transition only: \[\delta(a\cdot(P\,_{\frac{1}{2}}\oplus\,Q)) \xrightarrow{a}\frac{1}{2}\llbracket P\rrbracket\oplus\frac{1}{2}\llbracket Q\rrbracket\] \[\delta(a\cdot(\partial(\tau\cdot(P\,_{\frac{1}{2}}\oplus\,Q))\,_{\frac{1}{3}}\oplus\,(P\,_{\frac{1}{2}}\oplus\,Q))) \xrightarrow{a}\frac{1}{3}\delta(\tau\cdot(P\,_{\frac{1}{2}}\oplus\,Q))\oplus\frac{1}{3}\llbracket P\rrbracket\oplus\frac{1}{3}\llbracket Q\rrbracket.\] Let the relation \(\mathcal{R}\) contain the pairs \[\langle\delta(\tau\cdot(P\,_{\frac{1}{2}}\oplus\,Q)),\frac{1}{2}\llbracket P\rrbracket\oplus\frac{1}{2}\llbracket Q\rrbracket\rangle\quad\text{and}\quad\langle\mu,\mu\rangle\ \text{for}\ \mu\in\mathit{Distr}(\mathcal{E}).\] The symmetric closure \(\mathcal{R}^{\dagger}\) of \(\mathcal{R}\) is clearly a branching probabilistic bisimulation relation. We claim that therefore also its convex closure \(cc(\mathcal{R}^{\dagger})\) is a branching probabilistic bisimulation relation. Considering that \(\langle\delta(\tau\cdot(P\,_{\frac{1}{2}}\oplus\,Q)),\frac{1}{2}\llbracket P\rrbracket\oplus\frac{1}{2}\llbracket Q\rrbracket\rangle\) and \(\langle\frac{1}{2}\llbracket P\rrbracket\oplus\frac{1}{2}\llbracket Q\rrbracket,\frac{1}{2}\llbracket P\rrbracket\oplus\frac{1}{2}\llbracket Q\rrbracket\rangle\) are in \(\mathcal{R}\), we have that \[\langle\frac{1}{3}\delta(\tau\cdot(P\,_{\frac{1}{2}}\oplus\,Q))\oplus\frac{2}{3}(\frac{1}{2}\llbracket P\rrbracket\oplus\frac{1}{2}\llbracket Q\rrbracket),\ \frac{1}{3}(\frac{1}{2}\llbracket P\rrbracket\oplus\frac{1}{2}\llbracket Q\rrbracket)\oplus\frac{2}{3}(\frac{1}{2}\llbracket P\rrbracket\oplus\frac{1}{2}\llbracket Q\rrbracket)\rangle\in cc(\mathcal{R}^{\dagger}).\] Adding the pair of processes \(\langle\delta(a\cdot(P\,_{\frac{1}{2}}\oplus\,Q)),\delta(a\cdot(\partial(\tau\cdot(P\,_{\frac{1}{2}}\oplus\,Q))\,_{\frac{1}{3}}\oplus\,(P\,_{\frac{1}{2}}\oplus\,Q)))\rangle\) and closing under symmetry then yields a branching probabilistic bisimulation relation relating \(\delta(\mathbf{G_{1}})\) and \(\delta(\mathbf{G_{2}})\).
2. The \(a\)-derivatives of \(\mathbf{I_{1}}\) and \(\mathbf{I_{2}}\), i.e. the distributions \(I_{1}^{\prime}=\delta(b\cdot P+\tau\cdot Q)\) and \(I_{2}^{\prime}=\delta(\tau\cdot\partial(b\cdot P+\tau\cdot Q)+b\cdot P+\tau\cdot Q)\), are branching probabilistic bisimilar. A \(\tau\)-transition of \(I_{2}^{\prime}\) partially based on its left branch can be simulated by \(I_{1}^{\prime}\) by a partial transition: \[\begin{array}{l}I_{2}^{\prime}=r\cdot\llbracket I_{2}^{\prime}\rrbracket\oplus(1-r)\cdot\llbracket I_{2}^{\prime}\rrbracket\quad\xrightarrow{\tau}\quad r\cdot\delta(b\cdot P+\tau\cdot Q)\oplus(1-r)\cdot\llbracket Q\rrbracket\\ I_{1}^{\prime}=r\cdot\llbracket I_{1}^{\prime}\rrbracket\oplus(1-r)\cdot\llbracket I_{1}^{\prime}\rrbracket\quad\xrightarrow{(\tau)}\quad r\cdot\llbracket I_{1}^{\prime}\rrbracket\oplus(1-r)\cdot\llbracket Q\rrbracket\ =\ r\cdot\delta(b\cdot P+\tau\cdot Q)\oplus(1-r)\cdot\llbracket Q\rrbracket.\end{array}\] A \(\tau\)-transition of \(I_{1}^{\prime}\) can be directly simulated by \(I_{2}^{\prime}\), of course. It follows that the relation \(\mathcal{R}=\{\langle\delta(\mathbf{I_{1}}),\delta(\mathbf{I_{2}})\rangle,\langle I_{1}^{\prime},I_{2}^{\prime}\rangle\}^{\dagger}\cup\{\,\langle\mu,\mu\rangle\mid\mu\in\mathit{Distr}(\mathcal{E})\,\}\), the symmetric relation containing the pairs mentioned and the diagonal of \(\mathit{Distr}(\mathcal{E})\), constitutes a branching probabilistic bisimulation relation relating \(\delta(\mathbf{I_{1}})\) and \(\delta(\mathbf{I_{2}})\).
In the sequel we frequently need that probabilistic composition respects branching probabilistic bisimilarity of distributions, i.e. if, with respect to some index set \(I\), we have distributions \(\mu_{i}\) and \(\nu_{i}\) such that \(\mu_{i}\leftrightarrows_{b}\nu_{i}\) for \(i\in I\), then also \(\mu\leftrightarrows_{b}\nu\) for the distributions \(\mu=\bigoplus_{i\in I}p_{i}\mu_{i}\) and \(\nu=\bigoplus_{i\in I}p_{i}\nu_{i}\). The property directly follows from the following lemma, which is proven in [16].
**Lemma 10**.: _Let distributions \(\mu_{1},\mu_{2},\nu_{1},\nu_{2}\in\mathit{Distr}(\mathcal{E})\) and \(0\leqslant r\leqslant 1\) be such that \(\mu_{1}\leftrightarrows_{b}\nu_{1}\) and \(\mu_{2}\leftrightarrows_{b}\nu_{2}\). Then it holds that \(\mu_{1\,\,r}\oplus\,\mu_{2}\leftrightarrows_{b}\nu_{1\,\,r}\oplus\,\nu_{2}\)._
We apply the above property in the proof of the next result. In the sequel any application of Lemma 10 will be done tacitly.
**Lemma 11**.: _Let \(\mu,\nu\in\mathit{Distr}(\mathcal{E})\) be such that \(\mu\leftrightarrows_{b}\nu\) and \(\mu\Rightarrow\mu^{\prime}\) for some \(\mu^{\prime}\in\mathit{Distr}(\mathcal{E})\). Then there is a \(\nu^{\prime}\in\mathit{Distr}(\mathcal{E})\) such that \(\nu\Rightarrow\nu^{\prime}\) and \(\mu^{\prime}\leftrightarrows_{b}\nu^{\prime}\)._
Proof.: We check that a partial transition \(\mu\xrightarrow{(\tau)}\mu^{\prime}\) can be matched by \(\nu\), given \(\mu\leftrightarrows_{b}\nu\). So, suppose \(\mu=\mu_{1\,r}\oplus\,\mu_{2}\), \(\mu_{1}\xrightarrow{\tau}\mu_{1}^{\prime}\), and \(\mu^{\prime}=\mu_{1\,r}^{\prime}\oplus\,\mu_{2}\). By weak decomposability of \(\leftrightarrows_{b}\) we can find distributions \(\bar{\nu},\nu_{1},\nu_{2}\) such that \(\nu\Rightarrow\bar{\nu}=\nu_{1\,r}\oplus\,\nu_{2}\) and \(\mu\leftrightarrows_{b}\bar{\nu}\), \(\nu_{1}\leftrightarrows_{b}\mu_{1}\), \(\nu_{2}\leftrightarrows_{b}\mu_{2}\). Since \(\mu_{1}\xrightarrow{\tau}\mu_{1}^{\prime}\) and \(\mu_{1}\leftrightarrows_{b}\nu_{1}\), the transfer property yields distributions \(\bar{\nu}_{1},\nu_{1}^{\prime}\) such that \(\nu_{1}\Rightarrow\bar{\nu}_{1}\xrightarrow{(\tau)}\nu_{1}^{\prime}\) and \(\bar{\nu}_{1}\leftrightarrows_{b}\mu_{1}\), \(\nu_{1}^{\prime}\leftrightarrows_{b}\mu_{1}^{\prime}\). Put \(\nu^{\prime}=\nu_{1\,r}^{\prime}\oplus\,\nu_{2}\). Then \(\nu\Rightarrow\nu^{\prime}\), using Lemma 7c, and we have by Lemma 10 that \(\nu^{\prime}=\nu_{1\,r}^{\prime}\oplus\,\nu_{2}\leftrightarrows_{b}\mu_{1\,r}^{\prime}\oplus\,\mu_{2}=\mu^{\prime}\), since \(\nu_{1}^{\prime}\leftrightarrows_{b}\mu_{1}^{\prime}\) and \(\nu_{2}\leftrightarrows_{b}\mu_{2}\).
## 5 Branching probabilistic bisimilarity is continuous
Fix a finite set of non-deterministic processes \(\mathcal{F}\subseteq\mathcal{E}\) that is _transition closed_, in the sense that if \(E\in\mathcal{F}\) and \(E\xrightarrow{\alpha}\bigoplus_{i\in I}p_{i}\cdot F_{i}\) then also \(F_{i}\in\mathcal{F}\). Consequently, if \(\mu\in\mathit{Distr}(\mathcal{F})\) and \(\mu\xrightarrow{(\alpha)}\mu^{\prime}\) then \(\mu^{\prime}\in\mathit{Distr}(\mathcal{F})\). Also, if \(\mu\in\mathit{Distr}(\mathcal{F})\) and \(\mu\Rightarrow\bar{\mu}\) then \(\bar{\mu}\in\mathit{Distr}(\mathcal{F})\). By Theorem 1, \(\mathit{Distr}(\mathcal{F})\) is a sequentially compact subspace of the complete metric space \(\mathit{Distr}(\mathcal{E})\), meaning that every sequence \((\mu_{i})_{i=0}^{\infty}\) in \(\mathit{Distr}(\mathcal{F})\) has a subsequence \((\mu_{i_{k}})_{k=0}^{\infty}\) such that \(\lim_{k\rightarrow\infty}\mu_{i_{k}}=\mu\) for some distribution \(\mu\in\mathit{Distr}(\mathcal{F})\). In particular, if \(\lim_{i\rightarrow\infty}\mu_{i}=\mu\) and \(\mu_{i}\in\mathit{Distr}(\mathcal{F})\), then also \(\mu\in\mathit{Distr}(\mathcal{F})\), i.e. \(\mathit{Distr}(\mathcal{F})\) is a closed subset of \(\mathit{Distr}(\mathcal{E})\). Due to the finitary nature of our process algebra, each distribution \(\mu\in\mathit{Distr}(\mathcal{E})\) occurs in \(\mathit{Distr}(\mathcal{F})\) for some such \(\mathcal{F}\), based on \(\mathit{spt}(\mu)\).
In the following three lemmas we establish a number of continuity results. Assume \(\lim_{i\rightarrow\infty}\nu_{i}=\nu\). Then Lemma 12 states that, for a Dirac distribution \(\delta(E)\), if \(\delta(E)\stackrel{{\alpha}}{{\longrightarrow}}\nu_{i}\) for \(i\in\mathbb{N}\) then also \(\delta(E)\stackrel{{\alpha}}{{\longrightarrow}}\nu\). Lemma 13 extends this and shows that, for a general distribution \(\mu\), if \(\mu\stackrel{{\alpha}}{{\longrightarrow}}\nu_{i}\) for \(i\in\mathbb{N}\) then \(\mu\stackrel{{\alpha}}{{\longrightarrow}}\nu\). Finally, Lemma 14 establishes the limit case: if \(\lim_{i\rightarrow\infty}\mu_{i}=\mu\) and \(\mu_{i}\stackrel{{\alpha}}{{\longrightarrow}}\nu_{i}\) for \(i\in\mathbb{N}\) then \(\mu\stackrel{{\alpha}}{{\longrightarrow}}\nu\).
**Lemma 12**.: _Let \(E\in\mathcal{F}\) be a non-deterministic process, \(\alpha\in\mathcal{A}\) an action, \((\nu_{i})_{i=0}^{\infty}\in\mathit{Distr}(\mathcal{F})^{\infty}\) an infinite sequence in \(\mathit{Distr}(\mathcal{F})\), and \(\nu\in\mathit{Distr}(\mathcal{F})\) a distribution satisfying \(\lim_{i\rightarrow\infty}\nu_{i}=\nu\). If, for all \(i\in\mathbb{N}\), \(\delta(E)\stackrel{{(\alpha)}}{{\longrightarrow}}\nu_{i}\) then it holds that \(\delta(E)\stackrel{{(\alpha)}}{{\longrightarrow}}\nu\)._
Proof.: For \(E\in\mathcal{F}\) and \(\alpha\in\mathcal{A}\), define \(E\!\!\upharpoonright\!\alpha=cc(\{\,\mu\mid E\xrightarrow{\alpha}\mu\})\), pronounced \(E\) 'after' \(\alpha\), to be the convex closure in \(Distr(\mathcal{E})\) of all distributions that can be reached from \(E\) by an \(\alpha\)-transition. Then \(\delta(E)\xrightarrow{\alpha}\nu\) iff \(\nu\in E\!\upharpoonright\!\alpha\). Recall that transitions for non-deterministic processes are not probabilistically combined. See Definition 5. Since \(E\!\!\upharpoonright\!\alpha\subseteq\mathit{Distr}(\mathcal{F})\) is the convex closure of a finite set of distributions, it is certainly closed in the space \(Distr(\mathcal{F})\). Since it holds that \(\delta(E)\xrightarrow{\alpha}\nu_{i}\) for all \(i\in\mathbb{N}\), one has \(\nu_{i}\in E\!\upharpoonright\!\alpha\) for \(i\in\mathbb{N}\). Hence, \(\lim_{i\to\infty}\nu_{i}=\nu\) implies that \(\nu\in E\!\upharpoonright\!\alpha\), i.e. \(\delta(E)\xrightarrow{\alpha}\nu\).
For \(E\in\mathcal{F}\), define \(E\!\!\upharpoonright\!(\tau):=cc(\{\mu\mid E\xrightarrow{\tau}\mu\}\cup\{\delta(E)\})\). Then \(\delta(E)\xrightarrow{(\tau)}\nu\) iff \(\nu\in E\!\upharpoonright\!(\tau)\). The set \(E\!\upharpoonright\!(\tau)\subseteq\mathit{Distr}(\mathcal{F})\) is closed, and thus \(\nu_{i}\in E\!\upharpoonright\!(\tau)\) for all \(i\in\mathbb{N}\) implies \(\nu\in E\!\upharpoonright\!(\tau)\), which means \(\delta(E)\xrightarrow{(\tau)}\nu\).
The above result for Dirac distributions holds for general distributions as well.
**Lemma 13**.: _Let \(\mu,\nu\in\mathit{Distr}(\mathcal{F})\), \(\alpha\in\mathcal{A}\), \((\nu_{i})_{i=0}^{\infty}\in\mathit{Distr}(\mathcal{F})^{\infty}\), and assume \(\lim_{i\to\infty}\nu_{i}=\nu\). If it holds that \(\mu\xrightarrow{(\alpha)}\nu_{i}\) for all \(i\in\mathbb{N}\), then also \(\mu\xrightarrow{(\alpha)}\nu\)._
Proof.: Suppose \(\mu\xrightarrow{(\alpha)}\nu_{i}\) for all \(i\in\mathbb{N}\). Let \(\mu=\bigoplus_{j=1}^{k}p_{j}\cdot E_{j}\). By Lemma 8, for all \(i\in\mathbb{N}\) and \(1\leqslant j\leqslant k\) there are \(\nu_{ij}\) such that \(\delta(E_{j})\xrightarrow{(\alpha)}\nu_{ij}\) and \(\nu_{i}=\bigoplus_{j=1}^{k}p_{j}\cdot\nu_{ij}\). The countable sequence \((\nu_{i1},\nu_{i2},\ldots,\nu_{ik})_{i=0}^{\infty}\) of \(k\)-dimensional vectors of probability distributions need not have a limit. However, by the sequential compactness of \(\mathit{Distr}(\mathcal{F})\) this sequence has an infinite subsequence in which the first components \(\nu_{i1}\) converge to a limit \(\eta_{1}\). That sequence in turn has an infinite subsequence in which also the second components \(\nu_{i2}\) converge to a limit \(\eta_{2}\). Going on this way, one finds a subsequence \((\nu_{i_{h}1},\nu_{i_{h}2},\ldots,\nu_{i_{h}k})_{h=0}^{\infty}\) of \((\nu_{i1},\nu_{i2},\ldots,\nu_{ik})_{i=0}^{\infty}\), for \(i_{0}<i_{1}<\ldots\), that has a limit, say \(\lim_{h\to\infty}(\nu_{i_{h}1},\nu_{i_{h}2},\ldots,\nu_{i_{h}k})=(\eta_{1},\eta_{2},\ldots,\eta_{k})\). Using that \(\lim_{h\to\infty}\nu_{i_{h}}=\nu\), one obtains \(\nu=\bigoplus_{j=1}^{k}p_{j}\cdot\eta_{j}\). For each \(j=1,\ldots,k\), by Lemma 12, since \(\delta(E_{j})\xrightarrow{(\alpha)}\nu_{ij}\) for all \(i\in\mathbb{N}\) and \(\lim_{h\to\infty}\nu_{i_{h}j}=\eta_{j}\), we conclude that \(\delta(E_{j})\xrightarrow{(\alpha)}\eta_{j}\). Thus, by Lemma 7, \(\mu=\bigoplus_{j=1}^{k}p_{j}\cdot E_{j}\xrightarrow{(\alpha)}\bigoplus_{j=1}^{k}p_{j}\cdot\eta_{j}=\nu\).
Next, we consider a partial transition over a convergent sequence of distributions.
**Lemma 14**.: _Let \((\mu_{i})_{i=0}^{\infty},(\nu_{i})_{i=0}^{\infty}\in\mathit{Distr}(\mathcal{F} )^{\infty}\) such that \(\lim_{i\to\infty}\mu_{i}=\mu\) and \(\lim_{i\to\infty}\nu_{i}=\nu\). If it holds that \(\mu_{i}\xrightarrow{(\alpha)}\nu_{i}\) for all \(i\in\mathbb{N}\), then also \(\mu\xrightarrow{(\alpha)}\nu\)._
Proof.: Since \(\lim_{i\to\infty}\mu_{i}=\mu\), we can write \(\mu_{i}=(1-r_{i})\mu\oplus r_{i}\mu_{i}^{\prime\prime}\), for suitable \(\mu_{i}^{\prime\prime}\in\mathit{Distr}(\mathcal{F})\) and \(r_{i}\geqslant 0\) such that \(\lim_{i\to\infty}r_{i}=0\), as guaranteed by Lemma 2. Because \(\mu_{i}\xrightarrow{(\alpha)}\nu_{i}\), by Lemma 8 there are distributions \(\nu_{i}^{\prime},\nu_{i}^{\prime\prime}\in\mathit{Distr}(\mathcal{F})\) for \(i\in\mathbb{N}\) such that \(\nu_{i}=(1-r_{i})\nu_{i}^{\prime}\oplus r_{i}\nu_{i}^{\prime\prime}\), \(\mu\xrightarrow{(\alpha)}\nu_{i}^{\prime}\), and \(\mu_{i}^{\prime\prime}\xrightarrow{(\alpha)}\nu_{i}^{\prime\prime}\). We have \(\lim_{i\to\infty}\nu_{i}^{\prime}=\nu\) as well, since \(\lim_{i\to\infty}r_{i}=0\). Thus, \(\lim_{i\to\infty}\nu_{i}^{\prime}=\nu\) and \(\mu\xrightarrow{(\alpha)}\nu_{i}^{\prime}\) for \(i\in\mathbb{N}\). Therefore, it follows by Lemma 13 that \(\mu\xrightarrow{(\alpha)}\nu\).
For \(\mu,\nu\in\mathit{Distr}(\mathcal{F})\), we write \(\mu\Rightarrow_{n}\nu\) if there are \(\eta_{0},\eta_{1},\ldots,\eta_{n}\in\mathit{Distr}(\mathcal{F})\) such that \(\mu=\eta_{0}\xrightarrow{(\tau)}\eta_{1}\xrightarrow{(\tau)}\ldots\xrightarrow{( \tau)}\eta_{n}=\nu\). Clearly, it holds that \(\mu\Rightarrow_{n}\nu\) for some \(n\in\mathbb{N}\) in case \(\mu\Rightarrow\nu\), because \(\Rightarrow\) is the transitive closure of \(\xrightarrow{(\tau)}\).
We have the following pendant of Lemma 14 for \(\Rightarrow_{n}\).
**Lemma 15**.: _Let \((\mu_{i})_{i=0}^{\infty},(\nu_{i})_{i=0}^{\infty}\in\mathit{Distr}(\mathcal{F} )^{\infty}\), \(\lim_{i\to\infty}\mu_{i}=\mu\) and \(\lim_{i\to\infty}\nu_{i}=\nu\). If \(\mu_{i}\Rightarrow_{n}\nu_{i}\) for all \(i\in\mathbb{N}\) then \(\mu\Rightarrow_{n}\nu\)._
Proof.: By induction on \(n\). Basis, \(n=0\): Trivial. Induction step, \(n+1\): Given \((\mu_{i})_{i=0}^{\infty},(\nu_{i})_{i=0}^{\infty}\in\mathit{Distr}(\mathcal{F} )^{\infty}\), \(\mu=\lim_{i\to\infty}\mu_{i}\), and \(\nu=\lim_{i\to\infty}\nu_{i}\), suppose \(\mu_{i}\Rightarrow_{n+1}\nu_{i}\) for all \(i\in\mathbb{N}\). Let \((\eta_{i})_{i=0}^{\infty}\in\mathit{Distr}(\mathcal{F})^{\infty}\) be such that \(\mu_{i}\xrightarrow{(\tau)}\eta_{i}\Rightarrow_{n}\nu_{i}\) for all \(i\in\mathbb{N}\). Since \(\mathit{Distr}(\mathcal{F})\) is sequentially compact, the sequence \((\eta_{i})_{i=0}^{\infty}\) has a convergent subsequence \((\eta_{i_{k}})_{k=0}^{\infty}\); put \(\eta=\lim_{k\to\infty}\eta_{i_{k}}\). Because \(\mu_{i_{k}}\xrightarrow{(\tau)}\eta_{i_{k}}\) for all \(k\in\mathbb{N}\), one has \(\mu\xrightarrow{(\tau)}\eta\) by Lemma 14. Since \(\eta_{i_{k}}\Rightarrow_{n}\nu_{i_{k}}\) for \(k\in\mathbb{N}\), the induction hypothesis yields \(\eta\Rightarrow_{n}\nu\). It follows that \(\mu\Rightarrow_{n+1}\nu\)
We adapt Lemma 15 to obtain a continuity result for weak transitions \(\Rightarrow\).
**Lemma 16**.: _Let \((\mu_{i})_{i=0}^{\infty},(\nu_{i})_{i=0}^{\infty}\in\mathit{Distr}(\mathcal{F})^{ \infty}\), \(\lim_{i\to\infty}\mu_{i}=\mu\) and \(\lim_{i\to\infty}\nu_{i}=\nu\). If \(\mu_{i}\Rightarrow\nu_{i}\) for all \(i\in\mathbb{N}\), then \(\mu\Rightarrow\nu\)._
Proof.: Since \(\mathcal{F}\) contains only finitely many non-deterministic processes, which can do finitely many \(\tau\)-transitions only, a global upper bound \(N\) exists such that if \(\mu\Rightarrow\nu\) then \(\mu\Rightarrow_{k}\nu\) for some \(k\leqslant N\).
Moreover, as each sequence \(\mu=\eta_{0}\xrightarrow{(\tau)}\eta_{1}\xrightarrow{(\tau)}\ldots \xrightarrow{(\tau)}\eta_{k}=\nu\) with \(k<N\) can be extended to a sequence \(\mu=\eta_{0}\xrightarrow{(\tau)}\eta_{1}\xrightarrow{(\tau)}\ldots \xrightarrow{(\tau)}\eta_{N}=\nu\), namely by taking \(\eta_{i}=\nu\) for all \(k<i\leqslant N\), on \(\mathcal{F}\) the relations \(\Rightarrow\) and \(\Rightarrow_{N}\) coincide. Consequently, Lemma 16 follows from Lemma 15.
The following theorem says that equivalence classes of branching probabilistic bisimilarity in \(\mathit{Distr}(\mathcal{F})\) are closed sets of distributions.
**Theorem 17**.: _Let \(\hat{\mu},\hat{\nu}\in\mathit{Distr}(\mathcal{F})\) and \((\nu_{i})_{i=0}^{\infty}\in\mathit{Distr}(\mathcal{F})^{\infty}\) such that \(\hat{\mu}\xleftrightarrow{}_{b}\nu_{i}\) for all \(i\in\mathbb{N}\) and \(\hat{\nu}=\lim_{i\to\infty}\nu_{i}\). Then it holds that \(\hat{\mu}\xleftrightarrow{}_{b}\hat{\nu}\)._
Proof.: Define the relation \(\mathcal{R}\) on \(\mathit{Distr}(\mathcal{F})\) by
\[\mu\,\mathcal{R}\,\nu\iff\exists(\mu_{i})_{i=0}^{\infty},(\nu_{i})_{i=0}^{\infty}\in\mathit{Distr}(\mathcal{F})^{\infty}:\ \lim_{i\to\infty}\mu_{i}=\mu\ \wedge\ \lim_{i\to\infty}\nu_{i}=\nu\ \wedge\ \forall i\in\mathbb{N}:\ \mu_{i}\xleftrightarrow{}_{b}\nu_{i}\]
As \(\hat{\mu}\,\mathcal{R}\,\hat{\nu}\) (taking \(\mu_{i}:=\hat{\mu}\) for all \(i\in\mathbb{N}\)), it suffices to show that \(\mathcal{R}\) is a branching probabilistic bisimulation.
Suppose \(\mu\,\mathcal{R}\,\nu\). Let \((\mu_{i})_{i=0}^{\infty},(\nu_{i})_{i=0}^{\infty}\in\mathit{Distr}(\mathcal{F })^{\infty}\) be such that \(\lim_{i\to\infty}\mu_{i}=\mu\), \(\lim_{i\to\infty}\nu_{i}=\nu\), and \(\mu_{i}\xleftrightarrow{}_{b}\nu_{i}\) for all \(i\,{\in}\,\mathbb{N}\). Since \(\lim_{i\to\infty}\mu_{i}=\mu\), there exist \((\mu_{i}^{\prime})_{i=0}^{\infty}\in\mathit{Distr}(\mathcal{F})^{\infty}\) and \((r_{i})_{i=0}^{\infty}\in\mathbb{R}^{\infty}\) such that \(\mu_{i}=(1-r_{i})\,\mu\oplus r_{i}\mu_{i}^{\prime}\) for all \(i\,{\in}\,\mathbb{N}\) and \(\lim_{i\to\infty}r_{i}=0\).
(i) Towards weak decomposability of \(\mathcal{R}\) for \(\mu\) vs. \(\nu\), suppose \(\mu=\bigoplus_{j\in J}q_{j}\cdot\bar{\mu}_{j}\). So, for all \(i\in\mathbb{N}\), we have that \(\mu_{i}=(1-r_{i})\left(\bigoplus_{j\in J}q_{j}\cdot\bar{\mu}_{j}\right)\oplus r_{i}\mu_{i}^{\prime}\). By weak decomposability of \(\xleftrightarrow{}_{b}\), there exist \(\bar{\nu}_{i}\), \(\nu_{i}^{\prime}\) and \(\nu_{ij}\) for \(i\in\mathbb{N}\) and \(j\in J\) such that \(\nu_{i}\Rightarrow\bar{\nu}_{i}\), \(\mu_{i}\xleftrightarrow{}_{b}\bar{\nu}_{i}\), \(\bar{\nu}_{i}=(1-r_{i})\big{(}\bigoplus_{j\in J}q_{j}\cdot\nu_{ij}\big{)}\oplus r_{i}\nu_{i}^{\prime}\), \(\mu_{i}^{\prime}\xleftrightarrow{}_{b}\nu_{i}^{\prime}\), and \(\bar{\mu}_{j}\xleftrightarrow{}_{b}\nu_{ij}\) for \(j\in J\).
The sequences \((\nu_{ij})_{i=0}^{\infty}\) for \(j\in J\) may not converge. However, by sequential compactness of \(\mathit{Distr}(\mathcal{F})\) (and successive sifting out for each \(j\in J\)) an index sequence \((i_{k})_{k=0}^{\infty}\) exists such that the sequences \((\nu_{i_{k}j})_{k=0}^{\infty}\) converge, say \(\lim_{k\to\infty}\nu_{i_{k}j}=\bar{\nu}_{j}\) for \(j\in J\). Put \(\bar{\nu}=\bigoplus_{j\in J}q_{j}\cdot\bar{\nu}_{j}\). Then it holds that
\[\lim_{k\to\infty}\bar{\nu}_{i_{k}}=\lim_{k\to\infty}(1-r_{i_{k}})\big{(} \bigoplus_{j\in J}q_{j}\cdot\nu_{i_{k}j}\big{)}\oplus r_{i_{k}}\,\nu_{i_{k}}^{ \prime}=\lim_{k\to\infty}\bigoplus_{j\in J}q_{j}\cdot\nu_{i_{k}j}=\bigoplus_{j \in J}q_{j}\cdot\bar{\nu}_{j}=\bar{\nu}\]
as \(\lim_{k\to\infty}r_{i_{k}}=0\) and probabilistic composition is continuous. Since \(\nu_{i_{k}}\Rightarrow\bar{\nu}_{i_{k}}\) for all \(k\in\mathbb{N}\), one has \(\lim_{k\to\infty}\nu_{i_{k}}\Rightarrow\lim_{k\to\infty}\bar{\nu}_{i_{k}}\), i.e. \(\nu\Rightarrow\bar{\nu}\), by Lemma 16. Also, \(\mu_{i_{k}}\xleftrightarrow{}_{b}\bar{\nu}_{i_{k}}\) for all \(k\in\mathbb{N}\). Therefore, by definition of \(\mathcal{R}\), we obtain \(\mu\,\mathcal{R}\,\bar{\nu}\). Since \(\bar{\mu}_{j}\xleftrightarrow{}_{b}\nu_{i_{k}j}\) for all \(k\in\mathbb{N}\) and \(j\in J\), it follows that \(\bar{\mu}_{j}\,\mathcal{R}\,\bar{\nu}_{j}\) for \(j\in J\). Thus, \(\nu\Rightarrow\bar{\nu}=\bigoplus_{j\in J}q_{j}\cdot\bar{\nu}_{j}\), \(\mu\,\mathcal{R}\,\bar{\nu}\), and \(\bar{\mu}_{j}\,\mathcal{R}\,\bar{\nu}_{j}\) for all \(j\in J\), as was to be shown. Hence the relation \(\mathcal{R}\) is weakly decomposable.
(ii) For the transfer property, suppose \(\mu\xrightarrow{\alpha}\mu^{\prime}\) for some \(\alpha\in\mathcal{A}\). Since, for each \(i\in\mathbb{N}\), \(\mu_{i}\xleftrightarrow{}_{b}\nu_{i}\) and \(\mu_{i}=(1-r_{i})\,\mu\oplus r_{i}\mu_{i}^{\prime}\), it follows from weak decomposability of \(\xleftrightarrow{}_{b}\) that distributions \(\bar{\nu}_{i}\), \(\nu_{i}^{\prime}\) and \(\nu_{i}^{\prime\prime}\) exist such that \(\nu_{i}\Rightarrow\bar{\nu}_{i}\), \(\mu_{i}\xleftrightarrow{}_{b}\bar{\nu}_{i}\), \(\bar{\nu}_{i}=(1-r_{i})\,\nu_{i}^{\prime}\oplus r_{i}\nu_{i}^{\prime\prime}\) and \(\mu\xleftrightarrow{}_{b}\nu_{i}^{\prime}\). By the transfer property of \(\xleftrightarrow{}_{b}\), for each \(i\in\mathbb{N}\) there exist \(\bar{\eta}_{i},\eta_{i}^{\prime}\in\mathit{Distr}(\mathcal{E})\) such that

\[\nu_{i}^{\prime}\Rightarrow\bar{\eta}_{i},\ \bar{\eta}_{i}\xrightarrow{(\alpha)}\eta_{i}^{\prime},\ \mu\xleftrightarrow{}_{b}\bar{\eta}_{i},\ \text{and}\ \mu^{\prime}\xleftrightarrow{}_{b}\eta_{i}^{\prime}.\]
We have \(\nu_{i}^{\prime},\bar{\eta}_{i},\eta_{i}^{\prime}\in\mathit{Distr}(\mathcal{F})\) for \(i\in\mathbb{N}\). By sequential compactness of \(\mathit{Distr}(\mathcal{F})\), the sequences \((\nu_{i}^{\prime})_{i=0}^{\infty}\), \((\bar{\eta}_{i})_{i=0}^{\infty}\), and \((\eta_{i}^{\prime})_{i=0}^{\infty}\) have converging subsequences \((\nu_{i_{k}}^{\prime})_{k=0}^{\infty}\), \((\bar{\eta}_{i_{k}})_{k=0}^{\infty}\), and \((\eta_{i_{k}}^{\prime})_{k=0}^{\infty}\), respectively. Put \(\tilde{\nu}=\lim_{k\to\infty}\nu_{i_{k}}^{\prime}\), \(\bar{\eta}=\lim_{k\to\infty}\bar{\eta}_{i_{k}}\), and \(\eta^{\prime}=\lim_{k\to\infty}\eta_{i_{k}}^{\prime}\). As \(\lim_{k\to\infty}r_{i_{k}}=0\), one has \(\lim_{k\to\infty}\bar{\nu}_{i_{k}}=\lim_{k\to\infty}\nu_{i_{k}}^{\prime}=\tilde{\nu}\).

Since \(\nu_{i_{k}}\Rightarrow\bar{\nu}_{i_{k}}\) for \(k\in\mathbb{N}\), we obtain \(\lim_{k\to\infty}\nu_{i_{k}}\Rightarrow\lim_{k\to\infty}\bar{\nu}_{i_{k}}\) by Lemma 16, thus \(\nu\Rightarrow\tilde{\nu}\). Likewise, as \(\nu_{i_{k}}^{\prime}\Rightarrow\bar{\eta}_{i_{k}}\) for all \(k\in\mathbb{N}\), one has \(\tilde{\nu}\Rightarrow\bar{\eta}\), and therefore \(\nu\Rightarrow\bar{\eta}\). Furthermore, because \(\bar{\eta}_{i_{k}}\xrightarrow{(\alpha)}\eta_{i_{k}}^{\prime}\) for \(k\in\mathbb{N}\), it follows that \(\bar{\eta}\xrightarrow{(\alpha)}\eta^{\prime}\), now by Lemma 14. From \(\mu\xleftrightarrow{}_{b}\bar{\eta}_{i_{k}}\) for all \(k\in\mathbb{N}\), we obtain \(\mu\,\mathcal{R}\,\bar{\eta}\) by definition of \(\mathcal{R}\). Finally, \(\mu^{\prime}\xleftrightarrow{}_{b}\eta_{i_{k}}^{\prime}\) for all \(k\in\mathbb{N}\) yields \(\mu^{\prime}\,\mathcal{R}\,\eta^{\prime}\). Thus \(\nu\Rightarrow\bar{\eta}\xrightarrow{(\alpha)}\eta^{\prime}\), \(\mu\,\mathcal{R}\,\bar{\eta}\), and \(\mu^{\prime}\,\mathcal{R}\,\eta^{\prime}\), which was to be shown.
The following corollary of Theorem 17 will be used in the next section.
**Corollary 18**.: _For each \(\mu\in\mathit{Distr}(\mathcal{E})\), the set \(T_{\mu}=\{\,\nu\!\in\!\mathit{Distr}(\mathcal{E})\mid\nu\!\leftrightarroweq_{b }\mu\wedge\mu\Rightarrow\nu\,\}\) is a sequentially compact set._
Proof.: For \(\mu=\bigoplus_{i\in I}p_{i}\!\cdot\!E_{i}\), the set of processes \(\mathcal{F}=\{\,E\in\mathcal{E}\mid E\text{ occurs in }E_{i}\text{ for some }i\in I\,\}\) is finite and closed under transitions. Clearly, \(\mu\in\mathit{Distr}(\mathcal{F})\). Moreover, \(\mathit{Distr}(\mathcal{F})\) is a sequentially compact subset of \(\mathit{Distr}(\mathcal{E})\). Taking \(\mu_{i}=\mu\) for all \(i\in\mathbb{N}\) in Lemma 16 yields that \(\{\,\nu\mid\mu\Rightarrow\nu\,\}\) is a closed subset of \(\mathit{Distr}(\mathcal{F})\). Similarly, the set \(\{\,\nu\mid\nu\!\leftrightarroweq_{b}\mu\,\}\) is a closed subset of \(\mathit{Distr}(\mathcal{F})\) by Theorem 17. The statement then follows since the intersection of two closed subsets of \(\mathit{Distr}(\mathcal{F})\) is itself closed, and hence sequentially compact.
## 6 Cancellativity for branching probabilistic bisimilarity
With the results of Section 5 in place, we turn to stable processes and cancellativity. In the introduction we argued that, in general, two branching probabilistic bisimilar distributions need not assign the same weight to equivalence classes. Here we show that this property does hold when restricting to stable distributions. We then prove the announced unfolding result: for every distribution \(\mu\) there exists a stable distribution \(\sigma\) such that \(\mu\Rightarrow\sigma\) and \(\mu\xleftrightarrow{}_{b}\sigma\). That result is pivotal in the proof of the cancellation theorem, Theorem 22.
**Definition 19**.: _A distribution \(\mu\in\mathit{Distr}(\mathcal{E})\) is called stable if, for all \(\bar{\mu}\in\mathit{Distr}(\mathcal{E})\), \(\mu\Rightarrow\bar{\mu}\) and \(\mu\leftrightarroweq_{b}\bar{\mu}\) imply that \(\bar{\mu}=\mu\)._
Thus, a distribution \(\mu\) is called stable if it cannot perform internal activity without leaving its branching bisimulation equivalence class. By definition of \(\xrightarrow{(\tau)}\) it is immediate that if \(\bigoplus_{i\in I}p_{i}\cdot\mu_{i}\) is a stable distribution with \(p_{i}>0\) for \(i\in I\), then also each probabilistic component \(\mu_{i}\) is stable. Also, because two bisimilar stable distributions \(\mu\) and \(\nu\) admit no non-trivial inert weak transitions, weak decomposability between them amounts to decomposability, i.e. if \(\mu\xleftrightarrow{}_{b}\nu\) and \(\mu=\bigoplus_{i\in I}p_{i}\mu_{i}\) then distributions \(\nu_{i}\) for \(i\in I\) exist such that \(\nu=\bigoplus_{i\in I}p_{i}\nu_{i}\) and \(\mu_{i}\xleftrightarrow{}_{b}\nu_{i}\) for \(i\in I\).
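As a minimal illustration, relying on the inertness of an initial \(\tau\) under branching probabilistic bisimilarity, consider

\[\delta(\tau\cdot\partial(a\cdot\partial(\mathbf{0})))\Rightarrow\delta(a\cdot\partial(\mathbf{0}))\quad\text{with}\quad\delta(\tau\cdot\partial(a\cdot\partial(\mathbf{0})))\xleftrightarrow{}_{b}\delta(a\cdot\partial(\mathbf{0})).\]

Hence \(\delta(\tau\cdot\partial(a\cdot\partial(\mathbf{0})))\) is not stable, whereas \(\delta(a\cdot\partial(\mathbf{0}))\), having no \(\tau\)-transitions at all, is stable.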
The next result states that, contrary to distributions in general, two stable distributions are branching bisimilar precisely when they assign the same probability to all branching bisimilarity classes of \(\mathcal{E}\).
**Lemma 20**.: _Let \(\mu,\nu\in\mathit{Distr}(\mathcal{E})\) be two stable distributions. Then it holds that \(\mu\leftrightarroweq_{b}\nu\) iff \(\mu[C]=\nu[C]\) for each equivalence class \(C\) of branching probabilistic bisimilarity in \(\mathcal{E}\)._
Proof.: Suppose \(\mu=\bigoplus_{i\in I}p_{i}\cdot E_{i}\), \(\nu=\bigoplus_{j\in J}q_{j}\cdot F_{j}\), and \(\mu\xleftrightarrow{}_{b}\nu\). By weak decomposability, \(\nu\Rightarrow\tilde{\nu}=\bigoplus_{i\in I}p_{i}\cdot\nu_{i}\) for suitable \(\nu_{i}\in\mathit{Distr}(\mathcal{E})\) for \(i\in I\) with \(\nu_{i}\xleftrightarrow{}_{b}\delta(E_{i})\) and \(\tilde{\nu}\xleftrightarrow{}_{b}\mu\). Hence, \(\tilde{\nu}\xleftrightarrow{}_{b}\mu\xleftrightarrow{}_{b}\nu\). Thus, by stability of \(\nu\), we have \(\tilde{\nu}=\nu\). Say, \(\nu_{i}=\bigoplus_{j\in J}q_{ij}\cdot F_{j}\) with \(q_{ij}\geqslant 0\), for \(i\in I\), \(j\in J\). Since \(\nu_{i}\xleftrightarrow{}_{b}\delta(E_{i})\), we have by weak decomposability, \(\delta(E_{i})\Rightarrow\bigoplus_{j\in J}q_{ij}\cdot\mu^{\prime}_{ij}\) such that \(\delta(E_{i})\xleftrightarrow{}_{b}\bigoplus_{j\in J}q_{ij}\cdot\mu^{\prime}_{ij}\)
and \(\mu^{\prime}_{ij}\xleftrightarrow_{b}\delta(F_{j})\) for suitable \(\mu^{\prime}_{ij}\in\mathit{Distr}(\mathcal{E})\). Since \(\mu\) is stable, so is \(\delta(E_{i})\). Hence \(\delta(E_{i})=\bigoplus_{j\in J}q_{ij}\cdot\mu^{\prime}_{ij}\), \(\mu^{\prime}_{ij}=\delta(E_{i})\), and \(E_{i}\xleftrightarrow_{b}F_{j}\) if \(q_{ij}>0\). Put \(p_{ij}=p_{i}q_{ij}\), \(E_{ij}=E_{i}\) if \(q_{ij}>0\), and \(E_{ij}=\mathbf{0}\) otherwise, \(F_{ij}=F_{j}\) if \(q_{ij}>0\), and \(F_{ij}=\mathbf{0}\) otherwise, for \(i\in I\), \(j\in J\). Then it holds that
\[\mu =\bigoplus_{i\in I}p_{i}\cdot E_{i}=\bigoplus_{i\in I}p_{i} \cdot\big{(}\bigoplus_{j\in J}q_{ij}\cdot E_{i}\big{)}=\bigoplus_{i\in I} \bigoplus_{j\in J}p_{i}q_{ij}\cdot E_{i}=\bigoplus_{i\in I}\bigoplus_{j\in J}p _{ij}\cdot E_{ij}\] \[\nu =\bigoplus_{i\in I}p_{i}\cdot\nu_{i}\ =\bigoplus_{i\in I}p_{i} \cdot\big{(}\bigoplus_{j\in J}q_{ij}\cdot F_{j}\big{)}=\bigoplus_{i\in I} \bigoplus_{j\in J}p_{i}q_{ij}\cdot F_{j}=\bigoplus_{i\in I}\bigoplus_{j\in J }p_{ij}\cdot F_{ij}.\]
Now, for any equivalence class \(C\) of \(\mathcal{E}\) modulo \(\xleftrightarrow_{b}\), it holds that \(E_{ij}\in C\Leftrightarrow F_{ij}\in C\) for all indices \(i\in I\), \(j\in J\). So, \(\mu[C]=\sum_{i\in I,j\in J\colon E_{ij}\in C}p_{ij}=\sum_{i\in I,j\in J\colon F _{ij}\in C}p_{ij}=\nu[C]\).
For the reverse direction, suppose \(\mu=\bigoplus_{i\in I}p_{i}\cdot E_{i}\), \(\nu=\bigoplus_{j\in J}q_{j}\cdot F_{j}\), with \(p_{i},q_{j}>0\), and \(\mu[C]=\nu[C]\) for each equivalence class \(C\in\mathcal{E}/\!\!\xleftrightarrow{}_{b}\).
For \(i\in I\) and \(j\in J\), let \(C_{i}\) and \(D_{j}\) be the equivalence class in \(\mathcal{E}\) of \(E_{i}\) and \(F_{j}\) modulo \(\xleftrightarrow_{b}\). Define \(r_{ij}=\delta_{ij}p_{i}q_{j}/\mu[C_{i}]\), for \(i\in I\), \(j\in J\), where \(\delta_{ij}=1\) if \(E_{i}\xleftrightarrow_{b}F_{j}\) and \(\delta_{ij}=0\) otherwise. Then it holds that
\[\sum_{j\in J}r_{ij}=\sum_{j\in J}\frac{\delta_{ij}p_{i}q_{j}}{\mu[C_{i}]}=\frac {p_{i}}{\mu[C_{i}]}\sum_{j\in J}\delta_{ij}q_{j}=\frac{p_{i}\nu[C_{i}]}{\mu[C_ {i}]}=p_{i}.\]
Since \(\delta_{ij}p_{i}q_{j}/\mu[C_{i}]=\delta_{ij}p_{i}q_{j}/\nu[D_{j}]\) for \(i\in I\), \(j\in J\), we also have \(\sum_{i\in I}r_{ij}=q_{j}\). Therefore, we can write \(\mu=\bigoplus_{i\in I}\bigoplus_{j\in J}r_{ij}\cdot E_{ij}\) and \(\nu=\bigoplus_{i\in I}\bigoplus_{j\in J}r_{ij}\cdot F_{ij}\) for suitable \(E_{ij}\) and \(F_{ij}\) such that \(E_{ij}\xleftrightarrow{}_{b}F_{ij}\). Invoking Lemma 10, it follows that \(\mu\xleftrightarrow{}_{b}\nu\).
Next, in Lemma 21, we prove a property that is crucial for our proof of cancellativity, Theorem 22 below. In general, a distribution may admit inert partial transitions. It can, however, be unfolded via such inert partial transitions into a stable distribution, which by definition admits none. To obtain the result we will rely on the topological property of sequential compactness of the set \(T_{\mu}=\{\,\mu^{\prime}\mid\mu^{\prime}\xleftrightarrow{}_{b}\mu\wedge\mu\Rightarrow\mu^{\prime}\,\}\) introduced in the previous section.
**Lemma 21**.: _For all \(\mu\in\mathit{Distr}(\mathcal{E})\) there is a stable distribution \(\sigma\in\mathit{Distr}(\mathcal{E})\) such that \(\mu\Rightarrow\sigma\) and \(\mu\xleftrightarrow{}_{b}\sigma\)._
Proof.: Define the _weight_ of a distribution by \(wgt(\mu)=\sum_{E\in\mathcal{E}}\mu(E)\cdot c(E)\), i.e., the weighted average of the complexities of the states in its support. In view of this definition, \(E\xrightarrow{\alpha}\mu\) implies \(wgt(\mu)<wgt(\delta(E))\) and \(\mu\xrightarrow{\alpha}\mu^{\prime}\) implies \(wgt(\mu^{\prime})<wgt(\mu)\). In addition, \(\mu\Rightarrow\mu^{\prime}\) implies \(wgt(\mu^{\prime})\leqslant wgt(\mu)\).
For a distribution \(\mu\in\mathit{Distr}(\mathcal{E})\), the set \(T_{\mu}\) is given by \(T_{\mu}=\{\,\mu^{\prime}\mid\mu^{\prime}\xleftrightarrow{}_{b}\mu\wedge\mu\Rightarrow\mu^{\prime}\,\}\). Consider the value \(\inf\{\,wgt(\mu^{\prime})\mid\mu^{\prime}\in T_{\mu}\,\}\). By Corollary 18, \(T_{\mu}\) is a sequentially compact set. Since \(wgt\) is continuous and an infimum over a sequentially compact set is attained, there exists a distribution \(\sigma\) such that \(\mu\Rightarrow\sigma\), \(\sigma\xleftrightarrow{}_{b}\mu\), and \(wgt(\sigma)=\inf\{\,wgt(\mu^{\prime})\mid\mu^{\prime}\in T_{\mu}\,\}\). By definition of \(T_{\mu}\) and the minimality of its weight, the distribution \(\sigma\) must be stable.
We have arrived at the main result of the paper, formulated slightly more generally than in the introduction. The message remains the same: if two distributions are branching probabilistic bisimilar and have components that are branching probabilistic bisimilar, then the components that remain after cancelling the former are also branching probabilistic bisimilar. The previous lemma is essential in the proof given below.
**Theorem 22** (Cancellativity).: _Let \(\mu,\mu^{\prime},\nu,\nu^{\prime}\in\mathit{Distr}(\mathcal{E})\) and \(0<r\leqslant 1\) be such that \(\mu\,{}_{r}\oplus\,\nu\xleftrightarrow{}_{b}\mu^{\prime}\,{}_{r}\oplus\,\nu^{\prime}\) and \(\nu\xleftrightarrow{}_{b}\nu^{\prime}\). Then it holds that \(\mu\xleftrightarrow{}_{b}\mu^{\prime}\)._
Proof.: Choose \(\mu\), \(\mu^{\prime}\), \(\nu\), \(\nu^{\prime}\), and \(r\) according to the premise of the theorem. By Lemma 21, a stable distribution \(\sigma\) exists such that \(\mu\,{}_{r}\oplus\,\nu\Rightarrow\sigma\) and \(\sigma\xleftrightarrow{}_{b}\mu\,{}_{r}\oplus\,\nu\). By weak decomposability, we can find distributions \(\bar{\mu}\) and \(\bar{\nu}\) such that \(\sigma\Rightarrow\bar{\mu}\,{}_{r}\oplus\,\bar{\nu}\), \(\bar{\mu}\xleftrightarrow{}_{b}\mu\), and \(\bar{\nu}\xleftrightarrow{}_{b}\nu\). By stability of \(\sigma\) we have \(\sigma=\bar{\mu}\,{}_{r}\oplus\,\bar{\nu}\).
Thus \(\bar{\mu}\,{}_{r}\oplus\,\bar{\nu}\) is stable. Symmetrically, there are distributions \(\bar{\mu}^{\prime}\) and \(\bar{\nu}^{\prime}\) such that \(\bar{\mu}^{\prime}\,{}_{r}\oplus\,\bar{\nu}^{\prime}\) is stable, \(\bar{\mu}^{\prime}\xleftrightarrow{}_{b}\mu^{\prime}\), and \(\bar{\nu}^{\prime}\xleftrightarrow{}_{b}\nu^{\prime}\).
2309.08035 | Interpretability-Aware Vision Transformer | Vision Transformers (ViTs) have become prominent models for solving various
vision tasks. However, the interpretability of ViTs has not kept pace with
their promising performance. While there has been a surge of interest in
developing {\it post hoc} solutions to explain ViTs' outputs, these methods do
not generalize to different downstream tasks and various transformer
architectures. Furthermore, if ViTs are not properly trained with the given
data and do not prioritize the region of interest, the {\it post hoc} methods
would be less effective. Instead of developing another {\it post hoc} approach,
we introduce a novel training procedure that inherently enhances model
interpretability. Our interpretability-aware ViT (IA-ViT) draws inspiration
from a fresh insight: both the class patch and image patches consistently
generate predicted distributions and attention maps. IA-ViT is composed of a
feature extractor, a predictor, and an interpreter, which are trained jointly
with an interpretability-aware training objective. Consequently, the
interpreter simulates the behavior of the predictor and provides a faithful
explanation through its single-head self-attention mechanism. Our comprehensive
experimental results demonstrate the effectiveness of IA-ViT in several image
classification tasks, with both qualitative and quantitative evaluations of
model performance and interpretability. Source code is available from:
https://github.com/qiangyao1988/IA-ViT. | Yao Qiang, Chengyin Li, Prashant Khanduri, Dongxiao Zhu | 2023-09-14T21:50:49Z | http://arxiv.org/abs/2309.08035v1 | # Interpretability-Aware Vision Transformer
###### Abstract
Vision Transformers (ViTs) have become prominent models for solving various vision tasks. However, the interpretability of ViTs has not kept pace with their promising performance. While there has been a surge of interest in developing _post hoc_ solutions to explain ViTs' outputs, these methods do not generalize to different downstream tasks and various transformer architectures. Furthermore, if ViTs are not properly trained with the given data and do not prioritize the region of interest, the _post hoc_ methods would be less effective. Instead of developing another _post hoc_ approach, we introduce a novel training procedure that inherently enhances model interpretability. Our interpretability-aware ViT (IA-ViT) draws inspiration from a fresh insight: both the class patch and image patches consistently generate predicted distributions and attention maps. IA-ViT is composed of a feature extractor, a predictor, and an interpreter, which are trained jointly with an interpretability-aware training objective. Consequently, the interpreter simulates the behavior of the predictor and provides a faithful explanation through its single-head self-attention mechanism. Our comprehensive experimental results demonstrate the effectiveness of IA-ViT in several image classification tasks, with both qualitative and quantitative evaluations of model performance and interpretability.
## Introduction
The Transformer architecture [23], originally designed for natural language processing (NLP) tasks [4], has recently found application in computer vision (CV) tasks with the emergence of Vision Transformer (ViT) [13]. ViT utilizes the multi-head self-attention (MSA) mechanism as its foundation, enabling it to proficiently capture long-range dependencies among pixels or patches within images. As a result, ViTs have demonstrated superior performance over state-of-the-art convolutional neural networks (CNNs) in numerous CV tasks, including but not limited to image classification [12, 13, 14, 15], object detection [16, 17, 18], action recognition [19, 20], and medical imaging segmentation [10, 21].
Since ViTs are extensively employed in high-stakes decision-making fields like healthcare [20, 12] and autonomous driving [16], there exists a significant demand for gaining insights into their decision-making process. Nonetheless, ViTs continue to function as black-box models, lacking transparency and explanations for both their training process and predictions. To tackle this challenge, explainable AI (XAI) has arisen as a specialized field within AI, with the goal of ensuring that end users can intuitively understand and trust the models' outputs by providing explanations for their behavior [15, 16].
XAI is a rapidly growing field encompassing numerous research directions. One strand focuses on _post hoc_ explanation techniques, which aim to obtain explanations by approximating a pre-trained model and its predictions [21, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Although there has been an increasing interest in developing _post hoc_ solutions for Transformers, most of them either rely on the attention weights within the MSA mechanism [16, 17] or utilize back-propagation gradients to generate explanations [21, 18, 19, 20, 22, 23, 24]. It is important to highlight that these approaches have limitations in terms of their ability to elucidate the decision-making processes of trained models and can be impacted by different input schemes [20, 21, 22, 23]. Conversely, a different strand of research focuses on modifying neural architectures [21, 16] and/or incorporating explanations into the learning process [20, 21, 22, 23] for better interpretability. Building explainable ViT models during training remains largely uncharted waters. Recent studies tend to modify the ViT architecture and rely on external knowledge to provide faithful explanations [21, 16, 17].
Among the efforts to enhance interpretability during the training process, we propose our novel interpretability-aware ViT (IA-ViT). Our inspiration comes from the observation that, in ViT models, the downstream classification tasks only utilize the embedding of the class (CLS) patch. In contrast, the feature embeddings of the image patches, which are
learned using multi-layer MSA blocks, are underutilized and often neglected. Nevertheless, we have discovered that these neglected patch embeddings also contain discriminative features crucial for classification. Consequently, both the CLS patch and the image patches generate uniform attention maps and predictive distributions, as illustrated in Fig 1. In addition to qualitative confirmation, we also quantitatively substantiate this discovery in Table 1. Therefore, we suggest leveraging the valuable attributes of these image patches to generate explanations while utilizing the CLS patch embedding for prediction.
Specifically, we consider that interpretation and prediction are two distinct but closely related tasks that can be simultaneously optimized during training. Therefore, aside from ViT's inherent predictor, we introduce an interpreter into the model architecture as the interpretability-aware component. This interpreter comprises a single-head self-attention (SSA) mechanism and a linear head. SSA is employed to generate explanations through its attention weights, while the linear head maps the embeddings of image patches into the class dimension aiming to simulate the behavior of the predictor. In this way, IA-ViT preserves its high expressive capability with an interpretability-aware training objective while providing stable and faithful high-quality explanations.
We summarize our major contributions as follows: (1) We propose a novel IA-ViT architecture, which leverages the feature embeddings from the image patches beside the CLS patch to provide consistent, faithful, and high-quality explanations while maintaining high predictive performance. (2) Our novel developed interpretability-aware training objective has been demonstrated effective in enhancing the interpretability of IA-ViT. (3) We conduct a comprehensive comparison of our approach with several state-of-the-art methods, validating the quality and consistency of explanations generated by IA-ViT.
## Related Works
### Explainable AI
Depending on the method of explanation generation, general _post hoc_ techniques in XAI can be broadly categorized into three groups: perturbation, approximation, and back-propagation. Perturbation methods, such as RISE [21], Extremal Perturbations [17], and SHAP [16], attempt to generate explanations by purposely perturbing the input images. However, these methods are often time-consuming and inefficient in practical applications. Approximation methods employ an external agent as the explainer for black-box models, such as LIME [22] and FLINT [23]. Nonetheless, these approaches might not accurately capture the true predictive mechanism of the models. Back-propagation techniques apply the back-propagation scheme to generate gradients [25, 26, 27, 28] or gradient-related [26, 27, 28, 29, 30, 31, 32] explanations. However, these methods may not faithfully reveal the decision-making process of trained models and might be less reliable and robust [1, 30, 31].
Different from _post hoc_ methods, alternative methods suggest making alterations to either the architectures [21, 22, 23, 33, 34], the loss functions [24, 25, 26], or both [27, 28, 29, 30]. Nevertheless, certain methods depend on factors like the presence of ground truth explanations [10], the accessibility of annotations concerning incorrect explanations for specific inputs [26], or external knowledge sources [27]. Moreover, some interpretability constraints can potentially restrict the model's expressive capabilities, which may lead to a trade-off with prediction performance.
| \(L_{2}\) | Top1 CR | Kendall's \(\tau\) Top2 | Top3 | Top4 | Top5 |
| --- | --- | --- | --- | --- | --- |
| 0.0005655 | 0.95 | 0.79 | 0.64 | 0.52 | 0.42 |

Table 1: Illustration of uniform attention maps and predictive distributions for 1000 samples drawn from the ImageNet dataset. \(L_{2}\) denotes the average \(L_{2}\) distance between the attention maps generated from the CLS and image patches. Top1 Consistency Rate (CR) is the rate of agreement between the Top1 predictions produced using the CLS and image patches. The Top2 to Top5 columns report Kendall's \(\tau\), the Kendall rank correlation coefficient between these two sets of predictions.
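Consistency statistics of this kind can be computed along the following lines. This is a sketch rather than the authors' implementation: the function name, the array shapes, and the choice to rank Kendall's \(\tau\) over the CLS prediction's top-\(k\) classes are our assumptions.

```python
# Hedged sketch: Table-1-style agreement metrics between CLS-based and
# patch-based predictions (shapes and ranking protocol are assumptions).
import numpy as np
from scipy.stats import kendalltau

def consistency_metrics(cls_logits, patch_logits, cls_attn, patch_attn, top_k=5):
    """cls_logits, patch_logits: (n, C) arrays; cls_attn, patch_attn: (n, N)."""
    # Average L2 distance between the two attention maps.
    l2 = np.linalg.norm(cls_attn - patch_attn, axis=1).mean()
    # Top-1 consistency rate between the two sets of predictions.
    cr = (cls_logits.argmax(axis=1) == patch_logits.argmax(axis=1)).mean()
    # Kendall's tau over the top-k classes ranked by the CLS prediction.
    taus = []
    for a, b in zip(cls_logits, patch_logits):
        top = np.argsort(a)[::-1][:top_k]
        tau, _ = kendalltau(a[top], b[top])
        taus.append(tau)
    return l2, cr, float(np.mean(taus))
```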
Figure 1: Illustration of uniform attention maps and predictive distributions produced by both the CLS patch and other image patches.
### Explanation Methods for ViTs
Motivated by the impressive success of the Transformer architecture in NLP tasks [21], researchers have made efforts to extend the use of Transformer-based models to CV tasks [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. Meanwhile, researchers have been actively exploring ways to enhance their interpretability. One popular approach involves analyzing the attention weights of MSA in ViTs [21, 22]; however, simply using raw attention weights may not provide reliable explanations [20, 22]. Other approaches have been proposed to reason about the decision-making process of ViTs, such as using gradients [13, 2, 23, 24], attributions [13, 22, 23], and redundancy reduction [22]. However, these _post hoc_ methods still have the same limitations as discussed before.
Recently, some approaches have emerged to modify the ViT architecture to improve interpretability. The Concept-Transformer [20], for instance, exposes explanations of a ViT model's output in terms of attention over user-defined high-level concepts. However, the usability of these methods largely depends on the presence of these human-annotated concepts. [16] proposed ViT-NeT, which interprets the decision-making process through a tree structure and prototypes with visual explanations. However, this method is not broadly applicable to various Transformer architectures and requires additional tree structures and external knowledge.
Different from existing works, we propose IA-ViT to directly improve its interpretability during the training process with the interpretability-aware objective. Moreover, our approach does not require external knowledge, such as predefined human-labeled concepts like Concept-Transformer [20] and additional complex architectures like ViT-NeT [16].
## Preliminary
### Overview of Vision Transformer
ViTs [14] for image classification take a sequence of sliced patches from an image as input and model their long-range dependencies with Linear Patch Embedding and Positional Encoding, stacked MSA blocks, and Feed-Forward Networks (FFN). Formally, an input image is first split into a sequence of fixed-size 2D patches \(\mathbf{X}=[x_{1},x_{2},...,x_{N}]\), where \(N\) is the number of patches (e.g. \(N=14\times 14\)). These raw patches are then mapped into \(d\)-dimensional patch embeddings \(\mathbf{Z}=[\mathbf{z}_{1},\mathbf{z}_{2},...,\mathbf{z}_{N}]\) with a linear layer. Positional embeddings \(\mathbf{E}_{\mathrm{pos}}\) are also optionally added to the patch embeddings to augment them with positional information. A learnable embedding \(\mathbf{z}_{\mathrm{cls}}\), termed the CLS patch, is appended to the sequence of patch embeddings and serves as the representation of the image. To summarize, the input to the first block is formulated as:
\[\mathbf{Z}=[\mathbf{z}_{\mathrm{cls}};\mathbf{z}_{1};\mathbf{z}_{2};\cdots; \mathbf{z}_{N}]+\mathbf{E}_{\mathrm{pos}}, \tag{1}\]
where \(\mathbf{z}\in\mathbb{R}^{d}\) and \(\mathbf{E}_{\mathrm{pos}}\in\mathbb{R}^{(N+1)\times d}\).
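A minimal PyTorch sketch of this embedding step (Eq. 1) may help fix ideas; the \(16\times 16\) patch size and \(d=768\) are illustrative choices, and using a strided convolution as the linear patch map is a common implementation idiom rather than something mandated by the text.

```python
# Sketch of Eq. (1): linear patch embedding, CLS token, positional embedding.
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, d=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2      # N = 14 * 14
        # A strided convolution implements the linear map on flattened patches.
        self.proj = nn.Conv2d(in_chans, d, kernel_size=patch_size, stride=patch_size)
        self.cls = nn.Parameter(torch.zeros(1, 1, d))          # learnable z_cls
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches + 1, d))  # E_pos

    def forward(self, x):                                      # x: (B, 3, H, W)
        z = self.proj(x).flatten(2).transpose(1, 2)            # (B, N, d)
        z = torch.cat([self.cls.expand(z.size(0), -1, -1), z], dim=1)
        return z + self.pos                                    # Eq. (1)
```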
Similar to Transformers [21], the backbone network of ViTs comprises \(L\) blocks, where each block consists of an MSA and an FFN. Particularly, a single-head self-attention (SSA) is computed as:
\[\mathbf{A}=\mathrm{Softmax}\bigg{(}\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{k} }}\bigg{)}\;\;\mathrm{and}\;\;\mathbf{S}=\mathbf{A}\mathbf{V}, \tag{2}\]
where \(\mathbf{Q},\mathbf{K},\mathbf{V}\) are query, key, and value matrices, which are projected from the same patch embeddings \(\mathbf{Z}\) respectively, and \(d_{k}\) is a scaling factor. For more effective attention on different representation sub-spaces, MSA concatenates the output from several SSAs as:
\[\mathbf{S}_{i,l}=\mathrm{SSA}(\mathbf{Z}_{l}W_{i,l}^{Q},\mathbf{Z}_{l}W_{i,l} ^{K},\mathbf{Z}_{l}W_{i,l}^{V}), \tag{3}\]
where \(W_{i,l}^{Q},W_{i,l}^{K},W_{i,l}^{V}\) are the parameter matrices of the \(i\)-th attention head in the \(l\)-th block and \(\mathbf{Z}_{l}\) denotes the input to the \(l\)-th block. MSA then concatenates the outputs of all \(H\) heads and projects them with another parameter matrix \(W_{l}^{O}\) as:
\[\mathrm{MSA}(\mathbf{Z}_{l})=[\mathbf{S}_{1,l};\cdots;\mathbf{S}_{H,l}]W_{l} ^{O}, \tag{4}\]
The output from MSA is then fed into FFN, a two-layer MLP, that produces the output of the block \(\mathbf{Z}_{l+1}\). Residual connections are also applied on both MSA and FFN as:
\[\mathbf{Z}_{l}^{\prime}=\mathrm{MSA}(\mathbf{Z}_{l})+\mathbf{Z}_{l}, \tag{5}\]
\[\mathbf{Z}_{l+1}=\mathrm{FFN}(\mathbf{Z}_{l}^{\prime})+\mathbf{Z}_{l}^{\prime}. \tag{6}\]
The final prediction is produced by a linear layer taking the CLS patch embedding from the last block (\(\mathbf{Z}_{L}^{0}\)) as input.
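The block structure of Eqs. (2)-(6) can be sketched as follows; real ViT blocks also apply layer normalization, which the equations above (and hence this sketch) omit for brevity.

```python
# Sketch of one ViT block: MSA (Eqs. 2-4) and FFN with residuals (Eqs. 5-6).
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, d=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.msa = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, mlp_ratio * d), nn.GELU(),
                                 nn.Linear(mlp_ratio * d, d))

    def forward(self, z):                      # z: (B, N+1, d)
        s, attn = self.msa(z, z, z)            # attn: head-averaged (B, N+1, N+1)
        z = s + z                              # Eq. (5)
        return self.ffn(z) + z, attn           # Eq. (6)
```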
### Problem Formulation
In the context of a dataset \(\{(x_{i},y_{i})\}_{i=1}^{n}\), conventional _post hoc_ explanation methods typically involve an _explainer_ module \(g\). This module takes the pre-trained model \(f\) and an input \(x\) to produce an explanation \(e\) for the output \(y\), formally: \(g:f\times x\to e\). The space of potential explanations \(e\) is usually determined by the specific explanation method in use. For example, a method employing saliency maps may define \(e\) as normalized distributions indicating the importance of individual input elements, such as tokens and pixels.
In our work, we propose a general problem named _Learning with Interpretation_, which advocates that the interpretation task should be integrated into the training process of the model itself, as opposed to treating them as separate _post hoc_ procedures. The core idea is to design a dedicated module, referred to as an interpreter, as an integral part of the model. This interpreter module relies on the predictor and is trained concurrently with it to furnish interpretability for the trained model. Essentially, this approach augments the model's training process, encompassing not only the prediction objective but also an additional interpretability-aware objective.
Concretely, we design IA-ViT and a novel interpretability-aware training scheme to address the _Learning with Interpretation_ problem, as shown in Fig 2. Our training framework
for IA-ViT consists of three key objectives for the minimization of dedicated losses and regularization terms for the three functional modules: (1) A primary objective focusing on target prediction, aiming to minimize the Cross-Entropy loss \(\mathcal{L}_{\mathrm{pred}}\); (2) An additional objective centered on simulation, which encourages the interpreter to emulate the behavior of the predictor, quantified as \(\mathcal{L}_{\mathrm{kd}}\) using knowledge distillation; (3) An attention regularizer \(\mathcal{L}_{\mathrm{match}}\) that aligns the attention weights from the MSA blocks with those of the interpretable SSA block.
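One plausible instantiation of this combined objective is sketched below. The text above does not fix the functional forms of \(\mathcal{L}_{\mathrm{kd}}\) and \(\mathcal{L}_{\mathrm{match}}\); the temperature-scaled KL divergence, the mean-squared attention matcher, and the weights \(\lambda_{\mathrm{kd}}\) and \(\lambda_{\mathrm{match}}\) are our assumptions.

```python
# Hedged sketch of the interpretability-aware objective
# L = L_pred + lambda_kd * L_kd + lambda_match * L_match.
import torch.nn.functional as F

def ia_vit_loss(y, pred_logits, int_logits, msa_attn, ssa_attn,
                lambda_kd=1.0, lambda_match=1.0, tau=2.0):
    l_pred = F.cross_entropy(pred_logits, y)                   # (1) prediction
    l_kd = F.kl_div(F.log_softmax(int_logits / tau, dim=-1),   # (2) interpreter
                    F.softmax(pred_logits / tau, dim=-1),      #     simulates
                    reduction="batchmean") * tau ** 2          #     predictor
    l_match = F.mse_loss(ssa_attn, msa_attn.detach())          # (3) align attention
    return l_pred + lambda_kd * l_kd + lambda_match * l_match
```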
## Our Approach - IA-ViT
### IA-ViT Architecture
The proposed IA-ViT framework consists of three components: feature extractor \(h\), predictor \(f\), and interpreter \(g\), as shown in Figure 2. The feature extractor, comprising a stack of \(L\) MSA blocks, takes the input image \(x\) and encodes it into \(\mathbf{z}\in\mathbb{R}^{(N+1)\times d}\): \(\mathbf{z}=h(x)\), where \(N\) represents the number of image patches and \(d\) is the embedding dimension. Subsequently, the predictor \(f\) utilizes the feature embedding of the class token \(\mathbf{z}^{0}\) from \(\mathbf{z}\) to make predictions via a linear head: \(\hat{y}_{\mathrm{pred}}=f(\mathbf{z}^{0})\). Meanwhile, the interpreter \(g\) takes the remaining feature embeddings as inputs, processing them through an SSA block followed by a linear head to generate the prediction \(\hat{y}_{\mathrm{int}}=g(\mathbf{z}^{1},\cdots,\mathbf{z}^{N})\). Thus, IA-ViT employs the predictor and the interpreter to generate two closely aligned predictions \(\hat{y}_{\mathrm{pred}}\) and \(\hat{y}_{\mathrm{int}}\), while sharing the feature extractor \(h\).
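The component wiring can be sketched as follows. How the interpreter pools the \(N\) patch embeddings before its linear head is not spelled out above, so the mean pooling here is our assumption, as is reusing `nn.MultiheadAttention` with a single head for the SSA block.

```python
# Structural sketch of IA-ViT: shared extractor h, predictor f on the CLS
# embedding, interpreter g (SSA + linear head) on the image-patch embeddings.
import torch.nn as nn

class IAViT(nn.Module):
    def __init__(self, extractor, d=768, num_classes=1000):
        super().__init__()
        self.h = extractor                                  # stacked MSA blocks
        self.f = nn.Linear(d, num_classes)                  # predictor head
        self.ssa = nn.MultiheadAttention(d, num_heads=1, batch_first=True)
        self.g = nn.Linear(d, num_classes)                  # interpreter head

    def forward(self, x):
        z = self.h(x)                                       # (B, N+1, d)
        y_pred = self.f(z[:, 0])                            # CLS patch -> y_pred
        patches = z[:, 1:]                                  # image patches only
        s, attn = self.ssa(patches, patches, patches)       # attn: (B, N, N)
        y_int = self.g(s.mean(dim=1))                       # pooled -> y_int
        return y_pred, y_int, attn                          # attn = explanation
```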
### Interpretability of IA-ViT
The rationale behind incorporating an interpreter into IA-ViT is to enhance its interpretability by gaining insights into its prediction process. It is crucial that the interpreter faithfully replicates the behavior of the predictor, ensuring that its output closely aligns with the predictor's output for a given input. Essentially, the predictor's function is to convey the crucial aspects of the input that influence the final prediction, while the interpreter complements this by offering supplementary insights into the model's decision-making process without altering the actual prediction.
Attention weights derived from MSA blocks can offer interpretable clues, but existing attention weights-based explanation methods Serrano and Smith (2019); Abnar and Zuidema (2020) only provide _post hoc_ explanations, which are limited in their ability to provide faithful explanations of the model's decision-making process. To address this problem, the interpreter of IA-ViT applies an SSA mechanism, which dynamically aligns its attention weights with the discriminative patterns from the feature embeddings, to provide reliable and faithful explanations.
Given the input from the feature embeddings \(\mathbf{Z}^{\prime}=[\mathbf{z}^{1},\cdots,\mathbf{z}^{N}]\), we obtain the projected key, query, and value as:
\[\mathbf{Q}=\mathbf{Z}^{\prime}\mathbf{W}^{Q},\ \ \mathbf{K}=\mathbf{Z}^{\prime} \mathbf{W}^{K},\ \ \mathrm{and}\ \ \mathbf{V}=\mathbf{Z}^{\prime}\mathbf{W}^{V}, \tag{7}\]
where \(\mathbf{W}^{Q}\in\mathbb{R}^{d\times d}\), \(\mathbf{W}^{K}\in\mathbb{R}^{d\times d}\), and \(\mathbf{W}^{V}\in\mathbb{R}^{d\times d}\) are trainable transform matrices. Note \(\mathbf{Z}^{\prime}\) does not contain the feature embedding of the class patch \(\mathbf{z}^{0}\). Based on SSA Eq.2, we obtain the attention weights \(\mathbf{A}\) that characterize the amount of attention paid to each patch and the SSA features \(\mathbf{S}\). According to Eq.2, we get \(\|\mathbf{A}\|\leq 1\). Therefore, \(\mathbf{S}\) is upper-bounded as:
\[\|\mathbf{S}\|=\|\mathbf{A}\|\ \|\mathbf{V}\|\cos(\mathbf{A},\mathbf{V})\leq\| \mathbf{V}\|. \tag{8}\]
The upper bound in Eq. (8) is attained only when \(\mathbf{A}\) is aligned with \(\mathbf{V}\): to maximize the output, \(\mathbf{A}\) is driven toward the discriminative features in \(\mathbf{V}\), and \(\mathbf{S}\) reaches the bound only if the value vectors \(\mathbf{v}\in\mathbf{V}\) are encoded as eigenvectors of \(\mathbf{A}\). This maximization suggests that, with the attention weights \(\mathbf{A}\), we obtain an inherently explainable decomposition of the input patterns.
Consequently, the SSA mechanism within the interpreter produces an attention map that inherently combines the contributions of discriminative input patterns with respect to the model's outputs in an interpretable manner. This attention map offers more informative insights compared to the attention weights derived solely from the MSA blocks. It excels at emphasizing the specific input features that the model relied upon to make its predictions.
### Learning with Interpretation
Within the framework of _Learning with Interpretation_, the interpreter's goal extends beyond optimizing predictions alone; it also involves comprehending the rationale behind the model's predictions concurrently. Therefore, IA-ViT adopts a joint training approach for the predictor and interpreter. This allows the interpreter to acquire insights that align with the predictions made by the predictor, ultimately enhancing the overall interpretability of the model. In this approach, the interpreter and predictor collaborate to produce accurate predictions while concurrently offering explanations for these predictions. This dual functionality can prove invaluable in various domains, including healthcare and finance, where the interpretability of learned models holds paramount importance.
**Classification Objective** Given an input image \(x\) with its corresponding label \(y\), the final prediction is produced by the extractor and the predictor. Typically, the training process for the feature extractor and predictor involves minimizing the cross-entropy loss, which measures the disparity between the predicted probability distribution and the true labels. Formally, the cross-entropy loss is expressed as:
\[\mathcal{L}_{\mathrm{ce}}=-\frac{1}{n}\sum_{i=1}^{n}y_{i}\log\left(f(h(x_{i}))\right), \tag{9}\]
where \(f\) and \(h\) are the predictor and feature extractor components of IA-ViT, respectively.
**Simulation Objective** Knowledge distillation (KD) is a technique introduced in Hinton et al. (2015), wherein a larger capacity teacher model is used to transfer its "dark knowledge" to a more compact student model. The goal of KD is to achieve a student model that not only inherits better qualities from the teacher but is also more efficient for inference due to its compact size. Recent studies Tang et al. (2020) have highlighted the success of KD with several desirable effects, such as label smoothing from universal knowledge, injecting
domain knowledge of class relationships to student's output logit layer geometry, and gradient rescaling based on the teacher's measurement of instance difficulty.
Our simulation objective is formulated to force the interpreter's predictions to simulate the behavior of the predictor: instead of relying directly on ground-truth labels, the interpreter is trained on the soft labels generated by the predictor. Therefore, we apply KD as the simulation objective. In more detail, the output distribution of the predictor is denoted as \(\mathbf{q}\) and is computed by applying softmax over its logits:
\[q_{i}=\frac{\exp(f(h(x))_{i})}{\sum_{j=1}^{C}\exp(f(h(x))_{j})}, \tag{10}\]
where \(C\) is the number of classes. To obtain a smooth distribution, the logits are usually scaled by a temperature factor \(\tau>1\). Similarly, the interpreter produces a softened class probability distribution \(\mathbf{p}\). Then we apply KD to these two probabilities:
\[\mathcal{L}_{\mathrm{kd}}=-\tau^{2}\,\mathbf{q}^{\top}\log\mathbf{p}, \tag{11}\]
By optimizing \(\mathcal{L}_{\mathrm{kd}}\), the interpreter is trained to predict the same class as the predictor with a high probability, which increases the fidelity of interpretations to the model's outputs.
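A common implementation of this simulation objective, consistent with Eqs. (10)-(11), is sketched below. The \(\tau^{2}\) factor keeps gradient magnitudes comparable across temperatures; treating the predictor's logits as fixed targets (via `detach`) is our assumption, not stated in the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(pred_logits: torch.Tensor, int_logits: torch.Tensor,
            tau: float = 2.0) -> torch.Tensor:
    """Distill the predictor's softened distribution q into the interpreter's p."""
    q = F.softmax(pred_logits.detach() / tau, dim=-1)    # teacher soft labels
    log_p = F.log_softmax(int_logits / tau, dim=-1)      # student log-probabilities
    return -(tau ** 2) * (q * log_p).sum(dim=-1).mean()  # cross-entropy form of Eq. (11)
```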
**Attention Regularization** To further improve the interpretability of IA-ViT, we introduce an additional regularization term into the objective. This term serves to reduce the Maximum Mean Discrepancy (MMD) [10, 13] between the attention distribution of MSA in the feature extractor, denoted as \(\mathbf{\alpha}_{\mathrm{ext}}\), and the attention distribution of the SSA in the interpreter, denoted as \(\mathbf{\alpha}_{\mathrm{int}}\). This helps to ensure that the attention weights used by the feature extractor and the interpreter are generated from the same distribution, which can further improve the interpretability of the model.
Since MSA in the feature extractor employs multi-headed attention with multiple different attention vectors in each block, we aggregate these attentions by summing up the attention from the class token to other tokens in the last layer. This summation is then averaged across all attention heads to get \(\mathbf{\alpha}_{\mathrm{ext}}\), i.e., \(\mathbf{A}_{0}\). In contrast, \(\mathbf{\alpha}_{\mathrm{int}}\) can be directly extracted from SSA in the interpreter. MMD compares the sample statistics between \(\mathbf{\alpha}_{\mathrm{int}}\) and \(\mathbf{\alpha}_{\mathrm{ext}}\), and if the discrepancy is small, \(\mathbf{\alpha}_{\mathrm{int}}\) and \(\mathbf{\alpha}_{\mathrm{ext}}\) are then likely to follow the same distribution. Thus, the attention regularizer is formulated as:
\[\mathcal{L}_{\mathrm{reg}}=\mathrm{MMD}(\mathbf{\alpha}_{\mathrm{int}},\mathbf{\alpha} _{\mathrm{ext}}). \tag{12}\]
We conduct an in-depth analysis of this attention regularization to obtain a more comprehensive understanding of its positive impacts on the IA-ViT training process. Using the kernel trick, the empirical estimate of the MMD, denoted \(M\), can be obtained as:
\[\begin{split} M=&\bigg{[}\frac{1}{n^{2}}\sum_{i,j=1} ^{n}\mathcal{K}(\alpha_{i}^{\mathrm{i}},\alpha_{j}^{\mathrm{i}})+\frac{1}{n^{ 2}}\sum_{i,j=1}^{n}\mathcal{K}(\alpha_{i}^{\mathrm{e}},\alpha_{j}^{\mathrm{e} })\\ &-\frac{2}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\mathcal{K}(\alpha_{i} ^{\mathrm{i}},\alpha_{j}^{\mathrm{e}})\bigg{]}^{1/2},\end{split} \tag{13}\]
where \(\mathcal{K}(\cdot,\cdot)\) is a kernel function, and \(n\) is the number of samples. Moreover, for notational simplicity we use \(\mathbf{\alpha}^{\mathrm{e}}\) and \(\mathbf{\alpha}^{\mathrm{i}}\) to denote \(\mathbf{\alpha}_{\mathrm{ext}}\) and \(\mathbf{\alpha}_{\mathrm{int}}\), respectively.
Figure 2: IA-ViT architecture consists of three major components: feature extractor, predictor, and interpreter. Both the predictor and the interpreter generate the class prediction for this cat image. KD is applied on the two logits in the simulation objective. The attention weights in SSA and MSA are aligned via MMD during the training process for better explanations.
It has been shown [10] that if \(\mathcal{K}\) is a characteristic kernel, then \(\mathrm{MMD}(\mathbf{\alpha}^{\mathrm{e}},\mathbf{\alpha}^{\mathrm{i}})=0\) asymptotically if and only if \(\mathbf{\alpha}^{\mathrm{i}}\) and \(\mathbf{\alpha}^{\mathrm{e}}\) are generated from the same distribution. A typical choice of \(\mathcal{K}\) is the Gaussian kernel with bandwidth parameter \(\sigma\):
\[\mathcal{K}(x,y)=\exp\Big{(}\frac{-\|x-y\|^{2}}{\sigma}\Big{)}. \tag{14}\]
With the Gaussian kernel, minimizing MMD is equivalent to matching all orders of moments of the two distributions.
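The biased empirical estimate in Eq. (13) with the Gaussian kernel of Eq. (14) can be computed as sketched below; using the median of squared pairwise distances for \(\sigma\) is one common form of the median heuristic the text describes, and it is detached so that \(\sigma\) is not backpropagated.

```python
import torch

def gaussian_kernel(x: torch.Tensor, y: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    """K(x, y) = exp(-||x - y||^2 / sigma) for all row pairs, Eq. (14)."""
    return torch.exp(-torch.cdist(x, y).pow(2) / sigma)

def mmd(alpha_int: torch.Tensor, alpha_ext: torch.Tensor) -> torch.Tensor:
    """Empirical MMD of Eq. (13) between two (n, N) batches of attention weights."""
    z = torch.cat([alpha_int, alpha_ext], dim=0)
    sigma = torch.cdist(z, z).pow(2).median().clamp_min(1e-8).detach()  # median heuristic
    k_ii = gaussian_kernel(alpha_int, alpha_int, sigma).mean()
    k_ee = gaussian_kernel(alpha_ext, alpha_ext, sigma).mean()
    k_ie = gaussian_kernel(alpha_int, alpha_ext, sigma).mean()
    return (k_ii + k_ee - 2 * k_ie).clamp_min(0).sqrt()
```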
Inspired by the idea of (Li et al., 2023), we further analyze the effect of MMD on our regularization. Since \(\mathbf{\alpha}^{\mathrm{i}}\) and \(\mathbf{\alpha}^{\mathrm{e}}\) are symmetric in MMD, we only present the attention weights of \(\mathbf{\alpha}^{\mathrm{i}}\) here without loss of generality. We first formulate the gradient of the regularization loss with respect to \(\mathbf{\alpha}^{\mathrm{i}}\) as:
\[\begin{split}\nabla_{\alpha^{\mathrm{i}}_{i}}M=\frac{2}{\sqrt{M} }\nabla_{\alpha^{\mathrm{i}}}\bigg{[}&\frac{1}{n^{2}}\sum_{j=1 }^{n}\mathcal{K}(\alpha^{\mathrm{i}}_{i},\alpha^{\mathrm{i}}_{j})\\ &-\frac{2}{n^{2}}\sum_{j=1}^{n}\mathcal{K}(\alpha^{\mathrm{i}}_{i },\alpha^{\mathrm{e}}_{j})\bigg{]}.\end{split} \tag{15}\]
The gradient with respect to \(x\) for Gaussian kernel \(\mathcal{K}\) is:
\[\nabla_{x}\mathcal{K}(x,y)=-2\exp\bigg{(}\frac{-\|x-y\|^{2}}{\sigma}\bigg{)} \frac{x-y}{\sigma}. \tag{16}\]
Since \(\sigma\) here is data-dependent and treated as a hyperparameter, it is not backpropagated during training and is in practice set as the median of sample pairwise distances. We thus get
\[\begin{split}\nabla_{\alpha^{\mathrm{i}}_{i}}M=& -\frac{2}{\sqrt{M}}\bigg{[}\frac{1}{n^{2}}\sum_{j=1}^{n}\exp\bigg{(}-\frac{\| \alpha^{\mathrm{i}}_{i}-\alpha^{\mathrm{i}}_{j}\|^{2}}{\sigma}\bigg{)}\frac{ \alpha^{\mathrm{i}}_{i}-\alpha^{\mathrm{i}}_{j}}{\sigma}\\ &-\frac{2}{n^{2}}\sum_{j=1}^{n}\exp\bigg{(}-\frac{\|\alpha^{ \mathrm{i}}_{i}-\alpha^{\mathrm{e}}_{j}\|^{2}}{\sigma}\bigg{)}\frac{\alpha^{ \mathrm{i}}_{i}-\alpha^{\mathrm{e}}_{j}}{\sigma}\bigg{]},\end{split} \tag{17}\]
by the linearity of the gradient operator. We notice that for the function \(g_{a}(x)=\exp(-x^{2}/a)x/a\) (\(a\) is some constant), \(g_{a}(x)\to 0\) exponentially as \(x\rightarrow\infty\). We further obtain
\[\begin{split}\|\nabla_{\alpha^{\mathrm{i}}}M\|\leq\frac{2}{ \sqrt{M}}\left[\frac{1}{n^{2}}\sum_{j=1}^{n}g_{\sigma}(\|\alpha^{\mathrm{i}}_{i }-\alpha^{\mathrm{i}}_{j}\|)\right.\\ \left.+\frac{2}{n^{2}}\sum_{j=1}^{n}g_{\sigma}(\|\alpha^{\mathrm{i }}_{i}-\alpha^{\mathrm{e}}_{j}\|)\right]\end{split} \tag{18}\]
using the triangle inequality for the \(L_{2}\) norm for fixed \(\sigma\). \(\sqrt{M}\) here is a constant for all samples within the training mini-batch.
We observe that when \(\alpha^{\mathrm{i}}\) deviates significantly from the majority of samples of the same class (i.e., for noisy samples or outliers), \(\|\alpha^{\mathrm{i}}_{i}-\alpha^{\mathrm{i}}_{j}\|\) and \(\|\alpha^{\mathrm{i}}_{i}-\alpha^{\mathrm{e}}_{j}\|\) are large, and the magnitude of its gradient in the regularization loss diminishes by Eq. (18); such a sample therefore has negligible impact on the regularization term. On the other hand, training IA-ViT with the regularization term promotes the alignment of attention-weight representations of samples that stay close in the attention-weight distribution. Attention weights deviating from the majority are likely to lie in low-density regions or even be outliers from the distribution perspective. Overall, this behavior of the regularization loss implies that it helps IA-ViT better capture information from high-density areas and reduces the distraction of low-density areas when learning feature representations on the data manifold, as shown in Figure 4.
**Overall Objective** Combining all of these objectives, the overall training objective is formulated as the weighted sum of \(\mathcal{L}_{\mathrm{ce}}\), \(\mathcal{L}_{\mathrm{kd}}\), and \(\mathcal{L}_{\mathrm{reg}}\). Formally, it is expressed as:
\[\mathcal{L}=\beta\mathcal{L}_{\mathrm{ce}}+(1-\beta)(\mathcal{L}_{\mathrm{kd} }+\mathcal{L}_{\mathrm{reg}}), \tag{19}\]
where \(\beta\in(0,1)\) is a hyperparameter that balances the contributions of each term.
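Putting the three terms together, one training step can be sketched as follows, reusing the `kd_loss` and `mmd` sketches above; `model.cls_attention()` is a hypothetical helper, assumed here to return the head-averaged CLS attention \(\mathbf{\alpha}_{\mathrm{ext}}\) described earlier.

```python
import torch.nn.functional as F

def training_step(model, x, y, beta: float = 0.7, tau: float = 2.0):
    """Weighted objective of Eq. (19); beta and tau are dataset-dependent hyperparameters."""
    y_pred, y_int, attn_int = model(x)
    loss_ce = F.cross_entropy(y_pred, y)            # Eq. (9)
    loss_kd = kd_loss(y_pred, y_int, tau)           # Eq. (11)
    attn_ext = model.cls_attention()                # hypothetical accessor for alpha_ext
    loss_reg = mmd(attn_int, attn_ext)              # Eq. (12)
    return beta * loss_ce + (1 - beta) * (loss_kd + loss_reg)
```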
## Experiment Settings
### Datasets
We evaluate the performance of IA-ViT using various benchmark datasets tailored for image classification tasks, including CIFAR10 (Krizhevsky et al., 2009), STL10 (Coates et al., 2011), Dog and Cat (Elson et al., 2007), and CelebA (Liu et al., 2018) (specifically for hair color prediction). Dataset statistics are presented in Table 2. The images from these datasets are upsampled to a standard resolution of \(224\times 224\) for both training and testing.
### Model Architectures
We employ the vanilla ViT-B/16 architecture (Dosovitskiy et al., 2020) as the transformer backbone for our model. Specifically, we use the base version with patches of size \(16\times 16\), which was exclusively pre-trained on the ImageNet-21k dataset. This backbone consists of 12 stacked MSA blocks, each containing 12 attention heads. The model utilizes a total of 196 patches, and each patch is flattened and projected into a 768-dimensional vector. Positional embeddings are added to these patch embeddings, and the resulting embeddings are then processed by the feature extractor. Following this, the predictor utilizes the feature embeddings of the class patch and passes them through two fully-connected layers and a softmax layer to produce logits for prediction. In contrast, the interpreter operates on the feature embeddings from other image patches. It employs a single SSA block, followed by two fully-connected layers and a softmax layer, to generate logit scores for interpretation.
### Implementation Details
Both the ViT and IA-ViT models are trained using Stochastic Gradient Descent (SGD) with a momentum parameter of 0.9.
\begin{table}
\begin{tabular}{l c c c} \hline Datasets & Training Size & Test Size & Class Numbers \\ \hline CIFAR10 & 50,000 & 10,000 & 10 \\ STL10 & 5,000 & 8,000 & 10 \\ Dog\&Cat & 20,000 & 5,000 & 2 \\ CelebA & 10,000 & 3,000 & 2 \\ \hline \end{tabular}
\end{table}
Table 2: Dataset Statistics
The training process begins with an initial learning rate of 3e-2, lasting for 200 epochs. A constant batch size of 64 is maintained, and gradient clipping is applied to ensure that the global norm does not exceed 1. A cosine decay learning rate schedule with a linear warm-up phase is implemented. The values for the temperature parameter (\(\tau\)) in \(\mathcal{L}_{\mathrm{kd}}\) and the hyperparameter (\(\beta\)) in \(\mathcal{L}\) are adjusted flexibly to strike a balance between achieving optimal predictive performance and interpretability, tailored to the specific datasets. The model with the highest accuracy on the validation sets is ultimately chosen as the final model.
### Baseline Explanation Methods
Rollout [1] is an explanation method that relies on attention mechanisms to identify the most crucial input patches for generating an output in Transformer-based models. AttGrads [1] is an explanation method that utilizes gradients of the attention weights to pinpoint the most significant patches. In this context, experiments have been conducted to compare and evaluate the relative performance and effectiveness of these two methods in providing explanations for the models under consideration.
### Evaluation Metrics
To evaluate the IA-ViT model's performance comprehensively, we report accuracy metrics for both the predictor and the interpreter. We employ attribution maps, which are visual representations highlighting the input pixels considered significant or insignificant in relation to a predicted label. This approach is used for a qualitative evaluation of the explanation quality [12]. Furthermore, we utilize insertion score and deletion score as quantitative evaluation metrics. In the first round of experiments, we replace the most important pixels with black pixels, following the approach of [2]. In the second round, we replace these pixels with Gaussian-blurred pixels, as per [20]. We report the average performance across both rounds of experiments. Since both deletion and insertion scores can be influenced by shifts in distribution when pixels are removed or added, we employ the difference between the insertion and deletion scores as an additional metric for comparison [1]. Focusing on their relative differences helps to mitigate the impact of these distribution shifts [1].
## Results and Discussion
### Model Performance Evaluations
Table 3 presents a performance comparison between IA-ViT and the vanilla ViT models. It is important to highlight that the IA-ViT models' final predictions rely on the predictor's outputs. We apply performance drop rate (PDR) to evaluate the performance degradation, formally:
\[\mathrm{PDR}=1-\frac{\mathrm{Accuracy_{IA-ViT}}}{\mathrm{Accuracy_{ViT}}}. \tag{20}\]
The average PDR among these datasets is 1.16%, which indicates that there is no substantial decrease in accuracy when employing the IA-ViT model with its integrated interpreter.
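As a worked check of Eq. (20), using the values reported in Table 3 (a two-line sketch):

```python
def pdr(acc_iavit: float, acc_vit: float) -> float:
    """Performance drop rate of Eq. (20)."""
    return 1 - acc_iavit / acc_vit

# e.g., CIFAR10 predictor: pdr(97.51, 98.93) ≈ 0.014, i.e., about a 1.4% drop.
```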
### Quantitative Evaluations
The quantitative evaluations shown in Table 4 demonstrate that the attention weights (Atts) from the interpreter in IA-ViT outperform the Rollout and AttGrads methods for ViT in terms of deletion and insertion scores across all datasets. This further illustrates that the explanations generated by the interpreter of IA-ViT clearly capture the most important discriminative pixels or patches for the image classification tasks. Similarly, the results of the difference between insertion and deletion scores across a varying percentage of deleted/inserted pixels, as shown in Figure 3, clearly show that the interpreter of IA-ViT outperforms the other baselines in terms of Area Under the Curve (AUC) among all tasks. These quantitative evaluations collectively provide compelling evidence of IA-ViT's superior interpretability compared to the _post hoc_ methods designed for ViT.
### Qualitative Evaluations
The examples provided in Figure 4 vividly illustrate the superior quality of the attribution maps produced by IA-ViT's interpreter when compared to the _post hoc_ methods Rollout and AttGrads designed for ViT. A key observation from this figure is that the heatmaps generated by IA-ViT's interpreter exhibit more focused attention on the target objects, whereas the heatmaps generated by Rollout are dispersed across both the background and class entities. In contrast, AttGrads produces heatmaps that primarily highlight areas unrelated to the target. It is essential to emphasize that the results depicted in Figure 4 are representative of the typical outcomes observed in our experiments and have not been cherry-picked.
\begin{table}
\begin{tabular}{c|c|c c|c} \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Metrics} & \multicolumn{2}{c|}{ViT} & IA-ViT \\ & & Rollout & AttGrads & Atts \\ \hline \multirow{2}{*}{CIFAR10} & Deletion\(\downarrow\) & 0.3817 & 0.3036 & **0.2479** \\ & Insertion\(\uparrow\) & 0.6141 & 0.5583 & **0.7082** \\ \hline \multirow{2}{*}{STL10} & Deletion & 0.3874 & 0.4124 & **0.3254** \\ & Insertion & 0.5967 & 0.5546 & **0.6436** \\ \hline \multirow{2}{*}{Dog\&Cat} & Deletion & 0.6785 & 0.7354 & **0.6232** \\ & Insertion & 0.8322 & 0.7921 & **0.8783** \\ \hline \multirow{2}{*}{CelebA} & Deletion & 0.7260 & 0.7536 & **0.5977** \\ & Insertion & 0.8275 & 0.8123 & **0.8719** \\ \hline \end{tabular}
\end{table}
Table 4: Quantitative evaluation using deletion and insertion scores. The deletion score is the lower the better, while the insertion score is the higher the better. The best results are in bold.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{ViT} & \multicolumn{3}{c}{IA-ViT} \\ & & Predictor & Interpreter & PDR (\%) \\ \hline CIFAR10 & 98.93 & 97.51 & 97.24 & 1.43 \\ STL10 & 99.31 & 97.73 & 95.42 & 1.59 \\ Dog\&Cat & 99.72 & 98.82 & 97.76 & 0.90 \\ CelebA & 96.87 & 96.16 & 96.09 & 0.73 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of the classification accuracies of the ViT and IA-ViT models. PDR refers to the performance drop rate.
Additionally, these qualitative examples also highlight the effectiveness of the attention regularization utilized in the training objective, as discussed in the Learning with Interpretation section. IA-ViT models possess the capability to extract information from regions with high information density while mitigating the influence of regions with low information density during the feature learning process. Therefore, the interpreter produces high-quality explanations that densely emphasize the target object. This is clearly evident in Figure 4, where the heatmaps generated by the interpreter distinctly highlight the target objects (e.g., hair, dog, cat) while disregarding the background or other irrelevant noise. In contrast, the explanations generated by the Rollout and AttGrads methods merely accentuate certain irrelevant areas and fail to capture the precise shape of the target objects.
### Fairness Learning
The examples from the CelebA dataset, specifically the hair color prediction task, illustrate that the attribution maps produced by the interpreter of IA-ViT concentrate intensely on the hair region, prioritizing it over other facial features. This is evident in the first two rows of Figure 4. On the contrary, the explanations generated by the Rollout method for the ViT model demonstrate that the ViT models tend to learn some spurious features that might be related to the sensitive attribute (in this case, gender) but not the real feature that is relevant to the hair color prediction.
These observations have motivated us to evaluate the performance of IA-ViT in the context of fairness learning [14, 13, 12]. Specifically, we employ two commonly used fairness metrics: demographic parity (DP) and equality of odds (EO) for fairness evaluation. DP measures whether the true positive rates are equal across all groups categorized by a sensitive label \(s\) (e.g., gender), particularly comparing the vulnerable minority group (\(s=0\)) to others (\(s=1\)). It is formally defined as:
\[\mathrm{DP}=\mathrm{TPR}_{s=1}-\mathrm{TPR}_{s=0}. \tag{21}\]
EO, on the other hand, is used to examine the disparities in both the true positive rates and the false positive rates within the vulnerable group compared to others:
\[\mathrm{EO}=\frac{1}{2}[\mathrm{TPR}_{s=1}-\mathrm{TPR}_{s=0}]+\frac{1}{2}[ \mathrm{FPR}_{s=1}-\mathrm{FPR}_{s=0}]. \tag{22}\]
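Both metrics can be computed directly from binary predictions and the sensitive attribute, as in the sketch below (the array names are ours):

```python
import numpy as np

def fairness_metrics(y_true: np.ndarray, y_pred: np.ndarray, s: np.ndarray):
    """DP (Eq. 21) and EO (Eq. 22) as TPR/FPR gaps between groups s=1 and s=0."""
    def rates(group: int):
        yt, yp = y_true[s == group], y_pred[s == group]
        tpr = ((yp == 1) & (yt == 1)).sum() / max((yt == 1).sum(), 1)
        fpr = ((yp == 1) & (yt == 0)).sum() / max((yt == 0).sum(), 1)
        return tpr, fpr
    tpr1, fpr1 = rates(1)
    tpr0, fpr0 = rates(0)
    dp = tpr1 - tpr0                                  # Eq. (21)
    eo = 0.5 * (tpr1 - tpr0) + 0.5 * (fpr1 - fpr0)    # Eq. (22)
    return dp, eo
```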
Table 5 demonstrates that the IA-ViT model outperforms the ViT model in both fairness metrics. The reduced DP and EO values indicate that IA-ViT's training effectively mitigates bias, resulting in a fairer model. This further demonstrates the effectiveness of our interpretability-aware training, which indeed extracts "real" features rather than spurious ones.
## Conclusion
In this work, we propose an interpretability-aware variant of the Vision Transformer (ViT) named IA-ViT.
Figure 4: Examples of attribution maps obtained by _post hoc_ methods Rollout and AttGrads for ViT and attention weights of IA-ViT.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{\(Y\): Hair Color \(S\): Gender} \\ \cline{2-4} & ACC\(\uparrow\) & DP\(\downarrow\) & EO\(\downarrow\) \\ \hline ViT & 96.89 & 12.95 & 8.69 \\ IA-ViT & 96.59 & 9.81 & 5.76 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Fairness and accuracy comparison for ViT and IA-ViT over the target (Y) and sensitive (S) on the CelebA dataset.
Figure 3: Quantitative Performance Comparison in terms of differences between insertion and deletion scores.
Our motivation stems from the consistent predictive distributions and attention maps generated by both the CLS patch and image patches in ViT models. IA-ViT consists of three major components: a feature extractor, a predictor, and an interpreter. By training the predictor and interpreter jointly, we enable the interpreter to acquire explanations that align with the predictor's predictions, enhancing overall interpretability. As a result, IA-ViT not only maintains strong predictive performance but also delivers consistent, reliable, and high-quality explanations. Extensive experiments validate the efficacy of our interpretability-aware training approach in improving interpretability across various benchmark datasets when compared to several baseline explanation methods.
|
2309.17115 | SAppKG: Mobile App Recommendation Using Knowledge Graph and Side
Information-A Secure Framework | Due to the rapid development of technology and the widespread usage of
smartphones, the number of mobile applications is exponentially growing.
Finding a suitable collection of apps that aligns with users' needs and
preferences can be challenging. However, mobile app recommender systems have
emerged as a helpful tool in simplifying this process. But there is a drawback
to employing app recommender systems. These systems need access to user data,
which is a serious security violation. While users seek accurate opinions, they
do not want to compromise their privacy in the process. We address this issue
by developing SAppKG, an end-to-end user privacy-preserving knowledge graph
architecture for mobile app recommendation based on knowledge graph models such
as SAppKG-S and SAppKG-D, which utilize the interaction data and side
information of app attributes. We tested the proposed model on real-world data
from the Google Play app store, using precision, recall, mean absolute
precision, and mean reciprocal rank. We found that the proposed model improved
results on all four metrics. We also compared the proposed model to baseline
models and found that it outperformed them on all four metrics. | Daksh Dave, Aditya Sharma, Shafii Muhammad Abdulhamid, Adeel Ahmed, Adnan Akhunzada, Rashid Amin | 2023-09-29T10:17:04Z | http://arxiv.org/abs/2309.17115v1 | # SAppKG: Mobile App Recommendation Using Knowledge Graph and Side Information-A Secure Framework
###### Abstract
Due to the rapid development of technology and the widespread usage of smartphones, the number of mobile applications is exponentially growing. Finding a suitable collection of apps that aligns with users' needs and preferences can be challenging. However, mobile app recommender systems have emerged as a helpful tool in simplifying this process. But there is a drawback to employing app recommender systems. These systems need access to user data, which is a serious security violation. While users seek accurate opinions, they do not want to compromise their privacy in the process. We address this issue by developing SAppKG, an end-to-end user privacy-preserving knowledge graph architecture for mobile app recommendation based on knowledge graph models such as SAppKG-S and SAppKG-D, that utilized the interaction data and side information of app attributes. We tested the proposed model on real-world data from the Google Play app store, using precision, recall, mean absolute precision, and mean reciprocal rank. We found that the proposed model improved results on all four metrics. We also compared the proposed model to baseline models and found that it outperformed them on all four metrics.
**Index Terms:** Knowledge graph, link prediction, mobile apps, privacy, recommender system, semantic information.
## I Introduction
With the development of hardware technologies, a significant increase has also been observed in the number of mobile applications available to the user to perform a particular task. Due to the availability of such a large number of apps, it becomes a challenge for a user to select an appropriate set of apps that satisfies the user's needs. Recommender systems are based on information filtering and are traditionally classified into content-based filtering (CBF), collaborative filtering (CF), and hybrid methods [1]. In the content-based filtering (CBF) approach, items chosen for recommendation are based on the user's current preferences and the similarity of the items. However, CBF suffers from the problem of overspecialization [2], which means that it cannot diversify the existing interests of users because it can only provide recommendations based on the current interests of the user. Collaborative Filtering(CF), on the other hand, takes into account a user's prior interactions and searches for similar users to make recommendations based on common preferences. But the downside of CF is that it is difficult to add any item's side features to increase model quality. Side features, also known as side information, are any additional information about the input data, such as user/item features, images, text, etc., that, if included, can significantly improve the model's performance. For example, in mobile app
suggestions, side information of an app may include app category, age limit, installations, user ratings, and so on.
Around 3.29 million apps in the Google Play store and over 2.1 million apps in the Apple app store are available as of May 4, 2022 [3]. A simple search for "calculator" in the Google Play store yields more than 100 results for the user. Yet a person installs, on average, only 35 apps out of these millions of apps available in app stores [4]. The result is a high sparsity in user-app interaction data. Side information compensates for the sparsity by providing additional data and features about user-app interactions. Researchers have tried to include as much side information as possible in app recommendation algorithms; however, they have only been able to use a limited number of side information types. Furthermore, researchers treated the side information as separate user and app properties, ignoring the relationships and semantics between them [5].
Guo et al. [6] used only the apps' category, and Cao et al. [7] only took into account the version of an app. Therefore, properly integrating and utilizing side information to produce good app recommendations remains an open and challenging research problem. The ultimate goal of an app recommender system is to identify a small set of apps that meet the user's personal interests and requirements. To accomplish this, app recommender systems must gather user information in one of two ways: users voluntarily providing app preferences and requirements, or from users' mobile devices using the installed app collection, sensors, phone records, contacts, emails, web browsing history, and several other sources [8]. While it helps app recommender systems provide more personalized recommendations, it also compromises users' privacy. Recommending apps to users without directly infringing on their privacy is a difficult task.
Existing approaches [11, 6, 9, 10, 11, 12, 13] were unable to account for the multi-relational structure of app recommendations. Knowledge graphs are better suited for such scenarios since knowledge graphs have a heterogeneous structure that allows for the inclusion of such complex interactions between items and incorporating side information. The fundamental benefit of knowledge graph-based models is their ability to represent diverse, complicated, unstructured data using rich ontologies and semantics, making the knowledge graph a commonly used tool in recommendation systems [14]. Zhang et al. [5] created a knowledge graph to deliver app suggestions while adding side information such as content topic, app size, and app popularity. However, a fundamental disadvantage of this approach is that it makes use of user data from the user-app interaction matrix, which raises privacy concerns.
To resolve the aforementioned challenges, we present SAppKG, a knowledge graph-based secure method- an ontology for making app recommendations without specifically utilizing any user information. We developed a knowledge graph using only side information from apps, such as content rating, app genreID, etc. More specifically, we proposed two frameworks, SAppKG-S and SAppKG-D, for securely and privately recommending new apps using a knowledge graph.
In this paper, our contributions are:
* We propose a mobile app recommendation framework called SAppKG based on similar apps without accessing the user's data to ensure user privacy and solve the problem of data sparsity by incorporating side information.
* We proposed SAppKG-S, a shallow embedding-based secure model for app recommendation that employs a range of shallow embedding models such as TransD, TransH, and Complex.
* We also presented a novel model, SAppKG-D, which integrates shallow and deep embedding techniques and employs TransD to identify unique embeddings for individual nodes. These embeddings are then combined with those produced by a graph convolution network to generate recommendations for apps based on the acquired embeddings.
* We also introduced a support metric that measures the relatedness between two relations.
* We collected real data from the Google Play store for capturing the relationships and side information about app connections.
* We compared the proposed framework with baselines and evaluated it on the basis of precision, recall, mean absolute precision, and mean reciprocal rank, and found improved results.
The rest of the paper is organized as follows: Section II contains the literature review, Section III discusses the methodology, Section IV discusses the results, and Section V lists references.
## II Related Work
We divided our literature review based on mobile app recommendation and security in mobile app recommendation.
### _Mobile-App Recommendation_
Given the importance of mobile app recommendation for both users and platforms, several mobile app recommendation approaches have been developed. The authors of [9] created AppJoy, a personalized app recommendation system that proposes mobile apps based on the user's app usage history (by examining users' current app consumption habits), a client-server architecture, and a collaborative filtering algorithm. Liang et al. [10] focused on exploiting the permissions and functionalities needed for an app, arguing that the combination of permissions, functionality, and user interests of the app plays an important role in recommending a personalized app. The authors offer the App Risk Score Method (ARSM), which reflects the app's trustworthiness based on a relation between app permissions and user ratings. Then, a modified matrix factorization algorithm, MFPF (Matrix Factorization Algorithm Based on Permissions and Functionalities), predicts a rating for a new app from a specific user based on the user's interest in that
app and permission similarities with another app. Tu et al. [13] present IMCF+ (Interest-aware matrix co-factorization plus), a collaborative filtering approach based on the assumption that when a user posts on social media about what they like, they are more likely to install related apps on their mobile devices. IMCF+ is a transfer learning-based technique that uses user-app usage data, user tweet/post data, and app-to-tweet word correlation data and generates a personalized ranking for unseen apps per user. In contrast, Guo et al. [6] claim that collaborative filtering (CF) and matrix factorization (MF) techniques suffer from poor feature extraction, prohibiting them from correctly utilizing the acquired features. The authors offer KDFM (Knowledge-based Deep Factorization Machine), which uses categorical (app name, user name, etc.) and textual knowledge (user reviews, app description, etc.) to estimate user ratings for mobile apps. The distinctive feature of this technique is the implementation of an attention-aware deep encoder to map textual knowledge into a topic-based dense representation. Liang et al. [15] advance the notion of attention-based app recommendation by proposing MV-AFM (Multi-view Attentional Factorization Machines), which analyses the associations between features from various views (each represented by a set of features) using the attention mechanism. This method introduces two-tier attention networks and separates feature interactions based on various views. A key aspect of MV-AFM is the ability of two attention sub-networks to distinguish between feature weights inside each view and interactions across views. Xu et al. [12] claim that two users would have similar app preferences if their contextual app usage patterns were the same. The authors feed a user-app interaction matrix into a neural network with two components: an app context prediction module and a user preference prediction module. The former provides context for an app, whereas the latter forecasts a user's app preferences. Contextual data from app usage patterns enhance the prediction of user preferences over applications, and the user preferences for each app provide the recommended apps. Maherswari et al. [16] signify the importance of recommending a proper version of the app to the user. The authors propose integrating Probabilistic Matrix Factorization (PMF), an advanced and more powerful matrix factorization approach, with the Version Evolution Progress Model (VEPM). PMF is used to find the latent feature representation for the user and app, whereas VEPM represents the evolution of mobile app versions and trains a model to improve rating prediction performance.
KGEP (Knowledge Graph Convolutional Embedding Propagation Model) [5] is a major technique that uses knowledge graphs to provide app recommendations to users. The innovation of this technique is that it incorporates side information about relationships into a knowledge graph (KG) based on the link between users and apps. Furthermore, final vectors for users and apps are generated using a technique similar to KGCN embeddings [17] and merged with generic KG embeddings (TransD embeddings). The probability value of a user acquiring a certain app is then computed using these vectors to provide final app recommendations. A recent study on app recommender systems by Tejaswi et al. [18] proposed a unique Multi-Criteria Mobile App Recommender System (MCMARS) model that recommends the top apps to the users as well as helps the developers in improving their app performance through intelligent recommendations.
### _Security in Mobile App Recommendation_
To provide more personalized recommendations, app recommendation systems often access user information in some way, which seriously breaches user privacy. In their study, Sandhu et al. [8] highlight three important aspects of mobile recommendation systems: the sources used to acquire user data, the privacy issues raised by various data collection techniques, and remedies to these problems. The authors also discuss the privacy-personalization trade-off, which forces users to decide between privacy and personalized suggestions. Highly personalized recommendations mean privacy must be compromised, but less accurate recommendations arise from putting more importance on privacy. Additionally, there are instances when users must decide whether to accept the privacy policy established by mobile recommendation systems or reject it and avoid using the system altogether. Furthermore, because the privacy policies of mobile recommender systems are written in a technical and complex manner, it is challenging for the typical user to understand them, and users are uninformed of the magnitude of privacy loss. Beg et al. [19] give a detailed overview of the numerous methods suggested to ensure privacy and security in the recommendation of mobile apps. According to the authors, the availability of multiple sensors and the permissions granted to third-party programs on mobile devices make privacy problems in mobile devices more detrimental. The authors mention methods implemented to protect privacy: correlation-based, deep neural-based adversarial learning, encryption-based, perturbation and noise-based, differential privacy-based, and homomorphic encryption. Additionally, Ravi et al. [20] suggest the implementation of a secure framework, SECRECSY, to facilitate recommendation systems that have migrated their data and infrastructure to a cloud-based platform. This measure upholds users' confidentiality throughout the process of generating recommendations.
Zhu et al. [21] suggested a recommendation system for apps that can analyze the level of privacy of an app depending on the permissions it seeks. The authors then recommend apps while trying to balance an app's popularity and users' apprehensions about security. This approach has the drawback of being non-personalized and is unable to offer customized app suggestions based on users' needs and interests. Xu et al. [22] propose PPMARS-C (privacy-preserving mobile app recommendation system for cloud services) and PPMARS-S (privacy-preserving mobile app recommendation system for social networks) as two mobile
app recommendation models that preserve privacy. While the latter is a distributed system that makes suggestions in social networks, the former is a centralized system that delivers app recommendations for a cloud service environment. Both approaches employ data regarding users' trust behaviours with regard to mobile apps they have downloaded and installed. To ensure the security of identification, data transfer, and data processing, public-key encryption and homomorphic encryption are utilized.
Recent studies [23, 24] have tried to address the privacy challenges in recommendation systems and developed a qualitative approach through extensive peer review of articles to help researchers adopt the best approaches in recommender systems for resilience in privacy and security and for risk mitigation.
Compared to the approaches mentioned above, the advantages of our method are twofold. The first thing to note is that each of the above methods makes use of data on how the user and the app interacted. Second, despite the ability of knowledge graphs to deliver rich information relating to data, little effort has been made in the area of mobile app recommendations using knowledge graphs.
## III Methodology
Our framework, depicted in Fig. 3, comprises three modules that collectively enable effective app recommendation. The first module, Data Scraping and Entity Identification, involves extracting app data and side information from JSON files, followed by entity identification through pre-processing. The second module, Constructing a Knowledge Graph and KG Quality Check, employs four techniques to generate a knowledge graph (KG) representing the app data and capturing entity relationships; a thorough quality check then assesses the statistical characteristics and relation scores of the KG. The third module encompasses three subsections: Initializing the KG Embeddings utilizes the SAppKG-S model to initialize embeddings encoding semantic information and relationships, while the subsequent subsections, SAppKG-S and SAppKG-D, focus on training and evaluating KG-based recommendation models using shallow and deep learning techniques. The KG embeddings play a vital role in enhancing recommendation accuracy and performance.
### Data Scraping and Entity Identification
We created a dataset of real mobile apps from the Google Play store that includes information about the apps from three major categories, _Photography_, _Productivity_, and _Games_, and extracted data for 200 apps from each category. The extracted data contained information about thirteen app attributes. These are _appId_, _adSupported_, _contentRating_, _editorsChoice_, _genreId_, _installs_, _offersIAP_, _ratings_, _released_, _reviews_, _scoreText_, _size_, and _video_. Then, in order to represent the extracted data in structural form and capture rich semantic information, we build a knowledge graph (KG).
To build the KG, we pre-processed the data of the thirteen selected attributes using the following techniques:
* Normalization: This included eliminating redundant and unstructured data and making the data appear similar across all records and fields. Example: Size
* Quantile mapping: Quantile mapping defines the bins using percentiles based on the distribution of the data, not the actual numeric edges of the bins. The data is divided into a set of quantiles and we have mapped the data falling within a similar quantile range to a single value. Examples: Ratings, Reviews, ScoreText
* Interval mapping: It maps the data to a bin if the data falls within the bin interval range. Examples: Installs, Released, Size
* Category mapping: It puts the data having a similar set of categories into the same bin. Examples: AdSupported, Editor's Choice, offersIAP, video, ContentRating, GenreId
The appId is considered as the nodeId, and the remaining twelve properties are chosen as node attributes. Based on the twelve different node attributes mentioned above, we constructed twelve relations between the nodes (i.e., apps). If the node attribute values of two apps are similar, a certain relation connects the two apps. For example, if the _contentRating_ of two apps \(a_{i}\) and \(a_{j}\) was similar, the relation CRSIMILAR was established between them. Table 1 summarizes the twelve relations that were created based on various attributes of the nodes and the corresponding side information that each node possesses.
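A minimal sketch of this relation-building step is shown below: apps whose binned attribute values match are linked by the corresponding XSIMILAR relation. The column and relation names follow Table 1; the helper itself is ours, not the paper's code.

```python
import itertools
import pandas as pd

def build_triples(df: pd.DataFrame, attr_to_relation: dict) -> list:
    """Emit (head app, relation, tail app) triples for apps sharing a binned value.

    df is indexed by appId; attr_to_relation maps, e.g., 'contentRating' -> 'CRSIMILAR'.
    Each unordered app pair is emitted once per matching attribute (O(k^2) per group).
    """
    triples = []
    for attr, rel in attr_to_relation.items():
        for _, group in df.groupby(attr):
            for a, b in itertools.combinations(group.index, 2):
                triples.append((a, rel, b))
    return triples
```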
Figure 2: SAppKG: Knowledge graph.
### _Constructing a Knowledge Graph_
1) Privacy and Security
To protect user privacy and ensure security, our model adopts a privacy-preserving strategy that avoids direct usage of the user dataset. Mobile app attributes can be broadly categorized into two types: design-specific attributes and user-specific attributes. Design-specific attributes refer to characteristics that are available at the time of app launch and remain
Figure 3: Proposed framework for SAppKG.
constant, such as the app's size, ad support, pricing model (free or paid), and compatibility requirements. These attributes provide insights into the app's structure, features, and monetization strategy. On the other hand, user-specific attributes capture information that evolves over time based on user interactions with the app. These attributes include the number of installs, user reviews, ratings, app score, and other user-generated feedback. User-specific attributes reflect the app's popularity, user satisfaction, and overall performance.
Fig. 1 illustrates the structure of the knowledge graph proposed by Zhang et al. [5], which encompasses user nodes, app nodes, and app attribute nodes. These nodes are interconnected through various relations, representing the associations between them. In our proposed methodology, we prioritize the use of app nodes as the primary entities in the construction of the knowledge graph. To achieve this, we employ transformation and encoding techniques to effectively incorporate the side information associated with apps within the graph's relations. By adopting this approach, we are able to address important concerns such as user privacy preservation and prevention of information leaks, especially in the event of a knowledge graph leak. Furthermore, this methodology offers the additional benefit of reducing the overall number of nodes in the knowledge graph, optimizing its structure, and enhancing its efficiency.
The process of our KG construction is discussed in detail in the next section. In our graph, we leverage a combination of both design-specific (size, ad support, free or paid, etc.) and user-specific (installs, reviews, ratings, app score, etc.) interaction attributes available in the dataset. Fig. 2 gives a glimpse of the knowledge graph that is built on completing the KG construction process. We can see that several apps can be connected to a single app by multiple relations and, using the same relation, a single app can be connected to multiple apps. We group the values into appropriate categories and interconnect the nodes based on their category similarities to build the KG.
To evaluate the effectiveness of our model in app-app recommendations, even without relying on the user dataset, we conduct a comparative analysis with the KGEP model [5] as shown in Table 12. Through this performance evaluation, we quantitatively measure the efficiency and efficacy of our proposed SappKG-D model, showcasing its ability to deliver accurate recommendations. The experimental results section presents a detailed and in-depth analysis of the table, providing insights and observations derived from the comparison.
2) Building a KG
The subsequent step involves building the relationships, so we needed to model these relationships into a tangible graph.
Table 3 shows the relationship attributes for _contentRating_, which has four values: _Everyone_, _Teen_, _Everyone 10+_, and _Mature 17+_. We use Category mapping and group the _Everyone_ values into one category and repeat the same for the other values. The genre has three main values, _Photography_, _Productivity_, and _Game_, as shown in Table 3. All the sub-categories of _GAME_ were grouped into a single category, _GAMES_, to reduce the feature space. We use Category mapping on the obtained labels.
Installs attribute has 18 values- With the help of plotting we categorize these 18 groups into four groups-_(0 - 500+)_, _(1,000+ - 5,000+)_, _(1,00,000+ - 50,00,000+)_,
\begin{table}
\begin{tabular}{l l l l l l} \hline Relation No. & Relation & Head Feature & Tail Feature & No. of Groups & Related Side Information \\ \hline
0 & ADSIMILAR & AD & AD & 2 & App supports advertisements \\ \hline
1 & CRSIMILAR & CR & CR & 4 & App’s content rating data \\ \hline
2 & ECSIMILAR & EC & EC & 2 & App’s editor’s choice data \\ \hline
3 & GDSIMILAR & GId & GId & 3 & App’s genreId data \\ \hline
4 & INSSIMILAR & Installs & Installs & 4 & Number of app installs data \\ \hline
5 & IAPSIMILAR & IAP & IAP & 2 & App supports In-App-Purchases \\ \hline
6 & RTGSIMILAR & Ratings & Ratings & 5 & Number of users who have rated the app \\ \hline
7 & RELSIMILAR & Released & Released & 7 & App’s release date \\ \hline
8 & REVSIMILAR & Reviews & Reviews & 5 & Number of users who rated as well as wrote reviews \\ \hline
9 & STSIMILAR & ScoreText & ScoreText & 8 & Mean rating of all the users who rated the app \\ \hline
10 & SIZESIMILAR & Size & Size & 5 & App’s size entity data \\ \hline
11 & VSIMILAR & Video & Video & 2 & App supports video \\ \hline \end{tabular}
\end{table} TABLE 1: Head-relation-tail table.
\begin{table}
\begin{tabular}{l l l l l} \hline Attributes & adSupported & editorsChoice & offersIAP & video \\ \hline TRUE & 956 & 44 & 926 & 726 \\ \hline FALSE & 837 & 1749 & 867 & 1067 \\ \hline \end{tabular}
\end{table} TABLE 2: Relations (0, 2, 5, 11) - Building (adSupported, editorsChoice, offersIAP, video).
\begin{table}
\begin{tabular}{l l l l} \hline ContentRating & & GenreId & \\ \hline Everyone & 1369 & Photography & 550 \\ \hline Teen & 267 & Productivity & 482 \\ \hline Everyone 10+ & 113 & Games (Total) & 755 \\ \hline Mature 17+ & 44 & & \\ \hline \end{tabular}
\end{table} TABLE 3: Relations (1, 3) - Building (ContentRating and GenreId).
(1,00,00,000+ - 500,00,00,000+)_. We use Interval mapping to group the values present in a particular interval into its interval-specific categories. Fig. 4 shows the install key-value pair plot.
Table 4 has four relations. The relations _Ratings_ and _Reviews_ have continuous distributions of values. We use _qcut_, a quantile-based discretization function that divides the underlying data into equal-sized bins. The function defines the bins using percentiles based on the distribution of the data, not the actual numeric edges of the bins. So, there are a total of five quantile ranges: _0-0.2_, _0.2-0.4_, _0.4-0.6_, _0.6-0.8_, _0.8-1.0_. Each value is assigned to the appropriate quantile bin and labelled accordingly from _0_ to _4_.
The relation _Released_ has a continuous distribution of dates. We subtract the Released date from today's date to get the number of days _(Number of days = Today's date - Released)_. We use interval mapping and put these values into intervals, grouping them as released within _1 month_, _2 months_, _3 months_, _4 months_, _5 months_, _6 months_, _7 months_, _8 months_, _9 months_, _10 months_, _11 months_, _12 months_, and released after _a year_. The relation _ScoreText_ has a discrete set of values grouped from _0.0_ to _5.0_, constituting fifty intervals. We use _qcut_ to divide the data into 8 quantile ranges to differentiate them. The relation _Size_ has a continuous distribution of values in an object format. We normalize the data into a _Kb_ format. We then use interval mapping and put these values into intervals, binning them into values _0-6_. The bins are labelled as _(0-1)-0_, _(1-20000)-1_, _(20000-40000)-2_, _(40000-60000)-3_, _(60000-80000)-4_, _(80000-100000)-5_, _(100000-200000)-6_. The apps that take less than _1 kb_ space, _vary with the device_, or have no size specifications have been grouped into a single category. The next categories are apps with sizes less than _20 Mb_, _40 Mb_, _60 Mb_, _80 Mb_, and _100 Mb_. The last category is apps with more than _100 Mb_.
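Since the text names _qcut_ explicitly, the quantile and interval mappings can be reproduced with pandas as sketched below; the column names are illustrative, not from the paper's code.

```python
import pandas as pd

# Quantile mapping: equal-sized bins labelled 0-4 (ratings, reviews) and 0-7 (scoreText).
df["ratings_bin"] = pd.qcut(df["ratings"], q=5, labels=False)
df["reviews_bin"] = pd.qcut(df["reviews"], q=5, labels=False)
df["scoreText_bin"] = pd.qcut(df["scoreText"], q=8, labels=False, duplicates="drop")

# Interval mapping for size (in Kb): 7 bins labelled 0-6, matching the edges in the text.
size_edges = [0, 1, 20_000, 40_000, 60_000, 80_000, 100_000, 200_000]
df["size_bin"] = pd.cut(df["size_kb"], bins=size_edges, labels=False, include_lowest=True)
```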
3) Knowledge Graph Quality Check
In order to examine the issues encountered while designing a knowledge graph, we examine its statistical characteristics, which give us a close idea of its structural properties [25].
To measure the interdependence between the different relationships, Schnabel et al. used a cosine-based similarity of the embeddings of two words to measure human relatedness scores and presented a novel evaluation framework based on direct comparisons between embeddings. They developed a novel Coherence task to measure the intuition that neighbourhoods in the embedding space should be semantically or syntactically related [26]. To measure the relatedness, we compute a relatedness score between every two relations. To compute the relatedness between labels \(r_{i}\) and \(r_{j}\), we first define the support of \(r_{i}\rightarrow r_{j}\), as shown in (1):
\[supp(r_{i}\ \rightarrow\ r_{j}\ )=\frac{|r_{j}\cap r_{i}|}{|r_{i}|} \tag{1}\]
Here \(|r_{i}|\) and \(|r_{j}|\) denote the number of nodes that have relation \(r_{i}\) or \(r_{j}\), respectively, and \(|r_{i}\cap r_{j}|\) denotes the number of nodes that have both relations \(r_{i}\) and \(r_{j}\). The support function is not symmetric; inspired by the definition of the F-measure, the relatedness of \(r_{i}\) and \(r_{j}\) is defined as follows in (2):
\[R(r_{i},r_{j})=\frac{2\cdot supp(r_{i}\rightarrow r_{j})\cdot supp(r_{j}\rightarrow r_{i})}{supp(r_{i}\rightarrow r_{j})+supp(r_{j}\rightarrow r_{i})} \tag{2}\]
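Computed over sets of nodes, Eqs. (1)-(2) reduce to a few lines; the harmonic-mean form of Eq. (2) follows the F-measure analogy stated above, which is our reading of the (partially garbled) original formula.

```python
def support(nodes_with: dict, ri: str, rj: str) -> float:
    """supp(ri -> rj) = |ri ∩ rj| / |ri|, Eq. (1); nodes_with maps relation -> set of nodes."""
    return len(nodes_with[ri] & nodes_with[rj]) / len(nodes_with[ri])

def relatedness(nodes_with: dict, ri: str, rj: str) -> float:
    """Symmetric relatedness of Eq. (2): harmonic mean of the two directed supports."""
    s_ij = support(nodes_with, ri, rj)
    s_ji = support(nodes_with, rj, ri)
    return 2 * s_ij * s_ji / (s_ij + s_ji) if (s_ij + s_ji) else 0.0
```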
The proposed framework consists of five modules, as shown in Fig. 3: the first module is related to Data Collection and Preprocessing; the second is Building a Knowledge Graph; the third is Building Node Embeddings; the fourth builds and trains the KG on different shallow embeddings using the SAppKG-S model; and the fifth is SAppKG-D, a deep embedding-based approach for embedding propagation through a modified KGCN.
The two proposed methods SAppKG-S and SAppKG-D are described below.
1) SAppKG-S: To generate node embeddings, most approaches rely on a shallow embedding method that performs an embedding lookup to map nodes to their embeddings. Each node is then trained to produce a unique embedding. In this paper, we introduce the SAppKG-S method, which also employs a shallow embedding approach for recommending apps in a knowledge graph. We compare the performance of various shallow embedding models in the SAppKG-S framework.
We use Pykg2vec, a library built on PyTorch that learns representations of entities and relations. State-of-the-art Knowledge Graph Embedding algorithms are implemented in this library. Table 7 shows the different embedding models used in our study, categorized on the basis of their model training into 3 categories (minimal sketches of a few of these score functions are given after the list):
1. Pairwise (margin) based Training KGE Models
   * NTN [27]: The Neural Tensor Network (NTN) is a neural embedding model designed for Knowledge Graphs. It incorporates second-order correlations into nonlinear neural networks. The score function is given in (3): \[f_{r}(h,t)=u_{r}^{T}\,f\!\left(h^{T}\widehat{W}_{r}t+M_{r}\begin{bmatrix}h\\ t\end{bmatrix}+b_{r}\right)\] (3)
   * TransD: TransD represents each entity and relation with two vectors: one vector describes the entity's meaning, and the other dynamically builds the mapping matrices \(M_{rh}\), \(M_{rt}\). These matrices project head and tail entities from the entity space to the relation space. The score function is shown in (6): \[f_{r}(h,t)=-\left\|(r_{p}h_{p}^{T}+I)e_{h}+r-(r_{p}t_{p}^{T}+I)e_{t}\right\|^{2}\] (6)
   * Rescal [31]: Following the Tensor Factorization methodology, the Rescal model transforms the Knowledge Graph's (head, relation, tail) triples into a three-way tensor \(X\) of dimensions \(n\times n\times m\), where \(n\) is the number of entities, \(m\) is the number of relations, and \(X_{ijk}=1\) indicates that there is a relation between the corresponding entities. The RESCAL model computes a score for a triple using the formula in (7): \[f_{r}(h,t)=h^{T}M_{r}t\] (7) where \(h\), \(t\in R^{d}\) are embeddings of the head and tail, and \(M_{r}\in R^{d\times d}\) is the matrix form of the relation.
   * RotatE [32]: Before RotatE, it was not feasible to describe complex relation patterns like symmetry/antisymmetry, inversion, and composition in a Knowledge Graph. The RotatE model follows Euler's identity \(e^{i\theta}=\cos\theta+i\sin\theta\) and maps entities and relations onto a complex vector space. It accomplishes this by defining each relation as a rotation from the source entity to the target entity. RotatE follows the score function given in (8): \[f_{r}(h,t)=-\|h\circ r-t\|^{2}\] (8) where \(\circ\) represents the Hadamard product.
2. Pointwise based Training KGE Models
   * ComplEx [33]: ComplEx embedding strategies attempt to address the problem of representing antisymmetric relationships in the Knowledge Graph. It accomplishes this by calculating the Hermitian dot product of head and tail entities. The score function it uses is given by (9): \[f_{r}(h,t)=Re(\langle h,r,\bar{t}\rangle)\] (9) where \(\langle\cdot,\cdot,\cdot\rangle\) denotes the generalized dot product and \(\bar{t}\) denotes the complex conjugate of \(t\).
   * DistMult [34]: The DistMult embedding model is a simplified version of the Rescal model, which transforms the relation matrix \(M_{r}\) into a diagonal matrix. As a consequence, there is a decrease in the number of parameters, leading to increased scalability and improved performance. The score function is thus given in (10): \[f_{r}(h,t)=h^{T}diag(r)t\] (10)
   * SimplE [35]: The SimplE embedding model improves upon the canonical polyadic decomposition method of tensor factorization by obtaining two interdependent embeddings for each entity. The scoring function of SimplE is defined in (11) as the average of the triples \((h_{i},r,t_{j})\) and \((h_{j},r^{-1},t_{i})\), i.e. \[f_{r}(h,t)=\frac{1}{2}\left(\langle h_{i},r,t_{j}\rangle+\langle h_{j},r^{-1},t_{i}\rangle\right)\] (11) where \(r^{-1}\) represents the inverse of relation \(r\).
3. Projection-Based (Multiclass) Training KGE Models
   * TuckER [36]: TuckER is a generalized version of the linear models Rescal, DistMult, ComplEx, and SimplE. TuckER uses the Tucker decomposition of binary tensors to represent (h, r, t) triples in a knowledge graph. The scoring function followed by TuckER is given in (12): \[f_{r}(h,t)=W\times_{1}h\times_{2}w_{r}\times_{3}t\] (12) where \(\times_{i}\) denotes the tensor product along the \(i\)-th mode and \(W\in\mathbb{R}^{n\times n\times m}\) (\(n\): number of entities, \(m\): number of relations).
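To make a few of the scoring functions above concrete, the following is a minimal PyTorch sketch, not the paper's code; embedding dimensionality and tensor shapes are assumptions, and RotatE relations are parameterized here by phase vectors:

```python
import torch

# Sketches of selected score functions from the list above (single triple).
def score_transe(h, r, t):        # TransE-style: -||h + r - t||
    return -torch.linalg.vector_norm(h + r - t)

def score_distmult(h, r, t):      # Eq. (10): h^T diag(r) t
    return torch.sum(h * r * t)

def score_complex(h, r, t):       # Eq. (9): Re(<h, r, conj(t)>), complex tensors
    return torch.real(torch.sum(h * r * torch.conj(t)))

def score_rotate(h, phase, t):    # Eq. (8): -||h o r - t||^2, with |r_i| = 1
    r = torch.polar(torch.ones_like(phase), phase)
    return -torch.linalg.vector_norm(h * r - t) ** 2

d = 8
h_r, r_r, t_r = (torch.randn(d) for _ in range(3))
h_c, t_c = (torch.randn(d, dtype=torch.cfloat) for _ in range(2))
phase = torch.rand(d) * 2 * torch.pi
print(score_transe(h_r, r_r, t_r).item(), score_rotate(h_c, phase, t_c).item())
```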
Algorithm 1 gives the working of our proposed approach. Pykg2vec shortens the computational time by generating mini-batches through the utilization of multi-processing units. We pass the _(Head, Tail, Relation)_ KG file into the KG controller, which looks after the parsing tasks and generates the training, test, and validation sets. Control then moves to the Batch Generator, which queues the mini-batches. These mini-batches are then processed by the core models that contain state-of-the-art Knowledge Graph embedding algorithms. Each module has a loss function along with embedding operations.
The models are supplied with a configuration file that gives the necessary information to parse the datasets along with the baseline hyperparameters. The next part is the Training module which takes an instance of the KGE model and trains the model. The last part is the Evaluator that performs link prediction and provides the accuracy scores of the models. A Bayesian hyperparameter optimizer helps find the optimal golden hyperparameter by minimizing the loss function and improving the accuracy of our model. This Bayesian optimizer performs better than the brute force approaches and hence helps in significantly reducing the computational time.
The general principle involves learning the relations and entities that have been represented as facts. A negative sampling technique is followed, where a chunk of negative triples is sampled from the set of positive triples by corrupting their entities. A scoring function is used to penalize the negative triples and reward the positive ones. The KG methods are evaluated based on their ability to predict the missing entity values in the negative triples _(?, r, t)_ or _(h, r, ?)_, with the Bayesian optimization algorithm steering the maximization or minimization of the scoring function. We perform 2 experiments for the task of Link Prediction. In the 1\({}^{st}\) experiment we train our Knowledge Graph across the different embedding models given in Table 7. In experiment 2 we perform a set of 4 sub-experiments where we drop a group of similar attributes and analyze the impact on the task of Link Prediction, as shown in Table 9 and Table 10.
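As a concrete illustration of the corruption step, here is a minimal sketch (our own, not pykg2vec internals) that produces filtered negative triples of the form _(?, r, t)_ or _(h, r, ?)_:

```python
import random

# Corrupt either the head or the tail of a positive triple with a random
# entity, skipping corruptions that are themselves known positives.
def sample_negative(triple, entities, positives):
    h, r, t = triple
    while True:
        if random.random() < 0.5:
            cand = (random.choice(entities), r, t)   # predicts (?, r, t)
        else:
            cand = (h, r, random.choice(entities))   # predicts (h, r, ?)
        if cand not in positives:
            return cand

positives = {("app1", "HAS_GENRE", "Games"), ("app2", "HAS_GENRE", "Tools")}
entities = ["app1", "app2", "Games", "Tools", "Photography"]
print(sample_negative(("app1", "HAS_GENRE", "Games"), entities, positives))
```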
#### 2) SAppKG-D
The shallow embedding approaches suffer from drawbacks; they lack syntactic representation. To alleviate these limitations, shallow encoders can also be replaced with more sophisticated encoders that depend more generally on the structure and attributes of the graph. Some encoders can be generalized beyond the shallow embedding. For instance, the encoder can use node features or the local graph structure around each node as an input to generate an embedding. The key idea is that we want to generate representations of nodes that actually depend on the structure of the graph, as well as any feature information possessed by the graph. This structural information can be useful for many tasks.
In this paper, we propose a novel approach for app recommendation using knowledge graph embeddings. Our model, called SAppKG-D, combines a shallow embedding approach with a relation-weighted graph convolution-based deep embedding technique to extract higher-order semantic information from the knowledge graph. We address the limitations of shallow embeddings by using an encoder that depends on the graph structure and attributes to generate node embeddings that incorporate the structural information of the graph. Specifically, we use TransD embeddings and employ a neighbor aggregation process to propagate information between nodes in the graph, capturing higher-order connectivity and relations between entities. Our model is trained using a negative sampling strategy and the Adam optimizer to learn the recommendation model parameters. The model is evaluated using precision, recall, and mean average precision (MAP-N) metrics. Algorithm 2 gives the working of our SAppKG-D model. SAppKG-D uses a graph convolution-based deep embedding technique to generate node embeddings. Mathematically, the model is represented in (13):
\[W_{e}=[w_{e,s};w_{e,d}]\in\mathbb{R}^{d_{s}+d_{d}} \tag{13}\]
here, \(w_{e,s}\) and \(w_{e,d}\) represent the shallow and deep embeddings, respectively.
To incorporate the influence of neighboring apps, we employ an aggregation process in our proposed graph. In the knowledge graph (KG), the user-specific app attributes, such as installs, ratings, and other relevant user-centric information, dynamically change with time and are influenced by the availability of other similar competitor apps. Therefore, our model aggregates information from tail apps to head apps, taking into account the user attributes associated with each app. Additionally, we introduce a weighted aggregation mechanism that considers the side information of the neighboring apps, enabling us to capture the relevant user-centric attributes and characteristics of neighboring apps. This approach enhances the overall representation and understanding of each app within the graph, considering the impact of user preferences and interactions. Specifically, given an app _app1_ and a node _app2_ in the ARKG \(G\), we define \(N_{app2}=\{(h,r,t)\mid(h,r,t)\in G,\ h=app2\}\) as the set of triplets where _app2_ is the head entity. The aggregated vector of neighbors for _app2_, specific to _app1_, is computed as follows in (14):
\[v_{app1}^{N_{app2}}=\sum_{(h,r,t)\in N_{app2}}w_{app1}^{r}\,\mathbf{t} \tag{14}\]
here, \(\mathbf{t}\in\mathbb{R}^{d}\) represents the vector of the tail entity \(t\), and \(w_{app1}^{r}\) is the weight between app _app1_ and relation \(r\), characterizing the importance of relation \(r\) to app _app1_. The weight \(w_{app1}^{r}\) is computed as follows in (15):

\[w_{app1}^{r}=\frac{\exp(\pi(\mathbf{app1},\mathbf{r}))}{\sum_{(h,r^{\prime},t)\in N_{app2}}\exp(\pi(\mathbf{app1},\mathbf{r^{\prime}}))} \tag{15}\]
In the above equations, \(\mathbf{app1}\in\mathbb{R}^{d}\) and \(\mathbf{r}\in\mathbb{R}^{d}\) represent the embeddings of app _app1_ and relation \(r\), respectively. The function \(\pi:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) is a weight score function that maps two vectors from the Euclidean space \(\mathbb{R}^{d}\) to a single scalar score. In our model, this function is used to calculate the weights between the app _app1_ and the relation \(r\).
To seamlessly update the embeddings for the next layer, we perform the following steps. For each app node _app1_, we concatenate its current representation \(\mathbf{v}_{app1}\) with the aggregated vector of its neighboring apps \(\mathbf{v}_{app1}^{N_{app2}}\). This concatenation captures the combined information from the app itself and its neighboring apps. The concatenated vector is then passed through a fully connected layer with a nonlinear activation function \(\sigma\), which transforms it into the new representation of _app1_. The update process is formulated as given in (16):

\[\mathbf{v}_{app1}^{\prime}=\sigma\left(\mathbf{W}\cdot\left[\mathbf{v}_{app1}\,\|\,\mathbf{v}_{app1}^{N_{app2}}\right]+\mathbf{b}\right) \tag{16}\]
here, \(\mathbf{v}_{app1}^{{}^{\prime}}\) (i.e., the output of this layer) represents the new representation of the app node _app1_ specific to its connections with other apps. The transformation weight and bias are denoted as \(\mathbf{W}\) and \(\mathbf{b}\), respectively. The symbol "\(\|\)" denotes the concatenation operation, which combines the current representation of _app1_ with the aggregated vector of its neighboring apps. This process ensures that the embeddings are continuously updated and refined as the information flows through the layers.
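The following is a minimal sketch of Eqs. (14)-(16) as reconstructed above; the dot-product form of \(\pi(\cdot)\), the sigmoid nonlinearity, and all tensor shapes are our assumptions:

```python
import torch
import torch.nn.functional as F

# One aggregation-and-update step for a single app node.
def aggregate_and_update(app1_emb, v_app1, rel_embs, tail_embs, W, b):
    # Eq. (15): pi(app1, r) taken as a dot product, softmax-normalized
    # over app1's view of app2's neighborhood.
    w = F.softmax(rel_embs @ app1_emb, dim=0)            # [num_neighbors]
    # Eq. (14): relation-weighted sum of tail-entity embeddings.
    v_neigh = (w.unsqueeze(1) * tail_embs).sum(dim=0)    # [d]
    # Eq. (16): concatenate, linearly transform, apply nonlinearity.
    return torch.sigmoid(W @ torch.cat([v_app1, v_neigh]) + b)

d, num_neighbors = 16, 5
out = aggregate_and_update(torch.randn(d), torch.randn(d),
                           torch.randn(num_neighbors, d),
                           torch.randn(num_neighbors, d),
                           torch.randn(d, 2 * d), torch.randn(d))
print(out.shape)  # torch.Size([16])
```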
After updating the embeddings for each app node in the previous step, our next objective is to incorporate higher-order connectivity, which plays a crucial role in improving the quality of recommendations. To achieve this, we leverage the concept of information propagation among different layers of the knowledge graph. This enables us to capture higher-order structural proximity among entities and enhance the representation of their relationships.
In our approach, we stack multiple propagation layers, typically _K_-1 layers, when we are at the \(K^{th}\) level. This process allows us to propagate information and update the embeddings based on aggregated information from neighboring entities. By iteratively aggregating and incorporating information from multiple layers, we can capture and integrate higher-order dependencies and structural patterns within the knowledge graph, leading to more accurate and comprehensive recommendations.
To formalize this process, we utilize Equation (16) for propagating embeddings along higher-order connectivity. For convenience, we denote the representation of node \(v\) specific to _app1_ at depth \(k\) as \(\mathbf{v}_{app1}^{(k)}\), which combines the initial representations of node \(v\) and its neighbors up to \(k\) hops away. This information propagation technique is illustrated in Figure 3. By incorporating higher-order connectivity, we can effectively capture the structural proximity between entities and enhance the recommendation process. Through a series of information propagation layers, our model updates the representations of app nodes to capture their higher-order dependencies and long-range inter-relatedness. The final embedding \(\mathbf{v}_{app1}^{(K)}\) of an app node \(v\) specific to app _app1_ reflects its structural connections up to \(K\) hops. Additionally, leveraging the general knowledge graph (KG) embeddings, we gain insights into the relational distances between entities.
To predict interactions between apps _app1_ and _app2_, we combine their embeddings into a unified vector as shown in Equation (17):

\[\mathbf{a}_{app1}^{*}=\mathbf{a}_{app1}\ \|\ \mathbf{a}_{app1}^{(K)} \tag{17}\]

In this equation, the vector \(\mathbf{a}_{app1}\) captures the relationship-specific information and the characteristics of app _app1_ derived from the KG embeddings. On the other hand, the embedding \(\mathbf{a}_{app1}^{(K)}\) represents the final output specific to app _app1_, obtained through the convolutional embedding propagation component. By concatenating these vectors, we create a unified representation that combines both relationship-specific information and the learned characteristics of the app, enabling us to predict their interaction. The matching score between app _app1_ and app _app2_ is computed by taking the inner product of their embeddings as shown in (18):

\[y_{app1,app2}=\mathbf{a}_{app1}^{*}\cdot\mathbf{a}_{app2}^{*} \tag{18}\]
To train our app recommendation model, we employ negative sampling and utilize a binary cross-entropy loss function with \(L_{2}\) norm regularization, as shown in (19):

\[L_{CE}=-\sum_{app1\in\mathcal{A}}\Bigg(\sum_{app2\in Trn_{app1}}\log(\hat{y}_{app1,app2})+\sum_{i\in Neg_{app1}}\log\big(1-\hat{y}_{app1,i}\big)\Bigg)+\lambda\|\Theta\|_{2}^{2} \tag{19}\]
here, \(\mathcal{A}\) represents the set of all apps, \(Trn_{app1}\) denotes the training instances involving app _app1_, and \(Neg_{app1}\) represents the randomly sampled negative app instances associated with app _app1_. The term \(\hat{y}_{app1,app2}\) denotes the predicted probability of a positive interaction between app _app1_ and app _app2_, while \(\hat{y}_{app1,i}\) represents the predicted probability of a positive interaction between app _app1_ and a negative app instance \(i\). The first part of the loss function corresponds to the log-likelihood of positive interactions, while the second part captures the log-likelihood of negative interactions. In addition, the regularization term \(\lambda\|\Theta\|_{2}^{2}\) is included to prevent overfitting. Here, \(\lambda\) represents the regularization coefficient, and \(\Theta\) denotes the model parameters.
To optimize the model, we use the Adam optimizer, a popular optimization algorithm known for its efficiency and effectiveness in training deep neural networks. By minimizing the loss function (19) using negative sampling and regularization, our model learns to make accurate predictions and generate meaningful app recommendations.
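A minimal sketch of the scoring and training step corresponding to Eqs. (17)-(19) as reconstructed above; the embedding lookups, negative-pair sampling, and the sigmoid squashing of the inner product are assumptions of ours:

```python
import torch

# One optimization step over positive and sampled negative app pairs.
# kg_emb / conv_emb map an app id to its KG and propagated embeddings.
def training_step(kg_emb, conv_emb, pos_pairs, neg_pairs, params, opt, lam=1e-7):
    def y_hat(a1, a2):                     # Eqs. (17)-(18), squashed to (0, 1)
        u = torch.cat([kg_emb[a1], conv_emb[a1]])
        v = torch.cat([kg_emb[a2], conv_emb[a2]])
        return torch.sigmoid(u @ v)

    loss = -sum(torch.log(y_hat(a, b)) for a, b in pos_pairs) \
           - sum(torch.log(1 - y_hat(a, i)) for a, i in neg_pairs)
    loss = loss + lam * sum(p.pow(2).sum() for p in params)   # L2 term

    opt.zero_grad()      # opt would be torch.optim.Adam(params)
    loss.backward()
    opt.step()
    return float(loss)
```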
To assess the effectiveness of our model, we utilize several performance metrics, namely precision (P), recall (R), and mean average precision (MAP-\(N\)). Precision is calculated as the ratio of the number of recommended relevant apps (TP) to the total number of recommended apps (TP + FP), and can be expressed as (20):
\[\text{P}=\frac{\text{TP}}{\text{TP}+\text{FP}} \tag{20}\]
Recall measures the ratio of the number of recommended relevant apps (TP) to the total number of relevant apps (TP + FN), and is given by equation (21):
\[\text{R}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{21}\]
Mean average precision (MAP-\(N\) ) considers the precision at each relevant position in the top-\(N\) recommended apps and calculates the average across all relevant apps. It can be formulated as (22):
\[\text{MAP-}N=\frac{1}{|R|}\sum_{app\in R}\text{Prec}(app) \tag{22}\]
In this equation, \(R\) represents the set of relevant apps, \(|R|\) denotes the total number of relevant apps, and \(\mathcal{A}_{c}\) represents the set of recommended apps. Prec(app) denotes the precision at each relevant app position, and Rank(app) represents the position of each relevant app in the recommended list. The equation captures the essence of MAP-\(N\), which evaluates the average precision of relevant apps considering their positions in the recommended list. These metrics provide a comprehensive evaluation of the model's performance in generating relevant app recommendations, considering both the accuracy and comprehensiveness of the recommendations.
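A minimal sketch of these three metrics, assuming a ranked recommendation list and a set of relevant apps:

```python
# Evaluate one ranked recommendation list against the relevant set.
def precision_recall_map(recommended, relevant, n):
    top_n = recommended[:n]
    tp = sum(app in relevant for app in top_n)
    precision = tp / len(top_n)                     # Eq. (20)
    recall = tp / len(relevant)                     # Eq. (21)

    hits, ap = 0, 0.0                               # Eq. (22)
    for rank, app in enumerate(top_n, start=1):
        if app in relevant:
            hits += 1
            ap += hits / rank   # precision at this relevant position
    return precision, recall, ap / len(relevant)

print(precision_recall_map(["a", "b", "c", "d"], {"a", "c", "e"}, n=4))
```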
## IV Experimental Setup
We scraped the data from the Google Play Store for different categories to train the model over a diverse set of data categories, as reported in Table 6. _Photography_ and _Productivity_ contain the extracted apps from four subcategories, namely _Top Free_, _Top Paid_, _Grossing_ and _Trending_. For _Games_, the extracted apps are from five sub-categories, namely _Top Free_, _Top Paid_, _Grossing_, _New Free_, and _New Paid_. The _New Free_ and _New Paid_ sub-categories contain apps that were launched within a month of today's date. As the Games genre didn't have a Trending category, we extracted the New Free and New Paid categories instead.
The metrics used for evaluating the Link Prediction task include the following:
1. _Mean Rank [37]_ - Lists the rank of the correct answer in the predicted list and reports their mean, as shown in (23); a minimal sketch of these rank-based metrics follows below. \[MR=\frac{1}{|Q|}\sum_{i=1}^{|Q|}rank_{i} \tag{23}\]
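Given the rank of the true entity for every test query, MR, MRR and Hits@K follow directly; a minimal sketch:

```python
import numpy as np

# Rank-based link-prediction metrics (Eq. 23 for MR).
def rank_metrics(ranks, ks=(1, 3, 5, 10)):
    ranks = np.asarray(ranks, dtype=float)
    out = {"MR": ranks.mean(), "MRR": (1.0 / ranks).mean()}
    for k in ks:
        out[f"Hits@{k}"] = float((ranks <= k).mean())
    return out

print(rank_metrics([1, 4, 2, 120, 7]))
```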
## V Experimental Results
### SAppKG-S
SAppKG-S is a shallow embedding-based methodology that proposes apps using shallow embedding models in a knowledge graph while also providing comparisons between various such models through the metrics of MR, MRR, and Hits@K.
#### Exp-1
Table 7 shows the Mean Rank, Mean Reciprocal Rank, and Hits@K scores for the different embedding models. A lower Mean Rank is better, and from Table 7 we can observe that the RotatE model has the lowest mean ranking score and hence the best performance, followed by the translation-based models TransH and TransD.

The mean reciprocal rank column has a score range of 0 to 1. The ComplEx model has a score of 0.668, which is the nearest to 1 and hence the best among all the models, followed by RotatE and CP. The next 4 columns report Hits@(1,3,5,10). It can be observed that RotatE and ComplEx perform the best: these models have scores in the range of 0.6-0.8, followed by TransH, DistMult, CP, KG2E, and TransD. We compare model performance by taking averages across all the hits.

From Table 7, we can conclude that the neural-network- and projection-based models perform the worst, as they are computationally costly. RotatE performs the best as it replaces the traditional translation-based operations with a rotation-based operation, helping to distinguish different relation patterns like antisymmetry, composition, and symmetry. RotatE outperforming the other models indicates that the rotation-based operation helps model the different relation types and their semantic aspects [38].

Compared to translation-based models such as TransE, TransD, and TransH, and to DistMult, the ComplEx and RotatE models gain an advantage through multi-part high-dimensional embeddings, achieving state-of-the-art performance on link prediction; all parts of the embeddings are adjusted simultaneously in these models [39]. Model optimization and selection of the appropriate loss during training is of utmost importance: most distance- and translation-based models like TransE, TransD, and TransH use a margin rank loss function; ComplEx uses a binary logistic loss function to obtain the best results; RotatE tweaks the loss function without the regularization term by adding a margin parameter that significantly enhances performance [39].
#### Exp-2
After obtaining the model training scores and performances, we evaluate the effect of the trained KG attributes on the KG performance. For this experiment, we group the attributes into four categories labelled Exp 1, Exp 2, Exp 3 and Exp 4. Attributes that convey a similar category or type of side information and are highly correlated with one another are grouped together. Table 8 shows the list of attributes that have been grouped together along with their grouping description. The grouped attributes are dropped from the Knowledge Graph, the modified graph is trained on the rest of the attributes, and its performance is noted for RotatE and ComplEx embeddings. We train all four modified graphs on RotatE and ComplEx, since the performance of the original graph is the best for these two models; hence, we limit our training analysis to them. Table 9 shows the performance of the four modified graphs on ComplEx. We find that dropping the released and size attributes in Exp 3 improves the performance of the ComplEx model, probably because these attributes were adding noise to the predictions. Dropping the features in Exp 4 doesn't change much, and the model performance remains close to the original. Dropping the features in Exp 2 leads to a slight decrease in the overall performance. Dropping the attributes in Exp 1 decreased our model performance significantly, making it close to zero. Hence, we can conclude that dropping the _released_ and _size_ attributes while preserving the binary features improves performance on the ComplEx model.
Table 10 highlights the performance of the modified graphs on RotatE. Based on the table, we can conclude that dropping the attributes in Exp 4 leads to an improvement in the model's performance. On the other hand, dropping the features in Exp 3 does not significantly alter the performance of the original model. Additionally, dropping the features in Exp 2 leads to a slight decrease in the overall model performance, whereas dropping the attributes in Exp 1 causes a moderate decrease in the model's performance. Therefore, it can be concluded that preserving the binary feature group would improve the model performance, and this result aligns with the ComplEx model results. In summary, the results suggest that certain attribute groups can be dropped to improve the model's performance, while preserving other groups, such as the binary features, can positively impact it.
#### Exp-3
In our third experiment, we evaluated the performance of the User-App Knowledge Graph dataset, used by [5], on the SAppKG-S model. Our results show that the mean ranking and mean reciprocal ranking of our Knowledge graph outperform the User-App KG for the TransD model. Although the User-App KG performs better than our KG for Hits@1, our KG stands out from the User-App KG for Hits@(3,5,10). The comparison results of SAppKG-S and KGEP are shown in Table 11. Specifically, the filtered mean rank and MRR of our SAppKG-S model are better than those of the KGEP model. Additionally, our SAppKG-S model outperforms the KGEP model for Hits@(3,5,10), while the KGEP model performs slightly better than our SAppKG-S model for Hits@1.
SAppKG-D combines shallow embeddings with a relation-weighted graph convolution-based deep embedding technique to extract higher-order semantic information from the knowledge graph to propose apps.
#### Exp-4
In our fourth experiment, we evaluated the performance of our Knowledge Graph (KG) on the Graph Convolutional Network (GCN) used by Zhang et al. [5] for training and testing the User-App dataset. To establish a baseline for comparison, we used their results as our reference. Table 12 shows that our KG performed slightly worse in terms of precision, with a margin of -8% to -20% compared to the standard baseline, while we saw a positive deviation margin ranging from 15% to 49% in recall. In terms of Mean Average Precision (MAP-N), our model performed slightly better, with a gain of up to 7%.
To achieve the best possible results, we experimented with various hyperparameters and identified the following settings as optimal: aggregator type as concat, a neighbour sample size of 7, embedding dimensions of 16, one iteration for computing entity representation, a batch size of 10, L2 regularizer weight of 1e-7, and a learning rate of 0.005. We trained our model for 200 epochs, using a training/test/validation split ratio of 0.6, 0.2, and 0.2, and achieved the best result at the 69th epoch.
To evaluate the performance of our graph, we perform relation prediction using SAppKG-D. As we only have 11 relations, we make predictions for 1, 3, 5, and 7 relations and measure precision, recall, and MAP-N for each. We find that all three metrics improve linearly as the number of predicted relations increases, with the best results obtained for predicting 7 relations.
For relation prediction, we use the hyperparameters: aggregator type of concat, embedding dimension of 16, one iteration for computing entity representation, a batch size of 10, L2 regularizer weight of 1e-7, the learning rate of 0.005, and neighbour sample size of 5. We use a training/test/validation split ratio of 0.6/0.2/0.2 and train for 200 epochs. Table 14 shows that with these hyperparameters we achieve a precision of 0.291, recall of 0.622, and MAP-N of 0.383, representing a significant improvement over our baseline results.
#### Exp-5
In our fifth experiment, we focused on evaluating the average inference time of the SAppKG-S and SAppKG-D models for entity prediction on the Google Play Store and Apple App Store datasets. To ensure robustness and facilitate a fair comparison between Android and iOS apps, we adhered
**Table 7: Training Results**

| Model | Filtered MR | Filtered MRR | Hits@1 | Hits@3 | Hits@5 | Hits@10 |
| --- | --- | --- | --- | --- | --- | --- |
| ComplEx | 107.6109 | **0.6684** | **0.5918** | **0.737** | 0.7574 | 0.7767 |
| CP | 334.0777 | 0.4537 | 0.3723 | 0.5215 | 0.5495 | 0.5759 |
| DistMult | 143.8794 | 0.355 | 0.1639 | 0.5212 | 0.5876 | 0.6403 |
| KG2E | 159.3505 | 0.1962 | 0 | 0.31 | 0.472 | 0.6074 |
| NTN | 309.7834 | 0.0926 | 0.0374 | 0.0918 | 0.13 | 0.2042 |
| Rescal | 121.0371 | 0.0894 | 0.0286 | 0.0808 | 0.1248 | 0.2095 |
| RotatE | **143.7446** | 0.5053 | 0.3172 | 0.6591 | **0.7644** | **0.6837** |
| SimplE | 889.9388 | 0.0049 | 0.0007 | 0.002 | 0.0037 | 0.0069 |
| TransD | 88.5658 | 0.1975 | 0.0008 | 0.2876 | 0.475 | 0.6471 |
| TransE | 125.9044 | 0.2101 | 0 | 0.3442 | 0.5062 | 0.6324 |
| TransH | 73.1269 | 0.2341 | 0 | 0.3635 | 0.5725 | 0.7309 |
| TuckER | 913.3816 | 0.0036 | 0.0001 | 0.0006 | 0.0016 | 0.0047 |

**Table 9: ComplEx Results**

| Experiment | Filtered MR | Filtered MRR | Hits@1 | Hits@3 | Hits@5 | Hits@10 |
| --- | --- | --- | --- | --- | --- | --- |
| Exp-1 | 87.4976 | 0.0041 | 0 | 0 | 0.0017 | 0.0043 |
| Exp-2 | 241.758 | 0.0473 | 0.4501 | 0.5455 | 0.5707 | 0.6699 |
| Exp-3 | 65.3812 | 0.0896 | 0.5945 | 0.7641 | 0.8078 | 0.8477 |
| Exp-4 | 241.791 | 0.0637 | 0.5039 | 0.7045 | 0.7111 | 0.7178 |

**Table 10: RotatE Results**

| Experiment | Filtered MR | Filtered MRR | Hits@1 | Hits@3 | Hits@5 | Hits@10 |
| --- | --- | --- | --- | --- | --- | --- |
| Exp-1 | 23.7441 | 0.3209 | 0.15 | 0.6033 | 0.5475 | 0.725 |
| Exp-2 | 16.7458 | 0.3535 | 0.127 | 0.4444 | 0.5948 | 0.7677 |
| Exp-3 | 12.2457 | 0.4485 | 0.238 | 0.3013 | 0.3713 | 0.3839 |
| Exp-4 | 37.7873 | 0.6788 | 0.5844 | 0.7555 | 0.8179 | 0.8847 |
to a standardized data scraping and knowledge graph construction process, as described in the methodology section. This approach allowed us to curate a dataset comprising 1793 apps, carefully selected to match the size and categories of the Google Play Store dataset. To assess the performance of our model, we conducted experiments using a test set consisting of 20% of the total apps in our dataset, resulting in 358 apps. In order to measure the average inference time for each model (SAppKG-S with ComplEx, SAppKG-S with RotatE, and SAppKG-D), we employed Equation 26.
\[T_{avg}=\frac{1}{N}\sum_{i=1}^{N}t_{i} \tag{26}\]

where \(N\) is the number of test apps and \(t_{i}\) is the inference time for the \(i\)-th app.
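A minimal sketch of this measurement, assuming a `predict` method as a stand-in for entity prediction:

```python
import time

# Average wall-clock inference time over the held-out apps (Eq. 26).
def avg_inference_time(model, test_apps):
    start = time.perf_counter()
    for app in test_apps:
        model.predict(app)              # stand-in for entity prediction
    return (time.perf_counter() - start) / len(test_apps)
```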
Our KG has twelve relations, and we found that modelling this many relations would be a tedious task. However, we observed that RotatE and ComplEx performed the best in this task due to their high-dimensional multi-part embeddings used to model the different relations.
We also proposed a hybrid approach called SAppKG-D, which combines shallow embedding with relation-weighted graph convolution-based deep embedding techniques to extract higher-order semantic information from the KG for app recommendations. Our results showed that our approach outperformed the baseline results. Furthermore, we found that binary attributes play an important role in preserving the structural information of the KG, and removing user data from the KG while incorporating app attributes in the form of edges gives similar results to the state-of-the-art KGEP model while maintaining user privacy. Overall, our work demonstrates the effectiveness of Knowledge Graph Embedding methods in building a mobile app recommender system and highlights the potential of our proposed approaches, SAppKG-S and SAppKG-D, for improving the performance of app recommendations.
In the future, we plan to enhance our model by increasing the graph inter-connectedness to further increase density. Given that we were limited by computational power, the graph was trained for only 500 epochs, so we can also explore training it for longer to see if performance can be improved. We also plan to increase the number of relationships and entities in the KG and connect nodes that match entities to make the graph denser and less sparse. These improvements have the potential to further enhance the performance and utility of our proposed model.
|
2309.11564 | Hierarchical reinforcement learning with natural language subgoals | Hierarchical reinforcement learning has been a compelling approach for
achieving goal directed behavior over long sequences of actions. However, it
has been challenging to implement in realistic or open-ended environments. A
main challenge has been to find the right space of sub-goals over which to
instantiate a hierarchy. We present a novel approach where we use data from
humans solving these tasks to softly supervise the goal space for a set of long
range tasks in a 3D embodied environment. In particular, we use unconstrained
natural language to parameterize this space. This has two advantages: first, it
is easy to generate this data from naive human participants; second, it is
flexible enough to represent a vast range of sub-goals in human-relevant tasks.
Our approach outperforms agents that clone expert behavior on these tasks, as
well as HRL from scratch without this supervised sub-goal space. Our work
presents a novel approach to combining human expert supervision with the
benefits and flexibility of reinforcement learning. | Arun Ahuja, Kavya Kopparapu, Rob Fergus, Ishita Dasgupta | 2023-09-20T18:03:04Z | http://arxiv.org/abs/2309.11564v1 | # Hierarchical reinforcement learning with natural language subgoals
###### Abstract
Hierarchical reinforcement learning has been a compelling approach for achieving goal directed behavior over long sequences of actions. However, it has been challenging to implement in realistic or open-ended environments. A main challenge has been to find the right space of sub-goals over which to instantiate a hierarchy. We present a novel approach where we use data from humans solving these tasks to softly supervise the goal space for a set of long range tasks in a 3D embodied environment. In particular, we use unconstrained natural language to parameterize this space. This has two advantages: first, it is easy to generate this data from naive human participants; second, it is flexible enough to represent a vast range of sub-goals in human-relevant tasks. Our approach outperforms agents that clone expert behavior on these tasks, as well as HRL from scratch without this supervised sub-goal space. Our work presents a novel approach to combining human expert supervision with the benefits and flexibility of reinforcement learning.
## 1 Introduction
Despite several recent successes of reinforcement learning, a major challenge has been using it in real world settings. Goal-directed behavior over long time horizons has thus far been challenging for traditional RL and its relatively data hungry process of exploration and temporal credit assignment. This has been especially limiting in real-world-like embodied tasks that operate over motor-control action spaces that make even relatively simple tasks require a long series of motor actions. RL has primarily thrived in worlds that accommodate simple abstract action spaces like games, where a single 'action' can elicit large changes in the environment. However, this is limiting - a central advantage of generic embodied action spaces is that they are realistic, flexible, and permit open-ended and emergent behaviors. RL's inability to operate over these action spaces (due to challenges in exploration and long-term credit assignment over long action sequences) has been a major impediment to its application in real-world settings.
An influential approach to extending RL to long range tasks has been to use hierarchies over the space of actions. Intuitively, this means that the 'action space' that one actually does credit assignment and exploration over are temporally extended _sequences of actions_(Sutton et al., 1998; Hauskrecht et al., 2013) that achieve _subgoals_ on the path toward achieving the target task. The main challenge here has been to devise (or learn) a general enough space of subgoals that both effectively reduces the planning horizon but is also expressive enough to permit interesting behaviors (Da Silva et al., 2012; Mankowitz et al., 2018). The core challenge here is to find the right set of abstractions for a given domain and set of tasks.
In this work, we investigate natural language as a way to parameterize this subgoal space. Language is a lossy channel - a text description of an agent trajectory will discard a lot of (detailed, grounded, visual) information. However, language has evolved explicitly to still be expressive enough to represent the vast majority of ideas, goals, and behaviors relevant to humans. This makes it a strong contender for specifying subgoals that both effectively reduce complexity, while retaining expressivity where it matters.
Language also has the added advantage that we can crowd-source it from naive human participants. In this work, we explicitly elicit hierarchical trajectories with linguistic subgoals from human participants. One participant breaks down a task into sub-goals and another executes these sub-goals in an embodied action space. In this paper, we describe a way to use this data to softly supervise a hierarchical agent that can learn to solve complex long-horizon tasks in a 3-D embodied environment.
## 2 Methods
### Environment and tasks
We use a 3-D embodied environment in Unity (Ward et al., 2020), showing proof of concept of our method on four tasks. Similar to the tasks described in DMLab (Beattie et al., 2016), the goal in these tasks is to find and consume an apple. To acquire the goal apple, the agent must unlock a gate by placing a color-matched key object on a corresponding sensor (Fig 1D). The main challenge in these tasks comes from requiring several steps, including information gathering (to know which key is needed and which sensor to place it on); details are in Appendix A. For the purpose of the main results, we classify the tasks into two Easy and two Hard tasks. The main feature of the Hard tasks that makes them more difficult than the Easy ones is that they contain several distractors, so finding the right key requires information gathering / exploration, and specifying which object to pick up can be challenging.
### Data collection
Similar to Abramson et al. (2020) we collect data using two players, a 'Setter' and a 'Solver'. For the given tasks, a single controllable avatar is available and controlled by the 'Solver'. Given the task goal, the 'Setter' instructs the 'Solver', via a chat interface, on how to solve the task. The 'Setter' can observe the 'Solver' but cannot interact with the environment directly. Data was collected across many goal-directed tasks, including those described here.
### Agent training and architecture
Our hierarchical agent has two components - a 'low-level' (LL) agent that produces motor commands for the agent and a 'high-level' (HL) agent that provides subgoals for the agent. Both use the same architecture, as described in Abramson et al. (2020).
Figure 1: **Set up and example episode.** A. Observations, outputs and losses for the ’low-level’ (LL) agent; B. Same for the ’high-level’ (HL) agent. C. Example episode with observations to and output text from the HL agent; D. Top-down view of level Key Choice Hard.
**Low-level agent:** We first pre-train the 'low-level' agent to follow relatively simple language commands (Fig 1A). To do this we follow Abramson et al. (2020), to imitate (i.e. behaviorally clone) expert humans on a large range of language conditional tasks in 3-D Unity environments similar to our goal environment.
In particular, we use the 'Solver' data - the player that receives language instructions and controls the embodied avatar to achieve these instructions - to train this LL agent. This data includes observation sequences \(\mathbf{o}_{\leq T}\equiv(\mathbf{o}_{0},\mathbf{o}_{1},\mathbf{o}_{2},\ldots, \mathbf{o}_{T})\) (first person images from the environment), action sequences \(\mathbf{a}_{\leq T}\equiv(\mathbf{a}_{0},\mathbf{a}_{1},\mathbf{a}_{2},\ldots, \mathbf{a}_{T})\) (10-D actuator actions) and language instructions \(\mathbf{g}_{\leq T}\equiv(\mathbf{g}_{0},\mathbf{g}_{1},\mathbf{g}_{2},\ldots, \mathbf{g}_{T})\) (natural language sentences padded to length 24). With this data, we learn a policy that optimizes the supervised training objective:
\[\mathcal{L}_{BC}^{\text{LL}}=-\frac{1}{B}\sum_{n=1}^{B}\sum_{t=0}^{K}\ln\pi( \mathbf{a}_{n,t}\mid\mathbf{o}_{n,\leq t},\mathbf{g}_{n,t})\]
where \(B\) is the minibatch size, \(K\) is the backpropagation-through-time window size.
The LL agent is then frozen. Our main contribution is training a second 'high-level' agent which can issue language commands to this 'low-level' agent (Fig 1B). These language commands act as sub-goals in a hierarchical setup and this hierarchy allows us to achieve longer and more complex tasks than would be possible without it.
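A minimal sketch of this hierarchical control loop; the environment and policy interfaces (`env`, `hl_policy`, `ll_policy`) are stand-ins, not the paper's code:

```python
# Hierarchical rollout: the HL policy emits a language subgoal every
# `hl_period` steps; the frozen LL policy turns (observation, subgoal)
# into motor actions at every step.
def run_episode(env, hl_policy, ll_policy, max_steps=1000, hl_period=8):
    obs = env.reset()
    subgoal, total_reward = None, 0.0
    for t in range(max_steps):
        if t % hl_period == 0:
            subgoal = hl_policy.act(obs)        # natural-language command
        action = ll_policy.act(obs, subgoal)    # actuator action
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```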
**High-level agent**: The 'high-level' (HL) agent policy is trained with a supervised training objective, to match the language commands in the data, as well as a reinforcement learning objective to optimize the language commands for goal-directed behavior.
_Supervised training loss_: Rather than imitating the motor behavior of the expert trajectories from the 'Solver' data, we learn a policy to output language commands produced by the 'Setter' (which we then use as language input for the LL agent). This data consists of linguistic subgoals \(\mathbf{g}_{\leq T}\equiv(\mathbf{g}_{0},\mathbf{g}_{1},\mathbf{g}_{2},\ldots,\mathbf{g}_{T})\) as well the observations \(\mathbf{o}_{\leq T}\equiv(\mathbf{o}_{0},\mathbf{o}_{1},\mathbf{o}_{2},\ldots,\mathbf{o}_{T})\). We optimize a policy (using a behavioral cloning, or BC loss) to produce the subgoals \(\mathbf{g}_{\leq T}\equiv(\mathbf{g}_{0},\mathbf{g}_{1},\mathbf{g}_{2},\ldots,\mathbf{g}_{T})\) conditional on the observations \(\mathbf{o}_{\leq T}\equiv(\mathbf{o}_{0},\mathbf{o}_{1},\mathbf{o}_{2},\ldots,\mathbf{o}_{T})\).
\[\mathcal{L}_{BC}^{\text{HL}}=-\frac{1}{B}\sum_{n=1}^{B}\sum_{t=0}^{K}\ln\pi( \mathbf{g}_{n,t}\mid\mathbf{o}_{n,\leq t}),\]
_Reinforcement learning loss_: We can generate simulated trajectories in the tasks specified, where we sample subgoals from the HL agent and issue these to the frozen LL agent. The HL agent is only queried once every 8 timesteps, so the LL agent sees the same language input for 8 timesteps. We then get environment rewards \(\mathbf{r}_{\leq T}\equiv(\mathbf{r}_{0},\mathbf{r}_{1},\mathbf{r}_{2},\ldots,\mathbf{r}_{T})\) from the environment. The LL agent is effectively treated a 'part of the environment' for RL training of the HL agent. We used V-trace (Espeholt et al., 2018) and augment the agent architecture with a value head and optimize:
\[\mathcal{L}_{RL}^{\text{HL}}=\frac{1}{B}\sum_{n=1}^{B}\sum_{t=0}^{K}(\mathbf{R} _{n,t}-\mathbf{V}_{n,t})\ln\pi(\mathbf{g}_{n,t}\mid\mathbf{o}_{n,\leq t})\]
where \(\mathbf{R}_{n,t}\) is total return and \(\mathbf{V}_{n,t}\) is the estimated state-value.
_Combining the losses:_ The total loss for the HL agent weights the two losses. We vary the relative weights of these losses in experiments.
\[\mathcal{L}^{\text{HL}}=w_{BC}\mathcal{L}_{BC}^{\text{HL}}+w_{RL}\mathcal{L }_{RL}^{\text{HL}}\]
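A minimal sketch of how the two terms might be combined in training code; sign conventions and the advantage form are simplifications of ours, not the paper's implementation:

```python
import torch

# Weighted combination of the HL agent's behavioral-cloning and RL terms.
def hl_loss(bc_log_probs, rl_log_probs, returns, values, w_bc=1.0, w_rl=1.0):
    loss_bc = -bc_log_probs.mean()                  # L_BC: -log pi(g | o)
    advantage = (returns - values).detach()         # (R - V), no grad to V here
    loss_rl = -(advantage * rl_log_probs).mean()    # policy-gradient term
    return w_bc * loss_bc + w_rl * loss_rl
```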
## 3 Experiments
In all our experiments, we jointly train on all 4 tasks (details in App A). Before pursuing quantitative controls, we first qualitatively describe the behavior observed. As depicted in Figure 1C, we see that
the commands generated by the 'high level' agent are semantically meaningful. Since the LL agent is frozen, the high and low level agents cannot develop a different communication protocol via RL - the HL agent is restricted to using commands that the LL agent, trained on human generated instructions, can understand. This adds interpretability to the agent's behavior.
In the next few sections, we ablate the two main factors essential to our approach: the architecture (hierarchical vs flat; Section 3.1) and the loss (cloning expert behavior vs reinforcement from the environment; Section 3.2). We also run some analyses on the trained hierarchical agent to understand its behavior (Section 3.3).
### Hierarchical agent outperforms flat agent
In this section we compare our approach to a flat agent that directly produces the actions without a hierarchy. For this baseline, we use the same architecture we used for the HL agent, but change the way it interacts with the environment. Instead of producing language instructions once every 8 timesteps (which are fed to a pre-trained LL agent that outputs environment actions), it directly outputs environment actions at every timestep to get RL reward. This action head also receives a BC loss computed on the expert environment actions taken by the 'Solver'. Recent work shows that simply predicting language sub-goals via an auxiliary head might also help the LL agent learn complex tasks by shaping its representations (Lampinen et al., 2022; Kumar et al., 2022). Following these, we also give the agent an auxiliary prediction loss on the language instructions produced by the 'Setter' at every time step (see App B for details). This agent has access to all the same data as the hierarchical one, and the architecture is the main difference.
We see in Fig 2A that the flat agent can pick up on the simpler tasks, but not on the harder tasks - while the hierarchical agent can learn both. The hierarchical agent also learns the easy tasks faster.
### Both BC and RL losses are necessary
We now evaluate whether both losses are necessary for the hierarchical agent. We show in Fig 3 that when trained with only BC or only RL loss, the agent cannot learn any of these tasks (within the number of updates we ran for), while the agent trained with both losses learns quickly.
We also split this up by task in the Appendix (Figure 7), this shows that the combination of both losses is necessary to perform reliably well on all the tasks.
Figure 2: **Results. This plot shows results from various sections in the text. A: a hierarchical agent learns much better than a flat agent; details in Section 3.1. B: both BC and RL losses are necessary for good performance; details in Section 3.2. C & D: analysis of BC+RL hierarchical agent’s outputs; details in Section 3.3.**
### Analysis of the hierarchical BC+RL agent
We then performed some simple analyses of the instructions produced by the hierarchical BC+RL agent. First, we compare the number of instructions required to solve the level, restricting only to levels that were successfully completed. We find that the Hard levels (that could not be solved by the flat baseline or baselines with skewed BC / RL relative losses) do in fact require many more instructions from the HL agent to complete (Figure 2C).
We then examined the instructions actually produced. We discarded instructions that appeared less than 100 times across 80 recorded episodes (20 per level). We found that the diversity of instructions required by the hard tasks is higher than that required by the easy tasks - as noted by the flatter distribution over instructions seen for the red bars in Figure 2D compared to the blue bars. Further, we note that the instructions that occur more frequently in the hard levels than the easy ones might be associated with error-correcting for the LL agent, most notably the 'Drop it' instruction. Generic 'Move' commands, notably the 'Move back' / 'Move forward' command, are also used significantly more in the Hard tasks - possibly because moving around the level is useful for the information gathering required to find the right key in these tasks.
## 4 Discussion and Future Work
This work contributes to a growing field of language in embodied agents (Luketina et al., 2019). Closely related is learning in text-based RL games (He et al., 2015; Narasimhan et al., 2015); we operate in embodied action spaces, and our HL agent handles problems specific to this setting, like error-correcting a learned LL agent (Figure 2D). Language has previously been used as the abstraction in HRL; these approaches use templated language (Andreas et al., 2017; Jiang et al., 2019) or focus on generalization in the LL agent (Chen et al., 2020). We instead train the HL agent with output supervision from natural language. There are several ways to expand this work. Following several recent papers (Ahn et al., 2022; Driess et al., 2023; Dasgupta et al., 2023; Wang et al., 2023), future work can use pre-trained language models to get a good prior over possible language commands.
|
2309.05277 | Interactive Class-Agnostic Object Counting | We propose a novel framework for interactive class-agnostic object counting,
where a human user can interactively provide feedback to improve the accuracy
of a counter. Our framework consists of two main components: a user-friendly
visualizer to gather feedback and an efficient mechanism to incorporate it. In
each iteration, we produce a density map to show the current prediction result,
and we segment it into non-overlapping regions with an easily verifiable number
of objects. The user can provide feedback by selecting a region with obvious
counting errors and specifying the range for the estimated number of objects
within it. To improve the counting result, we develop a novel adaptation loss
to force the visual counter to output the predicted count within the
user-specified range. For effective and efficient adaptation, we propose a
refinement module that can be used with any density-based visual counter, and
only the parameters in the refinement module will be updated during adaptation.
Our experiments on two challenging class-agnostic object counting benchmarks,
FSCD-LVIS and FSC-147, show that our method can reduce the mean absolute error
of multiple state-of-the-art visual counters by roughly 30% to 40% with minimal
user input. Our project can be found at
https://yifehuang97.github.io/ICACountProjectPage/. | Yifeng Huang, Viresh Ranjan, Minh Hoai | 2023-09-11T07:27:32Z | http://arxiv.org/abs/2309.05277v1 | # Interactive Class-Agnostic Object Counting
###### Abstract
We propose a novel framework for interactive class-agnostic object counting, where a human user can interactively provide feedback to improve the accuracy of a counter. Our framework consists of two main components: a user-friendly visualizer to gather feedback and an efficient mechanism to incorporate it. In each iteration, we produce a density map to show the current prediction result, and we segment it into non-overlapping regions with an easily verifiable number of objects. The user can provide feedback by selecting a region with obvious counting errors and specifying the range for the estimated number of objects within it. To improve the counting result, we develop a novel adaptation loss to force the visual counter to output the predicted count within the user-specified range. For effective and efficient adaptation, we propose a refinement module that can be used with any density-based visual counter, and only the parameters in the refinement module will be updated during adaptation. Our experiments on two challenging class-agnostic object counting benchmarks, FSCD-LVIS and FSC-147, show that our method can reduce the mean absolute error of multiple state-of-the-art visual counters by roughly 30% to 40% with minimal user input. Our project can be found at [https://yifehuang97.github.io/ICACountProjectPage/](https://yifehuang97.github.io/ICACountProjectPage/).
## 1 Introduction
The need for counting objects in images arises in many applications, and significant progress has been made for both class-specific [17, 30, 13, 46, 47, 9, 3, 24, 44, 25, 19, 38, 23, 16, 34, 36, 1] and class-agnostic [49, 35, 33, 41, 51, 26, 31, 32] counting. However, unlike in many other computer vision tasks where the predicted results can be verified for reliability, visual counting results are difficult to validate, as illustrated in Fig. 1.
Mistakes can be made, and often there are no mechanisms to correct them. To enhance the practicality of visual counting methods, the results need to be more intuitive and verifiable, and feedback mechanisms should be incorporated to allow errors to be corrected. This necessitates a human-in-the-loop framework that can interactively display the predicted results, collect user feedback, and adapt the visual counter to reduce counting errors.
It is, however, challenging to develop an interactive framework for visual counting. The first challenge is to provide the user with an intuitive visualizer for the counting result. Current state-of-the-art visual counting methods typically generate a density map and then sum the density values to obtain the final count. However, as shown in Fig. 1, verifying the final predicted count can be difficult, as can verifying the intermediate density map, due to the mismatch between the continuous nature of the density map and the discrete nature of the objects in the image. The second challenge is to design an appropriate user interaction method that requires minimal user effort while being suited for providing feedback on object counting. The third challenge is developing an effective adaptation scheme for the selected interaction type that can incorporate user feedback and improve the performance of visual counters. In this paper, we address all three aforementioned challenges to develop an interactive framework for visual counting.
Figure 1: Given an input image and several exemplar objects, a class-agnostic counter will output a density map and the total object count. It is often challenging to validate these outputs, making it difficult to adopt automatic visual counting in practice. To improve the practicality of a visual counter, we propose an interactive framework that allows a human user to quickly detect mistakes and improve performance based on the identified errors.
For the first challenge, we propose a novel segmentation method that segments a density map into non-overlapping regions, where the sum of density values in each region is a near-integer value that can be easily verified. This provides the user with a more natural and understandable interpretation of the predicted density map. Notably, developing such an algorithm that must also be suitably fast for an interactive system is challenging, which constitutes a technical contribution of our paper.
For the second challenge, we propose a novel type of interaction that enables the user to provide feedback with just two mouse clicks: the first click selects the region, and the second click selects the appropriate range for the number of objects in the chosen region. The proposed user interaction method is unique as it is specifically tailored for object counting and requires minimal user effort. Firstly, the auto-generated segmentation map allows the user to select an image region using just one mouse click, which is faster compared to drawing a polygon or scribbles. Secondly, by leveraging the humans' subitizing ability, which allows them to estimate the number of objects in a set quickly without counting them individually, we can obtain an approximate count with just another mouse click, which is quicker than one by one counting using dot annotations.
For the third challenge, we develop an interactive adaptation loss based on range constraints. To update the visual counter efficiently and effectively and to reduce the disruption of the learned knowledge in the visual counter, we propose the refinement module that directly refines the spatial similarity feature in the regression head. Furthermore, we propose a technique to estimate the user's feedback confidence and use this confidence to adjust the learning rate and gradient steps during the adaptation process.
In this paper, we primarily focus on class-agnostic counting, and we demonstrate the effectiveness of our framework with experiments on FSC-147 [35] and FSCD-LVIS [31]. However, our framework can be extended to category-specific counting, as will be seen in our experiments on several crowd-counting and car-counting benchmarks, including ShanghaiTech [52], UCF-QNRF [11], and CARPK [8]. We also conduct a user study to investigate the practicality of our method in a real-world setting.
In short, the main contribution of our paper is a framework that improves the accuracy and practicality of visual counting. Our technical contributions include: (1) a novel segmentation method that quickly segments density maps into non-overlapping regions with near-integer density values, which enhances the interpretability of predicted density maps for users; (2) an innovative user feedback scheme that requires minimal user effort for object counting by utilizing subitizing ability and auto-generated segmentation maps; and (3) an effective adaptation approach that incorporates the user's feedback into the visual counter through a refinement module and a confidence estimation method.
## 2 Related Works
**Visual counting.** Various visual counting methods have been proposed, e.g., [17, 30, 13, 46, 47, 9, 3, 24, 44, 25, 19, 38, 23, 16], but most of them are class-specific counters, requiring large amounts of training data with hundreds of thousands of annotated objects. To address this limitation and enable counting of objects across multiple categories, several class-agnostic counters have been proposed [49, 35, 33, 41, 51, 26, 31]. These methods work by regressing the object density map based on the spatial correlation between the input image and the provided exemplars. However, in many cases, a limited number of exemplars are insufficient to generalize over object instances with varying shapes, sizes, and appearances.
**Interactive counting.** There exists only one prior method for interactive counting [2]. This method uses low-level features and ridge regression to predict the density map. To visualize the density map, it uses MSER [29] or spectral clustering [40] to generate candidate regions, then seeks a subset of candidate regions that preserves the integrality of each region in the subset. At each iteration, the user must draw a region of interest and mark the objects in this region with dot annotations. Additionally, for the first iteration, the user has to specify the diameter of a typical object. This method [2] has two drawbacks. First, it requires significant effort from the user to draw a region of interest, specify the typical object size, and mark all objects in the region. Second, MSER and spectral clustering may not generate suitable candidate regions for dense scenes, as will be shown in Sec. 3.2. To alleviate the user's burden, the counting results should be easy to verify, and feedback should be simple to provide. In this paper, we propose a density map visualization method that can generate regions by finding and expanding density peaks. Unlike MSER and spectral clustering, our approach works well on dense density maps.
**Interactive methods for other computer vision tasks.** Various interactive methods have been developed for other computer vision tasks, such as object detection [50], tracking [39], and segmentation [12, 42, 5, 22, 20, 10, 28, 27, 18, 21, 48, 43]. While the success of these methods is inspiring, none of them are directly applicable to visual counting due to unique technical challenges. Unlike object detection, tracking, and segmentation, the immediate and final outputs of visual counting are difficult to visualize and verify. Designing an interactive framework for visual counting requires addressing the technical challenges discussed in the introduction section, none of which have been considered in previous interactive methods.
## 3 Proposed Approach
We propose an interactive framework for visual counting, as illustrated in Fig. 2. Each interactive iteration consists of two phases. The first phase visualizes the predicted density map to collect user feedback. The second phase uses the provided feedback to improve the visual counter.
### Overview of the two phases
In the first phase, we will visualize the density map by segmenting it into regions \(\{R_{1},\cdots,R_{n}\}\) with the following desiderata:
1. Non-overlapping: \(R_{i}\cap R_{j}=\emptyset\) for all \(i\neq j\),
2. Total coverage: \(\cup_{i=1}^{n}R_{i}=\) the predicted density map,
3. Moderate size: each region is not too big or too small,
4. Near-integer and small integral: the sum of density values within each region should be close to an integer and smaller than a verifiable counting limit.
The above desiderata are for visualization and easy verification of the results. The last desideratum is motivated by humans' subitizing ability, which is the ability to identify the number of objects in an image simply by quickly looking at them, not by counting them one by one.
In the second phase of each iteration, the user is prompted to pick a region and specify the range for the number of objects in that region. Let \(R\) denote the selected region and \(c=(c_{l},c_{u}]\) the range specified by the user for the number of objects in \(R\), a loss \(\mathcal{L}(R,c)\) will be generated and used to adapt the counting model. For efficient and effective adaptation, instead of adapting the whole counting network, we propose a refinement module that directly refines the feature map in the regression head and we only adapt the parameters of this module using gradient descent.
### Density map segmentation algorithm
One technical contribution of this paper is the development of a fast segmentation algorithm called Iterative Peak Selection and Expansion (IPSE) that satisfies the desiderata described in Sec. 3.1. The input to this algorithm is a smoothed density map, and the output is a set of non-overlapping regions. IPSE is an iterative algorithm where the output set will be grown one region at a time, starting from an empty set. To yield a new region for the output, it starts from the pixel \(p\) with the highest density value (called the peak) among the remaining pixels that have not been included in any previously chosen region. IPSE seeks a region \(R\) containing \(p\) that minimizes the below objective:
\[h(R)=\frac{|R_{s}-\lceil R_{s}-\frac{1}{2}\rceil|}{\max(1,\lceil R_{s}-\frac{1}{2}\rceil)}+\frac{\max(0,T_{l}-R_{a})}{T_{l}}+\lceil\max(0,R_{s}-C)\rceil, \tag{1}\]
where \(R_{s}\) denotes the sum of density values in \(R\) and \(R_{a}\) the area of \(R\). \(T_{l}\) is the preferred minimum area for the region \(R\), and \(C\) is the preferred maximum number of objects in the region. The first term of the objective function encourages the sum of the densities to be close to an integer. It also encourages the region not to have a too-small count. The second term penalizes small regions. The region cannot be too big either; the expansion algorithm will stop when the area reaches a predefined upper bound. The last term penalizes regions with a total density greater than \(C\).
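For concreteness, Eq. (1) can be transcribed directly into code. The sketch below uses \(T_{l}=250\) (the value given in the supplementary) and \(C=4\); it is illustrative rather than the exact implementation.

```python
import math

def ipse_objective(R_s, R_a, T_l=250, C=4):
    # R_s: summed density in the region; R_a: region area in pixels.
    nearest = math.ceil(R_s - 0.5)               # nearest integer to R_s
    near_integer = abs(R_s - nearest) / max(1, nearest)
    too_small = max(0.0, T_l - R_a) / T_l        # penalize small regions
    over_limit = math.ceil(max(0.0, R_s - C))    # penalize counts above C
    return near_integer + too_small + over_limit
```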
Because there are exponentially many regions containing a given peak \(p\), finding the optimal region \(R\) that minimizes \(h(R)\) is an intractable problem. Fortunately, we only need to obtain a sufficiently good solution. We therefore restrict the search space to a smaller list of expanding regions \(S_{0}\subset S_{1}\subset\cdots\subset S_{m}\) and perform an exhaustive search among this list. This list can be constructed by starting from the seed \(p\), i.e., \(S_{0}=\{p\}\) and constructing \(S_{i+1}\) from \(S_{i}\) by adding a pixel \(q\) selected from neighboring pixels of \(S_{i}\) that have not been part of any existing output regions. If multiple such neighboring pixels are available, we prioritize pixels with positive density values and select the one closest to \(p\). The process terminates when any of the following conditions is met: (1) all neighboring pixels of \(S_{i}\) have been included in an output region; (2) the area or the sum of density values in \(S_{i}\) has reached a predefined limit; or (3) the proportion of zero-density pixels in \(S_{i}\) has exceeded a predefined threshold.
The above peak selection and expansion process is used repeatedly to segment the density map, with each iteration commencing from the seed pixel \(p\) that has the highest density value among the remaining pixels that have not been included in any prior output region. If all the remaining pixels have zero-density values, a random seed location is selected. The process continues until complete coverage of the density map is achieved, at which point all small regions are merged into their neighboring regions.
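A minimal sketch of one expansion step is given below, assuming a 4-connected neighborhood and a single area cap of 1250 pixels (the supplementary's \(T_{u}\)); the paper's additional stopping rules (zero-density proportion, density-sum limit) are omitted for brevity, so this is not the exact implementation.

```python
def expand_from_peak(density, taken, peak, h, area_cap=1250):
    # Grow a region from `peak`, keeping the prefix S_0 ⊂ ... ⊂ S_m that
    # minimizes h(R_s, R_a). `density`: 2-D NumPy-style array; `taken`:
    # boolean mask of pixels already assigned to earlier output regions.
    H, W = density.shape
    region, region_set, frontier = [peak], {peak}, set()

    def push_neighbors(p):
        y, x = p
        for q in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= q[0] < H and 0 <= q[1] < W and not taken[q] and q not in region_set:
                frontier.add(q)

    push_neighbors(peak)
    R_s = float(density[peak])
    best_score, best_size = h(R_s, 1), 1
    while frontier and len(region) < area_cap:
        # prefer positive-density neighbors, break ties by distance to seed
        q = min(frontier, key=lambda q: (density[q] <= 0,
                                         (q[0] - peak[0]) ** 2 + (q[1] - peak[1]) ** 2))
        frontier.discard(q)
        region.append(q)
        region_set.add(q)
        push_neighbors(q)
        R_s += float(density[q])
        score = h(R_s, len(region))
        if score < best_score:
            best_score, best_size = score, len(region)
    return region[:best_size]
```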
**Comparison with other density segmentation methods.** Our algorithm for segmenting the density map bears some similarity to gerrymandering or political redistricting.
Figure 2: We propose a practical approach for visual counting based on interactive user’s feedback. In each iteration: (1) the visual counter estimates the density map for the input image; (2) the density map is segmented and visualized; (3) the user selects a region and provides a range for the number of objects in the region; (4) an objective function is defined based for the provided region and count range, and the parameters of a refinement module are updated by optimizing this objective function.
Gerrymandering involves dividing a large region into several smaller regions while adhering to certain constraints related to the population and contiguity of the regions. Most methods for this problem are based on weighted graph partitioning or heuristic algorithms [6, 37, 14, 4, 7]. However, an object density map contains several hundred thousand pixels, making these methods too slow for interactive systems. For example, the time and iteration limit of [6] is 600 seconds and 1000 iterations, which cannot meet the real-time requirements of interactive systems. In contrast, our method takes less than one second, as reported in Sec. 4.3.
Another approach for visualizing a density map is to use MSER [29] or spectral clustering[40] to generate some candidate regions, as used in [2]. MSER and spectral clustering, however, often fail to generate suitable candidate regions for dense scenes, as shown in Fig. 3.
### Interactive feedback and adaptation
Upon presenting the segmented density map, the user will be prompted to pick a region \(R\) and choose a numeric range \(c\) for the number of objects in \(R\) from a list of range options, \(c\in\{(-\infty,0],(0,r],(r,2r],\ldots,(C-r,C],(C,\infty)\}\), where \(r\) is the range interval and \(C\) is the counting limit. This method of user interaction is innovative and specifically tailored for object counting, necessitating only two mouse clicks per iteration. The reason for using a range instead of an exact number is that it can be ambiguous to determine an exact number for a region. Despite our segmentation efforts to ensure that each region contains an integral number of objects, some regions may still contain partial objects, making it more challenging for the user to provide an accurate number quickly.
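As a small illustration, the menu of range options for the values used in our experiments (\(r=1\), \(C=4\); see Sec. 4) can be built as follows; the helper name is hypothetical.

```python
def range_options(r=1, C=4):
    # (-inf, 0], (0, r], (r, 2r], ..., (C-r, C], (C, inf)
    opts = [(-float("inf"), 0)]
    opts += [(k, k + r) for k in range(0, C, r)]
    opts.append((C, float("inf")))
    return opts  # r=1, C=4 gives the six ranges used in Sec. 4
```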
An important technical contribution of our paper is the creation of an adaptation technique capable of leveraging the user's weak supervision feedback. This feedback is weak in terms of both object localization and count. Specifically, it does not provide exact locations of object instances, only indicating the number of objects present in an image region. Additionally, the number of objects is provided only as a range, rather than an exact count. Below, we will detail our adaptation technique, starting from the novel refinement module for effective and efficient adaptation.
#### 3.3.1 Refinement module
We aim for an adaptation method that works for all class-agnostic counting networks [49, 35, 33, 41, 51]. Most of them contain three components: a feature extractor \(f\), a spatial-similarity module \(g\) (e.g., convolution [35] or learnable bilinear transform [41]), and a regression head \(h\) consisting of upsampling layers and convolution layers. Let \(\mathbf{I}\) be the input image, \(\mathbf{E}\) the exemplars, \(\mathbf{S}=g(f(\mathbf{I}),f(\mathbf{E}))\) the correlation map for the spatial similarity between the input image and exemplars, and \(\mathbf{D}=h(\mathbf{S})\) the predicted density map. The predicted count is obtained by summing over the predicted density map \(\mathbf{D}\).
The correlation map serves as input to the regression head, which applies several convolution and upsampling layers to generate the output object density map. We observe that if the correlation map or any intermediate feature map between the input and output maps accurately represents the spatial similarity between the input image and exemplar objects, the final output density map and the predicted count will be correct. Therefore, the adaptation process need not revert to layers earlier than the correlation map. To minimize disruption to learned knowledge and accelerate adaptation for user interaction, we propose a lightweight refinement module that is integrated only within the regression head.
The refinement module, depicted in Fig. 4, can be applied to any intermediate feature map \(\mathbf{F}\) between the input correlation map \(\mathbf{S}\) and the output density map \(\mathbf{D}\): \(\mathbf{F^{\prime}}=\mathcal{R}(\mathbf{F})\), where \(\mathbf{F^{\prime}}\) is the refined feature map. Our refinement module consists of two components: channel-wise refinement and spatial-wise refinement.
**The channel-wise refinement** is illustrated in Fig. 5, and it refines the feature of each channel by multiplying with a scale parameter and adding with a bias parameter: \(\mathcal{R}_{ch}(\mathbf{F})=\theta_{ch}^{scale}\odot\mathbf{F}+\theta_{ch}^{bias}\), where \(\mathbf{F}\in\mathbb{R}^{H\times W\times C}\) is the input feature map, \(\theta_{ch}^{scale}\in\mathbb{R}^{C}\) is the vector of scale parameters, \(\theta_{ch}^{bias}\in\mathbb{R}^{C}\) is the vector of bias parameters.
**The spatial-wise refinement** is illustrated in Fig. 6, and it also refines the feature at each spatial position: \(\mathcal{R}_{sp}(\mathbf{F})=\theta_{sp}^{scale}\odot\mathbf{F}+\theta_{sp}^{bias}\), where \(\theta_{sp}^{scale}\in\mathbb{R}^{H\times W}\), \(\theta_{sp}^{bias}\in\mathbb{R}^{H\times W}\).
Figure 3: Density map segmentation comparison with MSER [2]. These two examples are from the FSC-147 dataset.
**The overall refinement module** is the successive application of channel-wise refinement and spatial-wise refinement: \(\mathbf{F^{\prime}}=\mathcal{R}(\mathbf{F})=\mathcal{R}_{sp}(\mathcal{R}_{ch}(\mathbf{F}))\). The set of adaptable parameters of the two refinement modules is \(\theta^{scale}=[\theta^{scale}_{ch};\theta^{scale}_{sp}]\) and \(\theta^{bias}=[\theta^{bias}_{ch};\theta^{bias}_{sp}]\). At the beginning of each adaptation iteration, the scale parameters are reset to one and the bias parameters to zero, so that the refined feature map \(\mathbf{F}^{\prime}\) is initially the same as the input feature map \(\mathbf{F}\).
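In a PyTorch-style implementation, the two refinement steps reduce to learnable scale-and-shift tensors. The sketch below is one possible rendition, assuming the feature-map height and width are fixed when the module is created (which holds for per-image adaptation); it is not the exact implementation.

```python
import torch
import torch.nn as nn

class RefinementModule(nn.Module):
    """Channel-wise then spatial-wise refinement, R(F) = R_sp(R_ch(F)),
    for a feature map F of shape (B, C, H, W)."""
    def __init__(self, channels, height, width):
        super().__init__()
        self.ch_scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.ch_bias = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.sp_scale = nn.Parameter(torch.ones(1, 1, height, width))
        self.sp_bias = nn.Parameter(torch.zeros(1, 1, height, width))

    def reset(self):
        # called at the start of each adaptation iteration,
        # so that initially R(F) = F
        with torch.no_grad():
            self.ch_scale.fill_(1.0); self.sp_scale.fill_(1.0)
            self.ch_bias.zero_(); self.sp_bias.zero_()

    def forward(self, F):
        F = self.ch_scale * F + self.ch_bias      # channel-wise
        return self.sp_scale * F + self.sp_bias   # spatial-wise
```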
#### 3.3.2 Adaptation schemes
Given a selected region \(R\) and a specified range \(c=(c_{l},c_{u}]\) for the number of objects in region \(R\), a weakly-supervised adaptation loss is defined as:
\[\mathcal{L}_{I}(R,c)=ReLU(c_{l}-R_{s})+ReLU(R_{s}-c_{u}). \tag{2}\]
If the sum of predicted density values in the selected region is outside the count range provided by the user, the above loss will be positive.
To account for the scenario where the user can provide feedback for multiple regions, either in a single iteration or through multiple iterations, we extend the adaptation loss to use multiple regions to update the counter. Let \(\Omega=\{(R,c)\}\) denote the set of user-selected regions and their corresponding specified count ranges. We use the following combined loss for adaptation:
\[\mathcal{L}(\Omega)=\mathcal{L}_{L}(\Omega)+\mathcal{L}_{G}(\Omega)+\eta(|| \theta^{scale}-1||+||\theta^{bias}||), \tag{3}\]
where \(\mathcal{L}_{L}(\Omega)\) is the sum of regional losses, with each loss corresponding to an individual region separately:
\[\mathcal{L}_{L}(\Omega)=\sum_{(R,c)\in\Omega}\mathcal{L}_{I}(R,c), \tag{4}\]
and \(\mathcal{L}_{G}(\Omega)\) is the single loss for all the regions combined:
\[\mathcal{L}_{G}(\Omega)=\mathcal{L}_{I}(\sum_{(R,c)\in\Omega}R_{s},\sum_{(R,c)\in\Omega}c). \tag{5}\]
Here, we combine all the selected regions and view them as one big region, and then use Eq. (2) on the big region to adapt the visual counter. Hereafter, we will refer to \(\mathcal{L}_{L}(\Omega)\) as Local Loss and \(\mathcal{L}_{G}(\Omega)\) as Global Loss.
The last term of Eq. (3) is a regularization term to discourage large changes to the scale and bias parameters of the refinement module and \(\eta\) is the weight for the regularization term. In our experiments, \(\eta=0.002\).
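Eqs. (2)-(5) translate almost directly into code. In the sketch below, `feedback` is a list of (region sum, lower bound, upper bound) triples, where each region sum is a differentiable sum over the current density map; reading \(\|\cdot\|\) in Eq. (3) as the L2 norm is an assumption, and the module attribute names follow the sketch above.

```python
import torch
import torch.nn.functional as nnF

def range_loss(R_s, c_lo, c_hi):
    # Eq. (2): zero iff the summed density R_s lies inside (c_lo, c_hi].
    return nnF.relu(c_lo - R_s) + nnF.relu(R_s - c_hi)

def adaptation_loss(feedback, module, eta=0.002):
    local = sum(range_loss(R_s, lo, hi) for R_s, lo, hi in feedback)   # Eq. (4)
    glob = range_loss(sum(f[0] for f in feedback),                     # Eq. (5)
                      sum(f[1] for f in feedback),
                      sum(f[2] for f in feedback))
    reg = ((module.ch_scale - 1).norm() + (module.sp_scale - 1).norm()
           + module.ch_bias.norm() + module.sp_bias.norm())
    return local + glob + eta * reg                                    # Eq. (3)
```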
Our adaptation loss is based on the sum of predicted density values within one or multiple regions, which provides a weak supervision signal as it lacks penalty terms related to individual values. This type of supervision signal can be problematic if a large learning rate is used. Meanwhile, using a small learning rate would require numerous gradient steps, leading to a prolonged convergence process. To overcome this issue, we propose determining the impact value of the user's feedback and using it to adjust the learning rate and gradient steps, resulting in a smoother adaptation process. When the uncertainty level is higher, such as having too few regions or inconsistent error types (i.e., undercounting in some regions and over-counting in others), we use a lower learning rate with more gradient steps. Conversely, we apply a larger learning rate with fewer gradient steps to shorten the adaptation process when the uncertainty level is lower. We define the user feedback value as follows:
\[F_{C}(\Omega)=0.5F_{i}(\Omega)+0.5F_{s}(\Omega). \tag{6}\]
The first term measures the informativeness of the feedback, while the second term measures the consistency of the feedback across multiple regions. The informativeness term can be defined as:
\[F_{i}(\Omega)=\min(1,\exp(\frac{|\Omega|-t}{T})), \tag{7}\]
where \(|\Omega|\) is the size of the set \(\Omega\), \(T\) is the temperature, and \(t\) is the informativeness threshold. Specifically, in our experiments \(t=3,T=2\).
Figure 4: The refinement module can be integrated into the **regression head** of any density estimation visual counter.
Figure 5: Channel-wise refinement refines the feature of each channel by multiplying with scale and adding with a bias.
Figure 6: Spatial-wise refinement refines the feature at each spatial position by multiplying with scale and adding with a bias.
Let \(\Omega_{o}\) and \(\Omega_{u}\) be the sets of over-counting and under-counting regions, and let \(p=\frac{|\Omega_{o}|}{|\Omega_{o}|+|\Omega_{u}|}\). The feedback consistency value is defined based on negative entropy:
\[F_{s}(\Omega)=1+p\log p+(1-p)\log(1-p). \tag{8}\]
Based on the estimated value of the feedback, the learning rate and the number of gradient steps will be scaled accordingly as follows: \(\gamma^{\prime}=\gamma F_{C}(\Omega)\), \(N^{\prime}=\frac{N}{F_{C}(\Omega)}\), where \(\gamma\) and \(N\) are the default values for the learning rate and the number of gradient steps.
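A literal transcription of Eqs. (6)-(8) and the scaling rule is sketched below; the natural logarithm in \(F_{s}\), the handling of the degenerate case with no erroneous regions, and rounding \(N^{\prime}\) up to a whole step count are our assumptions.

```python
import math

def feedback_confidence(n_regions, n_over, n_under, t=3, T=2):
    F_i = min(1.0, math.exp((n_regions - t) / T))        # Eq. (7)
    total = n_over + n_under
    p = n_over / total if total else 0.5                 # degenerate case
    if p in (0.0, 1.0):
        F_s = 1.0                                        # fully consistent
    else:
        F_s = 1.0 + p * math.log(p) + (1 - p) * math.log(1 - p)  # Eq. (8)
    return 0.5 * F_i + 0.5 * F_s                         # Eq. (6)

def scaled_schedule(gamma, N, F_C):
    # gamma' = gamma * F_C;  N' = N / F_C (rounded up to whole steps)
    return gamma * F_C, math.ceil(N / F_C)
```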
## 4 Experiments
### Class-agnostic counting
#### 4.1.1 Experiment settings
**Datasets.** We evaluate our approach on two challenging class-agnostic counting benchmarks: FSC-147 [35] and FSCD-LVIS [31].
**Class-agnostic visual counters.** Our interactive framework is applicable to many different class-agnostic counters, and we experiment with FamNet [35], SAFECount [51], and BMNet+ [41] in this paper.
**Evaluation metrics.** We use Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as performance metrics, which are widely used for evaluating visual counting performance [49, 35, 33, 41, 51, 26, 31].
**User feedback simulation.** To simulate user feedback, we randomly select a displayed region and provide the counting range for that region. We repeat each experiment five times with different random seeds (5, 10, 15, 20, 25) and report the **average** and **standard error**.
**Count limit and pre-defined ranges.** According to [45], people can quickly and correctly estimate the number of objects without one-to-one counting if the number of objects is less than or equal to four. Therefore, we set the count limit \(C\) to 4 and the pre-defined ranges to \(\{(-\infty,0],(0,1],(1,2],(2,3],(3,4],(4,\infty)\}\).
**Implementation details.** We insert the refinement module after the **first convolution layer** in the regression head. For faster computation, we first downsample the density map by a factor of four, then perform the IPSE, and finally upsample the density map segmentation result to its original size. We adapt a FamNet with an Adam optimizer [15]. On FSC-147, the default learning rate is \(0.02\) while the default number of gradient steps is 10. On FSCD-LVIS, FamNet does not converge as well, so the default learning rate and number of gradient steps are set to \(0.01\) and \(20\), respectively.
#### 4.1.2 Experiment results
The proposed interactive framework can be used to improve the performance of various types of visual counters, including FamNet [35], SAFECount [51], and BMNet+ [41]. As can be seen from Fig. 7, the benefits are consistently observed across multiple experiments and metrics (three visual counters, two datasets, and two performance metrics). Significant error reduction is already achieved after a single feedback iteration, as also shown in Fig. 8. After five iterations, the error reduction is substantial, averaging 30%.
Figure 7: The proposed framework can be used to improve the performance of various visual counters. This shows the MAE and RMSE values of FamNet, SAFECount, and BMNet+ on FSC-147 and FSCD-LVIS test data, as the number of feedback iterations is increased from zero (without any adaptation) to five.
The proposed framework requires minimal user input, but it should not be viewed as a competitor to few-shot counting methods. Rather, it is an interactive framework that offers complementary benefits to few-shot methods. Notably, most class-agnostic visual counters, including FamNet, SAFECount, and BMNet+, are few-shot methods that can count with just a few exemplars. As shown earlier, the proposed framework enhances the performance of these counters. However, there is another approach to improve these counters, which is to provide more exemplars. Rather than using these visual counters in our interactive framework, we could offer them additional exemplars. As our framework requires two mouse clicks for each iteration and drawing a bounding box also requires the same effort, we compare the performance of our framework with five iterations to the performance of visual counters with five additional exemplars, and the results are shown in Table 1. As shown, our framework produces greater improvement with the same level of user effort. This may be due to several reasons: (1) supplying extra exemplars does not immediately highlight prediction errors, resulting in a weaker supervision signal; (2) an exemplar provides information for only one object, which is less informative than a region containing multiple objects; (3) most class-agnostic counting methods are trained with a predefined number of exemplars (e.g., three), so the model may not be able to fully utilize the additional exemplars to improve the performance.
One technical contribution of our method is the innovative segmentation algorithm. Table 2 compares this algorithm with four segmentation methods: MSER [2], K-means, Watershed, and DBSCAN. For K-means and DBSCAN, we use the spatial coordinate and the density value as the feature to perform clustering for segmentation. For K-means, \(K\) is set to \(\min(S_{u},\max(S_{l},\frac{Sum(D)}{C}))\), where \(S_{u}\) and \(S_{l}\) are the pre-defined upper bound and lower bound, \(Sum(D)\) is the summed density, and \(C\) is the count limit. For these density map segmentation baselines, we also use our feature refinement adaptation scheme to update the visual counter. As shown in Table 2, our proposed algorithm surpasses the other methods by a wide margin. Fig. 8 shows some qualitative results.
#### 4.1.3 Ablation study
We perform some experiments to evaluate the contribution of different components, including the refinement module, the adaptation loss, and the setting of user feedback simulation. All ablation studies are conducted on the FSC-147 validation set.
**Refinement module.** The results of the ablation study for the refinement module are shown in Table 3. Both the channel-wise and spatial-wise refinement steps are important. The channel-wise refinement contributes more to the improvement than the spatial-wise refinement step. This is perhaps because the spatial-wise refinement refines locally while the channel-wise refinement refines globally, as shown in Fig. 9. Also, the order of these two refinement steps has little effect on the final result.
**Adaptation loss.** Table 4 shows the ablation study on the adaptation loss. Both the Local Loss \(\mathcal{L}_{L}(\Omega)\) and the Global Loss \(\mathcal{L}_{G}(\Omega)\) contribute to the reduction in MAE and RMSE. The confidence scaling is less important.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{FSC-147 Test set} & \multicolumn{2}{c}{FSCD-LVIS Test set} \\ \cline{2-5} & MAE & RMSE & MAE & RMSE \\ \hline FamNet [35] & 22.08 & 99.54 & 41.26 & 57.87 \\ + 5 exemplars & 21.52 \(\downarrow\)2\% & 98.10 \(\downarrow\)1\% & 40.36 \(\downarrow\)2\% & 57.85 \(\downarrow\)0\% \\ + our framework & 11.75 \(\downarrow\)47\% & 75.37 \(\downarrow\)24\% & 21.18 \(\downarrow\)49\% & 34.13 \(\downarrow\)41\% \\ SAFECount [51] & 13.56 & 91.31 & 15.45 & 28.73 \\ + 5 exemplars & 13.01 \(\downarrow\)4\% & 94.22 \(\uparrow\)3\% & 14.83 \(\downarrow\)4\% & 28.01 \(\downarrow\)3\% \\ + our framework & 9.42 \(\downarrow\)31\% & 80.69 \(\downarrow\)12\% & 10.45 \(\downarrow\)32\% & 18.42 \(\downarrow\)36\% \\ BMNet+ [41] & 14.62 & 91.83 & 17.49 & 29.76 \\ + 5 exemplars & 14.40 \(\downarrow\)2\% & 91.56 \(\downarrow\)0\% & 17.27 \(\downarrow\)1\% & 29.60 \(\downarrow\)1\% \\ + our framework & 9.51 \(\downarrow\)35\% & 84.66 \(\downarrow\)8\% & 13.43 \(\downarrow\)23\% & 22.39 \(\downarrow\)25\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparing the performance of the proposed interactive framework with **five feedback iterations** to a few-shot baseline approach that uses the base counters with **5 additional exemplars**.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{FSC-147 Test set} & \multicolumn{2}{c}{FSCD-LVIS Test set} \\ \cline{2-5} & MAE & RMSE & MAE & RMSE \\ \hline Initial error & 22.08 & 99.54 & 41.26 & 57.87 \\ \hline MSER [2] & 16.46\(\pm\)0.13 & 86.06\(\pm\)3.28 & 30.61\(\pm\)0.37 & 44.57\(\pm\)0.77 \\ Watershed & 18.95\(\pm\)0.09 & **74.08\(\pm\)4.66** & 27.35\(\pm\)0.25 & 43.21\(\pm\)0.79 \\ K-means & 15.32\(\pm\)0.23 & 86.49\(\pm\)2.43 & 32.70\(\pm\)0.04 & 47.80\(\pm\)0.09 \\ DBSCAN & 19.69\(\pm\)0.24 & 78.26\(\pm\)8.08 & 41.26\(\pm\)0.00 & 57.87\(\pm\)0.00 \\ IPSE (proposed) & **11.75\(\pm\)0.12** & 75.37\(\pm\)5.21 & **21.18\(\pm\)0.28** & **34.13\(\pm\)0.88** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of different segmentation methods under the same adaptation scheme with **five feedback iterations**, when FamNet is the base visual counter. Each experiment is repeated five times, and the **mean** and **standard error** are reported.
Figure 8: Qualitative results of our approach for test images in FSC-147 with FamNet as the visual counter. The brighter region is the selected region, and the red dot is the approximate location of each region generated by peak selection and non-maximum suppression on each region. The selected region is highlighted in this example, and the input range is \((-\infty,0]\), since the object of interest is cake. With one single interaction, our method can improve the counting result locally and globally. **More qualitative results and a demo video are in our supplementary.**
Figure 9: Channel-wise contributes more to the improvement than spatial-wise refinement since channel-wise refinement corrects global errors, while spatial-wise focuses on local errors.
**Adaptation scheme.** Table 5 shows the ablation study on the adaptation scheme. LocalCorrection is the method that only corrects the prediction of the selected region, and will not adapt the visual counter. AllParamAdapt is the method that updates all the parameters in the regression head.
**User feedback simulation.** To further assess the efficiency and efficacy of various region selection strategies, we consider two methods that may offer advantages over the random selection used in previous experiments. These two strategies are: 1) prioritizing background regions containing no objects, and 2) selecting regions with the largest errors. The comparison of these region selection strategies is presented in Table 6. The error-based approach emerges as the most successful, whereas prioritizing background regions has a detrimental impact on performance. Concerning time efficiency, random selection is the quickest, followed by background prioritization, with error-based selection being the slowest due to the need for error estimation for each region.
In the primary experiment, we set interactions at five, given consecutively. To validate this five-interaction approach, we conducted extra trials on the FSC-147 dataset with varying interaction counts. Results are shown in Fig. 10, revealing continued performance improvement beyond five interactions, though at a slower rate.
We also compared consecutive and non-consecutive interactions to confirm the importance of sequential engagement. Table 7 outlines these results, indicating better performance with sequential interactions. However, this approach increases time consumption due to added segmentation and adaptation time.
methods, and the feedback simulation is identical to those reported in Sec. 4.1. The quantitative results are shown in Table 8, and the qualitative results are shown in Fig. 11. Our approach reduces the MAE and RMSE by approximately 40% and 30% on ShanghaiTech A, and approximately 30% and 25% on UCF-QNRF. For car counting, we apply our framework to FamNet and SAFECount on CARPK [8]. The results are shown in Table 9. The MAE decreased more than 15% on CARPK.
### User study
To assess the feasibility of the proposed interactive counting framework, we conducted a user study with eight participants. We selected 20 images with high counting errors from the FSC-147 dataset and used FamNet as the visual counter. Each participant was allowed a maximum of five iterations for each image, but they could terminate the process if they felt the prediction was accurate enough. The average number of iterations for one image is \(3.08\), and the variance is 0.41. Additionally, we carried out an experiment involving simulated user feedback on the same set of images, to compare the results with those obtained from real user feedback. As shown in Table 10, the users were able to improve the performance of the counter using our framework, demonstrating its potential for practical usage. Moreover, the benefits achieved from real user feedback were comparable to those obtained from simulated feedback, indicating that many of our analyses using simulated feedback can be extrapolated to real-world scenarios.
The user study was conducted on an RTX3080 machine, and several time statistics of a single iteration are presented in Fig. 12. It took less than a second to segment any density image and display it. The average time for a user to select a region and specify a range was four seconds, and the adaptation time for a single iteration was less than one second. All operations are sufficiently fast for interactive systems.
## 5 Conclusions
We have proposed an interactive framework primarily for class-agnostic counting, which can also be extended to class-specific counting. It uses a novel method for density map segmentation to generate an intuitive display for the user, enabling them to visualize the results and provide feedback. Additionally, we have developed an adaptation loss and a refinement module to efficiently and effectively update the visual counter with the user's feedback. Experiments on two class-agnostic counting datasets and two crowd-counting benchmarks with four different visual counters demonstrate the effectiveness and general applicability of our framework.
**Acknowledgement**: This project was supported by US National Science Foundation Award NSF DUE-2055406.
Figure 11: Qualitative results of our approach for test images in ShanghaiTech A with DM-Count as the visual counter. The left figure is before interaction, and the right is after one interaction. The selected region is highlighted, and the input range is \((20,30]\).
\begin{table}
\begin{tabular}{l c c c} \hline \hline \#Feedback Iterations & MAE & RMSE \\ \hline Initial error & \(93.41\) & \(125.57\) \\ Real User (avg. 3.08 liters) & \(45.11\pm 2.63\) \(\downarrow\)52\% & \(90.64\pm 2.68\)\(\downarrow\)28\% \\ Simulated feedback (3 items) & \(59.97\pm 3.34\) \(\downarrow\) 34\% & \(110.67\pm 5.96\)\(\downarrow\)12\% \\ Simulated feedback (5 items) & \(43.86\pm 3.02\) \(\downarrow\) 52\% & \(92.93\pm 2.90\)\(\downarrow\)26\% \\ \hline \hline \end{tabular}
\end{table}
Table 10: Comparison between real user’s feedback and the simulated feedback used in our quantitative experiments.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{2}{c}{ShanghaiTech A} & \multicolumn{2}{c}{UCF-QNRF} \\ \cline{2-4} & MAE & RMSE & MAE & RMSE \\ \hline Initial error & 59.60 & 95.56 & 85.65 & 148.35 \\ \hline MSER [2] & 57.90\(\pm\)0.32 & 94.84\(\pm\)0.50 & 81.38\(\pm\)0.58 & 142.73\(\pm\)0.93 \\ Watershed & 58.84\(\pm\)0.71 & 90.17\(\pm\)2.06 & 82.78\(\pm\)0.46 & 144.70\(\pm\)0.39 \\ K-means & 48.20\(\pm\)0.75 & 79.23\(\pm\)2.46 & 73.29\(\pm\)1.77 & 131.37\(\pm\)3.98 \\ DBSCAN & 56.99\(\pm\)0.21 & 92.73\(\pm\)0.28 & 80.62\(\pm\)0.93 & 142.29\(\pm\)1.66 \\ LocalCorrection & 43.86\(\pm\)0.26 & 74.76\(\pm\)0.84 & 78.27\(\pm\)0.23 & 135.07\(\pm\)0.53 \\ AllParamAdapt & **31.03\(\pm\)0.33** & 59.47\(\pm\)0.96 & 62.30\(\pm\)2.81 & 123.03\(\pm\)7.52 \\ Proposed & 33.85\(\pm\)0.78 & **57.50\(\pm\)1.94** & **58.13\(\pm\)1.04** & **102.32\(\pm\)2.63** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Results of interactive adaptation methods for a crowd-counting network (DM-Count) using five feedback iterations.
Figure 12: Per-iteration statistics. The visualization time (including segmentation time) and the adaptation time are both less than one second, sufficiently fast for interactive systems.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{2}{c}{Initial Error} & \multicolumn{2}{c}{Five Interactions} \\ \cline{2-4} & MAE & RMSE & MAE & RMSE \\ \hline FamNet & 18.34 & 35.77 & 13.91 & 20.14 \\ SAFECount & 4.91 & 6.32 & 4.16 & 5.91 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Result on CARPK with FamNet and SAFECount using five feedback iterations.
## 6 Supplementary Overview
In the supplementary, we first provide more details about our approach in section 7.1. We then provide more implementation details in section 7.2, analyze the time efficiency and ablate the location of the feature refinement module in section 7.4, analyze our method's robustness in section 7.3, and analyze the effectiveness of confidence scaling on the other two class-agnostic visual counters in section 7.5. After that, we introduce the interface of our interactive system in section 7.6 and give more qualitative results in sections 7.7 and 7.8. Finally, we briefly discuss limitations and future work in section 7.9. In addition to the supplementary material, we also provide a demo video of our interactive counting system.
## 7 Supplementary Material
### Additional details for our approach
In this section, we provide additional details for the interaction loop and the IPSE density map segmentation.
#### 7.1.1 Interaction loop
A detailed algorithm for the interaction loop is illustrated in Algorithm 1. The input visual counter contains the following components: a feature extractor \(f\), a spatial-similarity learning module \(g\), the layers before the refinement module in the regression head \(h_{b}\), the refinement module \(\mathcal{R}_{\theta_{r}}\), and the layers after the refinement module in the regression head \(h_{a}\). We need to update \(\Omega\) with \(\mathbf{D}\) in each gradient step because the summation over each region depends on the estimated density map.
```
Input: Density map \(\mathbf{D}\), smoothing kernel \(\mathbf{G}\), objective function \(h(R)\), region size upper bound \(T_{u}\)
Initialize: Foreground region set \(\mathbb{V}_{f}=\{\}\), background region set \(\mathbb{V}_{b}=\{\}\)
1: \(\widetilde{\mathbf{D}}\leftarrow\mathbf{D}*\mathbf{G}\)
2: \(S\leftarrow sum(\mathbf{D})\)
3: while \(S\geq 1\) do
4:   \(p=argmax(\widetilde{\mathbf{D}})\)
5:   \(\widetilde{\mathbf{D}}[p]\leftarrow-\infty\)
6:   \(R\) = Peak Expansion(\(\mathbf{D}\), \(\widetilde{\mathbf{D}}\), \(p\), \(h(R)\), \(T_{u}\))
7:   \(\mathbb{V}_{f}.append(R)\)
8:   \(S\leftarrow S-R_{s}\)
9: \(\mathbb{V}_{b}\) = Background Splitting\((\mathbf{D},\widetilde{\mathbf{D}})\)
10: \(\mathbb{V}=\mathbb{V}_{f}\cup\mathbb{V}_{b}\)
11: \(\mathbb{V}\leftarrow\) Small Region Merging(\(\mathbb{V}\))
12: return \(\mathbb{V}\)
```
**Algorithm 2 IPSE Density Map Segmentation Algorithm**
#### 7.1.2 IPSE density map segmentation
A detailed algorithm for IPSE is illustrated in Algorithm 2, and the peak expansion algorithm is shown in Algorithm 3. In Algorithm 2, background splitting simply expands from a random background peak, iteratively including neighboring pixels under the same area upper bound, and small region merging merges small regions into their neighboring regions. More specifically, the region size upper bound \(T_{u}\) is set to 1250, and the region size lower bound \(T_{l}\) in the objective function is set to 250.
### Additional implementation details
**FamNet [35].** For FSC-147 [35], we used the released pre-trained model. For FSCD-LVIS [31], we train it on one RTX A5000 machine for 150 epochs with a learning rate of \(1\times 10^{-6}\). On FSC-147, following [35], we perform test-time adaptation; on FSCD-LVIS we do not, for time efficiency.
**SAFECount [51].** For FSC-147 we used the released pre-trained model. For FSCD-LVIS, we train it on one RTX A5000 machine for 100 epochs, and the learning rate is \(2\times 10^{-5}\). The interactive adaptation gradient steps are set to 30, and the interactive adaptation learning rate is 0.001.
**BMNet+ [41].** For FSC-147 we used the released pre-trained model. For FSCD-LVIS, we train it on one RTX A5000 machine for 100 epochs, and the learning rate is \(1\times 10^{-5}\). The interactive adaptation gradient steps are set to 30, and the interactive adaptation learning rate is 0.001.
**DM-Count [46].** For ShanghaiTech and UCF-QNRF, we used the released pre-trained model.
### Robustness experiment on crowd counting
Our experiment on crowd counting has demonstrated the effectiveness of our adaptation method from the computational perspective. From the human perspective, however, the user may not be able to easily provide feedback for the count ranges needed for crowd counting. This is not a concern for the small count limit and the count ranges used in the class-agnostic counting setting, given the subitizing ability of humans. But for the count ranges \(\{(-\infty,0],(0,10],\ldots,(40,50],(50,\infty)\}\) used in this crowd counting experiment, a human user might make estimation mistakes leading to noisy feedback. We therefore perform an experiment to study the robustness of our method to noisy feedback. Specifically, we introduce random biases to the ground truth estimation to simulate mistakes. We consider two estimation biases. For moderate bias, a random noise of up to 30% of the count limit is added ([-15, 15]). For large bias, a random noise of up to 50% is added ([-25, 25]). With a biased estimation, our approach can still reduce the MAE by approximately 30% on ShanghaiTech A and 10% on the other two datasets, as shown in Table 11.
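To make the noise model explicit, the sketch below reproduces it under the stated parameters; the function name and the clamping at zero are our assumptions.

```python
import random

def noisy_user_count(true_count, count_limit=50, bias_frac=0.3):
    # moderate bias: 0.3 * 50 -> noise in [-15, 15]; large bias: 0.5 * 50
    b = int(bias_frac * count_limit)
    return max(0, true_count + random.randint(-b, b))
```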
### Additional ablation study
All additional ablation studies are conducted on the FSC-147 or FSCD-LVIS validation sets with FamNet as the visual counter.
#### 7.4.1 Time efficiency analysis
Table 12 shows the time-efficiency comparison with vanilla adaptation (adapting the whole regression head). Table 12 reports the average adaptation time (in seconds) for one single click. This experiment is run on an RTX A5000; for FSC-147 both methods use 10 gradient steps for one adaptation, and for FSCD-LVIS 20. We find that our approach is 11.88% faster than vanilla adaptation on FSC-147 and 8.92% faster on FSCD-LVIS. Our method is faster because it requires less computation in the feedforward pass, backpropagation, and parameter updating, as illustrated in Algorithm 1. In the feedforward pass, we only need to compute the layers before the refinement module once, and in backpropagation, we only need to compute gradients for the layers after the refinement module. Also, we only need to update the parameters in the refinement module.
#### 7.4.2 Location of the refinement module.
The ablation on the location of the refinement module is shown in Table 13. This experiment is conducted on the FSC-147 validation set. "Correlation map" means directly refining the spatial correlation map between the exemplars and the input image. We find that inserting the module at a shallower position gives better performance, and directly refining the correlation map does not work well.
\begin{table}
\begin{tabular}{l c c} \hline \hline Component & MAE & RMSE \\ \hline Correlation map & 18.71\(\pm\)0.78 & 64.15\(\pm\)9.69 \\ After first conv & **12.79\(\pm\)0.16** & **47.21\(\pm\)2.05** \\ After second conv & 13.63\(\pm\)0.34 & 48.11\(\pm\)3.95 \\ After third conv & 13.87\(\pm\)0.13 & 51.56\(\pm\)1.62 \\ \hline \hline \end{tabular}
\end{table}
Table 13: Results of different locations of refinement module on the regression head of FamNet. The mean and the standard error of five experiments with different seeds are reported.
### Additional analysis on confidence scaling
In the ablation of the adaptation loss in our main paper, confidence scaling seems less important on FamNet. To further analyze its effectiveness, we conduct additional analysis of confidence scaling on SAFECount and BMNet+. As shown in Fig. 13 and Fig. 14, we find that confidence scaling makes the adaptation smoother and improves the final result significantly.
### Interactive Interface
The frontend interface of our interactive software is shown in Fig. 15. In the visualization, we also provide approximate locations of the detected objects by placing dots in the regions. The locations of these dots are found automatically by iteratively selecting a peak of the density map and performing non-maximum suppression on the neighboring pixels. We also provide a demo video in the supplementary. In the demo video, the running time for each interaction is around two seconds. This is because one interaction includes four stages: adaptation, density map display, segmentation, and visualizing the final result (overlaying the image with region boundaries and the approximate location of each counted object). Analysis of these stages, using images from our user study with three interactions each, shows a mean interaction time of 2.07 seconds. Breakdown: adaptation 0.52s, map display 0.50s, segmentation 0.40s, visualizing the final result 0.64s. Although segmentation takes less than a second, the full process lasts over two seconds due to the image save-load-visualize process. We aim to optimize our software for increased speed in the future.
### Qualitative results of refinement module.
The qualitative results of the feature refinement are shown in Fig. 16. In this figure, for each example, the first row shows the initial result, and the second row shows the result after one interaction. In each row, we show the prediction, the estimated density map, the refined feature map, and the scale parameters in the refinement module. From the last three columns, we can see that the spatial-wise refinement focuses on local errors: only the parameters close to the selected region are updated. Thus the spatial-wise refinement contributes more to correcting local errors. We also find that channel-wise refinement refines the feature map globally and can correct global errors. This also explains why the channel-wise refinement contributes more to the final result, as illustrated in the refinement module's ablation study in the main paper.
### Additional Qualitative results.
Additional qualitative results on FSC-147 with FamNet are shown in Fig. 17 and Fig. 18.
### Limitation and future work
Our approach has several limitations. First, the user's feedback is for the entire region, not individual objects. Second, the specified count is a range, not a precise number. Third, local adaptation may fail to reduce the global error, due to the inconsistency between local and global errors. Despite these limitations, the proposed method provides a practical way for the user to provide feedback and reduce counting errors in most cases. Also important is the availability of an intuitive graphical user interface for the user to decide whether to trust the automated counting results before and after the adaptation.
In this work, we aim for a system that reduces the user's burden so that the user is not asked to delineate or localize objects. But we envision that localizing an object and delineating its spatial extent would be a stronger form of supervision, and it would be necessary for certain situations. This will be explored in our future work.
|
2310.20157 | Electrically empowered microcomb laser | Optical frequency comb underpins a wide range of applications from
communication, metrology, to sensing. Its development on a chip-scale platform
-- so called soliton microcomb -- provides a promising path towards system
miniaturization and functionality integration via photonic integrated circuit
(PIC) technology. Although extensively explored in recent years, challenges
remain in key aspects of microcomb such as complex soliton initialization, high
threshold, low power efficiency, and limited comb reconfigurability. Here we
present an on-chip laser that directly outputs microcomb and resolves all these
challenges, with a distinctive mechanism created from synergetic interaction
among resonant electro-optic effect, optical Kerr effect, and optical gain
inside the laser cavity. Realized with integration between a III-V gain chip
and a thin-film lithium niobate (TFLN) PIC, the laser is able to directly emit
mode-locked microcomb on demand with robust turnkey operation inherently built
in, with individual comb linewidth down to 600 Hz, whole-comb frequency tuning
rate exceeding $\rm 2.4\times10^{17}$ Hz/s, and 100% utilization of optical
power fully contributing to comb generation. The demonstrated approach unifies
architecture and operation simplicity, high-speed reconfigurability, and
multifunctional capability enabled by TFLN PIC, opening up a great avenue
towards on-demand generation of mode-locked microcomb that is expected to have
profound impact on broad applications. | Jingwei Ling, Zhengdong Gao, Shixin Xue, Qili Hu, Mingxiao Li, Kaibo Zhang, Usman A. Javid, Raymond Lopez-Rios, Jeremy Staffa, Qiang Lin | 2023-10-31T03:49:39Z | http://arxiv.org/abs/2310.20157v1 | # Electrically empowered microcomb laser
###### Abstract
Optical frequency comb underpins a wide range of applications from communication, metrology, to sensing. Its development on a chip-scale platform - so called soliton microcomb - provides a promising path towards system miniaturization and functionality integration via photonic integrated circuit (PIC) technology. Although extensively explored in recent years, challenges remain in key aspects of microcomb such as complex soliton initialization, high threshold, low power efficiency, and limited comb reconfigurability. Here we present an on-chip laser that directly outputs microcomb and resolves all these challenges, with a distinctive mechanism created from synergetic interaction among resonant electro-optic effect, optical Kerr effect, and optical gain inside the laser cavity. Realized with integration between a III-V gain chip and a thin-film lithium niobate (TFLN) PIC, the laser is able to directly emit mode-locked microcomb on demand with robust turnkey operation inherently built in, with individual comb linewidth down to 600 Hz, whole-comb frequency tuning rate exceeding \(2.4\times 10^{17}\) Hz/s, and 100% utilization of optical power fully contributing to comb generation. The demonstrated approach unifies architecture and operation simplicity, high-speed reconfigurability, and multifunctional capability enabled by TFLN PIC, opening up a great avenue towards on-demand generation of mode-locked microcomb that is expected to have profound impact on broad applications.
An optical frequency comb is a coherent light source that consists of many highly coherent single-frequency laser lines equally spaced in the frequency domain. Its development has revolutionized many fields including metrology, spectroscopy, and optical clocks [1]. In recent years, significant interest has been attracted to the generation of phase-locked optical frequency combs in on-chip nonlinear microresonators [2; 3; 4]. The superior coherence offered by these mode-locked microcombs has enabled a variety of important applications including data communication [5], spectroscopic sensing [6], optical computing [7; 8], range measurement [9; 10; 11; 12], optical [13] and microwave [14] frequency synthesis, with many others expected in the years to come.
Despite this great progress, challenges remain in the development and application of microcombs. The first is the difficulty in triggering comb mode-locking due to the intrinsic device nonlinearities. Recently, self-starting operation has been demonstrated to address this issue [15; 16; 17; 18]. Their implementations, however, require sophisticated system pre-configuration and a careful balance of specific nonlinear dynamics, which are difficult to apply in most practical devices. The second is the low power efficiency of soliton microcomb generation due to the pump-laser-cavity frequency detuning induced by soliton pulsing. Although pulse pumping [19] or auxiliary-resonator enhancement [20; 21] can improve the generation efficiency, they require delicate synchronization in time or resonance frequency, and the difficulty of soliton initialization remains the same. The third is the limitation in comb controllability due to the monolithic nature of the comb generator, which is difficult to change after the device is fabricated. The piezoelectric effect can be used to deform the comb resonator [12]; this approach, however, exhibits limited tuning speed and efficiency due to its slow mechanical response. To date, the majority of comb generators still have to rely on external laser control to adjust the microcomb state.
Recently, there have been significant advances in chip-scale integration of semiconductor lasers and nonlinear comb generators [22; 23; 24; 16], in which a diode laser produces single-frequency laser emission to pump a hybridly or heterogeneously integrated external nonlinear resonator to excite microcombs. Such a fully integrated system shows great promise in improving the size, weight, and power consumption. However, the nature of soliton comb generation remains essentially the same, with all the above challenges persisting. Up to now, the realization of an integrated comb source free from these challenges remains elusive.
Here we present a fundamentally distinctive approach to resolve all these challenges in a single device. Figure 1**a** shows the device concept. In contrast to conventional approaches that rely solely on a single mechanism - either optical Kerr or electro-optic effect - for comb generation while with external pumping, we utilize the resonantly enhanced electro-optic (EO) modulation to initiate the comb generation, the resonantly enhanced optical Kerr effect to expand the comb bandwidth and phase-lock the comb lines, and the embedded III-V optical gain to sustain and stabilize the comb operation. Moreover, the resulting coherent microwave (via optical detection) is fed back to the EO comb to further enhance the mode-locking, leading to unique self-sustained comb operation.
We realize this approach by integrating a III-V gain element with a thin-film lithium-niobate (LN) photonic integrated circuit (PIC) to produce a III-V/LN comb laser (Fig. 1**b**). LN PIC has attracted significant interest recently [25; 26; 27; 28] for a variety of applications including high-speed modulation [29; 30], frequency conversion [31; 32; 33], optical frequency comb [34; 15; 35], and single-frequency lasers [36; 37; 38; 39]. Here, we unite active EO modulation with passive four-wave mixing (FWM) in a dispersion-engineered high-Q laser cavity for the on-demand generation of mode-locked soliton microcomb, which naturally leads to self-starting full turnkey operation simply by turning on/off either the RF signal driving the comb resonator or the electric current driving the gain element. As the comb modes extract energy directly from material gain, 100% of the optical power contributes directly to the comb generation. Moreover, the strong electro-optic effect of the LN cavity enables high tunability and reconfigurability of the produced microcomb. With this approach, we are able to produce broadband highly coherent microcombs, with individual comb linewidth down to 600 Hz, frequency tunability of over \(2.4\times 10^{17}\) Hz/s for the entire microcomb, microwave phase noise down to -115 dBc/Hz at 500 kHz frequency offset, and a wall-plug efficiency exceeding 5.6%. The simplicity of the demonstrated approach opens up a new path for the on-demand generation of mode-locked microcombs that is expected to have a profound impact on broad applications in high-precision metrology, telecommunications, remote sensing, clocking, computing, and beyond.
## Comb laser structure design
The III-V/LN microcomb laser is formed by integrating an InP reflective semiconductor optical amplifier (RSOA) with an LN external cavity chip via facet-to-facet coupling. We employ two types of laser cavity structures for this purpose, as shown in Fig. 1**b**. The Chip-A laser structure embeds a dispersion-engineered EO microresonator, while the Chip-B laser structure consists of a simple EO phase modulation waveguide section. The advantage of the Chip-A resonator-type structure is that the high-Q microresonator offers strong cavity enhancement for comb generation and mode-locking; we term this the cavity-enhanced (CE) comb laser. The benefit of the Chip-B Fabry-Perot-type structure is that it offers flexibility in mode-locking operation; we term this the Fabry-Perot (FP) comb laser.
In the CE comb laser, the group-velocity dispersion (GVD) of the racetrack microresonator (intrinsic optical Q \(\sim 1.6\times 10^{6}\)) is engineered to be small but slightly anomalous to support broadband comb generation. At the same time, the pulley
Figure 1: Device concept of the integrated comb laser. **a.** Conceptual illustration of the comb generation and mode-locking principle, in which electro-optic (EO) comb generation, Kerr comb generation, and broadband optical gain all work synergistically together inside a single laser cavity for on-demand generation of a mode-locked soliton comb. In addition, the laser comb output is detected and fed back to the laser cavity for resonant EO modulation to realize a self-sustained operation. **b.** Schematic of the comb laser cavity structure formed by hybrid integration between an RSOA chip and an LN external cavity chip. Two different configurations are employed: A, cavity-enhanced (CE) comb laser structure in which the LN external cavity is formed mainly by an embedded high-Q racetrack resonator together with a broadband Sagnac loop mirror; B, Fabry-Perot (FP) comb laser in which the LN external cavity is formed by an EO phase modulation section together with a broadband Sagnac loop mirror. **c.** Photo of a CE comb laser, showing that the RSOA is edge-coupled to the LN external cavity chip. **d.** Zoom-in photo showing the edge-coupling region between the RSOA and the LN chip. **e.** Photo of the racetrack resonator and the loop mirror in a CE comb laser. **f.** Photo of the EO phase modulator and the loop mirror in an FP comb laser.
coupling regions are specially designed for uniform close-to-critical coupling to the resonator over a broad telecom band. Such a design ensures high loaded optical Q (\(\sim 5\times 10^{5}\)) of the resonator uniformly across a wide spectral range which is crucial both for enhancing the comb generation and mode-locking and for efficient light coupling into/out of the microresonator. Moreover, we also engineer the GVD of straight waveguide sections outside the racetrack resonator to compensate for that of the RSOA section so as to minimize the overall laser cavity GVD. At the same time, the overall optical path length of the laser cavity is designed to be an integer multiple of the racetrack resonator's for matching their resonance mode frequencies and round-trip group delay. The GVD of the FP comb laser is engineered in a similar fashion. The free
Figure 2: Lasing performance of a CE comb laser. **a.** Schematic of the experimental setup for comb laser characterization. The RSOA is powered by a stable current source and an RF signal generator is used to drive the racetrack resonator. OSA: optical spectrum analyzer; AC: autocorrelator; PD: photo-detector; ESA: electrical spectrum analyzer; OSC: real-time oscilloscope; PNA: phase noise analyzer. **b.** Optical spectrum of the comb laser output. Off state: single mode lasing with the RF driving signal turned off. On state: comb lasing when the RF driving signal is turned on. Inset: Autocorrelation trace of the laser output pulses in the “comb on” state, in which the blue curve shows the experimental data, the dotted curves show the fitted autocorrelation profiles from individual sech\({}^{2}\) pulses, and the dashed curve shows the overall fitted autocorrelation trace. The autocorrelation is recorded directly from the laser output pulses without dispersion compensation or pulse shaping. **c.** Electrical spectrum of the beat note detected from the comb laser output. The red and blue curves show the comb-on and comb-off (single-mode lasing) states, respectively, corresponding to those in **b.** The gray curve shows the noise background of the optical detector, as a comparison. The inset shows the detailed spectrum of the RF beat note at 39.58 GHz. **d.** Phase noise spectrum of the 39.58-GHz RF beat note (red) and the RF driving signal (blue). **e.** Phase noise spectrum of the CE comb laser output measured with a self-heterodyne method. The red and green curves correspond to two different comb states, and the blue curve to the comb-off (single-mode lasing) state. The inset shows the corresponding frequency noise spectrum of the three laser states. **f.** P-I-V curve of the CE comb laser, in which the red and blue curves show the L-I and I-V curves, respectively. **g - j.** Turnkey operation of the comb laser at two different speeds of 2 Hz (**g, h**) and 200 Hz (**i, j**), respectively. The red curve shows the normalized driving RF power and the blue curve shows the beating signal between the comb laser output and an external reference laser at 1582 nm. **h** and **j** show the zoomed-in signal for the on/off states, respectively.
spectral range (FSR) of the racetrack resonator is designed to be around 40 GHz to better accommodate bandwidths of RF filter and amplifier after detection of comb beating signal. In contrast, the FP comb laser is designed to have a small FSR of around 10 GHz for easy operation of harmonic mode locking.
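As a rough sanity check on the cavity dimensions these FSR targets imply, one can invert the standard traveling-wave-resonator relation; the group index \(n_{g}\approx 2.2\) below is an assumed value typical of TFLN waveguides, not a figure quoted in the text:

\[\mathrm{FSR}=\frac{c}{n_{g}L_{\mathrm{rt}}}\quad\Rightarrow\quad L_{\mathrm{rt}}\approx\frac{3\times 10^{8}\,\mathrm{m/s}}{2.2\times 40\times 10^{9}\,\mathrm{Hz}}\approx 3.4\,\mathrm{mm},\]

i.e. a millimeter-scale racetrack for the 40-GHz design, and roughly four times that round-trip length for the 10-GHz FP design.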
To support the broadband operation, the Sagnac loop mirror employs an adiabatic coupling design [40] to achieve high reflection and feedback with a reflectivity of \(>\)95% over a broad spectral band of 1500-1600 nm. On the other hand, a horn taper waveguide is designed on the LN chip to minimize the coupling loss between the LN chip and the RSOA gain chip. The RSOA exhibits a broadband gain in the telecom L-band, with a 3-dB bandwidth of \(>\)40 nm. Fig. 1**c**-**f** show the device structures. Details about the design parameters and the characterization of the laser structures are provided in the Supplementary Information (SI).
## Comb laser performance
To excite the mode-locked microcomb, we first launch a single-frequency RF signal to drive the racetrack resonator of the CE comb laser (Fig. 2**a**), with a frequency of 39.58 GHz that matches its FSR. Before the RF signal is applied, the device exhibits single-mode or multi-mode lasing, with an example shown in the blue curve of Fig. 2**b**. However, a microcomb is readily produced as soon as the RF signal is applied, with an optical spectral bandwidth of about 20 nm (Fig. 2**b**, red curve). Mode-locking of the comb is verified by the clean RF tone at 39.58 GHz detected from the beating between comb lines (Fig. 2**c**), as well as the autocorrelation trace from the laser output pulses (Fig. 2**b**, inset). The 39.58-GHz RF tone exhibits a high signal-to-noise ratio of 79 dB (Fig. 2**c**, inset), and its phase noise spectrum matches that of the driving RF source identically (Fig. 2**d**), showing the preservation of relative phase coherence between comb lines via mode-locking. Mode-locking of the comb is also clearly evidenced by the clean noise floor around the DC region in the RF spectrum (Fig. 2**c**; see SI for details), where the absence of extra noise from the mode-locked comb state indicates that all the comb lines of the entire comb are phase-locked together.
The underlying mode-locking mechanism arises dominantly from the combination of resonant EO modulation and the optical Kerr effect, in which the EO modulation produces EO sidebands to initiate comb generation while the optical Kerr effect broadens the comb spectrum and phase-locks the comb lines (Fig. 1**a**). Indeed, the laser is able to produce mode-locked soliton pulses in the absence of EO modulation (albeit with a narrower spectrum), in which only the optical Kerr effect is responsible for mode-locking. The detailed theoretical modeling and testing results are provided in the SI. In Fig. 2**c**, the small RF tone around the half-harmonic at 19.79 GHz indicates certain comb dynamics. It can be eliminated by reconfiguring the laser; one example, shown in the SI, exhibits a clean single RF beating tone and a well-defined sech\({}^{2}\)-shaped soliton pulse spectrum. The two lasers mainly differ in the overall dispersion of the laser cavity, indicating that the device dispersion plays an important role in the comb spectrum. The two-sidelobe feature of the comb spectrum in Fig. 2**b** implies that the output pulses are likely mode-locked two-color pulses in which the two color pulses bind to each other via a certain interpulse interaction [41]. Its exact nature, however, will require further exploration. The comb spectrum can be reconfigured by changing the power or frequency of the RF driving signal, whose details are provided in the SI.
In addition to the high coherence between the comb lines, the comb laser also exhibits narrow linewidth on its individual comb lines. To show this feature, we employ the correlated self-heterodyne method [42; 43] to characterize the overall linewidth of the whole comb laser by launching the entire comb for linewidth measurement (rather than characterizing individual comb lines themselves) (See SI for details). The recorded phase noise spectrum is shown in Fig. 2**e**, which indicates a white frequency-noise floor of \(\sim\)350 Hz\({}^{2}\)/Hz (Fig. 2**e**, inset) that corresponds to a laser linewidth of \(\sim\)2 kHz. The linewidth of the comb lines can be decreased further and an example is shown in Fig. 2**e** for a slightly different comb state produced from the same laser, which exhibits a white frequency-noise floor of \(\sim\)100 Hz\({}^{2}\)/Hz (Fig. 2**e**, inset) that corresponds to a laser linewidth as low as \(\sim\)600 Hz. Note that these values represent the overall linewidth contributed from the entire comb, which indicates the upper limit of the intrinsic linewidth of individual comb lines.
Figure 2**f** shows the current-dependent characteristics of the comb laser, which exhibits a low threshold current of 50 mA, indicating the low overall loss of the integrated laser. The comb laser produces an optical output power of 11 mW at a pumping current of 275 mA and a pumping voltage of 1.4 V, which corresponds to a wall-plug efficiency of 2.8%. As the laser has two output ports (Fig. 1**b**) that emit the same amount of optical power, the total wall-plug efficiency of the laser is thus 5.6%. This level of wall-plug efficiency is on par with other integrated external-cavity semiconductor lasers recently developed [44; 45]. Intriguingly, the comb power increases with increased driving RF power, whose details are provided in the SI. Note that the total optical power contributes fully to the generated comb, in strong contrast to conventional Kerr solitons or EO combs in which the major optical power remains in the residual pump wave with low comb generation efficiency.
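The quoted efficiencies follow directly from the stated operating point:

\[\eta=\frac{P_{\mathrm{opt}}}{IV}=\frac{11\,\mathrm{mW}}{275\,\mathrm{mA}\times 1.4\,\mathrm{V}}=\frac{11\,\mathrm{mW}}{385\,\mathrm{mW}}\approx 2.8\%,\qquad\eta_{\mathrm{total}}=2\eta\approx 5.6\%.\]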
A distinctive feature of the comb laser is that the produced comb can be switched on/off on demand by simply switching on/off the driving RF signal. To show this feature, we beat the comb with a reference single-frequency laser operating at a wavelength of 1582 nm inside the comb spectrum, and monitor the beating signal as the RF driving signal is turned on/off. As shown in Fig. 2**g** and **i**, the beating signal faithfully follows the driving RF signal. The coherent beating signal shows up readily when the RF driving is on, indicating the generation of the mode-locked comb. The beating signal disappears right after the RF driving is off, indicating the shut-off of the comb state. The same phenomenon is observed when the reference laser is tuned to other wavelengths within the comb spectrum.
Similar phenomena are observed in the FP comb laser, albeit generally with smaller spectral extents due to the lack
of cavity enhancement. The FP comb laser, however, exhibits a distinctive feature in that it can be flexibly mode-locked at higher harmonics of the laser cavity FSR. Fig. 3 shows this feature. We are able to achieve third- and fourth-order harmonic mode-locking by applying an RF signal to the phase modulation section of the FP comb laser, with frequencies of 29.45 and 39.27 GHz, respectively, which are three and four times the laser FSR (9.817 GHz). Again, mode-locking of the combs is clearly verified by the RF beating signal detected from the combs with an SNR of 77 dB, as well as by the autocorrelation traces from the laser output pulses (Fig. 3**b**,**c**, insets).
## High-speed frequency tuning of the comb laser
Another distinctive characteristic of the comb laser is that the laser frequencies of the entire mode-locked comb can be tuned cohesively at an extremely high speed. To show this feature, we apply a triangular-waveform electric signal - together with the 39.58-GHz RF driving signal - to the racetrack resonator of the CE comb laser as shown in Fig. 4**a**. While the 39.58-GHz RF driving signal supports the mode-locking process, the triangular-waveform electric signal will adiabatically tune the resonance frequencies of the racetrack resonator, thus tuning the laser frequencies of the entire mode-locked comb together as a whole.
To show this feature, we beat the comb with a narrow-linewidth reference CW laser at 1582 nm that is about 15 GHz away from a comb line, and monitor the beating signal in real time. At the same time, we monitor the spectrum of the recorded 39.58-GHz RF tone from the beating between the comb lines (see SI for details of the setup). The frequency dynamics of the 15-GHz beating signal with the reference laser show the frequency tuning of the nearby comb line, while the 39.58-GHz RF tone from the comb line beating indicates the quality of mode-locking during the frequency tuning. Fig. 4**b**-**d** show the temporal variation of the 15-GHz beating signal at modulation speeds of 1, 10, and 100 MHz. They show clearly that the frequency tuning of the comb line faithfully follows the waveform of the driving triangular-waveform electric signal at all modulation speeds, with a deviation of no more than 5%. In particular, the recorded 39.58-GHz RF tone from the comb line beating (Fig. 4**e**-**g**) remains unchanged during the frequency tuning, except for modulation sidebands that simply result from the laser frequency modulation (see also Fig. 4**a**, right figure). This observation confirms that the phase-locking between the comb modes is fully preserved during the high-speed frequency tuning process, indicating that the entire mode-locked comb is tuned in its frequencies as a whole, without any perturbation to the comb mode spacing. This is in strong contrast to other comb modulation approaches [11; 12; 46] where the comb mode spacing is seriously impacted by external modulation. The frequency tuning range of 1.2 GHz at the modulation speed of 100 MHz (Fig. 4**d**) corresponds to a frequency tuning rate as high as \(2.4\times 10^{17}\,\mathrm{Hz/s}\) for the comb. Both the frequency tuning rate and tuning speed are orders of magnitude higher than those of the piezoelectric tuning and external pump modulation approaches [11; 12], being constrained only by the photon lifetime of the high-Q racetrack resonator. As shown in Fig. 4**h**, the device exhibits a frequency tuning efficiency of about 0.2-0.8 GHz/V depending on the modulation speed, which is more than an order of magnitude higher than the piezoelectric approach [12]. The tuning efficiency can be further doubled by employing both sets of driving electrodes of the racetrack resonator (Fig. 4**a**).
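The quoted tuning rate is simply the sweep range divided by the ramp duration of the triangular drive: each half-period of the 100-MHz waveform sweeps the full 1.2-GHz range, so

\[\text{rate}=\frac{\Delta f}{T/2}=\frac{1.2\times 10^{9}\,\mathrm{Hz}}{5\times 10^{-9}\,\mathrm{s}}=2.4\times 10^{17}\,\mathrm{Hz/s}.\]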
## Feedback mode-locking of the comb laser
So far, the comb laser utilizes an external RF signal to support the mode-locking. This signal, however, can be removed by feeding the coherent 39.58-GHz RF tone detected from the comb mode beating directly back to the comb laser cavity to sustain the mode-locking, resulting in a unique stand-alone self-sustained comb lasing operation.
Figure 3: Harmonic mode-locking of an FP comb laser with an FSR of 9.817 GHz. The third harmonic (29.45 GHz) and fourth harmonic (39.27 GHz) are separately used as driving signals. **a**. Schematic of the experimental setup for harmonic mode-locking of the comb laser. **b**. Optical spectrum of the laser output with third-harmonic mode-locking, by driving the phase modulator with an RF signal at 29.45 GHz. **c**. Optical spectrum of the laser output with fourth-harmonic mode-locking, by driving the phase modulator with an RF signal at 39.27 GHz. In **b** and **c**, the left insets show the electrical spectrum of the RF beat note detected from the output laser comb, and the right insets show the autocorrelation trace of the laser output pulses with dashed curves showing the fitted individual pulses. As in Fig. 2**b**, the autocorrelation is recorded directly from the laser output pulses without dispersion compensation or pulse shaping.
Figure 5**a** illustrates this approach. The comb laser output is detected by a high-speed optical detector whose output RF signal is amplified to an adequate amplitude, filtered to suppress excess low-frequency noise, adjusted to an appropriate phase, and then fed back to drive the racetrack resonator of the CE comb laser.
As shown in Fig. 5**b**, a broadband microcomb with a spectrum covering about 50 nm and an optical power of 8.5 mW is produced on chip with a driving current of 285 mA. Indeed, the microcomb is readily produced on demand as soon as the driving current is turned on, with a driving current as low as 60 mA, as shown in Fig. 5**c**-**e**. Mode-locking of the comb is clearly evidenced by the clean 39.58-GHz RF tone detected from the comb mode beating (Fig. 5**f**), which exhibits an SNR of 65 dB and a narrow 3-dB linewidth of 1.5 kHz. The phase noise of the RF beating signal reaches a level of -90 dBc/Hz at an offset frequency of 60 kHz, which is considerably lower than the laser heterodyne beating approach [47] and is comparable to that of free-running optical Kerr soliton microcombs [46; 48; 49]. The optical spectral bandwidth and the output power of the comb laser increase considerably with increased driving current (Fig. 5**d**,**e**). This is expected since the increased optical power of the mode-locked comb inside the high-Q racetrack resonator would significantly enhance the optical Kerr effect and the resulting four-wave-mixing process to broaden the comb spectrum. No saturation is observed in the comb spectral bandwidth as the current increases.
## Discussion
The attainable extent of the microcomb spectrum or soliton pulse width in current devices is primarily limited by the available optical power inside the cavity and the group delay mismatch between the enhancing resonator and the main laser cavity. For the former, it can be improved by either reducing the loss (_e.g._, improving the RSOA-LN chip coupling efficiency) or increasing the optical gain (_e.g._, using a higher-power RSOA) inside the laser cavity. For the latter, our
Figure 4: Fast frequency tuning of the whole lasing comb. **a.** Left panel: Schematic of the setup for comb frequency tuning, in which a triangular-waveform electrical signal produced by an arbitrary waveform generator (AWG) is used to drive the racetrack resonator of the CE comb laser together with the 39.58-GHz mode-locking RF signal. Middle panel: Conceptual illustration of the comb frequency tuning process, showing that the laser frequencies of the comb are tuned together as a whole. Right panel: Schematic showing the corresponding sideband creation around the comb lines, introduced by the triangular-waveform frequency modulation. **b, c, d.** Time-frequency spectra of the beatnote between the comb laser output and a reference laser operating at a fixed wavelength of 1582 nm, at modulation speeds of 1, 10, and 100 MHz, respectively. The dashed curves show the corresponding triangular-waveform EO tuning signal. Bottom panels: Corresponding relative frequency deviation at each modulation speed. **e, f, g.** Electrical spectrum of the 39.58-GHz beat note detected from the laser output comb, at modulation speeds of 1 MHz (**e**), 10 MHz (**f**), and 100 MHz (**g**), respectively. **h.** Laser frequency tuning efficiency recorded at different modulation speeds.
theoretical modeling (see SI) shows that the formation of ideal ultrashort soliton pulses would require that the roundtrip time of the main laser cavity be an integer multiple of that of the enhancing racetrack resonator. In current devices, however, there is a certain amount of mismatch, which limits the comb spectrum and the coherence of the mode-beating RF tone. This problem can be resolved by further optimizing the roundtrip length of the main laser cavity and introducing tunability, after which we expect that ultra-broadband, highly coherent soliton microcombs can be produced.
To conclude, we have introduced a new type of chip-scale microcomb laser that can be flexibly mode-locked with either active-driving or passive-feedback approaches and that can be tuned/reconfigured at an ultrafast speed, with robust turnkey operation inherently built in. The demonstrated integrated comb laser exhibits outstanding reconfigurability and performance significantly beyond the reach of conventional on-chip mode-locked semiconductor lasers [50; 51; 52; 53]. The demonstrated devices elegantly combine the simplicity of the integrated laser structure, the robustness of mode-locking operation, and electro-optically enhanced tunability and controllability, opening up a new avenue towards on-demand generation of soliton microcombs with high power efficiency, which we envision to be of great promise for a wide range of applications including ranging, communication, optical and microwave synthesis, sensing, and metrology, among many others.
## Method
**Device fabrication** The device fabrication begins with a congruent x-cut thin film lithium-niobate-on-insulator (LNOI) wafer, with a 600 nm LN layer on a 4.7 \(\mu\)m silica-coated silicon substrate. E-beam lithography (EBL) and Ar-ion milling are used to etch the waveguide with ZEP-520A as the mask. The etching depth ranges from 300 nm (CE comb laser) to 350 nm (FP comb laser) for dispersion engineering. A second EBL step is applied on PMMA for the deposition of 400-nm gold-evaporated electrodes, which are placed 2.5 \(\mu\)m from
Figure 5: Self-sustained operation of the CE comb laser with feedback locking. **a.** Schematic of self-feedback locking of the CE comb laser. **b.** Optical spectrum of the comb laser output at a driving current of 285 mA. **c - e.** Optical spectrum (**d**) and optical power (**e**) of the comb laser output as a function of the RSOA driving current (**c**). **f.** Electrical spectrum of the 39.58-GHz beat note detected from the laser output comb, with a driving current of 60 mA. **g.** Phase noise spectrum of the detected 39.58-GHz beat note.
the waveguide. The distance between the waveguide and electrode is chosen to balance the EO modulation frequency with the loss from metal absorption. Finally, the LN chip is diced and polished to optimize fiber-to-chip and amplifier-to-chip coupling, with both coupling losses around 6 dB.
|
2309.03835 | Instructing Robots by Sketching: Learning from Demonstration via
Probabilistic Diagrammatic Teaching | Learning from Demonstration (LfD) enables robots to acquire new skills by
imitating expert demonstrations, allowing users to communicate their
instructions in an intuitive manner. Recent progress in LfD often relies on
kinesthetic teaching or teleoperation as the medium for users to specify the
demonstrations. Kinesthetic teaching requires physical handling of the robot,
while teleoperation demands proficiency with additional hardware. This paper
introduces an alternative paradigm for LfD called Diagrammatic Teaching.
Diagrammatic Teaching aims to teach robots novel skills by prompting the user
to sketch out demonstration trajectories on 2D images of the scene; these are
then synthesised as a generative model of motion trajectories in 3D task space.
Additionally, we present the Ray-tracing Probabilistic Trajectory Learning
(RPTL) framework for Diagrammatic Teaching. RPTL extracts time-varying
probability densities from the 2D sketches, applies ray-tracing to find
corresponding regions in 3D Cartesian space, and fits a probabilistic model of
motion trajectories to these regions. New motion trajectories, which mimic
those sketched by the user, can then be generated from the probabilistic model.
We empirically validate our framework both in simulation and on real robots,
which include a fixed-base manipulator and a quadruped-mounted manipulator. | Weiming Zhi, Tianyi Zhang, Matthew Johnson-Roberson | 2023-09-07T16:49:38Z | http://arxiv.org/abs/2309.03835v3 | # Learning from Demonstration via Probabilistic Diagrammatic Teaching
###### Abstract
Learning from Demonstration (LfD) enables robots to acquire new skills by imitating expert demonstrations, allowing users to communicate their instructions in an intuitive manner. Recent progress in LfD often relies on kinesthetic teaching or teleoperation as the medium for users to specify the demonstrations. Kinesthetic teaching requires physical handling of the robot, while teleoperation demands proficiency with additional hardware. This paper introduces an alternative paradigm for LfD called _Diagrammatic Teaching_. Diagrammatic Teaching aims to teach robots novel skills by prompting the user to sketch out demonstration trajectories on 2D images of the scene, which are then synthesised as a generative model of motion trajectories in 3D task space. Additionally, we present the Ray-tracing Probabilistic Trajectory Learning (RPTL) framework for Diagrammatic Teaching. RPTL extracts time-varying probability densities from the 2D sketches, applies ray-tracing to find corresponding regions in 3D Cartesian space, and fits a probabilistic model of motion trajectories to these regions. New motion trajectories, which mimic those sketched by the user, can then be generated from the probabilistic model. We empirically validate our framework both in simulation and on real robots, which include a fixed-base manipulator and a quadruped-mounted manipulator.
## I Introduction
Learning from Demonstration (LfD) enables robots to learn novel motions by mimicking a collected set of expert trajectories [1]. LfD is particularly appealing in its ability to specify complex robot movements in the absence of explicit programming or cost design, thereby empowering non-roboticists to teach a robot how to act. Demonstrations are typically collected via kinesthetic teaching where a human physically handles the robot, or via teleoperation where the expert uses a remote controller to collect demonstrations. These approaches can be limiting, as they may require co-location with the robot or proficiency with specialised hardware. These challenges are further amplified when attempting to provide demonstrations to mobile manipulators.
This paper proposes _Diagrammatic Teaching_, as a paradigm for LfD where a small number of demonstrations are provided by the user through sketches over static two-dimensional images of the environment scene from different views. These images can either be captured by a camera or generated from a scene representation, such as a NeRF model [2]. Diagrammatic Teaching seeks to enable the teaching of new skills to the robot from the sketched demonstrations. We note that humans have a remarkable ability to infer instructions from crude diagrammatic sketches, and we wish to endow robots with this same capability.
Correspondingly, we present _Ray-tracing Probabilistic Trajectory Learning_ (RPTL), a framework for Diagrammatic Teaching. RPTL extracts time-varying probability distributions from user-provided sketches over 2D images, and then uses ray tracing to fit a probabilistic distribution of continuous motion trajectories in the 3D task space of a robot. This can be achieved with sketched demonstrations on as few as two images taken at different view poses. We can then adapt the trained model to generate trajectories that account for novel starting positions. We present extensive evaluations of RPTL in simulation and on real-world robots.
Concretely, the technical contributions of this paper include:
1. The formulation of the Diagrammatic Teaching problem, an alternative setup for LfD where the user provides demonstrations of motion via sketching, avoiding the need for physical interaction with the robot;
2. Ray-tracing Probabilistic Trajectory Learning (RPTL), a novel framework for Diagrammatic Teaching, which estimates probabilistic distributions of task space motion trajectories from 2D sketches.
In the subsequent sections, we first briefly review the related work (section II) and introduce Diagrammatic Teaching
Fig. 1: _Diagrammatically teaching_ a quadruped with a mounted arm to shut a drawer, by sketching robot demonstrations over 2D images. The user sketches trajectories of the desired movement over images of the scene (top left and right). The robot then learns to execute the skill and close the drawer (bottom).
as a paradigm for LfD (section III). Then, we introduce and elaborate on Ray-tracing Probabilistic Trajectory Learning (RPTL) (section IV). Finally, we present empirical evaluations of RPTL (section V), before concluding and presenting future research directions (section VI).
## II Related Work
**Learning from Demonstration:** Learning from Demonstration (LfD) is broadly a strategy to teach robots novel skills by learning on a small set of expert demonstrations. LfD circumvents the explicit programming of actions or the hand-designing of planning costs, which are time-consuming and require expertise [3, 1]. Many attempts at LfD collect demonstrations by _Kinesthetic Teaching_, where the user physically handles the robot to show it the desired motion while recording the robot's joint or end-effector coordinates. These include Dynamical Movement Primitive (DMP) approaches [4, 5], Probabilistic Movement Primitives [6], and stable dynamical system approaches [7, 8, 9]. In these methods, demonstrations can be difficult to obtain as they require the user to physically interact with the robot. Another approach for obtaining demonstrations is teleoperation, where a human provides demonstrations via a remote controller. This enables humans to provide demonstrations even when not co-located with the robot, allowing larger demonstration datasets to be collected [10]. However, collecting trajectories via teleoperation is non-trivial and requires proficiency with the remote controller. The use of virtual reality [11] can also simplify the demonstration collection procedure but requires specialised hardware. Compared with these approaches, the proposed Diagrammatic Teaching interface is distinct in enabling sketching as a medium for users to specify demonstrations.
**Ray Tracing for Neural Rendering:** Neural rendering methods trace rays from 2D images into 3D space [12], allowing a 3D scene model to be built from a set of images captured at multiple views. This model can then be used to generate images of the scene from arbitrary views. NeRF models [2] have been the most prominent of neural volume rendering methods, where neural networks are used to regress onto the density and colour of the scene. Many follow-up variants focus on speeding up NeRFs, including Decomposed Neural Fields [13], Neural Sparse Voxel Fields [14], and Instant Neural Graphics Primitives [15]. Like NeRF methods, our proposed Ray-tracing Probabilistic Trajectory Learning (RPTL) framework also applies ray-tracing to learn a 3D spatial model from 2D data. Additionally, RPTL requires demonstrators to sketch onto images from separate views -- these can be either taken from cameras or generated from NeRF models.
## III The Diagrammatic Teaching Problem
Diagrammatic teaching follows the typical setup for Learning from Demonstration: we are assumed to have a small dataset of demonstrated trajectories and seek to learn a generative model that produces trajectories to imitate the demonstrations. Diagrammatic Teaching is unique in that the trajectories provided are **not** trajectories in the robot's Cartesian end-effector space, nor its configuration space, but are instead 2D trajectories sketched onto images, while the desired generative model produces end-effector trajectories.
Formally, we have a dataset \(\mathcal{D}=\{(v^{j},\mathbf{\zeta}_{i}^{j})_{i=1}^{n_{j}}\}_{j=1}^{2}\), where \(v^{1}\) and \(v^{2}\) denote two unique views from which the demonstrations are collected, and \(\mathbf{\zeta}_{i}^{j}\) denotes the \(i^{th}\) trajectory collected from the \(j^{th}\) view. The user is shown images of the scene rendered from the views, and the user sketches how the end-effector movement is expected to look from the views. Images from different views may be captured by cameras at different locations or rendered from 3D scene representations such as NeRF. The collected _view space_ trajectories \(\mathbf{\zeta}\) contain sequences of normalised pixel coordinates \((x,y)\) along with a normalised time \(t\), i.e. \(\mathbf{\zeta}=\{t_{k},x_{k},y_{k}\}_{k=1}^{l}\). The length of the trajectory is denoted by \(l\). We assume that \(x,y,t\) are all normalised to be in \([0,1]\).
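For concreteness, one possible in-code rendering of this dataset is sketched below in Python; the type and field names (`View`, `camera_pose`, `sketches`) are hypothetical and not part of the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

Point = Tuple[float, float, float]   # (t_k, x_k, y_k), all normalised to [0, 1]

@dataclass
class View:
    camera_pose: np.ndarray          # 4x4 camera-to-world matrix, used later for ray-tracing
    sketches: List[List[Point]]      # the n_j sketched view-space trajectories zeta_i^j

# D = {(v^j, {zeta_i^j}_{i=1}^{n_j})}_{j=1}^{2}: sketches collected from exactly two views.
dataset: List[View] = [View(np.eye(4), []), View(np.eye(4), [])]
```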
We aim to learn the generative model of trajectories \(p(\mathbf{\xi}|\mathcal{D})\), where \(\mathbf{\xi}\) denotes motion trajectories in the robot's end-effector task space. Here, \(\mathbf{\xi}\) are represented as functions that map from normalised time \(t\in[0,1]\) to Cartesian space \(\mathbf{x}\in\mathbb{R}^{3}\). Figure 1 shows an example of applying Diagrammatic Teaching to teach a quadruped robot with a mounted manipulator to shut a drawer by simply providing it demonstrations as sketches in two different views.
## IV Ray-tracing Probabilistic Trajectory Learning
In this section, we propose Ray-tracing Probabilistic Trajectory Learning (RPTL), a framework to solve the Diagrammatic Teaching problem. The overview of the different components in RPTL is outlined in fig. 2. The user is prompted to sketch trajectories onto images of the scene, which could be generated from a NeRF model or taken by cameras in multiple poses. The camera poses can be accurately estimated by tools such as COLMAP [16, 17], or with identifiable shared objects, such as an AprilTag [18], in the images taken. RPTL constructs time-varying density estimates over the collected 2D trajectories and then uses ray-tracing to find regions in 3D Cartesian space corresponding to the densities. Subsequently, a distribution of continuous-time motion trajectories can be fitted over these regions, and end-effector motion trajectories can be generated from the distribution.
### _Density Estimation in View Space_
We begin by estimating the time-varying densities over 2D coordinates of demonstrations from each view, which we denote as \(p(x^{j},y^{j}|t)\) for \(j\in\{1,2\}\). We use flexible normalizing flow models [19] to estimate the joint distribution \(p^{j}(t,x^{j},y^{j})\), and assume that \(p(t)\) is uniform over \([0,1]\). A normalizing flow model uses neural networks to learn an invertible and differentiable function that transforms the arbitrarily complex data distribution into a known base distribution, typically a standard Gaussian, \(\mathcal{N}(0,\mathbf{I})\). Let \(\mathbf{y}^{j}=[t,x^{j},y^{j}]\) be time-stamped coordinates from demonstrated trajectories from the \(j^{th}\) view and \(\hat{\mathbf{y}}^{j}\in\mathbb{R}^{3}\) be corresponding
latent variables linked by invertible functions \(g^{j}\), such that \(\hat{\mathbf{y}}^{j}=g^{j}(\mathbf{y}^{j})\). The densities over \(\mathbf{y}^{j}\) and \(\mathbf{\hat{y}}^{j}\) are linked by the change of variables,
\[p(\mathbf{y}^{j}) =p(g^{j}(\mathbf{y}^{j}))|\mathrm{det}\mathbf{J}^{j}(\mathbf{y}^{ j})|, \tag{1}\] \[p(\mathbf{\hat{y}}^{j}) =p({g^{j}}^{-1}(\mathbf{\hat{y}}^{j}))|\mathrm{det}\mathbf{J}^{j }({g^{j}}^{-1}(\mathbf{\hat{y}}^{j}))|^{-1} \tag{2}\]
where \(\mathrm{det}\) is the determinant and \(\mathbf{J}^{j}\) is the Jacobian of \(g^{j}\). We wish to learn \(g^{j}\) such that the distribution of latent variables matches a standard Gaussian, i.e. \(p(\mathbf{\hat{y}}^{j})=p(g^{j}(\mathbf{y}^{j}))\approx\mathcal{N}(0,\mathbf{ I})\). To ensure \(g^{j}\) is invertible, we model them as invertible neural network models described in [20, 21], with trainable parameters. We can then train \(g^{j}\) from each view, by minimising the negative log-likelihood of \(g^{j}(\mathbf{y}^{j})\) being drawn from a standard Gaussian over dataset \(\mathcal{D}\). We arrive at the following loss [22]:
\[\mathcal{L}=-\sum_{\mathbf{y}^{j}\in\mathcal{D}}\big{\{}\log p(g^{j}(\mathbf{ y}^{j}))+\log\lvert\mathrm{det}\mathbf{J}(\mathbf{y}^{j})\rvert\big{\}}. \tag{3}\]
As the number of demonstrations is typically small, we inject small Gaussian noise into the data to prevent the densities from collapsing into a delta function during training. After training a normalizing flow density estimator for each view, we can obtain the densities \(p(x^{j},y^{j}|t)\) over each view by evaluating eq. (1). Next, we seek to find the regions in Cartesian space which correspond to pixel coordinates that have high density.
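As a minimal sketch of this density-estimation step, the PyTorch snippet below trains a RealNVP-style affine coupling flow with the negative log-likelihood of eq. (3) and the noise injection described above. The coupling architecture, layer count, hidden width, and noise scale are all assumptions for illustration; the paper's invertible networks follow [20, 21] and may differ.

```python
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style affine coupling layer over 3-D inputs (t, x, y)."""
    def __init__(self, mask):
        super().__init__()
        self.register_buffer("mask", mask)        # 1 = pass-through dims
        self.net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 6))

    def forward(self, y):
        y_pass = y * self.mask                    # condition only on untouched dims
        s, b = self.net(y_pass).chunk(2, dim=-1)
        s = torch.tanh(s) * (1 - self.mask)       # bounded log-scale, transformed dims only
        b = b * (1 - self.mask)
        z = y_pass + (1 - self.mask) * (y * torch.exp(s) + b)
        return z, s.sum(-1)                       # latent, log|det J|

class Flow(nn.Module):
    def __init__(self, n_layers=6):
        super().__init__()
        masks = [torch.tensor([0., 1., 0.]) if i % 2 == 0 else torch.tensor([1., 0., 1.])
                 for i in range(n_layers)]        # alternate which dims get transformed
        self.layers = nn.ModuleList(AffineCoupling(m) for m in masks)

    def log_prob(self, y):                        # eq. (1): Gaussian base + log|det J|
        z, log_det = y, torch.zeros(y.shape[0])
        for layer in self.layers:
            z, ld = layer(z)
            log_det = log_det + ld
        base = -0.5 * (z ** 2).sum(-1) - 1.5 * math.log(2 * math.pi)
        return base + log_det

# Train by minimising the negative log-likelihood (eq. 3); the small Gaussian
# noise keeps the density from collapsing onto the few sketched demonstrations.
flow = Flow()
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
y = torch.rand(512, 3)                            # stand-in for (t, x, y) samples
for step in range(2000):
    loss = -flow.log_prob(y + 0.01 * torch.randn_like(y)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Since \(p(t)\) is assumed uniform on \([0,1]\), the conditional \(p(x^{j},y^{j}|t)\) needed later is proportional to the joint density this flow estimates.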
### _Trajectory Distribution Fitting via Ray-Tracing_
We apply ray-tracing along the 2D pixels of high estimated density from both views to find regions in 3D space where the rays intersect. We can then fit our generative model onto these regions. In this section, we shall (1) introduce the parameterisation of our generative model, and then (2) elaborate on finding and fitting the model on the corresponding regions. An example of learning to push a box is provided in fig. 3, where the view space density and the resulting distribution over trajectories are visualised.
**Distributions of Trajectories:** Generated motion trajectories are represented as functions \(\boldsymbol{\xi}:t\rightarrow\mathbf{x}\), where \(t\in[0,1]\) is a normalised time variable and \(\mathbf{x}\in\mathbb{R}^{3}\) is a corresponding location. Trajectories are modelled by a linear combination of \(M\) basis functions \(\phi_{i}:t\rightarrow\mathbb{R}\) for \(i=1,\ldots,M\), and are parameterised by a matrix of weights \(\mathbf{W}\in\mathbb{R}^{M\times 3}\). We have:
\[\boldsymbol{\xi}(t)=\mathbf{W}^{\top}\Phi(t),\qquad\Phi(t)=[\phi_{1}(t),\ldots,\phi_{M}(t)]^{\top} \tag{4}\]
where \(\Phi\) is an \(M\)-dimensional feature vector with each basis function evaluated once for each spatial dimension in Cartesian space. The basis functions can be selected to enforce _a priori_ assumptions on the motion trajectories, such as smoothness or periodicity. In this work, each basis is a squared exponential function that enforces the smoothness of motion, specifically,
\[\phi_{i}(t)=\exp{(-\gamma||t-t_{i}||_{2}^{2})}, \tag{5}\]
for \(i=1,\ldots,M\), where \(\gamma\) is a length-scale hyper-parameter for which smaller values correspond to smoother trajectories. The \(M\) times \(t_{i}\) are evenly spaced values between \(0\) and \(1\).
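A direct transcription of eqs. (4) and (5) in PyTorch, with illustrative values \(M=20\) and \(\gamma=200\) (both hyper-parameter choices are assumptions here):

```python
import torch

def features(t, M=20, gamma=200.0):
    """Phi(t): M squared-exponential bases at evenly spaced centres t_i (eq. 5)."""
    centres = torch.linspace(0.0, 1.0, M)
    return torch.exp(-gamma * (t[:, None] - centres[None, :]) ** 2)  # (len(t), M)

# A single trajectory xi(t) = W^T Phi(t) (eq. 4), here with a random weight draw.
W = torch.randn(20, 3)
t = torch.linspace(0.0, 1.0, 100)
xi = features(t) @ W   # (100, 3): a smooth Cartesian path
```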
We can extend our parameterisation of a single trajectory to a _distribution_ of trajectories \(p(\boldsymbol{\xi})\), by estimating a distribution over the weight matrix \(\mathbf{W}\). For tractability, we assume independent Gaussian distributions over each element in \(\mathbf{W}\). Let us denote each element in \(\mathbf{W}\) as \(w_{m,n}\) where \(m=1,\ldots,M\) and \(n=1,2,3\), where \(n\) is each spatial dimension. Then, the joint distribution over \(\mathbf{W}\) is simply the product of the distribution over each element,
\[p(\mathbf{W})=\prod_{m=1}^{M}\prod_{n=1}^{3}p(w_{m,n})=\prod_{m=1}^{M}\prod_{n=1}^{3}\mathcal{N}(\mu_{m,n},\sigma_{m,n}^{2}), \tag{6}\]
where the means and standard deviations of the distribution over each \(w_{m,n}\) are denoted as \(\mu_{m,n}\) and \(\sigma_{m,n}\). Fitting the distribution of trajectories involves finding each \(\mu_{m,n}\) and \(\sigma_{m,n}\) to match the given data.
**Ray-tracing from view space densities:** Provided time-varying densities over pixels from different views, \(p(x^{j},y^{j}|t)\) for \(j\in\{1,2\}\), we wish to fit a trajectory distribution \(p(\boldsymbol{\xi})\) by tracing the path of rays. We follow classical rendering
Fig. 3: Example of learning to push a box. Top: sketched demonstrations over two different views. Middle: Robot executing the learned skill. Bottom: Densities in 2D view space with time axis collapsed and trajectories from the model in 3D task space.
Fig. 2: Overview and components of Ray-tracing Probabilistic Trajectory Learning for Diagrammatic Teaching.
methods [23] used for NeRF models [2] and assume pin-hole cameras at each view. We construct the ray in 3D which passes through each coordinate in 2D view space as,
\[f_{r}(d,x^{j},y^{j})=\mathbf{o}^{j}+\boldsymbol{\omega}(x^{j},y^{j})d, \tag{7}\]
where \(\mathbf{o}^{j}\) is the origin of the camera, \(\boldsymbol{\omega}(x^{j},y^{j})\) is a direction, and \(d\) is a distance bounded between \(d_{near}\) and \(d_{far}\). These values are directly obtainable from the camera specifications. Figure 4 shows an example of rays traced from two cameras at different poses. From each view, the region corresponding to the density above threshold \(\epsilon\) at a given time \(t\) is the codomain of \(f_{r}\). Let us define this as the set:
\[R_{t}^{j}=\{\mathbf{x}\in\mathbb{R}^{3}|f_{r}(d,x^{j},y^{j}),\] \[\text{ for all }d\in[d_{near},d_{far}]\text{ and }x^{j},y^{j}\in[0,1],\] \[\text{ such that }p(x^{j},y^{j}|t)\geq\epsilon\}. \tag{8}\]
We seek the intersection of the 3D regions which correspond to densities from both views, i.e. \(R_{t}^{1}\cap R_{t}^{2}\). This can be approximated by sampling regularly-spaced grid points over \(d\) and the view space coordinates \((x^{j},y^{j})\) for both views. If the distance between a sample from one view and its closest sample from the other view is below a specified distance threshold \(\delta\), then we consider both samples to be in the intersecting region. An outline of drawing samples from the intersecting region is presented in algorithm 1. In practice, the nested loops are vectorised and can be efficiently executed on GPUs using deep learning frameworks such as PyTorch [24]. We thereby obtain a set of \(n_{s}\) intersecting 3D spatial coordinates along the normalised time \(t\), i.e. \(\mathcal{S}=\{(t_{i},\mathbf{x}_{i})\}_{i=1}^{n_{s}}\).
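Algorithm 1 is not reproduced in this extraction, but the vectorised intersection test it describes can be sketched as follows. The ray directions are assumed to be precomputed from the pin-hole model of eq. (7) for the pixels where \(p(x^{j},y^{j}|t)\geq\epsilon\), and the bounds and threshold values below are illustrative:

```python
import torch

def ray_points(origin, dirs, d_near=0.2, d_far=2.0, n_d=64):
    """Grid of 3-D samples o + w*d along each ray (eq. 7)."""
    d = torch.linspace(d_near, d_far, n_d)
    return (origin + dirs[:, None, :] * d[None, :, None]).reshape(-1, 3)

def intersect_regions(o1, dirs1, o2, dirs2, delta=0.01):
    """Approximate R_t^1 ∩ R_t^2 at one time t: keep view-1 samples whose
    nearest view-2 sample lies within delta (cf. algorithm 1, vectorised)."""
    p1, p2 = ray_points(o1, dirs1), ray_points(o2, dirs2)
    nearest = torch.cdist(p1, p2).min(dim=1).values   # chunk p1/p2 if memory-bound
    return p1[nearest < delta]
```

Repeating this test over a grid of times \(t\) yields the set \(\mathcal{S}\) of time-stamped 3D points used below.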
To fit our generative model on \(\mathcal{S}\), we need to compute the time-conditional distributions in Cartesian space, i.e. \(p(\boldsymbol{\xi}|t)\). Let us first stack our trainable mean and variance parameters from eq. (6),
\[\mathcal{M}=\begin{bmatrix}\mu_{1,1}&\mu_{1,2}&\mu_{1,3}\\ \vdots&\vdots&\vdots\\ \mu_{M,1}&\mu_{M,2}&\mu_{M,3}\end{bmatrix}\quad\boldsymbol{\Lambda}=\begin{bmatrix} \sigma_{1,1}^{2}&\sigma_{1,2}^{2}&\sigma_{1,3}^{2}\\ \vdots&\vdots&\vdots\\ \sigma_{M,1}^{2}&\sigma_{M,2}^{2}&\sigma_{M,3}^{2}\end{bmatrix}. \tag{9}\]
As the weight distribution \(p(\mathbf{W})\) is Gaussian and involves a linear transformation with \(\Phi\), as given in eq. (4), we have:
\[p(\boldsymbol{\xi}|t)=\mathcal{N}(\mathcal{M}^{\top}\Phi(t),\mathrm{Diag}( \boldsymbol{\Lambda}^{\top}\Phi(t)^{2})), \tag{10}\]
where \(\mathrm{Diag}(\cdot)\) produces a square matrix with the inputted vector as its diagonal. We minimise the Gaussian negative log-likelihood of \(p(\boldsymbol{\xi}|t)\) over points in \(\mathcal{S}\) to fit the parameters \(\mathcal{M}\) and \(\boldsymbol{\Lambda}\).
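A hedged sketch of this fitting step, reusing the `features` function from the earlier snippet and parameterising \(\boldsymbol{\Lambda}\) through its logarithm so the variances stay positive (the optimiser and iteration counts are illustrative):

```python
import torch

M_b = 20
mu = torch.zeros(M_b, 3, requires_grad=True)       # means, the matrix M in eq. (9)
log_var = torch.zeros(M_b, 3, requires_grad=True)  # log of the variances Lambda

t_s = torch.rand(1000)                             # stand-ins for S = {(t_i, x_i)}
x_s = torch.randn(1000, 3)
opt = torch.optim.Adam([mu, log_var], lr=1e-2)
for step in range(3000):
    Phi = features(t_s, M=M_b)                     # (n_s, M)
    mean = Phi @ mu                                # M^T Phi(t), eq. (10)
    var = (Phi ** 2) @ log_var.exp()               # Lambda^T Phi(t)^2
    nll = 0.5 * (((x_s - mean) ** 2) / var + var.log()).sum(-1).mean()
    opt.zero_grad(); nll.backward(); opt.step()
```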
### _Conditional Trajectory Generation at New Positions_
Trajectories learned by RPTL can be further adapted to new starting positions. After training to obtain fitted means, \(\mathcal{M}\), and variances, \(\boldsymbol{\Lambda}\), we can generate a collection of trajectories by sampling elements in \(\mathbf{W}\) from \(w_{i,j}\sim\mathcal{N}(\mu_{i,j},\sigma_{i,j}^{2})\) and then evaluating eq. (4). However, we often have additional knowledge of where the trajectory shall be at the starting position. For example, the generated task space trajectory at the initial time should match the robot's current end-effector position (\(\mathbf{x}_{eef}\)), i.e. enforcing \(\boldsymbol{\xi}(0)=\mathbf{x}_{eef}\).
Let us begin by defining notation: let \(\mathbf{W}_{1}\) be the first row of \(\mathbf{W}\), and \(\mathbf{W}_{2:}\) be the others; let \(\Phi(0)_{1}\) be the first element in \(\Phi(0)\) and \(\Phi(0)_{2:}\) denote the others. For a specific sampled trajectory, we draw a sample of \(\mathbf{W}\) and then alter \(\mathbf{W}_{1}\) to satisfy the enforced condition. Specifically, at the beginning of the trajectory, we wish to enforce:
\[\boldsymbol{\xi}(0)=\mathbf{W}_{1}^{\top}\Phi(0)_{1}+\mathbf{W}_{2:}^{\top}\Phi(0)_{2:}=\mathbf{x}_{eef}, \tag{11}\]
allowing us to solve for \(\mathbf{W}_{1}\) given the known \(\mathbf{W}_{2:}\). An example of generated trajectories constrained to start at the current end-effector position is shown in fig. 5, where the robot is taught to sketch out "R" characters, along with the \(x,y,z\) positions of generated trajectories over \(t\).
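In code, this conditioning step can be sketched as below (again reusing `features`); note that with centres starting at \(t_{1}=0\), \(\Phi(0)_{1}=1\), so the division is well-behaved:

```python
import torch

def sample_conditioned(mu, log_var, x_eef, n_samples=5, n_t=100):
    """Draw trajectories from p(W), then enforce xi(0) = x_eef via eq. (11)."""
    with torch.no_grad():
        std = (0.5 * log_var).exp()
        W = mu + std * torch.randn(n_samples, *mu.shape)      # (n_samples, M, 3)
        phi0 = features(torch.zeros(1), M=mu.shape[0])[0]     # Phi(0), shape (M,)
        rest = torch.einsum("m,smd->sd", phi0[1:], W[:, 1:, :])
        W[:, 0, :] = (x_eef - rest) / phi0[0]                 # solve eq. (11) for W_1
        t = torch.linspace(0.0, 1.0, n_t)
        return torch.einsum("tm,smd->std", features(t, M=mu.shape[0]), W)
```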
## V Experimental Evaluation
We empirically explore the performance of the proposed RPTL for the Diagrammatic Teaching problem in both simulation and real robot platforms. Specifically, we first examine the quality of the trained RPTL model and whether the motion acquired is consistent with user expectations. Then,
Fig. 4: Rays traced from cameras at different poses.
Fig. 5: We use Diagrammatic Teaching to teach the robot to follow “R” characters. Left: The \(x,y,z\) positions of sampled trajectories over normalised time \(t\). The initial positions of the trajectory samples are enforced at the black marker. Right: Three end-effector trajectories, conditioned to start from the current position.
we examine a case of using RPTL to diagrammatically teach the tracing of challenging alphabet characters, which involves several turns in movement and requires greater intricacy. Finally, we examine the robustness of RPTL on real-world robot platforms, by generating motion for a fixed-base 6 degrees-of-freedom manipulator and a quadruped robot with a mounted manipulator.
### _Quality and Consistency of RPTL_
We set up three simulated environments with a Franka manipulator, following the _table top_, _box_, and _shelf_ environment types in [25], using the physics simulator PyBullet [26]. We place cameras at three different poses in the environment and ask the user to provide three demonstrations per camera view to reach a cylindrical goal object. The goal position is not given to the model, and the only specifications given are the diagrammatic demonstrations. We select two views and their corresponding demonstrations for training and hold out the demonstrations from the third view as a test set. To benchmark motion trajectories generated from the learned model, we project the generated 3D trajectories into the 2D view of the third view and compute distances between the projected trajectories and the retained test trajectories. Distances computed are all normalised by the width of the test image. By testing against a hidden test set, we can assess whether the produced motions match the expectation of the user and are consistent when viewed from a different pose.
**Metrics:** We use the following metrics to measure the quality of the learned trajectory distribution: (1) _Mean Frechet distance (MFD)_: We compute the discrete Frechet distance [27] between the mean of the trajectory distribution model and each of the test trajectory, then record the averages and standard deviations of the Frechet distances. The Frechet distance measures the distance between curves and can account for the ordering of points and handle varying lengths. (2) _Wasserstein Distance (WD)_: We compute the 2-Wasserstein distance implemented in [28] between five trajectories drawn from our model and the set of test trajectories. Crucially, the WD can measure distances between distributions beyond simply considering the mean of the probabilistic model.
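The discrete Fréchet distance [27] underlying the MFD metric admits a compact dynamic-programming implementation; a minimal NumPy sketch:

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between curves P (n, d) and Q (m, d) [27]."""
    n, m = len(P), len(Q)
    dist = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # pairwise distances
    ca = np.full((n, m), np.inf)
    ca[0, 0] = dist[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], dist[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], dist[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), dist[i, j])
    return ca[-1, -1]
```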
**Baselines:** We evaluate our model with respect to the following baseline models. (1) _Linear_: We provide the mean start and end positions of the test trajectories and assume a linear curve between them. Note that the other methods do not assume knowledge of the mean test start and end positions. (2) _Nearest Neighbour (NN)_: We predict the trajectories as being identical to those traced from the nearest camera in the training set.
The quantitative performance of our method, along with baseline comparisons, is given in table I. The low distances between our learned model and the test set highlight the ability of our method to train a generative model that is consistent with the trajectories a human user would expect.
Fig. 6: We teach the manipulator motion in the _table top_, _box_, and _shelf_ environments (left, middle, right respectively). The top row images are two camera views provided to the user, and the red trajectories are 2D demonstrations sketched by the user. We sample five 3D motion trajectories from the trajectory distribution model and illustrate them in red in the images on the _bottom_ row. We see that the learned distribution of trajectories in 3D Cartesian space is consistent with the user’s sketches.
Fig. 7: We diagrammatically teach the manipulator to sketch out the letters “B”, “Z”, “U”, and sample three trajectories from the trained model. The model accurately generates the desired letters from randomised starting configurations. An additional example, for “R”, is shown in fig. 5.
This is reiterated when qualitatively examining fig. 6, where the top row consists of two images along with user-sketched trajectories at different views, and the bottom row contains samples from the 3D trajectory model. We observe that the produced end-effector motions are highly consistent with the sketches provided.
### _Tracing Out Letters: a Case of Intricate Trajectories_
We seek to investigate whether RPTL can be applied to learn more complex distributions of motion trajectories which may involve multiple turns or swerves. To this end, we diagrammatically teach a simulated Franka to trace out the letters "B", "Z", "U". This requires motion that deviates greatly from a linear trajectory and would be challenging to describe with a motion planning cost function. We randomly select robot starting configurations facing a surface, and conditionally sample motion trajectories to sketch out the letters on the surfaces. These are illustrated in fig. 7. An additional tracing of the letter "R" is shown in fig. 5. We observe that RPTL is able to generate trajectories that accurately capture the intricacies of each of the characters. Additionally, we validate that RPTL is able to generate trajectories conditional on newly sampled end-effector positions in the vicinity.
### _Diagrammatic Teaching in the Real-world_
To test the robustness of RPTL in the real world, we diagrammatically teach a 6-DOF Unitree Z1 manipulator new skills. Additionally, we also demonstrate the applicability of RPTL to a Unitree Aliengo quadruped with the Z1 arm attached. We take two photos with a camera, find the camera poses via AprilTags [18], and collect demonstrations from the user. The robot end-effector, along with the quadruped, then tracks trajectories sampled from the trained models.
We diagrammatically teach skills for the following tasks to the Z1 arm: **Push box**: Push a box from its side, such that it drops off the table; **Drop into cup**: Move past a mug, and hover right over a paper cup. The gripper is subsequently released and the held object is dropped into the paper cup; **Tip box**: Reach into a box and tip it by moving in a "U"-shaped motion. We then mount the arm onto a quadruped and teach the quadruped + arm robot skills for the following tasks: **Drop into box**: Reach out towards a box on the floor. The gripper is released and the held object is placed into the box; **Close a drawer**: Close an open drawer.
A video recording of the robots performing each learned skill is provided, and images of the provided demonstrations and subsequent execution are given in figs. 1 and 3. We observe that RPTL can robustly teach the robot the specified skills on both robot platforms. Moreover, Diagrammatic Teaching demonstrates its utility on mobile manipulators, where kinesthetic teaching would be impractical. In particular, the task **Drop into box** requires the quadruped to bend its knees while the arm moves towards the box, so that the mounted arm can reach sufficiently low. The task **Close a drawer** requires even more coordination between the quadruped and the arm, as the quadruped needs to concurrently march forward towards the drawer while the arm shuts it.
## VI Conclusions and Future Work
We introduce Diagrammatic Teaching, with which a user specifies desired motion via sketched curves, as a novel paradigm for learning from demonstration. We present Ray-tracing Probabilistic Trajectory Learning (RPTL) as a framework for Diagrammatic Teaching. RPTL estimates probability densities over 2D user sketches and traces rays into the robot's task space to fit a distribution of motion trajectories. We evaluate RPTL both in simulation and on real-world robots. Future avenues to explore include (1) actively generating posed images from a scene model, such that they are the most conducive for the user to sketch on; (2) incorporating user-sketched constraints, such as enforcing the elbow of a manipulator to not enter a specified region.
\begin{table}
\begin{tabular}{l l r r r} \hline \hline
 & & Table & Box & Shelf \\ \hline
RPTL (Ours) & MFD (\(\times 10^{-2}\)) & \(\mathbf{3.1\pm 0.2}\) & \(\mathbf{3.9\pm 0.2}\) & \(\mathbf{5.3\pm 0.9}\) \\
 & WD (\(\times 10^{-2}\)) & \(\mathbf{2.6}\) & \(\mathbf{2.9}\) & \(\mathbf{3.8}\) \\ \hline
Linear & MFD (\(\times 10^{-2}\)) & \(10.6\pm 0.3\) & \(7.4\pm 0.1\) & \(9.7\pm 1.0\) \\
 & WD (\(\times 10^{-2}\)) & \(6.9\) & \(6.9\) & \(6.3\) \\ \hline
NN & MFD (\(\times 10^{-2}\)) & \(17.8\pm 0.8\) & \(33.2\pm 0.6\) & \(7.9\pm 0.9\) \\
 & WD (\(\times 10^{-2}\)) & \(10.3\) & \(19.1\) & \(9.2\) \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Comparison of RPTL and baselines on different environments. Lower distances indicate better performance. Lowest in bold.
Fig. 8: RPTL applied to real-world robot platforms. We diagrammatically teach the robot the tasks **Drop into cup**, **Drop into box**, **Tip box**, and **Close a drawer**. The sketched demonstrations are shown in the top row, while robot execution is shown at the bottom. Additionally, the task **Push box** is shown in fig. 3. |
2308.16877 | HPAC-Offload: Accelerating HPC Applications with Portable Approximate
Computing on the GPU | The end of Dennard scaling and the slowdown of Moore's law led to a shift in
technology trends toward parallel architectures, particularly in HPC systems.
To continue providing performance benefits, HPC should embrace Approximate
Computing (AC), which trades application quality loss for improved performance.
However, existing AC techniques have not been extensively applied and evaluated
in state-of-the-art hardware architectures such as GPUs, the primary execution
vehicle for HPC applications today.
This paper presents HPAC-Offload, a pragma-based programming model that
extends OpenMP offload applications to support AC techniques, allowing portable
approximations across different GPU architectures. We conduct a comprehensive
performance analysis of HPAC-Offload across GPU-accelerated HPC applications,
revealing that AC techniques can significantly accelerate HPC applications
(1.64x LULESH on AMD, 1.57x NVIDIA) with minimal quality loss (0.1%). Our
analysis offers deep insights into the performance of GPU-based AC that guide
the future development of AC algorithms and systems for these architectures. | Zane Fink, Konstantinos Parasyris, Giorgis Georgakoudis, Harshitha Menon | 2023-08-31T17:32:44Z | http://arxiv.org/abs/2308.16877v1 | # HPAC-Offload: Accelerating HPC Applications with Portable Approximate Computing on the GPU
###### Abstract.
The end of Dennard scaling and the slowdown of Moore's law led to a shift in technology trends towards parallel architectures, particularly in HPC systems. To continue providing performance benefits, HPC should embrace Approximate Computing (AC), which trades application quality loss for improved performance. However, existing AC techniques have not been extensively applied and evaluated in state-of-the-art hardware architectures such as GPUs, the primary execution vehicle for HPC applications today.
This paper presents HPAC-Offload, a pragma-based programming model that extends OpenMP offload applications to support AC techniques, allowing portable approximations across different GPU architectures. We conduct a comprehensive performance analysis of HPAC-Offload across GPU-accelerated HPC applications, revealing that AC techniques can significantly accelerate HPC applications (1.64x LULESH on AMD, 1.57x NVIDIA) with minimal quality loss (0.1%). Our analysis offers deep insights into the performance of GPU-based AC that guide the future development of AC algorithms and systems for these architectures.
## 1. Introduction
As Dennard scaling -- which stipulated a steady rise in processor clock speed through transistor shrinkage -- came to an end, and Moore's law -- predicting a doubling of CMOS transistors on a microchip every two years -- slowed down, technology trends shifted toward parallel architectures. Early-2000s parallel architectures centered on multi-core CPUs, while the emergence of GPGPU paradigms later pivoted technology trends toward many-core accelerator systems. This trend is evident in the Top500 list (Dennard et al., 2002): as of November 2022, 7 of the 10 fastest supercomputers use GPUs. Despite the success of many-core architectures in overcoming the slowdown of Moore's law (Mayer et al., 2010), HPC requires another paradigm shift to continue delivering performance improvements.
Approximate Computing (AC) has emerged as an attractive new paradigm that increases performance by introducing novel approximations within applications, controllably reducing the application's accuracy. Both hardware and software AC techniques have been proposed. Specifically, (Hanan et al., 2009; Chen et al., 2010) introduce approximate CPUs, (Chen et al., 2010) proposes approximate memories while (Chen et al., 2010; Chen et al., 2010) discuss approximate accelerators. Software techniques include loop perforation (Chen et al., 2010; Chen et al., 2010), which accelerates image processing workloads by up to 3\(\times\) with less than 10% accuracy loss. Input (Xu et al., 2011) and output (Chen et al., 2010) approximate memoization have been used in various domains, such as stencil computations, finance, and image processing, doubling application performance with small error. Other techniques, such as variable precision, can increase performance by 45%. HPAC (Xu et al., 2011) provides a state-of-the-art compiler and runtime implementation to apply software AC techniques on multi-core CPUs using OpenMP.
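To make two of these software techniques concrete, the sketch below applies loop perforation and input memoization to a toy kernel; the kernel, skip rate, and tolerance are illustrative choices of ours, not the interface of any of the cited systems.

```python
import math

def kernel(x):
    return math.sin(x) * math.cos(x)          # stand-in for an expensive computation

def perforated_map(xs, skip=2):
    # Loop perforation: execute only every `skip`-th iteration and reuse the
    # last computed value for skipped ones, trading accuracy for speed.
    out, last = [], 0.0
    for i, x in enumerate(xs):
        if i % skip == 0:
            last = kernel(x)
        out.append(last)
    return out

_cache = {}
def memoized_kernel(x, tol=1e-2):
    # Input memoization: inputs falling in the same tolerance bucket return
    # a cached output instead of being recomputed.
    key = round(x / tol)
    if key not in _cache:
        _cache[key] = kernel(x)
    return _cache[key]
```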
These works extensively showcase the potential of AC in various CPU applications. However, little research assesses approximate computing on GPUs, which currently dominate HPC supercomputers. It is imperative to assess whether AC is a viable execution paradigm for next-generation software: any paradigm that cannot apply to many-core architectures will likely have limited impact. Consequently, a comprehensive study applying AC to GPU-enabled applications is essential to fully gauge the potential and challenges imposed by approximations in modern GPUs.
To address this problem, this work studies state-of-the-art software approximate computing techniques applied to HPC GPU-enabled scientific applications. We present HPAC-Offload, an extension of HPAC (Xu et al., 2011) that supports approximations in GPU applications. The proposed extensions seamlessly compose with the portable OpenMP offload programming model and consist of easy-to-use annotations on OpenMP offload applications. The result is an approximate computing framework that enables portable approximations across different GPU architectures, such as NVIDIA and AMD. The composition of approximation and GPU parallel execution results in several challenges due to the execution model of GPU devices. _Porting AC techniques to GPUs without considering their unique architectural characteristics results in significant slowdowns_.
For example, approximate computing techniques for CPU parallelism typically duplicate the AC state on each CPU thread; however, the massive parallelism of GPUs that use millions of software threads makes this approach impractical by depleting the device's memory. Additionally, CPU-AC allows each parallel thread to independently decide whether to approximate without observable overhead. In contrast, independent thread decision-making in GPUs can introduce thread divergence and reduce performance, limiting the expected performance boosts of approximation.
As such, programming models for GPU-AC must match the hierarchical nature of the underlying execution model. HPAC-Offload identifies such challenges and proposes programming model extensions to support high-performance implementations of several state-of-the-art approximation techniques: input/output memoization and loop perforation.
This paper makes the following contributions:
* HPAC-Offload, a programming model for composing state-of-the-art AC techniques (input/output memoization, loop perforation) with OpenMP-offload. Our pragma-based programming
2309.10849 | Inflation Correlators with Multiple Massive Exchanges | The most general tree-level boundary correlation functions of quantum fields
in inflationary spacetime involve multiple exchanges of massive states in the
bulk, which are technically difficult to compute due to the multi-layer nested
time integrals in the Schwinger-Keldysh formalism. On the other hand,
correlators with multiple massive exchanges are well motivated in cosmological
collider physics, with the original quasi-single-field inflation model as a
notable example. In this work, with the partial Mellin-Barnes representation,
we derive a simple rule, called family-tree decomposition, for directly writing
down analytical answers for arbitrary nested time integrals in terms of
multi-variable hypergeometric series. We present the derivation of this rule
together with many explicit examples. This result allows us to obtain
analytical expressions for general tree-level inflation correlators with
multiple massive exchanges. As an example, we present the full analytical
results for a range of tree correlators with two massive exchanges. | Zhong-Zhi Xianyu, Jiaju Zang | 2023-09-19T18:00:14Z | http://arxiv.org/abs/2309.10849v2 | # Inflation Correlators with Multiple Massive Exchanges
###### Abstract
The most general tree-level boundary correlation functions of quantum fields in inflationary spacetime involve multiple exchanges of massive states in the bulk, which are technically difficult to compute due to the multi-layer nested time integrals in the Schwinger-Keldysh formalism. On the other hand, correlators with multiple massive exchanges are well motivated in cosmological collider physics, with the original quasi-single-field inflation model as a notable example. In this work, with the partial Mellin-Barnes representation, we derive a simple rule, called family-tree decomposition, for directly writing down analytical answers for arbitrary nested time integrals in terms of multi-variable hypergeometric series. We present the derivation of this rule together with many explicit examples. This result allows us to obtain analytical expressions for general tree-level inflation correlators with multiple massive exchanges. As an example, we present the full analytical results for a range of tree correlators with two massive exchanges.
###### Contents
* 1 Introduction
* 2 Tree Graphs with Partial Mellin-Barnes
* 3 Time Integrals with Partial Mellin-Barnes
* 3.1 Family-tree decomposition of nested integrals
* 3.2 Partially ordered families: simple examples
* 3.3 General family integrals
* 3.4 Alternative representation
* 3.5 Discussions
* 4 General Two Massive Exchanges
* 4.1 Three-vertex seed integral
* 4.2 Computing the seed integral
* 5 Four-Point Correlator with Two Massive Exchanges
* 6 Conclusion and Outlooks
* A Mathematical Appendix
* A.1 Mellin-Barnes representation
* A.2 Special functions
* B Details of Computing the Three-Vertex Seed Integral
## 1 Introduction
Recent years have witnessed increasing interest in the theoretical study of cosmological correlation functions of large-scale fluctuations, which are believed to be sourced by quantum fluctuations of spacetime and matter fields during cosmic inflation [1]. By observing the correlation functions of the large-scale structure, we can access quantum field theory in inflationary spacetime. This connection has far-reaching consequences for both early-universe cosmology and fundamental particle physics. It has been emphasized that heavy particles produced during cosmic inflation could leave characteristic and oscillatory signals in certain soft limits of correlation functions. Many recent studies have exploited this Cosmological Collider (CC) signal to study particle physics at the inflation scale [2-62]. At the same time, considerable work has been devoted to the analytical or numerical study of correlation functions of quantum field theory in inflationary spacetime, or _inflation correlators_ for short [63-93]. These studies have revealed many interesting structures of inflation correlators or wavefunctions, which deepen our understanding of quantum field theory in de Sitter spacetime. On the other hand, explicit analytical results are indispensable for a precise understanding of CC signals and for comparing theoretical predictions of CC models with observational data.
Many explicit analytical results have been obtained in recent years for inflation correlators relating to CC physics [63, 64, 70, 71, 88, 89, 90, 91, 92, 93]. Most of these results are for the exchange of a single massive particle in the bulk of dS, with a few exceptions at loop orders. However, previous works on CC model building have shown that correlators with multiple exchanges of massive particles could be phenomenologically important. Already in the early studies of quasi-single-field inflation, it was noticed that the cubic self-interaction of a bulk massive scalar can greatly enhance the size of the correlation function. In such models, tree-level graphs exchanging more than one massive scalar make dominant contributions to the 3-point correlator [3, 21, 19]. However, due to technical complications, explicit analytical results for inflation correlators with more than one bulk massive field have so far remained out of reach, even at the tree level.
It may come as a surprise to flat-space field theorists that scalar tree graphs are hard to compute. Indeed, setting aside the issues of tensor and flavor structures, the complexity of a scalar Feynman graph in flat spacetime increases mainly with the number of loops \(L\): Each loop gives rise to a loop momentum integral, and carrying out these loop integrals is not trivial. However, so long as we stay at the tree level (\(L=0\)), Feynman graphs are simply products of propagators and vertices and are typically rational functions of external momenta. So, increasing the number of vertices and propagators does not generate any difficulty per se.
Things are a little different in inflationary spacetime: Here we normally have full spatial translation and rotation symmetries, but time-translation symmetry is usually broken. Accordingly, we Fourier transform only the spatial dependence of a function to momentum space, and leave the time dependence untransformed. In this hybrid "time-momentum" representation, we get additional time integrals at all interaction vertices in the Schwinger-Keldysh (SK) formalism [19, 109, 110, 111, 112]. As a result, the complexity of graphs in inflation increases in two directions: either with the number of loops, or with the number of vertices.
Partly for this reason, full analytical computation of tree correlators with multiple massive exchanges remains challenging: In a tree graph, the number of bulk vertices is always equal to
the number of bulk propagators plus 1. Thus, a tree graph with \(I\) internal legs requires time integrals of \((I+1)\) layers. Worse still, each bulk propagator \(D_{\mathsf{ab}}(k;\tau_{1},\tau_{2})\) in the SK formalism comes in four types, depending on the four choices of SK indices \(\mathsf{a},\mathsf{b}=\pm\) at the two endpoints. The two propagators with same-sign indices \(D_{\pm\pm}(k;\tau_{1},\tau_{2})\) involve expressions that depend on the time ordering, which make the \((I+1)\)-layer time integral nested. That is, the integration limit of one layer could depend on the integration variable of the next layer. So, the integration quickly becomes intractable with an increasing number of bulk lines or vertices.
One may wish to bypass the difficulty of bulk time integrals by taking a boundary approach. For instance, one can try to derive differential equations satisfied by the correlators starting from simple bootstrapping inputs [63, 64, 65, 70, 71, 89, 91]. As explored in many previous works, this approach turns out to be quite successful for single massive exchanges, where the "bootstrap equations" are usually a simple set of second-order ordinary differential equations with well-known analytical solutions. However, when one goes to two massive exchanges, the resulting differential equations become much more complicated, and it seems rather nontrivial to solve such equations directly [70].
One can also try other methods such as a full Mellin-space approach, where one still works in the bulk, but rewrites correlators in Mellin space [82, 83, 84, 85]. Then, the time ordering of the same-sign propagators \(D_{\mathsf{ab}}\) becomes an overall cosecant factor that nests two Mellin variables. While this is enormously simpler than the time-momentum representation, eventually we need to transform the Mellin-space correlators back to a normal time-momentum representation and push the time variables to the boundary: The future boundary is where the observables are naturally defined, and the momentum space is where the cosmological data are presented and analyzed. However, the nested Mellin variables make the inverse Mellin transform nontrivial. Thus, in a sense, the full Mellin-space approach trades the difficulty of the nested time integral for the difficulty of a nested Mellin integral.
There are other studies considering inflation correlators or wavefunction coefficients with multiple massive bulk lines. Rather than full analytical computations, most of these works focused on general properties of such amplitudes, such as analyticity, unitarity, causality, cutting rules, etc. There is a special case where one does achieve full results for tree graphs with an arbitrary number of bulk lines, namely when the bulk field's mass is tuned to the conformal value \(m^{2}=2H^{2}\) and all couplings are dimensionless. In such cases, the amplitudes reduce to the flat-space results, and one can find nice recursion relations to directly build arbitrary tree amplitudes or even loop integrands [102, 103]. However, this result only applies to a very special class of theories which are not of direct interest to CC physics. One might want to restore general masses and couplings by integrating the conformal-scalar amplitudes with appropriate weighting functions. However, the complication here is that we encounter fractions of nested energy variables which are hard to integrate.
As we see, no matter what representation we take, there is always a nested part of the amplitude that makes the computation difficult. There is a physical reason behind it: The nested time integrals are from the time ordering of the bulk propagator, and the time-ordered bulk propagator is a solution to the field equation with a local \(\delta\)-source. Thus, the nested part of the amplitude is closely related to the EFT limit where several or all nested vertices are pinched to a single bulk vertex. Very schematically, we can express this fact with the position-space Feynman propagator
\(D(x,y)\), which is a solution to the sourced equation of motion \((\Box_{x}-m^{2})D(x,y)=\mathrm{i}\delta(x-y)\). Then, we can make an EFT expansion of \(D(x,y)\sim\frac{\mathrm{i}}{\Box_{x}-m^{2}}\delta(x-y)\). The leading order term is simple, which is just the contact graph with \(D(x,y)\sim-\mathrm{i}\delta(x-y)/m^{2}\). However, there are higher order terms coming from acting powers of \(\Box_{x}/m^{2}\) on \(\delta(x-y)\), which produce a series of momentum ratios when transformed to the momentum-space representation. Technically, as we shall see, such series are typically multi-variable hypergeometric series which in general do not reduce to any well-known functions. So, there is no way to get around this result; the complication has to show up somewhere. The best we can do is to find a way to write down the analytical result as a convergent hypergeometric series for some kinematic configurations, and then try to find ways to do analytical continuation for other configurations. This is the goal we are going to pursue in this work. Below, we introduce the main results of this work before detailed expositions in subsequent sections.
Summary of main results. In this work, we tackle the problem of analytically computing tree-level inflation correlators with an arbitrary number of massive exchanges, via a standard bulk calculation in the SK formalism. The main technical tool is the partial Mellin-Barnes (PMB) representation proposed in [88, 89]. The basic idea is very simple: One takes the Mellin-Barnes representation for all factorized bulk propagators, but leaves all the bulk-to-boundary propagators in the original time-momentum representation. Also, one leaves all the time-ordering Heaviside \(\theta\)-functions untransformed. In this way, one takes advantage of the Mellin-Barnes (MB) representation, which resolves complicated bulk mode functions into simple powers, while still retaining the explicit time-domain representation for external modes. As has been shown in several previous works, the PMB representation is suitable for analyzing a range of problems related to inflation correlators, including explicit results at tree and loop levels [88, 89], and the analytical properties and on-shell factorizations for arbitrary loop correlators [92, 93]. The general procedure of using the PMB representation to compute an arbitrary tree-level inflation correlator is detailed in Sec. 2.
As mentioned above, the time orderings are not removed in the PMB representation. So, we still need to deal with them. We solve this problem in Sec. 3. As we will see, the PMB representation greatly simplifies the integrand of nested time integrals. As a result, the most general nested time integral we have to compute takes the following form:
\[\mathbb{T}_{q_{1}\cdots q_{V}}(E_{1},\cdots,E_{V})=\int\prod_{\ell=1}^{V}\left[ \mathrm{d}\tau_{\ell}(-\tau_{\ell})^{q_{\ell}-1}e^{\mathrm{i}E_{\ell}\tau_{ \ell}}\right]\prod_{i,j}\theta(\tau_{i}-\tau_{j}), \tag{1}\]
where we have time integrals at \(V\) vertices, nested arbitrarily by the Heaviside \(\theta\)-functions from the \(I\) internal lines. While this integral is still somewhat complicated, it is already in a form that allows us to directly write down the analytical answer. The way to make progress is to recognize that every bulk propagator has a time ordering in a fully nested integral, and that we are free to flip the direction of time orderings using a simple relation of the Heaviside \(\theta\)-function, so that any nested integral can be recast into a partially ordered form. To explain the partial ordering, we adopt a convenient terminology: Whenever we have a factor \(\theta(\tau_{i}-\tau_{j})\), we call \(\tau_{j}\) the mother of \(\tau_{i}\) and \(\tau_{i}\) the daughter of \(\tau_{j}\). Then, a partially ordered graph simply means that every time variable in the graph can have any number of daughters but must have only one mother, except
the earliest member, who is motherless. In plain words, a partially ordered graph can be thought of as a maternal family tree.
After rewriting a given nested integral into a partially ordered form, we get new terms with fewer layers of nested integrals, which can be further rewritten into partially ordered form, with additional terms generated. This procedure can be carried out recursively, until all nested integrals are partially ordered. This procedure has a very similar structure to the conventional cluster decomposition in statistical mechanics or quantum field theory. We will call it family-tree decomposition. Then, each of the partially ordered nested integrals is a family tree, which we also call "family" or "family integral" for short. A family is denoted by \(\mathcal{C}_{q_{1}\cdots q_{N}}(E_{1},\cdots,E_{N})\). The details of this family-tree decomposition will be presented in Sec. 3.1. In practice, this family-tree decomposition takes a very simple form. An example of family-decomposing a graph with a 5-layer nested integral is shown in Fig. 1.
As we shall see, the partial order structure allows us to find a simple one-line formula for general family integral \(\mathcal{C}_{q_{1}\cdots q_{N}}(E_{1},\cdots,E_{N})\). Working in the configurations where \(E_{1}\gg E_{i}\) with \(i=2,\cdots,N\), we find:
\[\mathcal{C}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N})=\frac{1}{(\mathrm{i}E_{1})^{q_{1\cdots N}}}\sum_{n_{2},\cdots,n_{N}=0}^{\infty}\Gamma(q_{1\cdots N}+n_{2\cdots N})\prod_{j=2}^{N}\frac{(-E_{j}/E_{1})^{n_{j}}}{(\widetilde{q}_{j}+\widetilde{n}_{j})\,n_{j}!}. \tag{2}\]
Here the hatted energy \(\widehat{E}_{1}\) denotes the maximal energy, which sits at the vertex with the earliest time. On the right hand side, we have \(N-1\) layers of summations corresponding to the \(N-1\) descendants of the \(E_{1}\)-site. We use shorthands such as \(q_{1\cdots N}\equiv q_{1}+\cdots+q_{N}\). The quantity \(\widetilde{q}_{j}\) denotes the sum of all \(q_{i}\) where either \(i=j\) or \(i\) is a descendant of \(j\). \(\widetilde{n}_{j}\) is similarly defined. Explicit application of this formula to the 5-layer graph in Fig. 1 is given in (25)-(28). We give many examples and also a general proof of the formula (2) in Sec. 3.2 and Sec. 3.3.
One important point is that the maximal energy variable can be chosen at will: To take the analytical continuation of (2) to kinematic regions where \(E_{1}\) is no longer maximal, all we need to do is rearrange the original integral into a different partial order such that the new maximal energy sits at the earliest time. Thus, our method provides a practical way to analytically continue the multi-variable hypergeometric series (2) beyond its convergence region. An example of this analytical continuation is given for the 5-layer integral in Fig. 2. As will be shown in Sec. 3.4, one can exploit the flexibility of the MB representation to rewrite (2) as a Taylor series in the sum of several or all energy variables, which further extends the domain of validity of our expressions.
With the formula for general time integrals at hand, the computation of tree-level inflation correlators becomes a matter of collecting appropriate Mellin poles in the PMB representation. As a demonstration of this procedure, we compute the general tree-level graphs with two bulk massive exchanges in Sec. 4 and present the full analytical result for this type of correlator for the first time. In Sec. 5, we show how to take folded limits of these results, by computing a tree-level 4-point graph with two massive exchanges. We conclude the paper with further discussions in Sec. 6. Useful mathematical formulae on Mellin-Barnes representations and hypergeometric functions are collected in App. A, and some intermediate steps of computing graphs with two massive exchanges are collected in App. B.
Notation and convention.We work in the Poincare patch of the dS spacetime with inflation coordinates \((\tau,\mathbf{x})\) where \(\tau\in(-\infty,0)\) is the conformal time, and \(\mathbf{x}\in\mathbb{R}^{3}\) is the comoving coordinate. In this coordinate system, the spacetime metric is \(\mathrm{d}s^{2}=a^{2}(\tau)(-\mathrm{d}\tau^{2}+\mathrm{d}\mathbf{x}^{2})\), where \(a(\tau)=-1/(H\tau)\) is the scale factor, and \(H\) is the inflation Hubble parameter. We set \(H=1\) throughout this work for simplicity. We use bold letters such as \(\mathbf{k}\) to denote 3-momenta and the corresponding italic letter \(k\equiv|\mathbf{k}|\) to denote its magnitude, which is also called an energy. The energies are often denoted by \(E_{i}\), and the energy ratios such as \(\varrho_{ij}\equiv E_{i}/E_{j}\) are often used. We follow the diagrammatic methods reviewed in [19] to compute inflation correlators in SK formalism. We often use shorthand for sums of several indexed quantities. Examples include:
\[k_{12}\equiv k_{1}+k_{2},\qquad E_{ij}\equiv E_{i}+E_{j},\qquad s_{123}\equiv s_ {1}+s_{2}+s_{3},\qquad q_{1\cdots N}\equiv q_{1}+\cdots+q_{N}. \tag{3}\]
Finally, the Mellin integral measures are very often abbreviated in the following way:
\[\int_{s_{1},s_{2}\cdots}\equiv\int_{-\mathrm{i}\infty}^{+\mathrm{i}\infty} \frac{\mathrm{d}s_{1}}{2\pi\mathrm{i}}\,\frac{\mathrm{d}s_{2}}{2\pi\mathrm{i} }\cdots. \tag{4}\]
## 2 Tree Graphs with Partial Mellin-Barnes
In this section, we review the method of PMB representation for a general tree-level inflation correlator with arbitrary massive exchanges. Our starting point is a general \(B\)-point connected equal-time correlation function of a bulk field \(\varphi\) in the late time limit:
\[\lim_{\tau\to 0}\big{\langle}\Omega\big{|}\varphi_{\mathbf{k}_{1}}(\tau)\cdots \varphi_{\mathbf{k}_{B}}(\tau)\big{|}\Omega\big{\rangle}_{\mathrm{C}}=(2\pi)^{3} \delta^{(3)}(\mathbf{k}_{1}+\cdots+\mathbf{k}_{B})\mathcal{T}(\mathbf{k}_{1},\cdots,\mathbf{k} _{B}). \tag{5}\]
As shown above, the correlation function is defined as an equal-time expectation value of the product of \(B\) operators \(\varphi_{\mathbf{k}_{i}}\) in 3-momentum space, over a state \(|\Omega\rangle\) which is taken to be asymptotic to the Bunch-Davies vacuum state in the early time limit \(\tau\to-\infty\). We assume that the bulk theory of \(\varphi\) is a weakly coupled local quantum field theory. Therefore, after stripping off the momentum-conserving \(\delta\)-function, the amplitude on the right hand side \(\mathcal{T}(\mathbf{k}_{1},\cdots,\mathbf{k}_{B})\) can be represented as an expansion of connected graphs \(\mathcal{G}(\mathbf{k}_{1},\cdots,\mathbf{k}_{B})\) with an increasing number of loops. Thus, the leading contribution is from the tree graphs, which are the focus of this work.
We do not specify the type of the field \(\varphi\), but we do assume that it has a simple mode function \(\varphi(k,\tau)\). More explicitly, if we expand the mode \(\varphi_{\mathbf{k}}\) in terms of canonically normalized creation and annihilation operators \(a_{\mathbf{k}}\) and \(a_{-\mathbf{k}}^{\dagger}\), we get mode function \(\varphi(k,\tau)\) as the coefficient:
\[\varphi_{\mathbf{k}}(\tau)=\varphi(k,\tau)a_{\mathbf{k}}+\varphi^{*}(k,\tau)a_{-\mathbf{k} }^{\dagger}. \tag{6}\]
We suppress helicity indices if there are any. We assume that all the time dependence of the mode function \(\varphi(k,\tau)\) can be expressed as an exponential factor \(e^{-\mathrm{i}k\tau}\) times a polynomial of \(-k\tau\). This covers essentially all cases relevant to cosmological collider phenomenology where the mode function survives the late-time limit, including the massless spin-0 inflaton field and the massless spin-2 graviton. For instance, the mode function for the inflaton is given by:
\[\varphi(k,\tau)=\frac{1}{\sqrt{2k^{3}}}(1+\mathrm{i}k\tau)e^{-\mathrm{i}k\tau}. \tag{7}\]
Our assumption also covers the case where the mode does not survive the late-time limit but is of theoretical interest, such as a conformal scalar \(\phi_{c}\) with mass \(m_{c}=\sqrt{2}\) in \(3+1\) dimensions, whose mode function is:
\[\phi_{c}(k,\tau)=\frac{\tau}{\sqrt{2k}}e^{-\mathrm{i}k\tau}. \tag{8}\]
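Both mode functions can be checked against the Fourier-space equation of motion \(u''-(2/\tau)u'+(k^{2}+m^{2}/\tau^{2})u=0\) (with \(H=1\) and scale factor \(a=-1/\tau\)). A quick symbolic verification, written as a sketch in sympy:

```python
import sympy as sp

tau = sp.symbols('tau', negative=True)
k = sp.symbols('k', positive=True)

def eom(u, m2):
    # u'' - (2/tau) u' + (k^2 + m2/tau^2) u for a scalar of mass^2 = m2
    # (in Hubble units) in the dS Poincare patch
    return sp.diff(u, tau, 2) - 2/tau*sp.diff(u, tau) + (k**2 + m2/tau**2)*u

u_massless = (1 + sp.I*k*tau)*sp.exp(-sp.I*k*tau)/sp.sqrt(2*k**3)   # eq. (7)
u_conformal = tau*sp.exp(-sp.I*k*tau)/sp.sqrt(2*k)                  # eq. (8)

print(sp.simplify(eom(u_massless, 0)))    # -> 0
print(sp.simplify(eom(u_conformal, 2)))   # -> 0
```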
The bulk fields appearing in the tree graphs of \(\mathcal{T}(\mathbf{k}_{1},\cdots,\mathbf{k}_{B})\) can be rather arbitrary. In general, they can have arbitrary mass and spin. They can also have dS-boost-breaking dispersion relations, and thus can have nonzero (helical) chemical potential or non-unit sound speed. They can also have rather arbitrary couplings among themselves and to the boundary field \(\varphi\). In particular, these couplings can break dS boosts and even the dilatation symmetry. However, we do assume that these couplings are well behaved in the infrared so that the diagrammatic expansion remains perturbative in the late-time limit.
However, for definiteness, we shall take a fixed type of bulk field, namely a scalar field in the principal series (i.e., with mass \(m>3/2\)), in all the following discussions. Generalization to other cases should be straightforward. For a massive scalar with \(m>3/2\), it is convenient to introduce a _mass parameter_ \(\widetilde{\nu}\equiv\sqrt{m^{2}-9/4}\). Then, according to the SK formalism [19], we can construct four bulk propagators \(D^{(\widetilde{\nu})}_{\mathsf{ab}}(k;\tau_{1},\tau_{2})\) with \(\mathsf{a},\mathsf{b}=\pm\) for such a field. More explicitly:
\[D^{(\widetilde{\nu})}_{-+}(k;\tau_{1},\tau_{2}) =\frac{\pi}{4}e^{-\pi\widetilde{\nu}}(\tau_{1}\tau_{2})^{3/2} \mathrm{H}^{(1)}_{\widetilde{\nu}}(-k\tau_{1})\mathrm{H}^{(2)}_{-\widetilde{ \nu}}(-k\tau_{2}), \tag{9}\] \[D^{(\widetilde{\nu})}_{+-}(k;\tau_{1},\tau_{2}) =\left[D^{(\widetilde{\nu})}_{-+}(k;\tau_{1},\tau_{2})\right]^{*},\] (10) \[D^{(\widetilde{\nu})}_{\pm\pm}(k;\tau_{1},\tau_{2}) =D^{(\widetilde{\nu})}_{\mp\pm}(k;\tau_{1},\tau_{2})\theta(\tau_{ 1}-\tau_{2})+D^{(\widetilde{\nu})}_{\pm\mp}(k;\tau_{1},\tau_{2})\theta(\tau_{ 2}-\tau_{1}). \tag{11}\]
Then, a general tree graph consisting of massive scalar bulk propagators and massless/conformal scalar bulk-to-boundary propagators can be computed by an integral of the following form:
\[\mathcal{I}=\sum_{\mathsf{a}_{1},\cdots,\mathsf{a}_{V}=\pm}\int\prod_{\ell=1} ^{V}\left[\mathrm{d}\tau_{\ell}\,\mathsf{i}\mathsf{a}_{\ell}(-\tau_{\ell})^{ p\ell}e^{\mathrm{i}\mathsf{a}_{\ell}E_{\ell}\tau_{\ell}}\right]\prod_{i=1}^{I}D_{ \mathsf{a}_{i1}\mathsf{a}_{2}}(K_{i},\tau_{i1},\tau_{i2}). \tag{12}\]
Here we assume that there are \(V\) vertices and \(I\) bulk propagators in the graph. For each vertex, we have an integral over the conformal time variable \(\tau_{\ell}\) (\(\ell=1,\cdots,V\)). Also, we introduce a factor of \(\mathrm{i}\mathsf{a}_{\ell}\) as required by the diagrammatic rule [19], and a factor of \((-\tau_{\ell})^{p_{\ell}}\) to account for various types of couplings as well as power factors in the external mode function (such as the \(\mathrm{i}k\tau\) term in the massless mode function (7)). The exponential factor \(e^{\mathrm{i}\mathsf{a}_{\ell}E_{\ell}\tau_{\ell}}\) comes from the external mode function, and \(E_{\ell}\) represents the sum of magnitudes of 3-momenta of all _external_ modes. Following the terminology in the literature, we call it the _energy_ at Vertex \(\ell\). However, we note that \(E_{\ell}\) is not the total energy at Vertex \(\ell\), since we do not include the energies of bulk lines. For each bulk line, we have a bulk propagator \(D_{\mathsf{a}_{i1}\mathsf{a}_{i2}}(K_{i},\tau_{i1},\tau_{i2})\) with momentum \(\mathbf{K}_{i}\), which is completely determined by external momenta via 3-momentum conservation at each vertex. The two time variables \(\tau_{i1},\tau_{i2}\), as well as the two SK variables \(\mathsf{a}_{i1},\mathsf{a}_{i2}\), should be identified with the corresponding time and SK variables at the two vertices to which the bulk propagator attaches.
The computation of the integral (12) is complicated by the products of Hankel functions, as well as the time-ordering \(\theta\)-functions in the bulk propagators. To tackle these problems, we use the MB representation for all the _bulk_ propagators, but leave all the bulk-to-boundary propagators
untransformed. This is the so-called PMB representation [88, 89]. The MB representations for the two opposite-sign bulk propagators (9) and (10) are given by [88, 89]:
\[D^{(\widetilde{\nu})}_{\pm\mp}(k;\tau_{1},\tau_{2}) =\frac{1}{4\pi}\int_{-\mathrm{i}\infty}^{+\mathrm{i}\infty}\frac{ \mathrm{d}s}{2\pi\mathrm{i}}\frac{\mathrm{d}\bar{s}}{2\pi\mathrm{i}}\,e^{\mp \mathrm{i}\pi(s-\bar{s})}\Big{(}\frac{k}{2}\Big{)}^{-2(s+\bar{s})}(-\tau_{1})^ {-2s+3/2}(-\tau_{2})^{-2\bar{s}+3/2}\] \[\quad\times\Gamma\Big{[}s-\frac{\mathrm{i}\widetilde{\nu}}{2},s+ \frac{\mathrm{i}\widetilde{\nu}}{2},\bar{s}-\frac{\mathrm{i}\widetilde{\nu}}{ 2},\bar{s}+\frac{\mathrm{i}\widetilde{\nu}}{2}\Big{]}, \tag{13}\]
This follows directly from the MB representation of the Hankel function (129), which we collect in App. A. In particular, the Mellin variable \(s\) is associated with the time \(\tau_{1}\) and \(\bar{s}\) is associated with \(\tau_{2}\). The same-sign propagators \(D_{\pm\pm}\) are obtained by substituting the above expression into (11). We note that the time-ordering \(\theta\)-functions are left untransformed.
After taking the above PMB representation, the original SK integral (12) becomes:
\[\begin{split}\mathcal{I}=&\int_{-\mathrm{i}\infty}^{+\mathrm{i}\infty}\prod_{i=1}^{I}\bigg\{\frac{1}{4\pi}\frac{\mathrm{d}s_{i}}{2\pi\mathrm{i}}\frac{\mathrm{d}\bar{s}_{i}}{2\pi\mathrm{i}}\Big(\frac{K_{i}}{2}\Big)^{-2(s_{i}+\bar{s}_{i})}\Gamma\Big[s_{i}-\frac{\mathrm{i}\widetilde{\nu}}{2},s_{i}+\frac{\mathrm{i}\widetilde{\nu}}{2},\bar{s}_{i}-\frac{\mathrm{i}\widetilde{\nu}}{2},\bar{s}_{i}+\frac{\mathrm{i}\widetilde{\nu}}{2}\Big]\bigg\}\\ &\times\bigg\{\sum_{\mathsf{a}_{1},\cdots,\mathsf{a}_{V}=\pm}\int_{-\infty}^{0}\prod_{\ell=1}^{V}\Big[\mathrm{d}\tau_{\ell}\,\mathrm{i}\mathsf{a}_{\ell}(-\tau_{\ell})^{p_{\ell}-2\sum_{\ell}s}e^{\mathrm{i}\mathsf{a}_{\ell}E_{\ell}\tau_{\ell}}\Big]\mathcal{N}_{\mathsf{a}_{1}\cdots\mathsf{a}_{V}}\Big(\tau_{1},\cdots,\tau_{V};\{s,\bar{s}\}\Big)\bigg\}.\end{split} \tag{14}\]
Here we have switched the order of the time integral and the Mellin integral, assuming all integrals are well convergent. With this representation, we see that all the SK-index-dependent parts go into the time integral, namely the second line of the above expression. In this time integral, we have used a shorthand \((-\tau_{\ell})^{p_{\ell}-2\sum_{\ell}s}\), where \(\sum_{\ell}s\) denotes the sum of all Mellin variables associated to \(\tau_{\ell}\), and the Mellin variables in this summation can be either barred or unbarred. An important fact we shall use below is that the Mellin variables always appear with negative signs in this exponent. Also, we have introduced a function \(\mathcal{N}_{\mathsf{a}_{1}\cdots\mathsf{a}_{V}}(\tau_{1},\cdots,\tau_{V};\{s,\bar{s}\})\) to represent all combinations of time-ordering \(\theta\)-functions, as well as the SK-index-dependent phase factor \(e^{\mp\mathrm{i}\pi(s-\bar{s})}\) in (13).
The reason we introduce the PMB representation is that the time integral now only involves exponentials and powers in its integrand, as shown in the second line of (14). This is significantly simpler than the original time integral, which involves Hankel functions. While this simplification is powerful enough for a single-layer time integral, the computation of time-ordered integrals remains nontrivial. In previous works using the PMB representation, only the two-layer nested integral was explicitly computed [89]:
\[\int_{-\infty}^{0}\mathrm{d}\tau_{1}\mathrm{d}\tau_{2}\,(-\tau_{1})^{q_{1}-1}(-\tau_{2})^{q_{2}-1}e^{\mathrm{i}E_{1}\tau_{1}+\mathrm{i}E_{2}\tau_{2}}\theta(\tau_{2}-\tau_{1})=\frac{1}{(\mathrm{i}E_{1})^{q_{12}}}\,{}_{2}\mathcal{F}_{1}\left[\begin{matrix}q_{2},\,q_{12}\\ q_{2}+1\end{matrix}\,\middle|\,-\frac{E_{2}}{E_{1}}\right], \tag{15}\]
where \({}_{2}\mathcal{F}_{1}\) is the dressed hypergeometric function, defined in App. A. For computing inflation correlators with a single massive exchange, this result is enough. However, if we wish to go beyond the single massive exchange and consider the most general tree graphs, it is necessary to tackle the problem of computing time integrals of exponentials and powers with arbitrary layers and arbitrary time orderings. We will systematically solve this problem in the next section.
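As a sanity check, (15) can be verified numerically. Wick-rotating both time contours (\(\tau_{\ell}=-\mathrm{i}x_{\ell}\) with \(x_{\ell}>0\), valid for \(\mathrm{Re}\,q_{\ell}>0\) and positive energies) strips the common phase \(\mathrm{i}^{-q_{12}}\) from both sides and leaves a real, convergent integral. The sketch below compares it against the series form of the dressed hypergeometric function, which coincides with the \(N=2\) case of the family formula (22) derived later; the parameter values and truncation order are arbitrary choices.

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

q1, q2, E1, E2 = 1.3, 0.7, 2.0, 0.5      # illustrative values, with E1 > E2

# Rotated nested integral: theta(tau_2 - tau_1) becomes x_2 < x_1,
# and the integrand is real and exponentially damped.
def inner(x1):
    val, _ = quad(lambda x2: x2**(q2 - 1) * np.exp(-E2 * x2), 0.0, x1)
    return val

numeric, _ = quad(lambda x1: x1**(q1 - 1) * np.exp(-E1 * x1) * inner(x1),
                  0.0, np.inf, limit=200)

# Series: E1^{-q12} * sum_n Gamma(q12 + n) (-E2/E1)^n / ((q2 + n) n!)
rho, q12 = E2 / E1, q1 + q2
series = sum(gamma(q12 + n) * (-rho)**n / ((q2 + n) * math.factorial(n))
             for n in range(40))
series *= E1**(-q12)

print(numeric, series)    # the two numbers should agree to quadrature accuracy
```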
From (14), we see that, if the time integral in the second line can be done, then it only remains to finish the Mellin integrals. This is typically done by closing the Mellin contour and collecting the residues of all enclosed poles. So, we need knowledge of the pole structure of the Mellin
integrand. Although the answer to the time integral was not explicitly known in the previous studies, it was proved in [92] that such time integrals, however nested, only contribute _right poles_ of the Mellin integrand. That is, their poles only appear on the right side of the integral contour that goes from \(-\mathrm{i}\infty\) to \(+\mathrm{i}\infty\). As a result, all _left poles_ are contributed by the \(\Gamma\)-factors from the bulk propagators, shown in the first line of (14). These are all the poles of the Mellin integrand for a tree graph.
Another important observation is that, if we sum the _arguments_ of all \(\Gamma\)-factors in the first line of (14), we get:
\[+2\sum_{i=1}^{I}(s_{i}+\bar{s}_{i})+\cdots\,. \tag{16}\]
That is, all Mellin variables are summed together, with an overall coefficient \(+2\). Here "\(\cdots\)" denotes \(s\)-independent terms, which are irrelevant to our current argument, and happen to be \(0\) in this particular case. On the other hand, as we shall see from the explicit results in the next section, the right poles contributed by the time integrals are also from \(\Gamma\)-factors of the form \(\Gamma[\cdots-2\sum s]\). If we sum over the arguments of all right poles, we will get:
\[-2\sum_{i=1}^{I}(s_{i}+\bar{s}_{i})+\cdots\,, \tag{17}\]
which is exactly the \(s\)-dependent part of the left-pole \(\Gamma\)-arguments with an opposite sign. In this sense, we say that the Mellin variables in the integrand are _balanced_. In such a balanced situation, the convergence of the Mellin integral is determined by the power factors such as \((K_{i}/2)^{-2s_{i}}\) in (14). Typically, one can first work in the kinematic region where the internal momenta \(K_{i}\) are small (compared to relevant external energies), so that the Mellin integrals will be convergent if we pick up all the _left poles_, which are all from the bulk propagator \(\Gamma\)-factors in (13). Their poles and residues are well understood. So, if we can finish the time integral, then we only need to collect all left poles from the first line of (14). The result will be a series expansion in \(K_{i}\). So, this result will be valid at least when the bulk momenta \(K_{i}\) are not too large.1 In the opposite limit, when \(K_{i}\) becomes large compared to the relevant energy variables, we can instead close the Mellin contour from the right side and pick up all the right poles. In this way, we get an analytical continuation of the result from the small-\(K_{i}\) region to the large-\(K_{i}\) region. This will cover most of the parameter space of interest. The narrow intermediate region will be difficult to express by a series solution. Analytically, one needs to take the analytical continuation of the series solutions for those intermediate regions, which is a separate mathematical problem. Practically, however, we can use numerical interpolation to bridge the gap between different regions. This strategy has been shown to be workable in previous studies [90]. So, barring possible issues of analytical continuation for special configurations, we can say that the problem of analytical computation of arbitrary tree-level inflation correlators is solved, if we can compute the arbitrary nested time integral. We will solve the latter problem in the next section.
Footnote 1: There is a degenerate situation where a vertex is _internal_, in the sense that it is not attached to any bulk-to-boundary propagator. In this case, the energy variable \(E=0\) at this vertex. Then, if we compute a factorized time integral for this vertex alone, we get a \(\delta\) function for Mellin variables. Consequently, one should integrate out one Mellin variable using this \(\delta\) function instead of picking up poles. We will come back to this point at the end of Sec. 3.
## 3 Time Integrals with Partial Mellin-Barnes
In this section we provide a systematic investigation of arbitrary nested time integrals in the PMB representation. It is clear from the previous section that the most general nested time integral has the following form:
\[\mathbb{T}_{q_{1}\cdots q_{V}}(E_{1},\cdots,E_{V})=\int\prod_{\ell=1}^{V}\left[ \mathrm{d}\tau_{\ell}(-\tau_{\ell})^{q_{\ell}-1}e^{\mathrm{i}E_{\ell}\tau_{ \ell}}\right]\prod_{i,j}\theta(\tau_{i}-\tau_{j}). \tag{18}\]
Here we are again considering a \(V\)-fold time integral with arbitrary nesting. We require that all \(\tau_{i}\) (\(1\leq i\leq V\)) appear in the \(\theta\)-factors so that the integral is fully nested. Also, we have used a factor \((-\tau_{\ell})^{q_{\ell}-1}\) to account for a variety of external modes and couplings, as well as powers of time from the partial MB representation. In the notation of the previous section, we have:
\[q_{\ell}-1=p_{\ell}-2\sum\nolimits_{\ell}s. \tag{19}\]
The difficulty with time ordering in (18) is easy to understand: A single time integral of an exponential with power factors from \(\tau=-\infty\) to \(\tau=0\) gives rise to a \(\Gamma\) function. However, if there is a time ordering, the integration limit for one time variable would depend on another integration variable. As a result, we get incomplete \(\Gamma\) functions after finishing one layer of integration. Then we need to perform time integrals over incomplete \(\Gamma\) functions with integration limits dependent on yet another time variable. This quickly becomes intractable with an increasing number of nested layers.
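For reference, the un-nested building block of this recursion is the elementary integral (convergent with the standard prescription \(E\to E(1-\mathrm{i}\epsilon)\)):

\[\int_{-\infty}^{0}\mathrm{d}\tau\,(-\tau)^{q-1}e^{\mathrm{i}E\tau}=\frac{\Gamma(q)}{(\mathrm{i}E)^{q}},\qquad\operatorname{Re}q>0,\]

and it is the appearance of \(\tau\)-dependent integration limits, rather than \(0\) and \(-\infty\), that spoils this simple result in the nested case.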
Our strategy for solving this problem is again the Mellin-Barnes representation: whenever we perform a layer of nested time integration, we take the MB representation of the result so that the integrand for the next layer is still a simple exponential times a power. In this way, the nested time integrals can be done recursively layer by layer, until the last layer, which yields a simple \(\Gamma\) factor. Along the way, we generate many layers of Mellin integrals, which can again be done by closing the contours properly.
As we shall see below, this recursive integration is easiest if the time integral is nested with a partial order, which is not the case in the most general nested integrals. Thus, we should first use a simple relation \(\theta(\tau_{j}-\tau_{k})+\theta(\tau_{k}-\tau_{j})=1\) to reorganize the original time integral such that the result is either partially ordered or factorized. This will be called a "family-tree decomposition." Then, we apply the above procedure to the partially ordered integrals to get the explicit results for them. These steps will be carried out in detail below.
A side remark on notation and terminology: It will be helpful to use a diagrammatic representation for the nested time integral (18). We will use a directional line to denote a \(\theta\)-function, where the direction of the arrow coincides with the direction of the time flow. Two factorized time variables (which are simply associated with a factor of 1) may be connected by a dashed line. So, for instance, we can write the relation \(\theta(\tau_{1}-\tau_{2})+\theta(\tau_{2}-\tau_{1})=1\) as:
[Diagram: the two oppositely directed lines between \(\tau_{1}\) and \(\tau_{2}\) sum to a dashed, unordered line.] (20)
Also, to highlight the fact that these diagrams are not the original SK graphs for the inflation correlators, we will use "site" in place of "vertex," and use "line" in place of "propagator." Then, each site \(\tau_{i}\) is associated with an energy variable \(E_{i}\) and an exponent \(q_{i}\), as is clear from (18).
### Family-tree decomposition of nested integrals
Now, we describe our family-tree decomposition algorithm in detail. We begin with the most general nested time integral (18). After finishing all the time integrals, the result \(\mathbb{T}_{q_{1}\cdots q_{V}}(E_{1},\cdots,E_{V})\) is a function of \(V\) energies \(E_{1},\cdots,E_{V}\) and \(V\) exponents \(q_{1},\cdots,q_{V}\). In the following, we shall show that this integral can always be written as a sum over a finite number of terms. Each term is a product of several _families_. Each family is a multi-variable hypergeometric function of several energy variables \(E_{i}\). Of course, multi-variable hypergeometric functions are not well studied; it is most useful if we can find a fast-converging series expansion of this hypergeometric function in terms of any given small energy ratios. Below, we will show that this can be done.
The reduction procedure.Our reduction procedure consists of the following simple steps:
**Step 1:**: We start with a particular kinematic region of the integral \(\mathbb{T}_{q_{1}\cdots q_{V}}(E_{1},\cdots,E_{V})\), where there is a _largest_ energy, say \(E_{i}\), such that \(E_{i}>E_{j}\) for all \(j\neq i\). We want to find an analytical expression for \(\mathbb{T}_{q_{1}\cdots q_{V}}(E_{1},\cdots,E_{V})\) as a series in \(1/E_{i}\), which should be convergent in most of the region where \(E_{i}\) remains the largest. We add a hat to the largest energy variable \(\widehat{E}_{i}\) to highlight the fact that we are considering a particular kinematic region.
So, if we choose \(E_{1}\) to be the largest energy, we will write \(\mathbb{T}_{q_{1}\cdots q_{V}}(\widehat{E}_{1},E_{2},\cdots,E_{V})\) to highlight this choice. If, instead, we want to consider the case where \(E_{2}\) gets larger than \(E_{1}\), then we should add the hat on \(\widehat{E}_{2}\). The degenerate case where there are multiple maximal energies will be considered in following subsections.
**Step 2:**: We use the relation \(\theta(\tau_{j}-\tau_{k})+\theta(\tau_{k}-\tau_{j})=1\) to flip the direction of time flows in some bulk lines, such that the original graph is broken into a sum of several terms. Each term can be represented as a graph, in which all sites are either partially ordered or factorized. As a result, each graph becomes a product of several integrals, each of which has a partial order structure, and is called a _family_. As a part of the rule, we require that the maximal energy site has the earliest time in a family.
Let us define what is a (partially ordered) family. Clearly, a time-ordered line connects a site with an earlier time to another site with a later time. We call the earlier-time site the _mother_ of the later-time site, and call the later-time site the _daughter_ of the earlier-time site. Then, a partially ordered graph means that every site has a unique mother, except the maximal-energy site, which is the earliest-time site and motherless. On the other hand, a mother can have many daughters. In this way, all sites within a family integral genuinely belong to a family. Also, for a given site, we call all the sites flowing out of it the _descendant sites_. Thus, the descendant sites of a given site consist of its daughters, granddaughters, great-granddaughters, etc.
Let us rephrase the above heuristic language into a more rigorous definition. We will use \(\mathcal{C}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N})\) to denote a family integral with \(N\) sites, where we have highlighted the maximal energy \(\widehat{E}_{1}\) with a hat. Then, a family integral \(\mathcal{C}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N})\) has the following form:
\[\mathcal{C}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N})=\int\prod _{\ell=1}^{N}\left[\mathrm{d}\tau_{\ell}(-\tau_{\ell})^{q_{\ell}-1}e^{\mathrm{ i}E_{\ell}\tau_{\ell}}\right]\prod_{i,j}\theta(\tau_{i}-\tau_{j}), \tag{21}\]
with the following restrictions on the \(\theta\)-function factors:
1. Every time variable \(\tau_{i}\) (\(1\leq i\leq N\)) appears in time-ordering \(\theta\)-function. (All sites belong to a family.)
2. In a factor such as \(\theta(\tau_{j}-\tau_{k})\), let us call \(\tau_{j}\) to be in the late position and \(\tau_{k}\) in the early position. Then, it is required that every variable \(\tau_{i}\) except the maximal energy site appears in the late position once and only once. (Every site has a unique mother except the maximal energy site.) On the other hand, early positions can be taken more than once by a given \(\tau_{i}\). (A mother can give birth to more than one daughter.)
3. The maximal energy site \(\tau_{1}\) appears in \(\theta\) factors only in the early position. (The maximal energy site is motherless, but can have any (including zero) number of daughters.)
**Step 3:**: After taking Step 2, each resulting graph is a product of several fully factorized families. The maximal energy site sits in a particular family, which we call the maximal-energy family. As a consequence, families other than the maximal-energy family are independent of the maximal energy variable \(\widehat{E}_{i}\), and it becomes meaningless to ask for a series expansion in \(\widehat{E}_{i}\) for those families. We call them non-maximal energy families. Thus, for each of the non-maximal energy families, we should further assign a "locally" maximal energy, such that this energy is largest among all energies _within_ the family. Then, we further perform the reduction of Step 2 for all non-maximal energy families and we do this procedure recursively, until, within each family, the locally maximal energy site sits at the earliest time.
**Step 4:**: After taking the above steps, we fully reduce the original integral \(\mathbb{T}_{q_{1}\cdots q_{V}}(\widehat{E}_{1},E_{2},\cdots,E_{V})\) into a sum of products of partially ordered families, and in each family, the locally maximal energy acquires the earliest time.
It then remains to state the rule for directly writing down the answer for arbitrary families. The rule is the following: Within each partially ordered family, we assign a summation variable \(n_{i}\) to all sites except the (locally) maximal-energy site. Without loss of generality, we can always relabel the sites within a family such that the (locally) maximal energy is \(E_{1}\). Then, for the \(N\)-site family defined in (21), the result is:
\[\begin{split}\mathcal{C}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N})&=\frac{1}{(\mathrm{i}E_{1})^{q_{1\cdots N}}}\,\widetilde{\mathcal{C}}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N}),\\ \widetilde{\mathcal{C}}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N})&=\sum_{n_{2},\cdots,n_{N}=0}^{\infty}\Gamma(q_{1\cdots N}+n_{2\cdots N})\prod_{j=2}^{N}\frac{(-\varrho_{j1})^{n_{j}}}{(\widetilde{q}_{j}+\widetilde{n}_{j})\,n_{j}!}.\end{split} \tag{22}\]
Here, the hatted energy \(\widehat{E}_{1}\) represents the maximal energy. In the first line, we stripped away a dimensionful factor \((\mathrm{i}E_{1})^{q_{1\cdots N}}\) so that the resulting integral \(\widetilde{\mathcal{C}}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N})\) is dimensionless. In the second line, we have defined \(\varrho_{jk}\equiv E_{j}/E_{k}\). Also, \(\widetilde{n}_{i}\) is defined to be the sum of the \(n\)-variables over the site \(i\) and all its descendants, and \(\widetilde{q}_{i}\) is defined similarly.
This completes our reduction of the original time integral into a sum of products of hypergeometric series.
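Before moving on to examples, it may help to see the rule (22) in executable form. The following Python sketch is our own illustration, not part of the original derivation; all function and variable names are our choices. It evaluates the series by brute-force truncation, encoding the partial order as a rooted tree with the maximal-energy site as root; it converges when \(\sum_{j\geq 2}E_{j}/E_{1}<1\), and the two-site case with \(q_{1}=q_{2}=1\) provides an elementary cross-check.

```python
# A minimal sketch (ours) of the family formula (22), evaluated by truncating
# each summation at nmax. children[j] lists the daughters of site j; site 1
# is the maximal-energy site at the root of the tree.
from itertools import product
from math import gamma, factorial

def descendants(j, children):
    """All descendants of site j: daughters, granddaughters, etc."""
    out = []
    for d in children.get(j, []):
        out += [d] + descendants(d, children)
    return out

def family_series(q, E, children, nmax=40):
    """Truncated series for the dimensionless family C~_{q1...qN} in Eq. (22)."""
    N = len(q)
    desc = {j: descendants(j, children) for j in range(1, N + 1)}
    rho = [E[j] / E[0] for j in range(N)]          # rho_{j1} = E_j / E_1
    total = 0.0
    for ns in product(range(nmax), repeat=N - 1):  # summation over n_2..n_N
        n = dict(zip(range(2, N + 1), ns))
        term = gamma(sum(q) + sum(ns))             # Gamma(q_{1..N} + n_{2..N})
        for j in range(2, N + 1):
            n_tilde = n[j] + sum(n[i] for i in desc[j])
            q_tilde = q[j - 1] + sum(q[i - 1] for i in desc[j])
            term *= (-rho[j - 1])**n[j] / ((q_tilde + n_tilde) * factorial(n[j]))
        total += term
    return total

# Two-site chain: site 2 is the daughter of site 1. For q1 = q2 = 1 the
# integral is elementary: C~_{11}(E1^, E2) = E1/(E1+E2).
E1, E2 = 5.0, 1.0
print(family_series([1.0, 1.0], [E1, E2], {1: [2]}))   # 0.83333...
print(E1 / (E1 + E2))                                   # exact: 5/6
```

For instance, the chain family (26) below corresponds to `children = {1: [2], 2: [3], 3: [4]}`, and the cubic-vertex family (28) to `children = {1: [2], 2: [3], 3: [4, 5]}`.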
Example. As often happens, it is better to demonstrate an algorithm with examples than with a mere abstract description. So, now, let us demonstrate the above reduction procedure with a concrete example. Suppose we want to compute a 5-layer time integral:
\[\mathbb{T}_{q_{1}\cdots q_{5}}(E_{1},\cdots,E_{5})\equiv\int_{-\infty}^{0} \prod_{\ell=1}^{5}\Big{[}{\rm d}\tau_{\ell}(-\tau_{\ell})^{q_{\ell}-1}e^{{\rm i }E_{\ell}\tau_{\ell}}\Big{]}\theta(\tau_{2}-\tau_{1})\theta(\tau_{2}-\tau_{3} )\theta(\tau_{4}-\tau_{3})\theta(\tau_{3}-\tau_{5}). \tag{23}\]
Furthermore, suppose that we want to consider the kinematic region where \(E_{1}\) is the largest energy among all five energies. Thus, we want to express the final result as a series expansion in \(1/E_{1}\). This is shown on the left hand side of Fig. 1, where the magenta-circled site represents the maximal-energy site. Then, according to the above procedure, we should use the relation \(\theta(\tau_{i}-\tau_{j})+\theta(\tau_{j}-\tau_{i})=1\) to change the direction of several lines, such that 1) all sites become either partially ordered or factorized and 2) the maximal-energy site \(E_{1}\) has the earliest time variable. This is done on the right hand side of Fig. 1. In each diagram on the right hand side, we get a product of one or several partially ordered families.
In all but the last term on the right hand side of Fig. 1, we have families which do not contain the maximal energy site. Thus we should specify a locally maximal site for each of them. The one-site family is trivial. The nontrivial non-maximal families appear in the first and third terms on the right hand side of Fig. 1, which can be expressed as \(\mathcal{C}_{q_{3}q_{4}}(E_{3},E_{4})\) and \(\mathcal{C}_{q_{3}q_{4}q_{5}}(E_{3},E_{4},E_{5})\), respectively. Thus, we should further assign a maximal energy for these two families. So, let us further work within the region where \(E_{3}>E_{4},E_{5}\), so that \(E_{3}\) is the locally maximal energy, marked with a blue circle in Fig. 1. (On the other hand, the relation between \(E_{3}\) and \(E_{2}\) is irrelevant.) Then, we see that \(E_{3}\) already sits at the earliest time in both families. So, we are done, and the result of our reduction procedure can be expressed as:
\[\mathbb{T}_{q_{1}\cdots q_{5}}(\widehat{E}_{1},\cdots,E_{5})= \mathcal{C}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2})\mathcal{C}_{q_{3 }q_{4}}(\widehat{E}_{3},E_{4})\mathcal{C}_{q_{5}}(E_{5})-\mathcal{C}_{q_{1}q_{ 2}q_{3}q_{4}}(\widehat{E}_{1},E_{2},E_{3},E_{4})\mathcal{C}_{q_{5}}(E_{5})\] \[-\mathcal{C}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2})\mathcal{C}_{q_{ 4}q_{3}q_{5}}(E_{4},\widehat{E}_{3},E_{5})+\mathcal{C}_{q_{1}q_{2}q_{3}q_{4}q_ {5}}^{({\rm iso})}(\widehat{E}_{1},E_{2},E_{3},E_{4},E_{5}). \tag{24}\]
Here we have also added hats to the locally maximal energy \(E_{3}\). In the last term, we added a superscript (iso) to show that this family has a cubic vertex. See the next subsection for details.
Next, within each family, we assign a summation variable \(n_{i}\) to every site that is not the (locally) maximal-energy site. It is clear from (24) that there are four independent nontrivial families (i.e., with more than one site) involved in this example. Applying the formula (22) to each of them, we get
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2})=\sum_{n _{2}=0}^{\infty}\frac{(-1)^{n_{2}}\Gamma(q_{12}+n_{2})}{(q_{2}+n_{2})}\,\frac{ \varrho_{21}^{n_{2}}}{n_{2}!}, \tag{25}\] \[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}q_{4}}(\widehat{E}_{1},E_ {2},E_{3},E_{4})=\sum_{n_{2},n_{3},n_{4}=0}^{\infty}\frac{(-1)^{n_{234}}\Gamma (q_{1234}+n_{234})}{(q_{234}+n_{234})(q_{34}+n_{34})(q_{4}+n_{4})}\,\frac{ \varrho_{21}^{n_{2}}}{n_{2}!}\,\frac{\varrho_{31}^{n_{3}}}{n_{3}!}\,\frac{ \varrho_{41}^{n_{4}}}{n_{4}!},\] (26) \[\widetilde{\mathcal{C}}_{q_{4}q_{3}q_{5}}(E_{4},\widehat{E}_{3},E _{5})=\sum_{n_{4},n_{5}=0}^{\infty}\frac{(-1)^{n_{45}}\Gamma(q_{345}+n_{45})}{ (q_{4}+n_{4})(q_{5}+n_{5})}\,\frac{\varrho_{43}^{n_{4}}}{n_{4}!}\,\frac{ \varrho_{53}^{n_{5}}}{n_{5}!},\] (27) \[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}q_{4}q_{5}}^{(\text{iso})} (\widehat{E}_{1},E_{2},E_{3},E_{4},E_{5})\] \[=\sum_{n_{2},n_{3},n_{4},n_{5}=0}^{\infty}\frac{(-1)^{n_{2345}} \Gamma(q_{12345}+n_{2345})}{(q_{2345}+n_{2345})(q_{345}+n_{345})(q_{4}+n_{4})( q_{5}+n_{5})}\,\frac{\varrho_{21}^{n_{2}}}{n_{2}!}\,\frac{\varrho_{31}^{n_{3}}} {n_{3}!}\,\frac{\varrho_{41}^{n_{4}}}{n_{4}!}\,\frac{\varrho_{51}^{n_{5}}}{n_ {5}!}. \tag{28}\]
On the other hand, the result for the one-site family is trivial: \(\widetilde{\mathcal{C}}_{q}(E)=\Gamma(q)\). In fact, some of the above series can be summed into well-known hypergeometric functions, which we shall introduce below. In any case, we have found a series expression for the original 5-layer time integral \(\mathbb{T}_{q_{1}\cdots q_{5}}(E_{1},\cdots,E_{5})\) without actually doing any integrals.
Figure 1: The diagrammatic representation of (24), showing the reduction of a 5-layer time integral into partially ordered families. In this example, we choose \(E_{1}>E_{2},E_{3},E_{4},E_{5}\) and \(E_{3}>E_{4},E_{5}\). The maximal-energy site (Site 1) is marked with a magenta circle and the locally maximal-energy site (Site 3) is marked with a blue circle.

The above series solution has a validity range beyond which the summations no longer converge. This happens in particular when any energy \(E_{i}\) (\(i=2,3,4,5\)) becomes larger than \(E_{1}\). In principle, if we need the result when \(E_{1}\) is no longer maximal, we need to take an analytic continuation of the above series. This analytic continuation can be implemented very conveniently in our procedure. To see this, let us have a second look at the 5-site integral \(\mathbb{T}_{q_{1}\cdots q_{5}}(E_{1},\cdots,E_{5})\) in (23), but now choose \(E_{3}\) as the maximal energy. Then, according to our procedure, we should do a new family-tree decomposition, as shown in Fig. 2.
\[\mathbb{T}_{q_{1}\cdots q_{5}}(E_{1},E_{2},\widehat{E}_{3},E_{4},E_{5})= \mathcal{C}_{q_{1}}(E_{1})\mathcal{C}_{q_{2}q_{3}q_{4}}(E_{2},\widehat{E}_{3},E_{4})\mathcal{C}_{q_{5}}(E_{5})-\mathcal{C}_{q_{1}}(E_{1})\mathcal{C}_{q_{2}q_{4}q_{5}q_{3}}^{(\mathrm{iso})}(E_{2},E_{4},E_{5},\widehat{E}_{3})\] \[-\mathcal{C}_{q_{1}q_{2}q_{3}q_{4}}(E_{1},E_{2},\widehat{E}_{3},E_{4})\mathcal{C}_{q_{5}}(E_{5})+\mathcal{C}_{q_{1}q_{2}q_{3}q_{4}q_{5}}^{(\mathrm{iso})}(E_{1},E_{2},\widehat{E}_{3},E_{4},E_{5}). \tag{29}\]
Clearly, we do not need to choose any locally maximal energy in this example. The explicit expressions for the above families can be written down directly according to the general formula (22):
\[\widetilde{\mathcal{C}}_{q_{2}q_{3}q_{4}}(E_{2},\widehat{E}_{3},E_{4})=\sum_{n_{2},n_{4}=0}^{\infty}\frac{(-1)^{n_{24}}\Gamma(q_{234}+n_{24})}{(q_{2}+n_{2})(q_{4}+n_{4})}\,\frac{\varrho_{23}^{n_{2}}}{n_{2}!}\,\frac{\varrho_{43}^{n_{4}}}{n_{4}!}, \tag{30}\] \[\widetilde{\mathcal{C}}_{q_{2}q_{4}q_{5}q_{3}}^{(\mathrm{iso})}(E_{2},E_{4},E_{5},\widehat{E}_{3})=\sum_{n_{2},n_{4},n_{5}=0}^{\infty}\frac{(-1)^{n_{245}}\Gamma(q_{2345}+n_{245})}{(q_{2}+n_{2})(q_{4}+n_{4})(q_{5}+n_{5})}\,\frac{\varrho_{23}^{n_{2}}}{n_{2}!}\,\frac{\varrho_{43}^{n_{4}}}{n_{4}!}\,\frac{\varrho_{53}^{n_{5}}}{n_{5}!},\] (31) \[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}q_{4}}(E_{1},E_{2},\widehat{E}_{3},E_{4})=\sum_{n_{1},n_{2},n_{4}=0}^{\infty}\frac{(-1)^{n_{124}}\Gamma(q_{1234}+n_{124})}{(q_{1}+n_{1})(q_{12}+n_{12})(q_{4}+n_{4})}\,\frac{\varrho_{13}^{n_{1}}}{n_{1}!}\,\frac{\varrho_{23}^{n_{2}}}{n_{2}!}\,\frac{\varrho_{43}^{n_{4}}}{n_{4}!},\] (32) \[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}q_{4}q_{5}}^{(\mathrm{iso})}(E_{1},E_{2},\widehat{E}_{3},E_{4},E_{5})\] \[=\sum_{n_{1},n_{2},n_{4},n_{5}=0}^{\infty}\frac{(-1)^{n_{1245}}\Gamma(q_{12345}+n_{1245})}{(q_{1}+n_{1})(q_{12}+n_{12})(q_{4}+n_{4})(q_{5}+n_{5})}\,\frac{\varrho_{13}^{n_{1}}}{n_{1}!}\,\frac{\varrho_{23}^{n_{2}}}{n_{2}!}\,\frac{\varrho_{43}^{n_{4}}}{n_{4}!}\,\frac{\varrho_{53}^{n_{5}}}{n_{5}!}. \tag{33}\]
Thus we have found an expression for the original 5-site integral \(\mathbb{T}_{q_{1}\cdots q_{5}}(E_{1},\cdots,E_{5})\), expanded in powers of \(1/E_{3}\). Let us emphasize that (24) and (29) are just different expansions of the same function \(\mathbb{T}_{q_{1}\cdots q_{5}}(E_{1},\cdots,E_{5})\), with different validity regions.
Figure 2: The diagrammatic representation of (29), showing the reduction of a 5-layer time integral into partially ordered families. In this example, we choose \(E_{3}>E_{i}\) (\(i=1,2,4,5\)). The maximal energy site (Site 3) is marked with a magenta circle.
### Partially ordered families: simple examples
Clearly, the only nontrivial step in our family-tree decomposition procedure is the last step, where we directly write down the answer (22) for the family integral. The derivation of this result is best illustrated with examples. So in this subsection we will walk the reader through a few simple examples before presenting a general proof in the next subsection.
One-site family. We begin with the simplest integral, the one-site family, shown in Fig. 3(a):
\[\widetilde{\mathcal{C}}_{q}(E)=(\mathrm{i}E)^{q}\int_{-\infty}^{0}\mathrm{d} \tau\,(-\tau)^{q-1}e^{\mathrm{i}E\tau}. \tag{34}\]
The application of the rule is trivial, and we have the following answer:
\[\widetilde{\mathcal{C}}_{q}(E)=\Gamma(q). \tag{35}\]
The answer is obtained by a direct integration of (34). Since there is only one dimensionful variable \(E\) involved in the problem, the final answer for the dimensionless family \(\widetilde{\mathcal{C}}_{q}(E)\) must be independent of \(E\).
Two-site family. Next let us look at the simplest nontrivial example, namely the two-site family, shown in Fig. 3(b). The integral is:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2})=(\mathrm{i}E_{1 })^{q_{12}}\int_{-\infty}^{0}\mathrm{d}\tau_{1}\mathrm{d}\tau_{2}\,(-\tau_{1 })^{q_{1}-1}(-\tau_{2})^{q_{2}-1}e^{\mathrm{i}(E_{1}\tau_{1}+E_{2}\tau_{2})} \theta(\tau_{2}-\tau_{1}). \tag{36}\]
By design, we take \(E_{1}>E_{2}\). Now let us try to find the answer for the above integral. It turns out to be useful to start with the integral of reversed time ordering:
\[\mathcal{R}\Big{[}\widetilde{\mathcal{C}}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2} )\Big{]}\equiv(\mathrm{i}E_{1})^{q_{12}}\int_{-\infty}^{0}\mathrm{d}\tau_{1} \mathrm{d}\tau_{2}\,(-\tau_{1})^{q_{1}-1}(-\tau_{2})^{q_{2}-1}e^{\mathrm{i}(E _{1}\tau_{1}+E_{2}\tau_{2})}\theta(\tau_{1}-\tau_{2}). \tag{37}\]
Then, the integral over \(\tau_{2}\) can be performed, with the result expressed in terms of an exponential integral \(\mathrm{E}_{p}(z)\) whose definition is given in (130):
\[\mathcal{R}\Big{[}\widetilde{\mathcal{C}}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2} )\Big{]}=(\mathrm{i}E_{1})^{q_{12}}\int_{-\infty}^{0}\mathrm{d}\tau_{1}\,(- \tau_{1})^{q_{12}-1}e^{\mathrm{i}E_{1}\tau_{1}}\mathrm{E}_{1-q_{2}}(-\mathrm{ i}E_{2}\tau_{1}). \tag{38}\]
Figure 3: The one-site family and two-site family.
At this point, we make use of the following MB representation of \(\mathrm{E}_{p}(z)\):
\[\mathrm{E}_{p}(z)=\int_{-\mathrm{i}\infty}^{+\mathrm{i}\infty}\frac{\mathrm{d}s }{2\pi\mathrm{i}}\frac{\Gamma(s)z^{-s}}{s+p-1}. \tag{39}\]
The details of this MB representation are given in App. A. As explained there, the pole in \(s\) from the denominator \(1/(s+p-1)\) should be interpreted as a left pole, in the sense that the integration contour should go around this pole from the right side.
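As a quick numerical sanity check of (39) (our own, not from the text), one can parameterize the contour as \(s=c+\mathrm{i}t\) with \(c>\max(0,1-p)\), so that it passes to the right of all left poles, and compare with mpmath's generalized exponential integral `expint`:

```python
# Numeric check (ours) of the MB representation (39) of E_p(z).
import mpmath as mp

def Ep_via_MB(p, z, c=0.5):
    # Contour s = c + i t, so ds/(2 pi i) -> dt/(2 pi). Here c = 0.5 lies to
    # the right of the poles of Gamma(s) (at s = 0, -1, ...) and of the left
    # pole s = 1 - p from the denominator (requires p > 0.5 for this choice).
    f = lambda t: mp.gamma(c + 1j*t) * z**(-(c + 1j*t)) / (c + 1j*t + p - 1)
    return mp.quad(f, [-mp.inf, mp.inf]) / (2 * mp.pi)

p, z = 2.5, mp.mpf(3)
print(Ep_via_MB(p, z))   # real part matches, imaginary part ~ 0
print(mp.expint(p, z))   # mpmath's E_p(z)
```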
Now, using (39) in (38), we get:

\[\mathcal{R}\Big{[}\widetilde{\mathcal{C}}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2})\Big{]}=(\mathrm{i}E_{1})^{q_{12}}\int_{s_{2}}\frac{\Gamma(s_{2})(\mathrm{i}E_{2})^{-s_{2}}}{s_{2}-q_{2}}\int_{-\infty}^{0}\mathrm{d}\tau_{1}\,(-\tau_{1})^{q_{12}-1-s_{2}}e^{\mathrm{i}E_{1}\tau_{1}}. \tag{40}\]
Then, the \(\tau_{1}\) integral is trivial, which is simply given by \(\mathcal{C}_{q_{12}-s_{2}}(E_{1})=(\mathrm{i}E_{1})^{-q_{12}+s_{2}}\Gamma(q_{ 12}-s_{2})\). So, finishing the \(\tau_{1}\) integral, we get:
\[\mathcal{R}\Big{[}\widetilde{\mathcal{C}}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2}) \Big{]}=\int_{s_{2}}\frac{\Gamma[s_{2},q_{12}-s_{2}]\varrho_{21}^{-s_{2}}}{s_{ 2}-q_{2}}. \tag{41}\]
Now it remains to finish the Mellin integral over \(s_{2}\). Given that \(\varrho_{21}=E_{2}/E_{1}<1\), we should close the Mellin contour from the left side and collect the residues of all left poles. There are two sets of left poles, one at \(s_{2}=-n_{2}\) with \(n_{2}=0,1,2,\cdots\), which is from the \(\Gamma\)-factor \(\Gamma(s_{2})\), and the other at \(s_{2}=q_{2}\) coming from the denominator. Collecting the residues at these poles, we get:
\[\mathcal{R}\Big{[}\widetilde{\mathcal{C}}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2}) \Big{]}=\sum_{n_{2}=0}^{\infty}\,\frac{(-1)^{n_{2}+1}\Gamma[n_{2}+q_{12}]}{(n _{2}+q_{2})}\,\frac{\varrho_{21}^{n_{2}}}{n_{2}!}+\frac{\Gamma[q_{1},q_{2}]}{ \varrho_{21}^{q_{2}}}. \tag{42}\]
Now, we recognize that the last term without any summation is the product of two one-site families:
\[\frac{1}{(\mathrm{i}E_{1})^{q_{12}}}\frac{\Gamma[q_{1},q_{2}]}{\varrho_{21}^{ q_{2}}}=\frac{\Gamma(q_{1})}{(\mathrm{i}E_{1})^{q_{1}}}\frac{\Gamma(q_{2})}{( \mathrm{i}E_{2})^{q_{2}}}=\mathcal{C}_{q_{1}}(E_{1})\mathcal{C}_{q_{2}}(E_{2}). \tag{43}\]
Then, given the relation:
\[\mathcal{R}\Big{[}\mathcal{C}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2})\Big{]}+ \mathcal{C}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2})=\mathcal{C}_{q_{1}}(E_{1}) \mathcal{C}_{q_{2}}(E_{2}), \tag{44}\]
we see that the original family integral (36) is:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2})=\sum_{n_{2}=0}^{ \infty}\frac{(-1)^{n_{2}}\Gamma[n_{2}+q_{12}]}{(n_{2}+q_{2})}\,\frac{\varrho_{ 21}^{n_{2}}}{n_{2}!}. \tag{45}\]
This is exactly what we would get using the rule (22). Incidentally, the above summation can be done directly, and the result is the well-known Gauss hypergeometric function:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}}(\widehat{E}_{1},E_{2})={}_{2}\mathcal{F}_{1}\left[\begin{matrix}q_{2},q_{12}\\ q_{2}+1\end{matrix}\,\middle|\,-\varrho_{21}\right]. \tag{46}\]
Here we use the dressed version \({}_{2}\mathcal{F}_{1}\) instead of the original hypergeometric function \({}_{2}F_{1}\) for notational simplicity. The dressed hypergeometric functions are defined in App. A.
Now, had we chosen to expand the integral (36) in terms of \(1/E_{2}\), we would get:
\[\mathcal{C}_{q_{1}q_{2}}(E_{1},\widehat{E}_{2}) =\frac{\Gamma[q_{1},q_{2}]}{(\mathrm{i}E_{1})^{q_{1}}(\mathrm{i}E_{2})^{q_{2}}}-\frac{1}{(\mathrm{i}E_{2})^{q_{12}}}\sum_{n_{1}=0}^{\infty}\frac{\Gamma[n_{1}+q_{12}]}{(n_{1}+q_{1})}\,\frac{(-\varrho_{12})^{n_{1}}}{n_{1}!}\] \[=\frac{\Gamma[q_{1},q_{2}]}{(\mathrm{i}E_{1})^{q_{1}}(\mathrm{i}E_{2})^{q_{2}}}-\frac{1}{(\mathrm{i}E_{2})^{q_{12}}}\,{}_{2}\mathcal{F}_{1}\left[\begin{matrix}q_{1},q_{12}\\ q_{1}+1\end{matrix}\,\middle|\,-\varrho_{12}\right]. \tag{47}\]
Clearly, the series expression in the first line of (47) has a different region of convergence from the series expression in (45). However, the two expressions are just two power-series expansions of the same function in two different limits, one at \(E_{2}/E_{1}\to 0\) and the other at \(E_{1}/E_{2}\to 0\). This becomes more transparent after summing both series into hypergeometric functions. Indeed, equating (46) with the second line of (47), we just get a transformation-of-variable formula for the hypergeometric function. Thus, our procedure provides a convenient way to derive many transformation-of-variable formulae for hypergeometric functions, which is particularly useful for more complicated hypergeometric series, as we shall see below.
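As a concrete illustration (ours), the equality of (46) and (47) can be tested with mpmath's `hyp2f1`, which analytically continues the Gauss function beyond the unit disk. We deliberately take \(E_{2}>E_{1}\), where the series (45) alone would diverge:

```python
# Numeric check (ours) that (46) and (47) are the same function.
import mpmath as mp

def dressed_2f1(a, b, c, z):
    """Dressed Gauss function: Gamma(a)Gamma(b)/Gamma(c) * 2F1(a,b;c;z)."""
    return mp.gamma(a) * mp.gamma(b) / mp.gamma(c) * mp.hyp2f1(a, b, c, z)

q1, q2 = mp.mpf('1.3'), mp.mpf('0.8')
E1, E2 = mp.mpf(3), mp.mpf(5)      # E2 > E1
q12 = q1 + q2
I = mp.mpc(0, 1)

lhs = dressed_2f1(q2, q12, q2 + 1, -E2/E1) / (I*E1)**q12        # from (46)
rhs = (mp.gamma(q1) * mp.gamma(q2) / ((I*E1)**q1 * (I*E2)**q2)
       - dressed_2f1(q1, q12, q1 + 1, -E1/E2) / (I*E2)**q12)    # from (47)
print(lhs)
print(rhs)   # agrees with lhs to working precision
```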
Three-site family. Next we consider a slightly more nontrivial case with three sites. There is only one topology at the tree level with 3 sites. However, after including the time ordering, there are two independent possibilities, depending on whether the earliest site is at an end or in the middle. These two possibilities are shown in Fig. 4. Again, by construction, the earliest site is chosen to be the maximal-energy site. So, for the case in Fig. 4(a), we have:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}}(\widehat{E}_{1},E_{2},E_{3})=( \mathrm{i}E_{1})^{q_{123}}\int_{-\infty}^{0}\prod_{i=1}^{3}\Big{[}\mathrm{d} \tau_{i}\,(-\tau_{i})^{q_{i}-1}e^{\mathrm{i}E_{i}\tau_{i}}\Big{]}\theta(\tau_{ 3}-\tau_{2})\theta(\tau_{2}-\tau_{1}). \tag{48}\]
We again start from the completely reversed integral:
\[\mathcal{R}\Big{[}\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}}(\widehat{E}_{1},E _{2},E_{3})\Big{]}=(\mathrm{i}E_{1})^{q_{123}}\int_{-\infty}^{0}\prod_{i=1}^{3 }\Big{[}\mathrm{d}\tau_{i}\,(-\tau_{i})^{q_{i}-1}e^{\mathrm{i}E_{i}\tau_{i}} \Big{]}\theta(\tau_{1}-\tau_{2})\theta(\tau_{2}-\tau_{3}). \tag{49}\]
Figure 4: Two independent family integrals at the 3-site level.

Now, we can repeat the above strategy and finish the three layers of time integrals in the order of \(\tau_{3},\tau_{2},\tau_{1}\). The first two layers produce exponential integrals, which can then be represented as Mellin integrals. The last layer is again a single-site integral, which can be finished directly. Here we show the results after finishing each layer of the time integral and taking the MB representation for the exponential integrals:
\[\mathcal{R}\Big{[}\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}}( \widehat{E}_{1},E_{2},E_{3})\Big{]} \tag{50}\] \[= (\mathrm{i}E_{1})^{q_{123}}\int_{s_{3}}\frac{\Gamma(s_{3})( \mathrm{i}E_{3})^{-s_{3}}}{s_{3}-q_{3}}\int_{-\infty}^{0}\mathrm{d}\tau_{1} \mathrm{d}\tau_{2}\,(-\tau_{1})^{q_{1}-1}(-\tau_{2})^{q_{23}-1-s_{3}}e^{\mathrm{ i}(E_{1}\tau_{1}+E_{2}\tau_{2})}\theta(\tau_{1}-\tau_{2})\] \[= (\mathrm{i}E_{1})^{q_{123}}\int_{s_{2},s_{3}}\frac{\Gamma[s_{2},s _{3}](\mathrm{i}E_{2})^{-s_{2}}(\mathrm{i}E_{3})^{-s_{3}}}{(s_{23}-q_{23})(s_{ 3}-q_{3})}\int_{-\infty}^{0}\mathrm{d}\tau_{1}\,(-\tau_{1})^{q_{123}-1-s_{23}} e^{\mathrm{i}E_{1}\tau_{1}}\] \[= \int_{s_{2},s_{3}}\frac{\Gamma[s_{2},s_{3},q_{123}-s_{23}]\varrho _{21}^{-s_{2}}\varrho_{31}^{-s_{3}}}{(s_{23}-q_{23})(s_{3}-q_{3})}.\]
We start to observe a pattern here: Let the maximal-energy site be \(\tau_{i}\). When carrying out any but the last layer of time integrals, say \(\tau_{j}\) with \(j\neq i\), we are effectively generating a new layer of Mellin integral with Mellin variable \(s_{j}\), a pole-generating factor \(\Gamma(s_{j})/(\widetilde{s}_{j}-\widetilde{q}_{j})\), and a power of energy ratio \(\varrho_{ji}^{-s_{j}}\). Here \(\widetilde{s}_{j}\) is the sum of all Mellin variables assigned to the site \(j\) and its descendant, and \(\widetilde{q}_{j}\) is likewise defined.
Then it remains to carry out the Mellin integrals. Unlike the previous case, now we encounter pole-carrying factors involving more than one Mellin variable. In the current case, it is the denominator \(1/(s_{23}-q_{23})\). To avoid any potential complication of such poles, our strategy is to perform the Mellin integrals in the "anti-chronological" order. In the current case, we integrate out \(s_{3}\) first, by collecting poles _only_ from \(\Gamma(s_{3})/(s_{3}-q_{3})\). Only after this is done do we perform the \(s_{2}\)-integral, by collecting poles from \(\Gamma(s_{2})/(s_{23}-q_{23})\). By this time, the \(s_{3}\) variable in this factor has already been set to its pole values. Thus, we never need to deal directly with poles involving a sum of several Mellin variables. Finishing the Mellin integral in this way, we get:
\[\frac{1}{(\mathrm{i}E_{1})^{q_{123}}}\mathcal{R}\Big{[}\widetilde {\mathcal{C}}_{q_{1}q_{2}q_{3}}(\widehat{E}_{1},E_{2},E_{3})\Big{]} =\frac{1}{(\mathrm{i}E_{1})^{q_{123}}}\sum_{n_{2},n_{3}=0}^{\infty }\frac{(-1)^{n_{23}}\Gamma[n_{23}+q_{123}]}{(n_{23}+q_{23})(n_{3}+q_{3})}\, \frac{\varrho_{21}^{n_{2}}}{n_{2}!}\,\frac{\varrho_{31}^{n_{3}}}{n_{3}!}\] \[\quad-\frac{\Gamma(q_{1})}{(\mathrm{i}E_{1})^{q_{1}}}\frac{1}{( \mathrm{i}E_{2})^{q_{23}}}\sum_{n_{3}=0}^{\infty}\frac{(-1)^{n_{3}}\Gamma[n_{ 3}+q_{23}]}{(n_{3}+q_{3})}\,\frac{\varrho_{32}^{n_{3}}}{n_{3}!}\] \[\quad-\frac{\Gamma(q_{3})}{(\mathrm{i}E_{3})^{q_{3}}}\frac{1}{( \mathrm{i}E_{1})^{q_{12}}}\sum_{n_{2}=0}^{\infty}\frac{(-1)^{n_{2}}\Gamma[n_{ 2}+q_{12}]}{(n_{2}+q_{2})}\,\frac{\varrho_{21}^{n_{2}}}{n_{2}!}\] \[\quad+\frac{\Gamma(q_{1})}{(\mathrm{i}E_{1})^{q_{1}}}\frac{\Gamma (q_{2})}{(\mathrm{i}E_{2})^{q_{2}}}\frac{\Gamma(q_{3})}{(\mathrm{i}E_{3})^{q_ {3}}}. \tag{51}\]
Here we have restored all the dimensionful energy factors to make clear the following point: the Mellin integration effectively implements the identity \(\theta(\tau_{i}-\tau_{j})=1-\theta(\tau_{j}-\tau_{i})\) in a line-by-line fashion. Thus, with \(N\) lines in a family, we will get \(2^{N}\) terms. All but one of them are factorized. There is a unique unfactorized term with all lines reversed. In the current example it is the first term on the right hand side of (51). This is nothing but the original family integral. Thus:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}}(\widehat{E}_{1},E_{2},E_{3})=\sum_{n_{ 2},n_{3}=0}^{\infty}\frac{(-1)^{n_{23}}\Gamma[n_{23}+q_{123}]}{(n_{23}+q_{23})(n _{3}+q_{3})}\,\frac{\varrho_{21}^{n_{2}}}{n_{2}!}\,\frac{\varrho_{31}^{n_{3}} }{n_{3}!}. \tag{52}\]
Once again, this is exactly what we would get by applying the simple formula (22). It seems to us that this series does not sum to any widely known special function in general, but it can be represented as a (dressed) Kampé de Fériet function, whose definition is collected in App. A:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}}(\widehat{E}_{1},E_{2},E_{3})={}^{2+1}\mathcal{F}_{1+1}\left[\begin{matrix}q_{123},q_{23}\\ q_{23}+1\end{matrix}\,\middle|\,\begin{matrix}\text{--}\,,q_{3}\\ \text{--}\,,q_{3}+1\end{matrix}\,\middle|\,-\varrho_{21},-\varrho_{31}\right]. \tag{53}\]
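A quick cross-check of (52) (ours): for \(q_{1}=q_{2}=q_{3}=1\), the three time integrals in (48) are elementary and give \(\widetilde{\mathcal{C}}_{111}(\widehat{E}_{1},E_{2},E_{3})=E_{1}^{2}/[(E_{1}+E_{2})(E_{1}+E_{2}+E_{3})]\), which the truncated double series reproduces whenever \(E_{2}+E_{3}<E_{1}\):

```python
# Numeric check (ours) of the 3-site chain series (52) at q1 = q2 = q3 = 1.
from math import gamma, factorial

def chain3(q1, q2, q3, E1, E2, E3, nmax=40):
    return sum((-1)**(n2 + n3) * gamma(n2 + n3 + q1 + q2 + q3)
               / ((n2 + n3 + q2 + q3) * (n3 + q3))
               * (E2/E1)**n2 * (E3/E1)**n3 / (factorial(n2) * factorial(n3))
               for n2 in range(nmax) for n3 in range(nmax))

E1, E2, E3 = 10.0, 2.0, 1.0
print(chain3(1, 1, 1, E1, E2, E3))            # series: ~0.6410256
print(E1**2 / ((E1+E2) * (E1+E2+E3)))         # exact: 100/156
```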
The lesson we learned from the above example is the following: to find the answer to a given family integral \(\mathcal{C}\), all we need to do is to compute another integral \(\mathcal{R}[\mathcal{C}]\) with all time orderings completely reversed. We compute \(\mathcal{R}[\mathcal{C}]\) layer by layer. Each step generates an exponential integral, of which we take the MB representation. The last layer of the time integral is done directly, and we are left with an \((N-1)\)-fold Mellin integral. We finish the Mellin integral by retaining poles only from the \(\Gamma\)-factors. The result is then automatically a sign factor \((-1)^{N-1}\) times the original family \(\mathcal{C}\).
With this lesson learned, we can bypass all steps detailed above and write down the answers for arbitrary families. Now, let us go on to consider the three-site family in Fig. 4(b), which corresponds to the following integral:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}}(E_{1},\widehat{E}_{2},E_{3})=( \mathrm{i}E_{2})^{q_{123}}\int_{-\infty}^{0}\prod_{i=1}^{3}\Big{[}\mathrm{d} \tau_{i}\,(-\tau_{i})^{q_{i}-1}e^{\mathrm{i}E_{i}\tau_{i}}\Big{]}\theta(\tau_{ 1}-\tau_{2})\theta(\tau_{3}-\tau_{2}). \tag{54}\]
The result after finishing all three layers of time integrals for the _reversed diagram_\(\mathcal{R}[\widetilde{\mathcal{C}}]\) is:
\[\mathcal{R}\Big{[}\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}}(E_{1},\widehat{E} _{2},E_{3})\Big{]}=\int_{s_{1},s_{3}}\frac{\Gamma[s_{1},s_{3},q_{123}-s_{13}] \varrho_{12}^{-s_{1}}\varrho_{32}^{-s_{3}}}{(s_{1}-q_{1})(s_{3}-q_{3})}. \tag{55}\]
Then, we finish the Mellin integral by picking up poles in \(\Gamma[s_{1},s_{3}]\)_only_. Multiplying the result by a trivial sign factor of \((-1)^{3-1}=1\), we get the original family:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}}(E_{1},\widehat{E}_{2},E_{3})=\sum_{ n_{1},n_{3}=0}^{\infty}\frac{(-1)^{n_{13}}\Gamma[n_{13}+q_{123}]}{(n_{1}+q_{1})(n_{ 3}+q_{3})}\,\frac{\varrho_{12}^{n_{1}}}{n_{1}!}\frac{\varrho_{32}^{n_{3}}}{n_{ 3}!}. \tag{56}\]
Again, it agrees with what we would get by applying (22). Incidentally, the above two-fold hypergeometric series belongs to the well-known Appell series, which can be summed into the (dressed) Appell \(F_{2}\)-function:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}}(E_{1},\widehat{E}_{2},E_{3})=\mathcal{F}_{2}\left[q_{123}\,\middle|\,\begin{matrix}q_{1},q_{3}\\ 1+q_{1},1+q_{3}\end{matrix}\,\middle|\,-\varrho_{12},-\varrho_{32}\right]. \tag{57}\]
The definition of \(\mathcal{F}_{2}\) is collected in App. A.
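As a check of (56) and (57) (our own sketch): for \(q_{1}=q_{2}=q_{3}=1\), direct integration of (54) gives \(\widetilde{\mathcal{C}}_{111}=\frac{E_{2}^{3}}{E_{1}E_{3}}\big[\frac{1}{E_{2}}-\frac{1}{E_{12}}-\frac{1}{E_{23}}+\frac{1}{E_{123}}\big]\). The last lines assume mpmath provides an Appell \(F_{2}\) implementation with signature `appellf2(a, b1, b2, c1, c2, x, y)`, which we use to test the dressed form (57):

```python
# Numeric check (ours) of the series (56) and the Appell form (57).
from math import gamma, factorial

E1, E2, E3 = 2.0, 10.0, 1.0        # need (E1 + E3)/E2 < 1 for convergence
r12, r32 = E1/E2, E3/E2

series = sum((-1)**(n1 + n3) * gamma(n1 + n3 + 3) / ((n1 + 1) * (n3 + 1))
             * r12**n1 * r32**n3 / (factorial(n1) * factorial(n3))
             for n1 in range(60) for n3 in range(60))
exact = E2**3/(E1*E3) * (1/E2 - 1/(E1+E2) - 1/(E2+E3) + 1/(E1+E2+E3))
print(series, exact)               # both ~1.34033

try:   # optional cross-check, assuming mpmath ships appellf2
    import mpmath as mp
    # dressed F2 of (57) at q1=q2=q3=1: prefactor Gamma(3)Gamma(1)^2/Gamma(2)^2 = 2
    print(2 * mp.appellf2(3, 1, 1, 2, 2, -r12, -r32))
except (ImportError, AttributeError):
    pass
```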
Four-site family with a cubic vertex. Finally, let us look at four-site families. There are two possible tree topologies with 4 sites. One is the chain graph \(\mathcal{C}_{q_{1}q_{2}q_{3}q_{4}}^{\mathrm{(n)}}\), which is a direct generalization of the cases considered above, so we will not consider it further. On the other hand, there is a new topology with a cubic vertex, \(\mathcal{C}_{q_{1}q_{2}q_{3}q_{4}}^{\mathrm{(iso)}}\), as shown in Fig. 5.\({}^{2}\) Again, there are two independent ways to assign the maximal-energy site, either at the middle site or at a boundary site, corresponding to Fig. 5(a) and Fig. 5(b), respectively.

Footnote 2: We are borrowing the nomenclature of organic chemistry, where an unbranched linear carbon chain is dubbed normal (n), while a chain with a "cubic vertex" is dubbed isomeric (iso).
Consider Fig. 5(a) first, where the maximal-energy site is chosen to be the middle site \(\tau_{4}\). The corresponding (dimensionless) family integral is:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}q_{4}}^{(\mathrm{iso})}(E_{1},E_{2},E _{3},\widehat{E}_{4})=(\mathrm{i}E_{4})^{q_{1234}}\int_{-\infty}^{0}\prod_{i=1 }^{4}\left[\mathrm{d}\tau_{i}\left(-\tau_{i}\right)^{q_{i}-1}e^{\mathrm{i}E_{ i}\tau_{i}}\right]\theta(\tau_{1}-\tau_{4})\theta(\tau_{2}-\tau_{4})\theta(\tau_{3}- \tau_{4}). \tag{58}\]
As always, we compute the corresponding reversed time-ordering integral:
\[\mathcal{R}\Big{[}\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}q_{4}}^{(\mathrm{ iso})}(E_{1},E_{2},E_{3},\widehat{E}_{4})\Big{]}=\int_{s_{1},s_{2},s_{3}}\frac{ \Gamma[s_{1},s_{2},s_{3},q_{1234}-s_{123}]\varrho_{14}^{-s_{1}}\varrho_{24}^ {-s_{2}}\varrho_{34}^{-s_{3}}}{(s_{1}-q_{1})(s_{2}-q_{2})(s_{3}-q_{3})}. \tag{59}\]
Then, the original integral is obtained by finishing the three-fold Mellin integral in which we only collect poles from \(\Gamma[s_{1},s_{2},s_{3}]\), and multiplying the result by \((-1)^{4-1}=-1\). The result is:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}q_{4}}^{(\mathrm{iso})}(E_{1},E_{2},E _{3},\widehat{E}_{4})=\sum_{n_{1},n_{2},n_{3}=0}^{\infty}\frac{(-1)^{n_{123}} \Gamma[n_{123}+q_{1234}]}{(n_{1}+q_{1})(n_{2}+q_{2})(n_{3}+q_{3})}\frac{ \varrho_{14}^{n_{1}}}{n_{1}!}\frac{\varrho_{24}^{n_{2}}}{n_{2}!}\frac{\varrho _{34}^{n_{3}}}{n_{3}!}. \tag{60}\]
This three-variable series is not covered by the commonly known special functions. It is, however, given by the so-called (dressed) Lauricella \(F_{A}\) function:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}q_{4}}^{(\mathrm{iso})}(E_{1},E_{2},E_{3},\widehat{E}_{4})=\mathcal{F}_{A}\left[q_{1234}\,\middle|\,\begin{matrix}q_{1},q_{2},q_{3}\\ q_{1}+1,q_{2}+1,q_{3}+1\end{matrix}\,\middle|\,-\varrho_{14},-\varrho_{24},-\varrho_{34}\right]. \tag{61}\]
The definition of this function is collected in App. A.
Next let us look at Fig. 5(b) where the maximal-energy site is on the side. We take it to be \(E_{1}\). Then, the corresponding family integral is given by:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}q_{4}}^{(\mathrm{iso})}( \widehat{E}_{1},E_{2},E_{3},E_{4})=(\mathrm{i}E_{1})^{q_{1234}}\int_{-\infty} ^{0}\prod_{i=1}^{4}\left[\mathrm{d}\tau_{i}\left(-\tau_{i}\right)^{q_{i}-1}e^ {\mathrm{i}E_{i}\tau_{i}}\right]\theta(\tau_{2}-\tau_{4})\theta(\tau_{3}-\tau _{4})\theta(\tau_{4}-\tau_{1}). \tag{62}\]
Figure 5: Two independent family integrals of 4-site graphs with a cubic vertex.
Omitting the details of the intermediate steps, we directly provide the final answer to this integral:
\[\widetilde{\mathcal{C}}_{q_{1}q_{2}q_{3}q_{4}}^{(\mathrm{iso})}(\widehat{E}_{1},E _{2},E_{3},E_{4})=\sum_{n_{2},n_{3},n_{4}=0}^{\infty}\frac{(-1)^{n_{234}}\Gamma [n_{234}+q_{1234}]}{(n_{2}+q_{2})(n_{3}+q_{3})(n_{234}+q_{234})}\,\frac{\varrho_ {21}^{n_{2}}}{n_{2}!}\,\frac{\varrho_{31}^{n_{3}}}{n_{3}!}\,\frac{\varrho_{41 }^{n_{4}}}{n_{4}!}. \tag{63}\]
### General family integrals
By now we have examined a sufficient number of examples, and it is clear why we need the family-tree decomposition: carrying out the nested integrals requires a partial order of the nested time variables, and this partial order can always be achieved by the family-tree decomposition. Once we have a partially ordered integral, we can always carry out the completely reversed integral, starting from the originally latest sites (the earliest sites in the reversed integral), then their mothers, then grandmothers, etc., until the last layer, which is the maximal-energy site. In this way, a full derivation of the general formula (22) becomes a matter of mathematical induction. Below we complete this proof.
We begin with a general partially ordered family with \(N\) sites, \(\mathcal{C}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N})\), where we assume that the maximal energy site is \(\tau_{1}\). Its integral representation is given in (21). As in the previous section, we work with the completely reversed integral \(\mathcal{R}[\mathcal{C}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N})]\), and we integrate every time variable from \(-\infty\) to the time variable of her mother.
Suppose that we have finished all the time integrals for the descendants of the site \(\tau_{j}\), and now we want to finish the time integral at \(\tau_{j}\). Our induction assumption is that, after all the descendants of \(\tau_{j}\) are integrated out, the integral over the variable \(\tau_{j}\) has the following form:
\[\mathcal{I}_{j}(\tau_{M})=\int_{-\infty}^{\tau_{M}}\mathrm{d}\tau_{j}\,(-\tau _{j})^{\widetilde{q}_{j}-1}e^{\mathrm{i}E_{j}\tau_{j}}\prod_{i\in\mathfrak{D}( j)}\int_{s_{i}}\frac{\Gamma[s_{i}](\mathrm{i}E_{i})^{-s_{i}}}{\widetilde{s}_{i}- \widetilde{q}_{i}}(-\tau_{j})^{-s_{i}}, \tag{64}\]
where \(\tau_{M}\) is the time variable of \(\tau_{j}\)'s mother, and \(\mathfrak{D}(j)\) denotes the set of labels for all \(\tau_{j}\)'s descendants. Now, finishing the \(\tau_{j}\) integral, we get:
\[\mathcal{I}_{j}(\tau_{M}) =\prod_{i\in\mathfrak{D}(j)}\bigg{[}\int_{s_{i}}\frac{\Gamma[s_{i}](\mathrm{i}E_{i})^{-s_{i}}}{\widetilde{s}_{i}-\widetilde{q}_{i}}\bigg{]}(-\tau_{M})^{\widetilde{q}_{j}-\sum s_{i}}\mathrm{E}_{1-\widetilde{q}_{j}+\sum s_{i}}(-\mathrm{i}E_{j}\tau_{M})\] \[=\prod_{i\in\mathfrak{D}(j)}\bigg{[}\int_{s_{i}}\frac{\Gamma[s_{i}](\mathrm{i}E_{i})^{-s_{i}}}{\widetilde{s}_{i}-\widetilde{q}_{i}}\bigg{]}\int_{s_{j}}\frac{\Gamma(s_{j})(\mathrm{i}E_{j})^{-s_{j}}}{s_{j}-\widetilde{q}_{j}+\sum\limits_{i\in\mathfrak{D}(j)}s_{i}}(-\tau_{M})^{\widetilde{q}_{j}-s_{j}-\sum s_{i}}\] \[=(-\tau_{M})^{\widetilde{q}_{j}}\prod_{i\in\{j\}\cup\mathfrak{D}(j)}\bigg{[}\int_{s_{i}}\frac{\Gamma[s_{i}](\mathrm{i}E_{i})^{-s_{i}}}{\widetilde{s}_{i}-\widetilde{q}_{i}}(-\tau_{M})^{-s_{i}}\bigg{]}. \tag{65}\]
Here we have used the fact that \(s_{j}+\sum\limits_{i\in\mathfrak{D}(j)}s_{i}=\widetilde{s}_{j}\). Also, we have abbreviated \(\sum\limits_{i\in\mathfrak{D}(j)}s_{i}\) as \(\sum s_{i}\) when it appears as an upper or lower index.
Now, to go one step further, we should finish the time integral over \(\tau_{M}\), which we denote as \(\mathcal{I}_{M}(\tau_{G})\), where \(\tau_{G}\) is the time variable of \(\tau_{j}\)'s grandmother. To this end, we take the product of the above integrals \(\mathcal{I}_{j}(\tau_{M})\) over all daughters \(\tau_{j}\) of \(\tau_{M}\). Then, together with \(\tau_{M}\)'s own factor \((-\tau_{M})^{q_{M}-1}e^{\mathrm{i}E_{M}\tau_{M}}\),
we get:
\[\mathcal{I}_{M}(\tau_{G})= \int_{-\infty}^{\tau_{G}}\mathrm{d}\tau_{M}\,(-\tau_{M})^{q_{M}-1}e^{\mathrm{i}E_{M}\tau_{M}}\prod_{j\in\mathrm{daughters\ of\ }\tau_{M}}(-\tau_{M})^{\widetilde{q}_{j}}\prod_{i\in\{j\}\cup\mathfrak{D}(j)}\Bigg{[}\int_{s_{i}}\frac{\Gamma[s_{i}](\mathrm{i}E_{i})^{-s_{i}}}{\widetilde{s}_{i}-\widetilde{q}_{i}}(-\tau_{M})^{-s_{i}}\Bigg{]}\] \[= \int_{-\infty}^{\tau_{G}}\mathrm{d}\tau_{M}\,(-\tau_{M})^{\widetilde{q}_{M}-1}e^{\mathrm{i}E_{M}\tau_{M}}\prod_{j\in\mathfrak{D}(M)}\Bigg{[}\int_{s_{j}}\frac{\Gamma[s_{j}](\mathrm{i}E_{j})^{-s_{j}}}{\widetilde{s}_{j}-\widetilde{q}_{j}}(-\tau_{M})^{-s_{j}}\Bigg{]}. \tag{66}\]
This is identical to (64) upon a "generation shift" \(j\to M\) and \(M\to G\). So we have shown that the original induction assumption (64) persists to all generations as long as it holds at one generation. On the other hand, it is trivial to check that the induction assumption holds for the initial step, i.e., at any site who has no descendant. Thus, we have proved that the induction assumption (64) holds for all sites. In particular, (64) holds for the maximal energy site \(\tau_{1}\) if we take \(\tau_{j}=\tau_{1}\) and \(\tau_{M}=0\). Then, completing this final layer of time integral over \(\tau_{1}\), we get, for the whole reversed family,
\[\mathcal{R}\Big{[}\mathcal{C}_{q_{1}\cdots q_{N}}(\widehat{E}_{1 },E_{2},\cdots,E_{N})\Big{]}= \int_{-\infty}^{0}\mathrm{d}\tau_{1}\,(-\tau_{1})^{\widetilde{q} _{1}-1}e^{\mathrm{i}E_{1}\tau_{1}}\prod_{i=2}^{N}\int_{s_{i}}\frac{\Gamma[s_{i }](\mathrm{i}E_{i})^{-s_{i}}}{\widetilde{s}_{i}-\widetilde{q}_{i}}(-\tau_{1}) ^{-s_{i}}\] \[= \prod_{i=2}^{N}\int_{s_{i}}\frac{\Gamma[s_{i}](\mathrm{i}E_{i})^{ -s_{i}}}{\widetilde{s}_{i}-\widetilde{q}_{i}}\,\frac{\Gamma[q_{1\cdots N}-s_{2 \cdots N}]}{(\mathrm{i}E_{1})^{q_{1\cdots N}-s_{2\cdots N}}}\] \[= \frac{1}{(\mathrm{i}E_{1})^{q_{1\cdots N}}}\prod_{i=2}^{N}\Bigg{[} \int_{s_{i}}\frac{\Gamma[s_{i}]\varrho_{i1}^{-s_{i}}}{\widetilde{s}_{i}- \widetilde{q}_{i}}\Bigg{]}\Gamma[q_{1\cdots N}-s_{2\cdots N}]. \tag{67}\]
As shown many times in the previous subsection, the original family \(\mathcal{C}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N})\) is recovered by picking up all poles from \(\Gamma[s_{i}]\), and including an overall factor \((-1)^{N-1}\) which comes from reversing the directions of \(N-1\) bulk lines. Thus, we get:
\[\mathcal{C}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2},\cdots,E_{N})=\frac{(-1 )^{N-1}}{(\mathrm{i}E_{1})^{q_{1\cdots N}}}\sum_{n_{2},\cdots,n_{N}=0}^{\infty }\Gamma(q_{1\cdots N}+n_{2\cdots N})\prod_{i=2}^{N}\frac{(-\varrho_{i1})^{n_{i }}}{(-\widetilde{n}_{i}-\widetilde{q}_{i})n_{i}!}. \tag{68}\]
This is exactly the original family formula (22). Thus we have completed the proof.
### Alternative representation
The MB representation of a function is not unique. In the previous computations, we have chosen a relatively simple representation (39) for the exponential integral \(\mathrm{E}_{p}(z)\). This representation allows us to find simple expressions for the dimensionless family integrals as Taylor series in the energy ratios. On the other hand, there exist other MB representations which may be useful in certain cases. One example is the following partially resolved MB representation, which is particularly useful to improve the convergence of the hypergeometric series when there are several energies comparable to the maximal energy:
\[\mathrm{E}_{p}(z)=e^{-z}\int_{-\mathrm{i}\infty}^{+\mathrm{i}\infty}\frac{ \mathrm{d}s}{2\pi\mathrm{i}}\Gamma\begin{bmatrix}p+s,1+s,-s\\ p\end{bmatrix}z^{-s-1}. \tag{69}\]
This result can be derived from the MB representation of a confluent hypergeometric function, as discussed in App. A. We note that (69) is not a complete MB representation of \(\mathrm{E}_{p}(z)\), as there is an exponential factor \(e^{-z}\) left over.\({}^{3}\) As we shall see, this remaining exponential factor will help us to circumvent the convergence problem of hypergeometric series with several maximal energies. Let us take the monotonic three-site family integral (48) as an example, namely Fig. 4(a), but now we neither divide out the dimensionful factor \((\mathrm{i}E_{1})^{q_{123}}\) nor assign a maximal energy variable:

\[\mathcal{C}_{q_{1}q_{2}q_{3}}(E_{1},E_{2},E_{3})=\int_{-\infty}^{0}\prod_{i=1}^{3}\Big{[}\mathrm{d}\tau_{i}\,(-\tau_{i})^{q_{i}-1}e^{\mathrm{i}E_{i}\tau_{i}}\Big{]}\theta(\tau_{3}-\tau_{2})\theta(\tau_{2}-\tau_{1}). \tag{70}\]

Footnote 3: Incidentally, this is very similar to the partially resolved MB representation for the Whittaker function used in [89].
As before, we compute the integral with all time orderings reversed, but with the new representation (69). The result is:
\[\mathcal{R}\Big{[}\mathcal{C}_{q_{1}q_{2}q_{3}}(E_{1},E_{2},E_{3} )\Big{]}= \int_{s_{2},s_{3}}\,\Gamma\,\begin{bmatrix}1+s_{3},1-q_{3}+s_{3},- s_{3}\\ 1-q_{3}\end{bmatrix}\Gamma\,\begin{bmatrix}1+s_{2},2-q_{23}+s_{23},-s_{2}\\ 2-q_{23}+s_{3}\end{bmatrix}\] \[\times(\mathrm{i}E_{3})^{-1-s_{3}}(\mathrm{i}E_{23})^{-1-s_{2}} \frac{\Gamma[q_{123}-2-s_{23}]}{(\mathrm{i}E_{123})^{q_{123}-2-s_{23}}}. \tag{71}\]
Similar to the Mellin integrals in the previous representation, each Mellin variable \(s_{i}\) gets two sets of left poles from the \(\Gamma\) factors, one at \(s_{i}=-1-n_{i}\) (\(n_{i}=0,1,\cdots\)) from \(\Gamma(1+s_{i})\), and the other more complicated, involving both \(\widetilde{q}_{i}\) and other Mellin variables from the descendants of \(s_{i}\). We are not going to present a detailed analysis here, but only mention that, similar to the previous case, the original family integral \(\mathcal{C}_{q_{1}q_{2}q_{3}}(E_{1},E_{2},E_{3})\) is recovered by picking up poles from all \(\Gamma(1+s_{i})\) factors only and multiplying the result with an appropriate sign factor. Thus:
\[\mathcal{C}_{q_{1}q_{2}q_{3}}(E_{1},E_{2},E_{3})=\frac{1}{(\mathrm{i}E_{123})^{q_{123}}}\sum_{n_{2},n_{3}=0}^{\infty}(-1)^{n_{23}}\Gamma\,\begin{bmatrix}n_{23}+q_{123},-n_{23}-q_{23},-n_{3}-q_{3}\\ 1-n_{3}-q_{23},1-q_{3}\end{bmatrix}\varrho_{23T}^{n_{2}}\varrho_{3T}^{n_{3}}, \tag{72}\]
where we have defined \(\varrho_{23T}\equiv E_{23}/E_{123}\) and \(\varrho_{3T}\equiv E_{3}/E_{123}\). Thus we see that, instead of using the inverse of the maximal energy as the expansion parameter, in this representation we are using the inverse of the _total energy_ \(E_{123}\). Although it looks somewhat more complicated than the previous representation, it is a safer choice in certain cases, in particular in the kinematic region with several equal or comparable maximal energies. In any case, it is easy to check numerically that (72) and (52) agree with each other perfectly whenever both series converge.
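For example, the following script (ours) compares (52) and (72) at generic non-integer exponents, where the \(\Gamma\) factors in (72) stay away from their poles; the common phase \(\mathrm{i}^{-q_{123}}\) drops out of the comparison:

```python
# Numeric comparison (ours) of the two expansions (52) and (72) of the same
# 3-site chain integral, at generic non-integer exponents so that the Gamma
# factors in (72) stay away from their poles.
from math import gamma, factorial

q1, q2, q3 = 1.3, 0.7, 1.1
E1, E2, E3 = 10.0, 2.0, 1.0
q23, q123 = q2 + q3, q1 + q2 + q3
E23, E123 = E2 + E3, E1 + E2 + E3

S52 = sum((-1)**(n2 + n3) * gamma(n2 + n3 + q123) / ((n2 + n3 + q23) * (n3 + q3))
          * (E2/E1)**n2 * (E3/E1)**n3 / (factorial(n2) * factorial(n3))
          for n2 in range(40) for n3 in range(40))

S72 = sum((-1)**(n2 + n3)
          * gamma(n2 + n3 + q123) * gamma(-(n2 + n3) - q23) * gamma(-n3 - q3)
          / (gamma(1 - n3 - q23) * gamma(1 - q3))
          * (E23/E123)**n2 * (E3/E123)**n3
          for n2 in range(40) for n3 in range(40))

# The common phase i^(-q123) drops out, so the two real numbers must agree:
print(S52 / E1**q123)     # from (52)
print(S72 / E123**q123)   # from (72)
```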
The lesson here is that we can make use of the flexibility of MB representations to get different series solutions for the nested time integrals, expanded either in the inverse power of some energy variable of a given site, in the inverse power of the sum of several energy variables, or even in the inverse power of the total energy. Although the final results may look quite different, these results are just different expansions of the same function. We can thus obtain a large number of transformation-of-variable relations for these multi-variable hypergeometric functions. We leave a more systematic investigation of this topic to future works.
### Discussions
We end this section with several further comments on the nested time integrals.
Pole structure. In Sec. 2, we mentioned that the time integral in the PMB representation only contains right poles for all Mellin variables, which was proved in [92]. Now that we have explicit results for arbitrary nested time integrals, it is straightforward to check this statement. Indeed, we can rewrite our general result for the family integral (22) in the following way:
\[\widetilde{\mathcal{C}}_{q_{1}\cdots q_{N}}(\widehat{E}_{1},E_{2}\cdots,E_{N })=\sum_{n_{2},\cdots,n_{N}=0}^{\infty}\Gamma(q_{1\cdots N}+n_{2\cdots N}) \prod_{j=2}^{N}\Gamma\begin{bmatrix}\widetilde{q}_{j}+\widetilde{n}_{j}\\ \widetilde{q}_{j}+\widetilde{n}_{j}+1\end{bmatrix}\frac{(-\varrho_{j1})^{n_{j }}}{n_{j}!}. \tag{73}\]
Then it is clear that all the exponents \(q_{i}\) (\(i=1,\cdots,N\)) have positive coefficients when appearing in the arguments of the \(\Gamma\) factors. Now, if we use (19) to rewrite all \(q\)'s in terms of Mellin variables \(s\), we will see that all Mellin variables \(s\) have _negative_ coefficients when appearing in the arguments of all \(\Gamma\) factors in (73). So, we have confirmed with our explicit results that nested time integrals only have right poles in all Mellin variables.
With the explicit results for the nested time integrals, it is also easy to confirm that the Mellin integrand for any tree graph in the PMB representation is well balanced for all Mellin variables, an important fact for the computation of Mellin integrals, as mentioned in Sec. 2. To see this, we only need to derive (17) from our result.
From our result for the time integral in (22), it is trivial to see that the exponents \(q_{i}\) (\(i=1,\cdots,N\)) appear in the \(\Gamma\) factor \(\Gamma(q_{1\cdots N}+n_{2\cdots N})\) as a total sum. Then, let us look at (19), which says that the value of \(q_{\ell}\) at Vertex \(\ell\) receives contributions from all Mellin variables ending at this vertex. Note that, by construction, every Mellin variable is associated to one and only one vertex. Thus, (19) tells us that summing over all \(q_{\ell}\) is equivalent to summing over all Mellin variables. As a result, the argument of the \(\Gamma\) factor \(\Gamma(q_{1\cdots N}+n_{2\cdots N})\) becomes:
\[q_{1\cdots N}+n_{2\cdots N}=-2\sum_{i=1}^{I}(s_{i}+\bar{s}_{i})+p_{1\cdots N} +n_{2\cdots N}+N, \tag{74}\]
which agrees nicely with the general structure in (17). So, we see that the Mellin variables are indeed balanced.
Hard limits. It is interesting to look at different kinematic limits of our result (22). First, it is simple to take a hard limit where one energy \(E_{1}\) is much greater than all the other energies. Obviously, in this limit, we should work with the expression where \(E_{1}\) is chosen as the maximal energy site. Then, in the series expansion (22), only the leading term with \(n_{2}=\cdots=n_{N}=0\) survives the limit. So we get:
\[\lim_{E_{1}\to\infty}\mathcal{C}_{q_{1}\cdots q_{N}}(E_{1},E_{2},\cdots,E_{N })=\frac{\Gamma(q_{1\cdots N})}{(\mathrm{i}E_{1})^{q_{1\cdots N}}}\prod_{j=2} ^{N}\frac{1}{\widetilde{q}_{j}}. \tag{75}\]
Apart from the simple numerical factor \(\prod\limits_{j}1/\widetilde{q}_{j}\), this is very similar to the result for the one-site family \(\mathcal{C}_{q}(E)=\Gamma(q)/(\mathrm{i}E)^{q}\) with \(E=E_{1\cdots N}\) and \(q=q_{1\cdots N}\). Here we have used the fact that \(E_{1\cdots N}\to E_{1}\)
in the \(E_{1}\to\infty\) limit. So, in this hard limit, the time integral behaves as if we have pinched all \(N\) nested vertices together, with all exponents \(q_{i}\) (\(i=1,\cdots,N\)) summed. Thus, this hard limit can in a sense be thought of as an EFT limit, where all internal lines in the family integral shrink into local vertices. From the viewpoint of the cosmological bootstrap [63, 89], we know that the EFT part is related to the particular solution of the bootstrap equation with a local source term. This local source term originates exactly from the time-ordering part of the internal propagators. So, there is a close relation between the local EFT limit and the nested integrals, and it is not surprising to get (75).
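The hard limit is easy to test on the two-site series (45): as \(\varrho_{21}\to 0\), only the \(n_{2}=0\) term survives, giving \(\Gamma(q_{12})/q_{2}\), in agreement with (75). A two-line check (ours):

```python
# Tiny check (ours) of the hard limit (75) on the two-site series (45):
# as rho = E2/E1 -> 0, only the n = 0 term survives, giving Gamma(q12)/q2.
from math import gamma
q1, q2, rho = 1.4, 0.9, 1e-4
series = sum((-1)**n * gamma(n + q1 + q2) / (n + q2) * rho**n / gamma(n + 1)
             for n in range(20))
print(series, gamma(q1 + q2) / q2)   # agree up to O(rho) corrections
```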
However, we can make a new interesting observation from (75): In the original SK integral (12) for a correlator, we need to sum over all SK indices, which involves all kinds of propagators with arbitrary nesting. This means that the site of \(E_{1}\) can be nested arbitrarily with other sites. Then, coupled with (75), we see that the \(E_{1}\to\infty\) limit can generate a power \(1/E_{1}^{q_{1\cdots N}}\) involving the exponents \(q_{i}\) at any other site. Note that the variable \(q_{i}\) contains the Mellin variables of internal lines ending at Site \(i\), and we see that the power \(1/E_{1}^{q_{1\cdots N}}\) can depend on the Mellin variables of any internal lines _not_ ending at Site \(1\). Thus, if we finish the Mellin integrals by picking up left poles for those Mellin variables, they can introduce noninteger powers of \(1/E_{1}\). This is exactly the source of _local signals_. The local signals have been considered mainly for single exchange graphs in previous works [79, 89]. Here we see that, in the hard limit \(E_{1}\to\infty\), the local signal from \(E_{1}\) can in principle be generated by _any_ internal massive propagators _not_ ending on Site \(1\). So, the local signal is more subtle and more complicated than the nonlocal signal. This topic will be further explored in a separate work [113].
Soft limits: internal vertices. Now let us look at an opposite limit where one or several energies approach zero, \(E_{i}\to 0\). This is a soft limit. Note that, in our general expression for the nested time integral (18), we have assigned an exponential factor \(e^{\mathrm{i}E_{i}\tau_{i}}\) for each site at time \(\tau_{i}\). This factor is generally from the bulk-to-boundary propagator of a massless or conformal scalar (or from a massless graviton if nonzero spins are considered). In realistic tree graphs, there are certainly vertices on which only bulk massive propagators end, and no bulk-to-boundary lines are attached. We call such a vertex an _internal vertex_ following [93]. Clearly, we have \(E_{i}=0\) for such a vertex. So, it is necessary to know how to take soft limits if we want to consider graphs with internal vertices.
Fortunately, our series expression for the time integral makes it very convenient to take a soft limit. For instance, suppose that we want to set \(E_{4}=0\) in the 4-site family (63), which corresponds to Fig. 5(b). The form of the hypergeometric series in (63) then allows us to set \(\varrho_{41}=E_{4}/E_{1}=0\) directly without encountering any singularities. Consequently, in the summation over \(n_{4}\), only the term with \(n_{4}=0\) survives the \(\varrho_{41}\to 0\) limit, and we get:
\[\mathcal{C}^{\mathrm{(iso)}}_{q_{1}q_{2}q_{3}q_{4}}(\widehat{E}_{1},E_{2},E_{ 3},E_{4}=0)=\sum_{n_{2},n_{3}=0}^{\infty}\frac{(-1)^{n_{23}}\Gamma[n_{23}+q_{ 1234}]}{(n_{2}+q_{2})(n_{3}+q_{3})(n_{23}+q_{234})}\,\frac{\varrho_{21}^{n_{2} }}{n_{2}!}\,\frac{\varrho_{31}^{n_{3}}}{n_{3}!}. \tag{76}\]
We take this opportunity to make a general comment on the computation of graphs with internal vertices, as briefly mentioned in Footnote 1. We illustrate the point with a concrete
example. Suppose we want to compute the following integral with \(E_{4}=0\):
\[\mathbb{T}_{q_{1}q_{2}q_{3}q_{4}}(E_{1},E_{2},E_{3})=\int_{-\infty}^{0}\prod_{i=1 }^{4}\Big{[}\mathrm{d}\tau_{i}\,(-\tau_{i})^{q_{i}-1}e^{\mathrm{i}E_{i}\tau_{i} }\Big{]}\theta(\tau_{4}-\tau_{1})\theta(\tau_{4}-\tau_{2})\theta(\tau_{4}-\tau_ {3}). \tag{77}\]
Note that we set \(E_{4}=0\) on the right hand side. Suppose we want the result for this integral with \(E_{1}\) chosen as the maximal energy. Then, according to our reduction procedure, we should proceed with the following family-tree decomposition:
\[\mathbb{T}_{q_{1}q_{2}q_{3}q_{4}}(E_{1},E_{2},E_{3}) =\mathcal{C}_{q_{1}q_{4}}(\widehat{E}_{1},0)\mathcal{C}_{q_{2}}( E_{2})\mathcal{C}_{q_{3}}(E_{3})-\mathcal{C}_{q_{1}q_{4}q_{2}}(\widehat{E}_{1},0,E_ {2})\mathcal{C}_{q_{3}}(E_{3})\] \[\quad-\mathcal{C}_{q_{1}q_{4}q_{3}}(\widehat{E}_{1},0,E_{3}) \mathcal{C}_{q_{2}}(E_{2})+\mathcal{C}_{q_{1}q_{4}q_{2}q_{3}}^{(\mathrm{iso})} (\widehat{E}_{1},0,E_{2},E_{3}). \tag{78}\]
This is shown diagrammatically in Fig. 6. (Note that the last graph in Fig. 6 is exactly the previous example in (76).) Then, applying the general formula (22) to all families here, and setting the summation variable \(n_{4}=0\), we get:
\[\mathbb{T}_{q_{1}q_{2}q_{3}q_{4}}(E_{1},E_{2},E_{3})\] \[=\frac{\Gamma[q_{14},q_{2},q_{3}]/q_{4}}{(\mathrm{i}E_{1})^{q_{1 4}}(\mathrm{i}E_{2})^{q_{2}}(\mathrm{i}E_{3})^{q_{3}}}-\bigg{\{}\frac{\Gamma[ q_{3}]}{(\mathrm{i}E_{1})^{q_{142}}(\mathrm{i}E_{3})^{q_{3}}}\sum_{n_{2}=0}^{ \infty}\frac{(-1)^{n_{2}}\Gamma[q_{142}+n_{2}]}{(n_{2}+q_{2})(n_{2}+q_{24})} \frac{\varrho_{21}^{n_{2}}}{n_{2}!}\] \[\quad+(2\leftrightarrow 3)\bigg{\}}+\frac{1}{(\mathrm{i}E_{1})^{q_{1 234}}}\sum_{n_{2},n_{3}=0}^{\infty}\frac{(-1)^{n_{23}}\Gamma[n_{23}+q_{1234}]} {(n_{2}+q_{2})(n_{3}+q_{3})(n_{23}+q_{234})}\frac{\varrho_{21}^{n_{2}}}{n_{2}!}\frac{\varrho_{31}^{n_{3}}}{n_{3}!}. \tag{79}\]
On the other hand, we can as well compute the integral (77) directly. First, we integrate out \(\tau_{1}\), \(\tau_{2}\), and \(\tau_{3}\), and the result is:
\[\mathbb{T}_{q_{1}q_{2}q_{3}q_{4}}(E_{1},E_{2},E_{3})=\prod_{i=1}^{3}\bigg{[} \int_{s_{i}}\frac{\Gamma[s_{i}](\mathrm{i}E_{i})^{-s_{i}}}{(s_{i}-q_{i})} \bigg{]}\int_{-\infty}^{0}\mathrm{d}\tau_{4}(-\tau_{4})^{q_{1234}-1-s_{123}}. \tag{80}\]
Now, the final layer integral contains no energy variable since it is from an internal vertex. Finishing this integral, we get a \(\delta\) function:
\[\int_{-\infty}^{0}\mathrm{d}\tau_{4}(-\tau_{4})^{q_{1234}-1-s_{123}}=(2\pi) \delta\big{[}\mathrm{i}(q_{1234}-s_{123})\big{]}. \tag{81}\]
Now we need to choose a maximal energy. Suppose we choose \(E_{1}\) without loss of generality. Then, we use the above \(\delta\) function to integrate out \(s_{1}\), and (80) becomes:
\[\mathbb{T}_{q_{1}q_{2}q_{3}q_{4}}(E_{1},E_{2},E_{3})=\int_{s_{2},s_{3}}\frac{ \Gamma[q_{1234}-s_{23},s_{2},s_{3}](\mathrm{i}E_{1})^{s_{23}-q_{1234}}(\mathrm{ i}E_{2})^{-s_{2}}(\mathrm{i}E_{3})^{-s_{3}}}{(q_{234}-s_{23})(s_{2}-q_{2})(s_{3}-q_{3} )}. \tag{82}\]
Figure 6: The family-tree decomposition of a 4-site nested time integral with an internal vertex. The internal vertex (Vertex 4) with \(E_{4}=0\) is marked with an orange dot.
We can finish this integral by collecting the residues of all left poles of the integrand, as before. The result exactly agrees with (79).
There are two lessons to be learnt here. First, when computing a specific nested integral, if we decide to do the time integral directly, we do not have to be as rigid as when deriving the family-tree decomposition. Instead, we can always do the nested integral so long as the integral has a partial order, and the latest (or earliest) site does not have to be the maximal-energy site. The choice of maximal energy can be delayed until we perform the Mellin integral, where we do need a maximal energy to decide how to make the series expansion. On the other hand, the advantage of the family-tree decomposition is that we do not have to compute the integral at all; so long as we follow this reduction procedure, we can write down the answer directly.
The second lesson is about the \(\delta\) function generated from an energy-less time integral, such as the one in (81). When we use (81) to integrate out a Mellin variable, say \(s_{1}\), we are effectively setting \(s_{1}=q_{1234}-s_{23}\) everywhere in the integrand. Then, all previous left (right) poles of \(s_{1}\) now become right (left) poles of \(s_{23}\). However, this left-right flip is harmless at least for tree graphs. The general rule is that, whenever we have a \(\delta\) function from an internal vertex, we choose a maximal energy among all energies connected at this vertex, and we use the \(\delta\) function to integrate out the Mellin variable associated with the maximal energy. Then, we still pick up left poles of other Mellin variables to finish the Mellin integral. In this way, we will end up with a series expansion in terms of small energy ratios, as shown in the above example.
Multiple maximal energies. Finally, there is a more difficult parameter region where the energies at more than one site become equal or comparable. This case is tractable if the equal energies are not maximal. The only tricky situation is when the equal energies are maximal, so that, in the series solution (22), there is at least one energy ratio \(\varrho_{j1}\) approaching \(1\). At this point, the series representation is likely divergent. There are several things one can try in this case. First, it is always possible to finish any one layer of summation in the general formula (22) in terms of a (generalized) hypergeometric function \({}_{p}\mathcal{F}_{q}\). Then, one can study the behavior of this hypergeometric function with argument equal to \(1\). Such a purely analytical strategy can sometimes be extended to two-variable summations as well. Second, one can switch to the partially resolved representation (69) discussed in Sec. 3.4, so that the result is expanded in powers of \(1/E_{1\cdots N}\) instead of the inverse of any single energy variable. This helps to improve the convergence of the series in many cases. As mentioned above, this is a practical way to discover many transformation-of-variable formulae for multi-variable hypergeometric functions, and thus could be particularly useful. Third, when all the previous methods fail (such as when all energies become nearly equal), we can use numerical interpolation to sew together disconnected parameter regions with convergent series expressions. We leave this somewhat mathematically oriented problem to a future work.
## 4 General Two Massive Exchanges
With the nested time integral done in the last section, in principle, we are able to compute arbitrary tree-level inflation correlators with any number of massive exchanges. In this section, we illustrate this procedure with a concrete example, namely a general tree graph with two massive exchanges, as shown in Fig. 7. We follow the diagrammatic representation of [19]. In
particular, the external (bulk-to-boundary) propagators can be either conformal scalars with \(m^{2}=2\), massless scalar fields such as the inflaton, or the massless spin-2 graviton. The conformal scalar is technically the easiest and is often used as a starting point in a theoretical analysis of inflation correlators. The cases of massless scalar and tensor modes are more relevant to CC phenomenology. On the other hand, the two internal (bulk) propagators represent two massive scalar fields, which can be either identical or distinct. There is no difficulty in generalizing the bulk lines to massive fields with spins or with helical chemical potentials, but we choose to work with scalars of the principal series \((m_{1,2}>3/2)\) for definiteness. Thus, we assign two mass parameters \(\widetilde{\nu}_{1,2}\equiv\sqrt{m_{1,2}^{2}-9/4}\) for the two lines, respectively.
### Three-vertex seed integral
Following the diagrammatic rule of SK formalism [19], one can show that the correlators in the form of Fig. 7 can in general be reduced to the following _three-vertex seed integral_:
\[\mathcal{I}^{p_{1}p_{2}p_{3}}_{\mathsf{a}_{1}\mathsf{a}_{2}\mathsf{a}_{3}}\equiv -\mathrm{i}\,\mathsf{a}_{1}\mathsf{a}_{2}\mathsf{a}_{3}\,E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3}\ell_{2}^{3}\int_{-\infty}^{0}\mathrm{d}\tau_{1}\mathrm{d}\tau_{2}\mathrm{d}\tau_{3}\,(-\tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}\] \[\times e^{\mathrm{i}\mathsf{a}_{1}E_{1}\tau_{1}+\mathrm{i}\mathsf{a}_{2}E_{2}\tau_{2}+\mathrm{i}\mathsf{a}_{3}E_{3}\tau_{3}}\,D_{\mathsf{a}_{1}\mathsf{a}_{2}}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})\,D_{\mathsf{a}_{2}\mathsf{a}_{3}}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3}). \tag{83}\]
Here, as before, \(E_{i}\) represents the total energies of _bulk-to-boundary_ lines at the Vertex \(i\), while \(\boldsymbol{\ell}_{1}\) and \(\boldsymbol{\ell}_{2}\) represent the 3-momenta of the two internal lines, respectively. As before, we have included power factors of the form \((-\tau_{i})^{p_{i}}\) to allow for different choices of external states and coupling types. Finally, the two bulk massive propagators \(D_{\mathsf{ab}}^{(\widetilde{\nu}_{1})}\) and \(D_{\mathsf{bc}}^{(\widetilde{\nu}_{2})}\) are given in (9), (10), and (11). To minimize unnecessary complications, we will take \(p_{1},p_{2},p_{3}\in\mathbb{R}\). Generalization to complex values of \(p_{i}\) is straightforward, although the expressions will be lengthier.
The \(E\) and \(\ell\) factors in front of the integral in (83) are included to make the integral dimensionless. The reason we introduce this special combination of energy variables is the following: We can define the dimensionless integration variables \(z_{i}=E_{i}\tau_{i}\) (\(i=1,2,3\)), and use the momentum ratios \(r_{1}=\ell_{1}/E_{1}\), \(r_{2}=\ell_{1}/E_{2}\), \(r_{3}=\ell_{2}/E_{2}\), \(r_{4}=\ell_{2}/E_{3}\). Then, one can easily verify:
\[\ell_{1}^{3}D_{\mathsf{ab}}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})=\widehat{D}_{\mathsf{ab}}^{(\widetilde{\nu}_{1})}(r_{1}z_{1},r_{2}z_{2}), \tag{84}\]
and a similar relation for the \(\ell_{2}\)-propagator. Then, the seed integral is manifestly dimensionless
Figure 7: A general tree graph with two massive exchanges.
and depends only on dimensionless energy ratios:
\[\mathcal{I}^{p_{1}p_{2}p_{3}}_{\mathsf{a}_{1}\mathsf{a}_{2}\mathsf{a}_{3}}(r_{1},r_{2},r_{3},r_{4})\equiv\!\int_{-\infty}^{0}\prod_{i=1}^{3}\Big{[}\mathrm{i}\mathsf{a}_{i}\,\mathrm{d}z_{i}(-z_{i})^{p_{i}}e^{\mathrm{i}\mathsf{a}_{i}z_{i}}\Big{]}\widehat{D}^{(\widetilde{\nu}_{1})}_{\mathsf{a}_{1}\mathsf{a}_{2}}(r_{1}z_{1},r_{2}z_{2})\widehat{D}^{(\widetilde{\nu}_{2})}_{\mathsf{a}_{2}\mathsf{a}_{3}}(r_{3}z_{2},r_{4}z_{3}). \tag{85}\]
There are simple kinematic constraints on the range of the \(r_{i}\) variables from momentum conservation at each vertex. For instance, let there be \(N\) external lines ending at Vertex 1 with 3-momenta \(\boldsymbol{k}_{1},\cdots,\boldsymbol{k}_{N}\). Then, by definition, \(E_{1}=k_{1}+\cdots+k_{N}\geq|\boldsymbol{k}_{1}+\cdots+\boldsymbol{k}_{N}|=\ell_{1}\). Thus we have \(0<r_{1}<1\). Similarly, we have \(0<r_{4}<1\). On the other hand, the constraints on \(r_{2}\) and \(r_{3}\) are much weaker; in general, these two ratios can take any nonnegative real value.
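The constraint \(0<r_{1}<1\) is nothing but the triangle inequality for the external momenta. A quick numerical sanity check (Python with numpy; the random momenta are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
ks = rng.normal(size=(4, 3))            # N = 4 external 3-momenta at Vertex 1
E1 = np.linalg.norm(ks, axis=1).sum()   # E1 = k1 + ... + kN (sum of magnitudes)
l1 = np.linalg.norm(ks.sum(axis=0))     # l1 = |k1 + ... + kN| (magnitude of sum)
print(l1 / E1)                          # r1: strictly below 1 for generic momenta
```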
Many correlators with two massive exchanges can be expressed in terms of \(\mathcal{I}^{p_{1}p_{2}p_{3}}_{\mathtt{a_{1}a_{2}a_{3}}}\). For example, when the external legs are conformal scalars \(\phi_{c}\) with cubic and quartic direct couplings with two massive scalars \(\sigma_{1,2}\), we can form a 6-point correlator. The Lagrangian is:
\[\mathscr{L}\supset-\frac{1}{2}a^{2}(\partial_{\mu}\phi_{c})^{2}-\frac{1}{2}a^{4}m_{c}^{2}\phi_{c}^{2}-\frac{1}{2}\sum_{i=1}^{2}\Big{[}a^{2}(\partial_{\mu}\sigma_{i})^{2}+a^{4}m_{i}^{2}\sigma_{i}^{2}+a^{4}\mu_{i}\phi_{c}^{2}\sigma_{i}\Big{]}-\frac{1}{2}a^{4}\lambda\phi_{c}^{2}\sigma_{1}\sigma_{2}. \tag{86}\]
Here \(m_{c}=\sqrt{2}\) is the mass of the conformal scalar \(\phi_{c}\), and \(m_{1,2}>3/2\) are the masses of the two scalars \(\sigma_{1,2}\), respectively. We also include two cubic couplings with dimension-1 coupling constants \(\mu_{i}\) (\(i=1,2\)) and a quartic coupling with dimensionless coupling \(\lambda\). The powers of scale factors \(a=-1/\tau\) are introduced to make the Lagrangian scale invariant, and the spacetime indices in (86) are contracted by the Minkowski metric \(\eta_{\mu\nu}\). Then, the 6-point correlator is shown in Fig. 7 with all black dots removed. With the diagrammatic rule, it is easy to see that the corresponding SK integral reduces to the seed integral in the following way:
\[\mathcal{G}(\boldsymbol{k}_{1},\cdots,\boldsymbol{k}_{6})=-\mu_{1}\mu_{2} \lambda\frac{(-\tau_{f})^{6}k_{12}k_{34}k_{56}}{64k_{1}\cdots k_{6}\ell_{1}^{ 3}\ell_{2}^{3}}\sum_{\mathtt{a_{1}a_{2},a_{3}=\pm}}\mathcal{I}^{-2,-2,-2}_{ \mathtt{a_{1}a_{2}a_{3}}}\!\left(\frac{\ell_{1}}{k_{12}},\frac{\ell_{1}}{k_{5 6}},\frac{\ell_{2}}{k_{56}},\frac{\ell_{2}}{k_{34}}\right)\!, \tag{87}\]
where we have introduced a final-time cutoff \(\tau_{f}\). We note that this expression is for a single graph \(\mathcal{G}(\boldsymbol{k}_{1},\cdots,\boldsymbol{k}_{6})\) rather than the whole correlator \(\mathcal{T}(\boldsymbol{k}_{1},\cdots,\boldsymbol{k}_{6})\) at the same perturbative order. The correlator \(\mathcal{T}\) can be obtained from the graph \(\mathcal{G}\) by including suitable permutations, which we do not spell out here.
As another example, we can consider the 4-point correlators of massless inflaton \(\varphi\) with two massive exchanges, as shown in Fig. 9. We assume that the inflaton \(\varphi\) is coupled derivatively, to respect the approximate shift symmetry of the inflaton field, and also to produce a nontrivial result.4 The relevant Lagrangian is:
Footnote 4: Directly coupled 2-point vertex (the mass mixing) is trivial in the sense that it can be rotated away by diagonalizing the mass matrix.
\[\mathscr{L}\supset-\frac{1}{2}a^{2}(\partial_{\mu}\varphi)^{2}-\sum_{i=1}^{2}\bigg{[}\frac{1}{2}a^{2}(\partial_{\mu}\sigma_{i})^{2}+\frac{1}{2}a^{4}m_{i}^{2}\sigma_{i}^{2}+a^{3}\mu_{i}\varphi^{\prime}\sigma_{i}\bigg{]}-\frac{\lambda}{2}a^{2}(\varphi^{\prime})^{2}\sigma_{1}\sigma_{2}. \tag{88}\]
Then, the 4-point graph in Fig. 9 can be expressed as:
\[\mathcal{G}(\boldsymbol{k}_{1},\cdots,\boldsymbol{k}_{4})=-\frac{\mu_{1}\mu_{2 }\lambda}{16k_{1}^{3}k_{2}^{3}k_{3}k_{4}k_{34}}\sum_{\mathtt{a_{1}a_{2},a_{3} =\pm}}\mathcal{I}^{-2,0,-2}_{\mathtt{a_{1}a_{2}a_{3}}}\!\Big{(}1,\frac{k_{1}}{ k_{34}},\frac{k_{2}}{k_{34}},1\Big{)}. \tag{89}\]
Thus the computation of the 4-point correlator (89) requires us to take the simultaneous folded limit \(r_{1}\to 1\) and \(r_{4}\to 1\), which is a bit nontrivial. We shall take this limit in the next section.
### Computing the seed integral
Now we are going to compute the seed integral (83). The computation is rather lengthy and tedious. Here we only outline the main steps, and collect more details in App. B.
The seed integral \(\mathcal{I}^{p_{1}p_{2}p_{3}}_{\mathtt{a}_{1}\mathtt{a}_{2}\mathtt{a}_{3}}(r_{1}, r_{2},r_{3},r_{4})\) in (83) has 8 SK branches, depending on the values of the 3 SK indices \(\mathtt{a}_{1},\mathtt{a}_{2},\mathtt{a}_{3}=\pm\). We only need to compute 4 integrals: \(\mathcal{I}^{p_{1}p_{2}p_{3}}_{+++}\), \(\mathcal{I}^{p_{1}p_{2}p_{3}}_{++-}\), \(\mathcal{I}^{p_{1}p_{2}p_{3}}_{+-+}\), and \(\mathcal{I}^{p_{1}p_{2}p_{3}}_{-++}\). Since all exponents \(p_{i}\) (\(i=1,2,3\)) are real, the other four can be obtained by taking complex conjugation. As a general rule, for graphs with real couplings, flipping the sign of _all_ SK indices simultaneously brings an integral to its complex conjugate.
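The bookkeeping of branches and their conjugates can be automated. Here is a minimal Python sketch that enumerates the 8 SK branches and pairs each with its complex conjugate; note that any one representative per conjugate pair works, and the set \(\{+++,++-,+-+,-++\}\) used in the text differs from the first-seen set produced below only in which member of the \((+--,-++)\) pair is kept:

```python
from itertools import product

# The three-vertex seed integral has 8 SK branches (a1, a2, a3 = +/-).
# For real p_i, flipping all SK indices at once conjugates the integral,
# so only one representative per conjugate pair must be computed.
independent, conjugate_of = [], {}
for branch in product('+-', repeat=3):
    flipped = tuple('-' if s == '+' else '+' for s in branch)
    if flipped in independent:
        conjugate_of[branch] = flipped   # obtained by complex conjugation
    else:
        independent.append(branch)
print(independent)    # 4 representatives: +++, ++-, +-+, +--
print(conjugate_of)   # the other 4 branches, mapped to their conjugates
```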
As usual, the main difficulty of the computation comes from the time orderings. Thus, our first step is to rewrite the time-ordered propagator \(D_{++}\) in a more suitable form. For definiteness, we work in the region where \(E_{2}>E_{1}\) and \(E_{2}>E_{3}\). Then, according to the discussion of the previous section, whenever we have a time ordering between \(\tau_{1}\) and \(\tau_{2}\), or between \(\tau_{3}\) and \(\tau_{2}\), we should let \(\tau_{2}\) take the earlier position. Thus, we use the following expression for the two \(D_{++}\) propagators:
\[D^{(\widetilde{\nu}_{1})}_{++}(\ell_{1};\tau_{1},\tau_{2}) = D^{(\widetilde{\nu}_{1})}_{+-}(\ell_{1};\tau_{1},\tau_{2})+ \Big{(}D^{(\widetilde{\nu}_{1})}_{-+}(\ell_{1};\tau_{1},\tau_{2})-D^{( \widetilde{\nu}_{1})}_{+-}(\ell_{1};\tau_{1},\tau_{2})\Big{)}\theta(\tau_{1}- \tau_{2}), \tag{90}\] \[D^{(\widetilde{\nu}_{2})}_{++}(\ell_{2};\tau_{2},\tau_{3}) = D^{(\widetilde{\nu}_{2})}_{-+}(\ell_{2};\tau_{2},\tau_{3})+ \Big{(}D^{(\widetilde{\nu}_{2})}_{+-}(\ell_{2};\tau_{2},\tau_{3})-D^{( \widetilde{\nu}_{2})}_{-+}(\ell_{2};\tau_{2},\tau_{3})\Big{)}\theta(\tau_{3}- \tau_{2}). \tag{91}\]
After taking this representation, the expressions for the four integrals \(\mathcal{I}^{p_{1}p_{2}p_{3}}_{+++}\), \(\mathcal{I}^{p_{1}p_{2}p_{3}}_{++-}\), \(\mathcal{I}^{p_{1}p_{2}p_{3}}_{+-+}\), and \(\mathcal{I}^{p_{1}p_{2}p_{3}}_{-++}\) can be obtained directly. Here we show one example of \(\mathcal{I}^{p_{1}p_{2}p_{3}}_{+++}\). The complete list is given in (147) to (150).
\[\mathcal{I}^{p_{1}p_{2}p_{3}}_{+++}= -\mathrm{i}E^{p_{1}+1}_{1}E^{p_{2}+1}_{2}E^{p_{3}+1}_{3}\ell^{3}_{ 1}\ell^{3}_{2}\int_{-\infty}^{0}\mathrm{d}\tau_{1}\mathrm{d}\tau_{2}\mathrm{d} \tau_{3}(-\tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{\mathrm{i}( E_{1}\tau_{1}+E_{2}\tau_{2}+E_{3}\tau_{3})}\] \[\times\Big{[}D^{(\widetilde{\nu}_{1})}_{+-}(\ell_{1};\tau_{1}, \tau_{2})+\Big{(}D^{(\widetilde{\nu}_{1})}_{-+}(\ell_{1};\tau_{1},\tau_{2})-D^ {(\widetilde{\nu}_{1})}_{+-}(\ell_{1};\tau_{1},\tau_{2})\Big{)}\theta(\tau_{1} -\tau_{2})\Big{]}\] \[\times\Big{[}D^{(\widetilde{\nu}_{2})}_{-+}(\ell_{2};\tau_{2}, \tau_{3})+\Big{(}D^{(\widetilde{\nu}_{2})}_{+-}(\ell_{2};\tau_{2},\tau_{3})-D^ {(\widetilde{\nu}_{2})}_{-+}(\ell_{2};\tau_{2},\tau_{3})\Big{)}\theta(\tau_{3} -\tau_{2})\Big{]}. \tag{92}\]
Next, we expand all the integrals and classify the terms according to whether the adjacent two time variables are time-ordered (T) or factorized (F).
\[\mathcal{I}^{p_{1}p_{2}p_{3}}_{+++} =\mathcal{I}^{\mathrm{(FF)}}_{+++}+\mathcal{I}^{\mathrm{(FT)}}_{+++}+\mathcal{I}^{\mathrm{(TF)}}_{+++}+\mathcal{I}^{\mathrm{(TT)}}_{+++}, \tag{93}\] \[\mathcal{I}^{p_{1}p_{2}p_{3}}_{++-} =\mathcal{I}^{\mathrm{(F)}}_{++-}+\mathcal{I}^{\mathrm{(T)}}_{++-},\] (94) \[\mathcal{I}^{p_{1}p_{2}p_{3}}_{-++} =\mathcal{I}^{\mathrm{(F)}}_{-++}+\mathcal{I}^{\mathrm{(T)}}_{-++}. \tag{95}\]
The explicit expressions of these integrals are given in (151)-(158). Then, we define the following 4 integrals:
\[\mathcal{I}^{\mathrm{(FF)}} =\Big{[}\mathcal{I}^{\mathrm{(FF)}}_{+++}+\mathcal{I}^{\mathrm{(F)}}_{++-}+\mathcal{I}^{\mathrm{(F)}}_{-++}+\mathcal{I}_{+-+}\Big{]}+\mathrm{c.c.}; \tag{96}\] \[\mathcal{I}^{\mathrm{(FT)}} =\Big{[}\mathcal{I}^{\mathrm{(FT)}}_{+++}+\mathcal{I}^{\mathrm{(T)}}_{-++}\Big{]}+\mathrm{c.c.};\] (97) \[\mathcal{I}^{\mathrm{(TF)}} =\Big{[}\mathcal{I}^{\mathrm{(TF)}}_{+++}+\mathcal{I}^{\mathrm{(T)}}_{++-}\Big{]}+\mathrm{c.c.};\] (98) \[\mathcal{I}^{\mathrm{(TT)}} =\mathcal{I}^{\mathrm{(TT)}}_{+++}+\mathrm{c.c.}. \tag{99}\]
After this regrouping of terms, each of the four integrals in \(\{{\cal I}^{(\rm FF)},{\cal I}^{(\rm FT)},{\cal I}^{(\rm TF)},{\cal I}^{(\rm TT)}\}\) has a definite nesting structure in its time integral, which can then be readily computed using the PMB representation. The procedure is by now standard: We first use the MB representations for the two massive propagators:
\[D^{(\widetilde{\nu}_{1})}_{\pm\mp}(\ell_{1};\tau_{1},\tau_{2}) = \frac{1}{4\pi}\int_{s_{1},s_{2}}e^{\mp{\rm i}\pi(s_{1}-s_{2})}\Big{(}\frac{\ell_{1}}{2}\Big{)}^{-2s_{12}}(-\tau_{1})^{-2s_{1}+3/2}(-\tau_{2})^{-2s_{2}+3/2} \tag{100}\] \[\times\Gamma\Big{[}s_{1}-\frac{{\rm i}\widetilde{\nu}_{1}}{2},s_{1}+\frac{{\rm i}\widetilde{\nu}_{1}}{2},s_{2}-\frac{{\rm i}\widetilde{\nu}_{1}}{2},s_{2}+\frac{{\rm i}\widetilde{\nu}_{1}}{2}\Big{]},\] \[D^{(\widetilde{\nu}_{2})}_{\pm\mp}(\ell_{2};\tau_{2},\tau_{3}) = \frac{1}{4\pi}\int_{s_{3},s_{4}}e^{\mp{\rm i}\pi(s_{3}-s_{4})}\Big{(}\frac{\ell_{2}}{2}\Big{)}^{-2s_{34}}(-\tau_{2})^{-2s_{3}+3/2}(-\tau_{3})^{-2s_{4}+3/2}\] (101) \[\times\Gamma\Big{[}s_{3}-\frac{{\rm i}\widetilde{\nu}_{2}}{2},s_{3}+\frac{{\rm i}\widetilde{\nu}_{2}}{2},s_{4}-\frac{{\rm i}\widetilde{\nu}_{2}}{2},s_{4}+\frac{{\rm i}\widetilde{\nu}_{2}}{2}\Big{]}.\]
The assignment of the four Mellin variables \(s_{1},\cdots,s_{4}\) is shown in Fig. 8. Then, the time integrals can be directly done for all branches using results of the previous section. It then remains to finish the integrals over the four Mellin variables \(s_{1},\cdots,s_{4}\). We work in the region where the two bulk momenta \(\boldsymbol{\ell}_{1}\) and \(\boldsymbol{\ell}_{2}\) are both softer than all energies \(E_{1}\), \(E_{2}\), and \(E_{3}\). In this region, we have \(0<r_{i}<1\) (\(i=1,2,3,4\)), which means that we should pick up all left poles to finish the Mellin integrals.5
Footnote 5: Slightly stronger condition than \(0<r_{i}<1\) may be required for the convergence of the final result, but we expect that the radius of convergence for the final result should be \({\cal O}(1)\). We will not be concerned with the precise value of the radius of convergence.
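The mechanics of "collecting left poles" can be illustrated with the simplest MB pair, \(e^{-z}=\int\mathrm{d}s/(2\pi\mathrm{i})\,\Gamma(s)z^{-s}\): closing the contour to the left picks up the poles of \(\Gamma(s)\) at \(s=-n\) with residues \((-1)^{n}/n!\), and the residue sum reproduces the Taylor series term by term. A minimal Python check with mpmath:

```python
from mpmath import mp, mpf, factorial, exp
mp.dps = 25

z = mpf('0.8')
# residue sum over the left poles of Gamma(s) at s = -n
residue_sum = sum((-1)**n / factorial(n) * z**n for n in range(40))
print(residue_sum)   # matches ...
print(exp(-z))       # ... the function represented by the MB integral
```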
As already explained in Sec. 2, the left poles of the Mellin integrand are all from the \(\Gamma\) factors in (100) and (101), and are given by:
\[s_{1}=-n_{1}-\frac{{\rm i}{\mathfrak{a}}_{1}\widetilde{\nu}_{1}}{2},\ \ \ \ s_{2}=-n_{2}-\frac{{\rm i}{\mathfrak{a}}_{2}\widetilde{\nu}_{1}}{2},\ \ \ \ s_{3}=-n_{3}-\frac{{\rm i}{\mathfrak{a}}_{3}\widetilde{\nu}_{2}}{2},\ \ \ \ s_{4}=-n_{4}-\frac{{\rm i}{\mathfrak{a}}_{4} \widetilde{\nu}_{2}}{2}. \tag{102}\]
Here \(n_{i}=0,1,2,\cdots\) (\(i=1,\cdots,4\)), and \(\mathfrak{a}_{i}=\pm\) are not SK indices. Thus, by collecting the residues of the integrand at all these poles, we get the final answer for the three-vertex seed integral in (83). Similar to the case of the single-exchange graph studied in previous works, it turns out to be convenient to express the final answer of \({\cal I}^{p_{1}p_{2}p_{3}}_{\sf abc}\) with all indices \({\mathfrak{a}},{\mathfrak{b}},{\mathfrak{c}}\) summed. Then, the result can be written as a sum of four distinct terms, plus trivial permutations:
\[\sum_{{\mathfrak{a}},{\mathfrak{b}},{\mathfrak{c}}=\pm}{\cal I}^{p _{1}p_{2}p_{3}}_{\sf abc}\big{(}\{r_{i}\}\big{)} = \bigg{\{}\Big{[}{\cal I}_{\rm SS}\big{(}\{r_{i}\}\big{)}+{\cal I }_{\rm SB}\big{(}\{r_{i}\}\big{)}+{\cal I}_{\rm BS}\big{(}\{r_{i}\}\big{)}+{ \cal I}_{\rm BB}\big{(}\{r_{i}\}\big{)}\Big{]} \tag{103}\] \[+(\widetilde{\nu}_{1}\to-\widetilde{\nu}_{1})+(\widetilde{\nu}_ {2}\to-\widetilde{\nu}_{2})+\begin{pmatrix}\widetilde{\nu}_{1}\to-\widetilde{ \nu}_{1}\\ \widetilde{\nu}_{2}\to-\widetilde{\nu}_{2}\end{pmatrix}\bigg{\}}+{\rm c.c.}.\]
Figure 8: The Mellin variables for the two massive propagators in the computation of the three-vertex seed integral.
Here we use the shorthand notation \(\mathcal{I}_{\text{abc}}^{p_{1}p_{2}p_{3}}(\{r_{i}\})\equiv\mathcal{I}_{\text{abc} }^{p_{1}p_{2}p_{3}}(r_{1},r_{2},r_{3},r_{4})\). Also, we are using the terminology of CC physics: The term \(\mathcal{I}_{\text{SS}}\), with the subscript SS denoting "signal-signal," is nonanalytic in all four momentum ratios \(r_{1},\cdots,r_{4}\) when these ratios go to \(0\), and thus corresponds to the CC signals generated by both bulk lines:
\[\mathcal{I}_{\text{SS}}=\sum_{\mathbf{a}_{1},\mathbf{a}_{2}=\pm}\mathbf{A}_{ \mathbf{a}_{1}\mathbf{a}_{2}}^{p_{1}p_{2}p_{3}}(r_{1},r_{2},r_{3},r_{4})\Big{(} \frac{r_{1}}{2}\Big{)}^{\mathrm{i}\widetilde{\nu}_{1}}\Big{(}\frac{r_{2}}{2} \Big{)}^{\mathrm{i}\mathbf{a}_{1}\widetilde{\nu}_{1}}\Big{(}\frac{r_{3}}{2} \Big{)}^{\mathrm{i}\mathbf{a}_{2}\widetilde{\nu}_{2}}\Big{(}\frac{r_{4}}{2} \Big{)}^{\mathrm{i}\widetilde{\nu}_{2}}, \tag{104}\]
where the function \(\mathbf{A}_{\mathbf{a}_{1}\mathbf{a}_{2}}^{p_{1}p_{2}p_{3}}(r_{1},r_{2},r_{3},r_{4})\) is fully analytic in the limit \(r_{i}\to 0\) (\(i=1,\cdots,4\)), whose explicit expression will be given below in (108).
Next, the term \(\mathcal{I}_{\text{SB}}\) denotes the "signal-background," which contains CC signals from the \(\widetilde{\nu}_{1}\)-leg, and thus is nonanalytic in both \(r_{1}\) and \(r_{2}\) as \(r_{1,2}\to 0\), but is fully analytic6 in both \(r_{3}\) and \(r_{4}\) as \(r_{3,4}\to 0\). Likewise, the term \(\mathcal{I}_{\text{BS}}\), denoting the "background-signal," is a trivial permutation of \(\mathcal{I}_{\text{SB}}\):
Footnote 6: There could be factors such as \((r_{3}/r_{4})^{p_{3}}\) which we do not count as nonanalytic.
\[\mathcal{I}_{\text{SB}}= \sum_{\mathbf{a}=\pm}\mathbf{B}_{\widetilde{\nu}_{1}\widetilde{\nu}_{2}|\mathsf{a}}^{p_{1}p_{2}p_{3}}(r_{1},r_{2},r_{3},r_{4})\Big{(}\frac{r_{1}}{2}\Big{)}^{\mathrm{i}\widetilde{\nu}_{1}}\Big{(}\frac{r_{2}}{2}\Big{)}^{\mathrm{i}\mathbf{a}\widetilde{\nu}_{1}}, \tag{105}\] \[\mathcal{I}_{\text{BS}}= \sum_{\mathbf{a}=\pm}\mathbf{B}_{\widetilde{\nu}_{2}\widetilde{\nu}_{1}|\mathsf{a}}^{p_{1}p_{2}p_{3}}(r_{4},r_{3},r_{2},r_{1})\Big{(}\frac{r_{4}}{2}\Big{)}^{\mathrm{i}\widetilde{\nu}_{2}}\Big{(}\frac{r_{3}}{2}\Big{)}^{\mathrm{i}\mathbf{a}\widetilde{\nu}_{2}}. \tag{106}\]
Again, the function \(\mathbf{B}_{\widetilde{\nu}_{1}\widetilde{\nu}_{2}|\mathsf{a}}^{p_{1}p_{2}p_{3}}(r_{1},r_{2},r_{3},r_{4})\) is fully analytic in all \(r_{i}\) as \(r_{i}\to 0\) (\(i=1,\cdots,4\)), whose explicit expression will be given below in (110).
Finally, the term \(\mathcal{I}_{\text{BB}}\) denotes the "background-background," and is fully analytic in all momentum ratios \(r_{i}\) as \(r_{i}\to 0\).
\[\mathcal{I}_{\text{BB}}= \frac{\sin(\mathrm{i}\pi\widetilde{\nu}_{1})\sin(\mathrm{i}\pi\widetilde{\nu}_{2})e^{-\mathrm{i}p_{123}\pi/2}}{4\pi^{2}}\sum_{n_{1},\cdots,n_{4}=0}^{\infty}\frac{r_{2}^{3}r_{3}^{3}}{n_{1}!n_{2}!n_{3}!n_{4}!}\Big{(}\frac{r_{2}}{r_{1}}\Big{)}^{p_{1}+1}\Big{(}\frac{r_{3}}{r_{4}}\Big{)}^{p_{3}+1}\Big{(}\frac{r_{2}}{2}\Big{)}^{2n_{12}}\Big{(}\frac{r_{3}}{2}\Big{)}^{2n_{34}}\] \[\times\Gamma\Big{[}-n_{1}-\mathrm{i}\widetilde{\nu}_{1},-n_{2}+\mathrm{i}\widetilde{\nu}_{1},-n_{3}+\mathrm{i}\widetilde{\nu}_{2},-n_{4}-\mathrm{i}\widetilde{\nu}_{2}\Big{]}\] \[\times\mathcal{F}_{2}\left[p_{123}+2n_{1234}+9\,\bigg{|}\,\genfrac{.}{.}{0.0pt}{}{p_{1}+2n_{1}+\mathrm{i}\widetilde{\nu}_{1}+\frac{5}{2},\,p_{3}+2n_{4}+\mathrm{i}\widetilde{\nu}_{2}+\frac{5}{2}}{p_{1}+2n_{1}+\mathrm{i}\widetilde{\nu}_{1}+\frac{7}{2},\,p_{3}+2n_{4}+\mathrm{i}\widetilde{\nu}_{2}+\frac{7}{2}}\,\bigg{|}-\frac{r_{2}}{r_{1}},-\frac{r_{3}}{r_{4}}\right]. \tag{107}\]
The function \(\mathbf{F}(a,b;z)\) in (108) is defined in terms of the Gaussian hypergeometric function, given in (140), and the function \(\mathcal{F}_{4}\) denotes the dressed Appell \(F_{4}\) function, which is defined in (142). Finally, the function \(\mathbf{B}_{\widetilde{\nu}_{1}\widetilde{\nu}_{2}|\mathsf{a}}^{p_{1}p_{2}p_{3}}\) is given by:
\[\mathbf{B}_{\widetilde{\nu}_{1}\widetilde{\nu}_{2}|\mathsf{a}}^{p_{1}p_{2}p_{3}} \left(r_{1},r_{2},r_{3},r_{4}\right)\equiv\frac{e^{-\mathrm{i}\pi(p_{23}+\mathrm{i}\widetilde{\nu}_{1}-1/2)/2}}{4\pi^{2}}\sin\left[\tfrac{\pi}{2}(\mathrm{i}\mathsf{a}\widetilde{\nu}_{1}+p_{1}-\tfrac{3}{2})\right]\sin(\mathrm{i}\pi\widetilde{\nu}_{2})(r_{1}r_{2}r_{3}^{2})^{\frac{3}{2}}\] \[\times\sum_{n_{1},n_{2},n_{3}=0}^{\infty}\frac{(-1)^{n_{123}}}{n_{1}!n_{2}!n_{3}!}\frac{\Gamma[-n_{1}-\mathrm{i}\widetilde{\nu}_{2},-n_{2}+\mathrm{i}\widetilde{\nu}_{2}]}{p_{3}+n_{3}+2n_{2}-\mathrm{i}\widetilde{\nu}_{2}+\tfrac{5}{2}}\Big{(}\frac{r_{3}}{2}\Big{)}^{2n_{12}}\Big{(}\frac{r_{3}}{r_{4}}\Big{)}^{n_{3}+p_{3}+1}\] \[\times\mathbf{F}(p_{1}+\mathrm{i}\widetilde{\nu}_{1}+\tfrac{5}{2},-\mathrm{i}\widetilde{\nu}_{1};r_{1}^{2})\mathbf{F}(n_{3}+2n_{12}+p_{23}+\mathrm{i}\mathsf{a}\widetilde{\nu}_{1}+\tfrac{13}{2},-\mathrm{i}\mathsf{a}\widetilde{\nu}_{1};r_{2}^{2}). \tag{110}\]
We can make a finer classification of terms with signals in (103), according to whether the signal is local or nonlocal. Once again, the nonlocal signal means a piece in the correlator which is nonanalytic in the bulk momentum \(\ell_{1}\) or \(\ell_{2}\) as \(\ell_{1,2}\to 0\). Using our momentum ratios, this means that a nonlocal signal contains noninteger powers of either \(r_{1}r_{2}\) or \(r_{3}r_{4}\). On the other hand, a local signal means a piece analytic as \(\ell_{1,2}\to 0\) but nonanalytic in other momentum ratios. In our case, local signals come from noninteger powers of \(r_{1}/r_{2}\) or \(r_{3}/r_{4}\). Thus, we can further write:
\[\mathcal{I}_{\mathrm{SS}} =\mathcal{I}_{\mathrm{LL}}+\mathcal{I}_{\mathrm{LN}}+\mathcal{I}_ {\mathrm{NL}}+\mathcal{I}_{\mathrm{NN}}; \tag{111}\] \[\mathcal{I}_{\mathrm{SB}} =\mathcal{I}_{\mathrm{LB}}+\mathcal{I}_{\mathrm{NB}};\] (112) \[\mathcal{I}_{\mathrm{BS}} =\mathcal{I}_{\mathrm{BL}}+\mathcal{I}_{\mathrm{BN}}. \tag{113}\]
Then, explicitly, we have:
\[\mathcal{I}_{\mathrm{LL}} =\mathbf{A}_{--}^{p_{1}p_{2}p_{3}}(r_{1},r_{2},r_{3},r_{4})\Big{(} \frac{r_{1}}{r_{2}}\Big{)}^{\mathrm{i}\widetilde{\nu}_{1}}\Big{(}\frac{r_{4}}{ r_{3}}\Big{)}^{\mathrm{i}\widetilde{\nu}_{2}}, \tag{114}\] \[\mathcal{I}_{\mathrm{LN}} =\mathbf{A}_{-+}^{p_{1}p_{2}p_{3}}(r_{1},r_{2},r_{3},r_{4})\Big{(} \frac{r_{1}}{r_{2}}\Big{)}^{\mathrm{i}\widetilde{\nu}_{1}}\Big{(}\frac{r_{3}r_ {4}}{4}\Big{)}^{\mathrm{i}\widetilde{\nu}_{2}},\] (115) \[\mathcal{I}_{\mathrm{NL}} =\mathbf{A}_{+-}^{p_{1}p_{2}p_{3}}(r_{1},r_{2},r_{3},r_{4})\Big{(} \frac{r_{1}r_{2}}{4}\Big{)}^{\mathrm{i}\widetilde{\nu}_{1}}\Big{(}\frac{r_{4}}{ r_{3}}\Big{)}^{\mathrm{i}\widetilde{\nu}_{2}},\] (116) \[\mathcal{I}_{\mathrm{NN}} =\mathbf{A}_{++}^{p_{1}p_{2}p_{3}}(r_{1},r_{2},r_{3},r_{4})\Big{(} \frac{r_{1}r_{2}}{4}\Big{)}^{\mathrm{i}\widetilde{\nu}_{1}}\Big{(}\frac{r_{3}r_ {4}}{4}\Big{)}^{\mathrm{i}\widetilde{\nu}_{2}},\] (117) \[\mathcal{I}_{\mathrm{LB}} =\mathbf{B}_{\widetilde{\nu}_{1}\widetilde{\nu}_{2}|-}^{p_{1}p_{2 }p_{3}}(r_{1},r_{2},r_{3},r_{4})\Big{(}\frac{r_{1}}{r_{2}}\Big{)}^{\mathrm{i} \widetilde{\nu}_{1}},\] (118) \[\mathcal{I}_{\mathrm{NB}} =\mathbf{B}_{\widetilde{\nu}_{1}\widetilde{\nu}_{2}|+}^{p_{1}p_{2 }p_{3}}(r_{1},r_{2},r_{3},r_{4})\Big{(}\frac{r_{1}r_{2}}{4}\Big{)}^{\mathrm{i} \widetilde{\nu}_{1}},\] (119) \[\mathcal{I}_{\mathrm{BL}} =\mathbf{B}_{\widetilde{\nu}_{2}\widetilde{\nu}_{1}|-}^{p_{1}p_{2 }p_{3}}(r_{4},r_{3},r_{2},r_{1})\Big{(}\frac{r_{4}}{r_{3}}\Big{)}^{\mathrm{i} \widetilde{\nu}_{2}},\] (120) \[\mathcal{I}_{\mathrm{BN}} =\mathbf{B}_{\widetilde{\nu}_{2}\widetilde{\nu}_{1}|+}^{p_{1}p_{2 }p_{3}}(r_{4},r_{3},r_{2},r_{1})\Big{(}\frac{r_{3}r_{4}}{4}\Big{)}^{\mathrm{i} \widetilde{\nu}_{2}}. \tag{121}\]
We have checked numerically that our analytical result (103) agrees well with a direct numerical integration of (83).
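For readers who want to reproduce such checks, the main practical point on the numerical side is to damp the oscillatory early-time behavior by tilting the time contour, \(\tau\to\tau(1-\mathrm{i}\epsilon)\), as dictated by the Bunch-Davies condition. The following Python sketch (mpmath) demonstrates the strategy on a toy two-layer nested integral in which the massive propagators are replaced by pure phases, so the exact answer \(-1/[E_{2}(E_{1}+E_{2})]\) is known in closed form; the same contour treatment applies to the full integrand of (83), only with the propagators restored:

```python
from mpmath import mp, mpc, exp, quad, inf
mp.dps = 15

E1, E2 = 1.3, 2.1            # arbitrary positive energies
rot = mpc(1, -0.3)           # tau -> tau*(1 - i*eps): damps e^{iE tau} as tau -> -inf

def inner(t1):
    # inner (earlier-time) layer along the tilted contour
    return quad(lambda t2: exp(1j * E2 * rot * t2), [-inf, t1])

# rot**2 is the Jacobian of the contour substitution tau_i = rot * t_i
num = rot**2 * quad(lambda t1: exp(1j * E1 * rot * t1) * inner(t1), [-inf, 0])
print(num)                      # numerical value on the tilted contour
print(-1 / (E2 * (E1 + E2)))    # exact nested-integral result, for comparison
```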
## 5 Four-Point Correlator with Two Massive Exchanges
Now, we consider a special case of the general two massive exchanges from the last section. We will compute the four-point graph in Fig. 9, in which we have four external massless scalar legs \(\varphi\). We use this example to show how to take folded limits in the three-vertex seed integral. The correlator corresponding to Fig. 9 has been given in (89), which shows that all we need to do is to take the following folded limit of the three-vertex seed integral (103):
\[\lim_{r_{1},r_{4}\to 1}\mathcal{I}_{\mathtt{a}_{1}\mathtt{a}_{2}\mathtt{a}_{3}}^{-2,0,- 2}(r_{1},r_{2},r_{3},r_{4}). \tag{122}\]
In the folded limit \(r_{1}\to 1\) and \(r_{4}\to 1\), we expect that various individual terms in (103) diverge, but the divergence must cancel out in the full result, as a consequence of choosing the Bunch-Davies initial condition.7
Footnote 7: Choosing the Bunch-Davies initial condition is implicit in our choice for all the mode functions.
Specifically, all the hypergeometric functions in the \(\mathbf{F}\) factors in both (108) and (110) could develop divergent terms when we take their arguments to 1. With the knowledge that these divergent terms must cancel among themselves, we can directly throw them away when evaluating the function \(\mathbf{F}(a,b;z)\) at \(z=1\). Using the expansion of hypergeometric function at argument unity, we get the following finite result for \(\mathbf{F}(a,b;z)\) as \(z\to 1\)[91]:
\[\mathrm{Fin}\left\{\lim_{z\to 1}\mathbf{F}(a,b;z)\right\}=\Gamma\left[ \begin{matrix}a,b,1-b,1/2-a-b\\ 1-b-a/2,1/2-b-a/2\end{matrix}\right],\qquad(1/2-a-b\notin\mathbb{Z}), \tag{123}\]
where \(\mathrm{Fin}\{\}\) means the finite part of the expression within. The case of \(1/2-a-b\in\mathbb{Z}\) can be computed by taking the limit. For example, the limit of a term in \(\mathcal{I}^{\mathrm{FF}}\) when \(p\to-2\) can be computed as:
\[\mathrm{Fin}\left\{\lim_{\begin{subarray}{c}p\to-2\\ r\to 1\end{subarray}}\left[\left(\frac{r}{2}\right)^{\mathrm{i}\widetilde{ \nu}}\mathbf{F}(p+\mathrm{i}\widetilde{\nu}+\tfrac{5}{2},-\mathrm{i} \widetilde{\nu};r^{2})+(\widetilde{\nu}\to-\widetilde{\nu})\right]\right\}= \sqrt{2}\pi^{3/2}\operatorname{sech}(\pi\widetilde{\nu}). \tag{124}\]
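Formula (123) is just Gauss's summation theorem dressed with the \(\Gamma\) prefactors of (140), and can be spot-checked numerically. A minimal Python sketch with mpmath, with parameters chosen so that \(\mathrm{Re}(1/2-a-b)>0\); there the series converges at \(z=1\) and the finite part is the full value:

```python
from mpmath import mp, mpf, gamma, hyp2f1
mp.dps = 25

def F(a, b, z):
    # the bold F of (140): Gamma[a, b] * 2F1[a/2, (1+a)/2; 1-b; z]
    return gamma(a) * gamma(b) * hyp2f1(a/2, (1 + a)/2, 1 - b, z)

def F_at_1(a, b):
    # the right-hand side of (123)
    return (gamma(a) * gamma(b) * gamma(1 - b) * gamma(mpf('0.5') - a - b)
            / (gamma(1 - b - a/2) * gamma(mpf('0.5') - b - a/2)))

a, b = mpf('0.4'), mpf('-0.2')      # Re(1/2 - a - b) = 0.3 > 0: convergent at z = 1
print(F(a, b, mpf('0.999999')))     # approaches ...
print(F_at_1(a, b))                 # ... the closed form of (123)
```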
With all limits of all \(\mathbf{F}\) factors properly taken as above, we find a finite result for (89), which can again be separated into several pieces according to their analytical properties at \(k_{1}\to 0\) or \(k_{2}\to 0\):
\[\sum_{\mathtt{a}_{1},\mathtt{a}_{2},\mathtt{a}_{3}=\pm} \mathcal{I}_{\mathtt{a}_{1}\mathtt{a}_{2}\mathtt{a}_{3}}^{-2,0,-2}\big{(}1,r_{2},r_{3},1\big{)}\] \[=\Big{\{}\Big{[}\mathcal{T}_{\mathrm{SS}}(r_{2},r_{3})+\mathcal{T}_{\mathrm{SB}}(r_{2},r_{3})+\mathcal{T}_{\mathrm{BS}}(r_{2},r_{3})+\mathcal{T}_{\mathrm{BB}}(r_{2},r_{3})\Big{]}\] \[\quad+(\widetilde{\nu}_{1}\to-\widetilde{\nu}_{1})+(\widetilde{\nu}_{2}\to-\widetilde{\nu}_{2})+(\widetilde{\nu}_{1}\to-\widetilde{\nu}_{1},\widetilde{\nu}_{2}\to-\widetilde{\nu}_{2})\Big{\}}+\mathrm{c.c.}. \tag{125}\]
Figure 9: The tree-level 4-point inflaton correlator with two massive exchanges.
Here \(r_{2}\equiv k_{1}/k_{34}\) and \(r_{3}\equiv k_{2}/k_{34}\). The four pieces \(\{\mathcal{T}_{\rm SS},\mathcal{T}_{\rm SB},\mathcal{T}_{\rm BS},\mathcal{T}_{\rm BB}\}\) are defined in a similar way as before, according to whether the expression is analytic in the \(r_{2}\to 0\) or \(r_{3}\to 0\) limit. In particular, the signal-signal piece is nonanalytic when \(r_{2}\to 0\) and when \(r_{3}\to 0\), whose explicit expression is:
\[\mathcal{T}_{\rm SS}(r_{2},r_{3})= \ -\frac{4\pi^{5/2}(e^{\pi\widetilde{\nu}_{1}}-{\rm i})(e^{\pi\widetilde{\nu}_{2}}-{\rm i})}{\sin(2\pi{\rm i}\widetilde{\nu}_{1})\sin(2\pi{\rm i}\widetilde{\nu}_{2})}r_{2}^{{\rm i}\widetilde{\nu}_{1}+3/2}r_{3}^{{\rm i}\widetilde{\nu}_{2}+3/2}\,\mathcal{F}_{4}\left[\genfrac{.}{.}{0.0pt}{}{\frac{{\rm i}\widetilde{\nu}_{1}+{\rm i}\widetilde{\nu}_{2}+4}{2},\,\frac{{\rm i}\widetilde{\nu}_{1}+{\rm i}\widetilde{\nu}_{2}+5}{2}}{1+{\rm i}\widetilde{\nu}_{1},1+{\rm i}\widetilde{\nu}_{2}}\bigg{|}\,r_{2}^{2},r_{3}^{2}\right], \tag{126}\]
where \(\mathcal{F}_{4}\) is again the dressed Appell \(F_{4}\) function defined in (142). Next, the signal-background piece \(\mathcal{T}_{\rm SB}\) is nonanalytic in \(r_{2}\to 0\) but analytic in \(r_{3}\to 0\):
\[\mathcal{T}_{\rm SB}(r_{2},r_{3}) = \frac{\sin({\rm i}\pi\widetilde{\nu}_{2})}{\sqrt{\pi}(e^{\pi\widetilde{\nu}_{1}}-{\rm i})}\Big{(}\frac{r_{2}}{2}\Big{)}^{3/2-{\rm i}\widetilde{\nu}_{1}}\sum_{n_{1},n_{2},n_{3}=0}^{\infty}\frac{(-1)^{n_{123}}}{n_{1}!n_{2}!n_{3}!}r_{3}^{2+n_{3}}\Big{(}\frac{r_{3}}{2}\Big{)}^{2n_{12}} \tag{127}\] \[\times\frac{\Gamma[-n_{1}-{\rm i}\widetilde{\nu}_{2},-n_{2}+{\rm i}\widetilde{\nu}_{2}]}{n_{3}+2n_{2}+{\rm i}\widetilde{\nu}_{2}+\frac{1}{2}}\mathbf{F}(n_{3}+2n_{12}-{\rm i}\widetilde{\nu}_{1}+\frac{9}{2},{\rm i}\widetilde{\nu}_{1};r_{2}^{2}).\]
The background-signal piece \(\mathcal{T}_{\rm BS}\) is obtained from \(\mathcal{T}_{\rm SB}\) by switching \(\widetilde{\nu}_{1}\leftrightarrow\widetilde{\nu}_{2}\) as well as \(r_{2}\leftrightarrow r_{3}\). Finally, the background-background piece is analytic in both the \(r_{2}\to 0\) and \(r_{3}\to 0\) limits, and its expression is:
\[\mathcal{T}_{\rm BB}(r_{2},r_{3}) = \frac{\sin({\rm i}\pi\widetilde{\nu}_{1})\sin({\rm i}\pi \widetilde{\nu}_{2})r_{2}^{2}r_{3}^{2}}{4\pi^{2}}\sum_{n_{1},\cdots,n_{4}=0}^{ \infty}\frac{(-1)^{n_{1234}}}{n_{1}!n_{2}!n_{3}!n_{4}!}\Big{(}\frac{r_{2}}{2} \Big{)}^{2n_{12}}\Big{(}\frac{r_{3}}{2}\Big{)}^{2n_{34}} \tag{128}\] \[\times\Gamma\Big{[}-n_{1}-{\rm i}\widetilde{\nu}_{1},-n_{2}+{\rm i }\widetilde{\nu}_{1},-n_{3}+{\rm i}\widetilde{\nu}_{2},-n_{4}-{\rm i} \widetilde{\nu}_{2}\Big{]}\] \[\times\mathcal{F}_{2}\left[2n_{1234}+5\genfrac{[}{]}{0.0pt}{}{2n_ {1}+{\rm i}\widetilde{\nu}_{1}+\frac{1}{2},2n_{4}+{\rm i}\widetilde{\nu}_{2}+ \frac{1}{2}}{2n_{1}+{\rm i}\widetilde{\nu}_{1}+\frac{3}{2},2n_{4}+{\rm i} \widetilde{\nu}_{2}+\frac{3}{2}}\bigg{|}-r_{2},-r_{3}\right].\]
Again, we have checked that our analytical result (125) for the 4-point graph in Fig. 9 agrees well with a direct numerical integration.
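As an illustration of how such numerical checks can be organized, here is a hedged Python sketch (mpmath) that evaluates the single background-background term \(\mathcal{T}_{\rm BB}\) of (128) by brute-force truncation of the quadruple series, with the dressed Appell \(\mathcal{F}_{2}\) of (141) implemented as a truncated double sum. Note that the full correlator (125) also requires the \(\widetilde{\nu}\to-\widetilde{\nu}\) images and the complex conjugate; the truncation orders below are kept small for speed and should be increased for real precision tests:

```python
from mpmath import mp, mpc, mpf, gamma, factorial, pi, sin
mp.dps = 20
I = mpc(0, 1)

def cF2(a, b1, b2, c1, c2, x, y, N=20):
    # dressed Appell F2 of (141) as a truncated double series
    s = mpc(0)
    for m in range(N):
        for n in range(N - m):
            s += (gamma(a + m + n) * gamma(b1 + m) * gamma(b2 + n)
                  / (gamma(c1 + m) * gamma(c2 + n))
                  * x**m * y**n / (factorial(m) * factorial(n)))
    return s

def T_BB(nu1, nu2, r2, r3, N=4):
    # truncated quadruple sum of (128); increase N for higher precision
    tot = mpc(0)
    for n1 in range(N):
        for n2 in range(N):
            for n3 in range(N):
                for n4 in range(N):
                    g = (gamma(-n1 - I*nu1) * gamma(-n2 + I*nu1)
                         * gamma(-n3 + I*nu2) * gamma(-n4 - I*nu2))
                    tot += ((-1)**(n1+n2+n3+n4)
                            / (factorial(n1)*factorial(n2)*factorial(n3)*factorial(n4))
                            * (r2/2)**(2*(n1+n2)) * (r3/2)**(2*(n3+n4)) * g
                            * cF2(2*(n1+n2+n3+n4) + 5,
                                  2*n1 + I*nu1 + mpf('0.5'), 2*n4 + I*nu2 + mpf('0.5'),
                                  2*n1 + I*nu1 + mpf('1.5'), 2*n4 + I*nu2 + mpf('1.5'),
                                  -r2, -r3))
    return sin(I*pi*nu1) * sin(I*pi*nu2) * r2**2 * r3**2 / (4*pi**2) * tot

# principal-series masses (real nu-tilde) and soft momentum ratios, chosen arbitrarily
print(T_BB(mpf(2), mpf(3), mpf('0.3'), mpf('0.2')))
```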
## 6 Conclusion and Outlooks
Inflation correlators are important theoretical data for QFTs in dS spacetime, and are promising targets for current and future cosmological observations. Inflation correlators mediated by massive fields are central objects in Cosmological Collider physics. Thus, the analytical computation of inflation correlators deserves a systematic investigation.
Very often in weakly coupled theories, inflation correlators are dominated by tree-level exchanges. However, analytical computation of general tree graphs remains challenging in dS, due to the multi-layer integrals with time orderings in Schwinger-Keldysh formalism. In previous works, it has been shown that the partial Mellin-Barnes representation is useful in the analytical computation of inflation correlators. Even using this method, the complete analytical evaluation of tree graphs is still hampered by the complication of nested time integrals.
In this work, we computed arbitrarily nested time integrals in PMB representation. The result is in general a multi-variable hypergeometric series. With our family-tree decomposition procedure, we can find series representation in terms of the inverse of any desired energy variable,
or even the sum of several energy variables. This result largely solves the problem of analytical continuation of the nested time integrals in most physical regions.
With our results, the analytical computation of inflation correlators with arbitrary massive exchanges at the tree level is reduced to a work of pole collecting in the Mellin integrand, which is largely trivial. Thus, barring possible issues with analytical continuation in special kinematics, commented on below, we can say that the problem of analytical computation of tree-level inflation correlators is solved.
At this point, we want to comment on the meaning of analytical computation. As we have seen, most tree-level inflation correlators with massive exchanges have to be expressed in terms of gigantic hypergeometric series which are not yet named. One may say that we can systematically classify hypergeometric functions with increasing numbers of variables and parameters and give each of them a name. However, given that we know so little about these series in general, this is not particularly meaningful, and also is not really different from directly giving names to inflation correlators. The meaning of analytical calculation is thus somewhat obscure. Normally, when we say that we obtain an analytical answer, what we really mean is that we have a good understanding of this answer in at least two ways: First, we know its analytical properties. This includes how the answer changes with parameters, and where it blows up or shows other singular behavior. Second, we know how to find numerical values of this answer for any choice of parameters with reasonable precision and computation time. Therefore, we can claim that we get an analytical answer only when we have gained sufficient knowledge about this answer. Having an answer is not enough; we need to understand it.
From this viewpoint, it seems to us that our result for the nested time integrals can be called an analytical answer: We know how to write down this answer as a Taylor series for most kinematics. As long as there is a largest energy variable and we can use it to form small energy ratios, we can always express our answer as a power series in these given small numbers. This means, on the one hand, that we know the analytical properties of the answer at any soft energy limit, and also know how to take analytical continuations to different parameter space. On the other hand, having a convergent series often means that we can do fast numerical evaluation of the answer. This proves true in our examples with two massive exchanges: In many cases, the numerical evaluation of our series solution is far faster than direct numerical integration of the original graph.
Our results have opened new possibilities in the analytical study of inflation correlators. Many interesting problems can be pursued along this direction. We mention some of them below.
First, a main result of this work is a simple procedure to write down the analytical answer for arbitrary nested time integrals in the PMB representation. Since the same time integral also appears in loop computations, the result here could be useful for the computation of loop correlators as well. Thus it would be interesting to apply our method to more complete loop computations in the PMB representation.
Second, a nice feature of PMB representation is that the energy dependence and the momentum dependence of a graph are separated: The energy dependence is fully in the time integral, while the momentum dependence is fully in the loop momentum integral (or trivially factored out in a tree graph). Thus, our result on the nested time integral will be useful for studying energy dependence of a graph. This is particularly relevant to digging out the local CC signal in a graph, since, by definition, the local signal is a nonanalytic power in the energy ratios. We leave a more
systematic study of local signals to a future work.
Third, it is important that the PMB representation does not assume full dS isometries of the problem. Therefore, it is straightforward to apply our results here to fields with dS-breaking dispersions such as non-unit sound speed, helical chemical potential, or even more exotic dispersion relations. Our method is also applicable to correlation functions in more general FRW backgrounds. We leave these generalizations to future studies.
Finally, it remains challenging to take the analytical continuation of our series expressions to the parameter region where no small energy ratios exist. A pragmatic solution is using numerical interpolation to bridge different parameter regions in which various series expressions converge. While this can indeed be implemented in some cases, it is not clear to us if this method works for all possible kinematics. Analytically, we may need more sophisticated methods to take the analytical continuation of multi-layer series, which sounds like a nontrivial mathematical problem. We leave these more mathematically oriented problems for future studies as well.
Acknowledgments.We thank Zhehan Qin for useful discussions. This work is supported by the National Key R&D Program of China (2021YFC2203100), NSFC under Grant No. 12275146, an Open Research Fund of the Key Laboratory of Particle Astrophysics and Cosmology, Ministry of Education of China, and the Dushi Program of Tsinghua University.
## Appendix A Mathematical Appendix
### Mellin-Barnes representation
We use MB representation for quite a few special functions in the main text, which we collect here. All expressions here can be found in standard mathematical handbooks such as [114].
First, the Hankel functions \(\mathrm{H}_{\nu}^{(j)}(az)\) of the \(j\)'th kind (\(j=1,2\)) frequently appear. Their MB representations are given by:
\[\mathrm{H}_{\nu}^{(j)}(az)=\int_{-\mathrm{i}\infty}^{\mathrm{i}\infty}\frac{ \mathrm{d}s}{2\pi\mathrm{i}}\frac{(az/2)^{-2s}}{\pi}e^{(-1)^{j+1}(2s-\nu-1)\pi \mathrm{i}/2}\Gamma\Big{[}s-\frac{\nu}{2},s+\frac{\nu}{2}\Big{]}.\quad(j=1,2) \tag{129}\]
Next, we use an exponential integral \(\mathrm{E}_{p}(z)\) defined in the following way:
\[\mathrm{E}_{p}(z)=\int_{1}^{\infty}\frac{e^{-zt}}{t^{p}}\mathrm{d}t. \tag{130}\]
This exponential integral is related to a confluent hypergeometric function \(\mathrm{U}(a,b;z)\) via
\[\mathrm{E}_{p}(z)=z^{p-1}e^{-z}\mathrm{U}(p,p;z). \tag{131}\]
The confluent hypergeometric function \(\mathrm{U}(a,b;z)\) has the following MB representations:
\[\mathrm{U}(a,b;z)=\int_{-\mathrm{i}\infty}^{+\mathrm{i}\infty}\frac{\mathrm{d }s}{2\pi\mathrm{i}}\Gamma\begin{bmatrix}a+s,1+a-b+s,-s\\ a,1+a-b\end{bmatrix}z^{-a-s}, \tag{132}\]
\[\mathrm{U}(a,b,z)=z^{1-b}e^{z}\int_{-\mathrm{i}\infty}^{+\mathrm{i}\infty} \frac{\mathrm{d}s}{2\pi\mathrm{i}}\Gamma\begin{bmatrix}b-1+s,s\\ a+s\end{bmatrix}z^{-s}. \tag{133}\]
The validity of these expressions puts constraints on the range of \(z\), which are always satisfied in the cases studied in this work, and thus we do not spell them out. With these expressions, we can get two different MB representations for the exponential integral \(\mathrm{E}_{p}(z)\). First, we have a partially resolved representation:
\[\mathrm{E}_{p}(z)=e^{-z}\int_{-\mathrm{i}\infty}^{+\mathrm{i}\infty}\frac{ \mathrm{d}s}{2\pi\mathrm{i}}\Gamma\begin{bmatrix}p+s,1+s,-s\\ p\end{bmatrix}z^{-s-1}. \tag{134}\]
Second, we have the following completely resolved representation:
\[\mathrm{E}_{p}(z)=\int_{-\mathrm{i}\infty}^{+\mathrm{i}\infty}\frac{\mathrm{ d}s}{2\pi\mathrm{i}}\frac{\Gamma(s)z^{-s}}{s+p-1}. \tag{135}\]
We note that the denominator \(1/(s+p-1)\) in the integrand of the last expression comes from the \(\Gamma\) factors \(\Gamma(b-1+s)/\Gamma(a+s)\) in (133) with \(a=b=p\) as required by (131). After taking \(a=b=p\), most left poles of \(\Gamma(b-1+s)\) are canceled by the zeros of \(1/\Gamma(a+s)\), with only one pole left, which is exactly the denominator \(1/(s+p-1)\) in (135). Thus we see that the pole from \(1/(s+p-1)\) should be treated as a left pole.
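This left-pole bookkeeping for (135) can be verified numerically: summing the residues at \(s=-n\) together with the residue \(\Gamma(1-p)z^{p-1}\) from the pole at \(s=1-p\) reproduces \(\mathrm{E}_{p}(z)\). A minimal Python check with mpmath, taking a non-integer \(p\) so that \(\Gamma(1-p)\) is finite:

```python
from mpmath import mp, mpf, gamma, factorial, expint
mp.dps = 20

def Ep_from_left_poles(p, z, N=60):
    # close (135) to the left: poles of Gamma(s) at s = -n, plus the
    # "left pole" at s = 1 - p coming from the 1/(s + p - 1) factor
    s = gamma(1 - p) * z**(p - 1)            # residue at s = 1 - p
    for n in range(N):
        s += (-1)**n / factorial(n) * z**n / (p - 1 - n)
    return s

p, z = mpf('1.7'), mpf('0.8')
print(Ep_from_left_poles(p, z))
print(expint(p, z))   # mpmath's generalized exponential integral E_p(z)
```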
### Special functions
Following our previous works on similar topics, we use shorthand notations for the products and fractions of Euler \(\Gamma\) functions:
\[\Gamma\left[z_{1},\cdots,z_{m}\right] \equiv\Gamma(z_{1})\cdots\Gamma(z_{m}), \tag{136}\] \[\Gamma\left[\genfrac{.}{.}{0.0pt}{}{z_{1},\cdots,z_{m}}{w_{1}, \cdots,w_{n}}\right] \equiv\frac{\Gamma(z_{1})\cdots\Gamma(z_{m})}{\Gamma(w_{1}) \cdots\Gamma(w_{n})}. \tag{137}\]
A number of hypergeometric series have been well studied and designated with special names. Several of these hypergeometric functions are used in the main text, and we collect their definitions here. More details about these functions can be found in [115]. First, the (generalized) hypergeometric function \({}_{p}F_{q}\) is defined in the following way when the series converges:
\[{}_{p}F_{q}\left[\genfrac{.}{.}{0.0pt}{}{a_{1},\cdots,a_{p}}{b_{1},\cdots,b_{q }}\bigg{|}z\right]=\sum_{n=0}^{\infty}\frac{(a_{1})_{n}\cdots(a_{p})_{n}}{(b_ {1})_{n}\cdots(b_{q})_{n}}\frac{z^{n}}{n!}, \tag{138}\]
where \((a)_{n}\equiv\Gamma(a+n)/\Gamma(a)\) is the Pochhammer symbol. In most cases, it turns out simpler to use the following dressed version of the hypergeometric function:
\[{}_{p}\mathcal{F}_{q}\left[\genfrac{.}{.}{0.0pt}{}{a_{1},\cdots,a_ {p}}{b_{1},\cdots,b_{q}}\bigg{|}z\right] =\Gamma\left[\genfrac{.}{.}{0.0pt}{}{a_{1},\cdots,a_{p}}{b_{1}, \cdots,b_{q}}\bigg{]}{}_{p}F_{q}\left[\genfrac{.}{.}{0.0pt}{}{a_{1},\cdots,a_{ p}}{b_{1},\cdots,b_{q}}\bigg{|}z\right]\] \[=\sum_{n=0}^{\infty}\Gamma\left[\genfrac{.}{.}{0.0pt}{}{a_{1}+n, \cdots,a_{p}+n}{b_{1}+n,\cdots,b_{q}+n}\right]\frac{z^{n}}{n!}. \tag{139}\]
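A minimal Python helper (mpmath) for the dressed hypergeometric function (139), shown here only as a convenience for numerical experiments:

```python
from mpmath import mp, gamma, hyper
mp.dps = 25

def dressed_pFq(a_s, b_s, z):
    # the dressed pFq of (139): Gamma[a_s]/Gamma[b_s] times the ordinary pFq
    pref = 1
    for a in a_s:
        pref *= gamma(a)
    for b in b_s:
        pref /= gamma(b)
    return pref * hyper(a_s, b_s, z)

print(dressed_pFq([0.5, 1.2], [2.3], 0.7))   # a dressed 2F1, as an example
```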
A special case of the Gauss hypergeometric function is frequently used in the main text, and thus we give it a particular symbol:
\[\mathbf{F}(a,b;z)\equiv\Gamma[a,b]\times{}_{2}F_{1}\left[\genfrac{.}{.}{0.0pt }{}{a/2,(1+a)/2}{1-b}\bigg{|}z\right]. \tag{140}\]
Next we come to hypergeometric functions of two variables. First, there are four Appell functions \(F_{1},\cdots,F_{4}\). Two of them are used in the main text. We only present the definition of their dressed versions:
\[\mathcal{F}_{2}\left[\genfrac{.}{.}{0.0pt}{}{a}\bigg{|}\genfrac{.}{.}{0.0pt }{}{b_{1},b_{2}}{c_{1},c_{2}}\bigg{|}x,y\right]=\sum_{m,n=0}^{\infty}\Gamma \left[\genfrac{.}{.}{0.0pt}{}{a+m+n,b_{1}+m,b_{2}+n}{c_{1}+m,c_{2}+n}\right] \frac{x^{m}y^{n}}{m!n!}. \tag{141}\]
\[\mathcal{F}_{4}\left[\genfrac{.}{.}{0.0pt}{}{a,b}{c_{1},c_{2}}\bigg{|}x,y \right]=\sum_{m,n=0}^{\infty}\Gamma\left[\genfrac{.}{.}{0.0pt}{}{a+m+n,b+m+n} {c_{1}+m,c_{2}+n}\right]\frac{x^{m}y^{n}}{m!n!}. \tag{142}\]
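For numerical work, the dressed Appell functions can be evaluated by truncating the double series directly. As a correctness check, the sketch below (Python/mpmath, with arbitrary illustrative parameters) compares the truncated dressed \(\mathcal{F}_{4}\) of (142) against mpmath's built-in Appell \(F_{4}\) multiplied by the \(\Gamma\) prefactor, valid inside the convergence region \(\sqrt{|x|}+\sqrt{|y|}<1\):

```python
from mpmath import mp, gamma, factorial, appellf4
mp.dps = 20

def dressed_F4(a, b, c1, c2, x, y, N=60):
    # dressed Appell F4 of (142) as a truncated double series
    s = 0
    for m in range(N):
        for n in range(N - m):
            s += (gamma(a + m + n) * gamma(b + m + n)
                  / (gamma(c1 + m) * gamma(c2 + n))
                  * x**m * y**n / (factorial(m) * factorial(n)))
    return s

a, b, c1, c2, x, y = 1.3, 0.7, 1.1, 1.4, 0.04, 0.09   # sqrt(x)+sqrt(y) = 0.5 < 1
direct = gamma(a) * gamma(b) / (gamma(c1) * gamma(c2)) * appellf4(a, b, c1, c2, x, y)
print(dressed_F4(a, b, c1, c2, x, y), direct)         # should agree
```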
Second, a more general class of two-variable hypergeometric functions are called Kampé de Fériet functions in the literature, whose definition is:
\[{}^{p+q}F_{r+s}\left[\genfrac{.}{.}{0.0pt}{}{a_{1},\cdots,a_{p}}{c_{1},\cdots,c_{r}}\bigg{|}\genfrac{.}{.}{0.0pt}{}{b_{1},b_{1}^{\prime};\cdots;b_{q},b_{q}^{\prime}}{d_{1},d_{1}^{\prime};\cdots;d_{s},d_{s}^{\prime}}\bigg{|}x,y\right]\] \[=\sum_{m,n=0}^{\infty}\frac{(a_{1})_{m+n}\cdots(a_{p})_{m+n}}{(c_{1})_{m+n}\cdots(c_{r})_{m+n}}\frac{(b_{1})_{m}(b_{1}^{\prime})_{n}\cdots(b_{q})_{m}(b_{q}^{\prime})_{n}}{(d_{1})_{m}(d_{1}^{\prime})_{n}\cdots(d_{s})_{m}(d_{s}^{\prime})_{n}}\frac{x^{m}y^{n}}{m!n!}. \tag{143}\]
Again, we use the dressed version in the main text:
\[\left.\begin{aligned} &{}^{p+q}\mathcal{F}_{r+s}\begin{bmatrix}a_{1}, \cdots,a_{p}\Big{|}\,b_{1},b^{\prime}_{1};\cdots;b_{q},b^{\prime}_{q}\\ c_{1},\cdots,c_{r}\big{|}\,d_{1},d^{\prime}_{1};\cdots;d_{s},d^{\prime}_{s} \end{bmatrix}x,y\end{aligned}\right.\\ =\sum_{m,n=0}^{\infty}\Gamma\begin{bmatrix}a_{1}+m+n,\cdots,a_{ p}+m+n\\ c_{1}+m+n,\cdots,c_{r}+m+n\end{bmatrix}\Gamma\begin{bmatrix}b_{1}+m,\cdots,b_{q}+m \\ d_{1}+m,\cdots,d_{s}+m\end{bmatrix}\Gamma\begin{bmatrix}b^{\prime}_{1}+n,\cdots,b ^{\prime}_{q}+n\\ d^{\prime}_{1}+n,\cdots,d^{\prime}_{s}+n\end{bmatrix}\frac{x^{m}y^{n}}{m!n!}. \tag{144}\]
Finally, there is a particular \(N\)-variable hypergeometric function that appears in the main text, called Lauricella's \(F_{A}\) function:
\[F_{A}\begin{bmatrix}a\Big{|}b_{1},\cdots,b_{N}\\ c_{1},\cdots,c_{N}\Big{|}z_{1},\cdots,z_{N}\end{bmatrix}=\sum_{m_{1},\cdots,m_ {N}=0}^{\infty}\frac{(a)_{m_{1}+\cdots+m_{N}}(b_{1})_{m_{1}}\cdots(b_{N})_{m_ {N}}}{(c_{1})_{m_{1}}\cdots(c_{N})_{m_{N}}}\,\frac{z_{1}^{m_{1}}\cdots z_{N}^{ m_{N}}}{m_{1}!\cdots m_{N}!}. \tag{145}\]
Again, we use this function in its dressed form:
\[\mathcal{F}_{A}\begin{bmatrix}a\Big{|}b_{1},\cdots,b_{N}\\ c_{1},\cdots,c_{N}\Big{|}z_{1},\cdots,z_{N}\end{bmatrix}\] \[=\sum_{m_{1},\cdots,m_{N}=0}^{\infty}\Gamma\begin{bmatrix}a+m_{1} +\cdots+m_{N},b_{1}+m_{1},\cdots,b_{N}+m_{N}\\ c_{1}+m_{1},\cdots,c_{N}+m_{N}\end{bmatrix}\frac{z_{1}^{m_{1}}\cdots z_{N}^{m_{ N}}}{m_{1}!\cdots m_{N}!}. \tag{146}\]
## Appendix B Details of Computing the Three-Vertex Seed Integral
In this appendix we collect some intermediate steps in the computation of the three-vertex seed integral (83) in Sec. 4. First, there are four independent branches of the seed integral (83), shown below. The other four can be found by taking complex conjugation.
\[\mathcal{I}_{+++}= -\mathrm{i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3}\ell_{2}^{3}\int_{-\infty}^{0}\mathrm{d}\tau_{1}\mathrm{d}\tau_{2}\mathrm{d}\tau_{3}(-\tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{\mathrm{i}(E_{1}\tau_{1}+E_{2}\tau_{2}+E_{3}\tau_{3})}\] \[\times\Big{[}D_{+-}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})+\Big{(}D_{-+}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})-D_{+-}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})\Big{)}\theta(\tau_{1}-\tau_{2})\Big{]}\] \[\times\Big{[}D_{-+}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3})+\Big{(}D_{+-}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3})-D_{-+}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3})\Big{)}\theta(\tau_{3}-\tau_{2})\Big{]} \tag{147}\]
\[\mathcal{I}_{++-}=+\mathrm{i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3}\ell_{2}^{3}\int_{-\infty}^{0}\mathrm{d}\tau_{1}\mathrm{d}\tau_{2}\mathrm{d}\tau_{3}(-\tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{\mathrm{i}(E_{1}\tau_{1}+E_{2}\tau_{2}-E_{3}\tau_{3})}\] \[\times\Big{[}D_{+-}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})+\Big{(}D_{-+}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})-D_{+-}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})\Big{)}\theta(\tau_{1}-\tau_{2})\Big{]}D_{+-}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3}). \tag{148}\]

\[\mathcal{I}_{-++}=+\mathrm{i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3}\ell_{2}^{3}\int_{-\infty}^{0}\mathrm{d}\tau_{1}\mathrm{d}\tau_{2}\mathrm{d}\tau_{3}(-\tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{\mathrm{i}(-E_{1}\tau_{1}+E_{2}\tau_{2}+E_{3}\tau_{3})}\] \[\times D_{-+}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})\Big{[}D_{-+}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3})+\Big{(}D_{+-}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3})-D_{-+}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3})\Big{)}\theta(\tau_{3}-\tau_{2})\Big{]}. \tag{149}\]
\[{\cal I}_{+-+}= +{\rm i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3}\ell_{2}^{3}\int_{-\infty}^{0}{\rm d}\tau_{1}{\rm d}\tau_{2}{\rm d}\tau_{3}(-\tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{{\rm i}(E_{1}\tau_{1}-E_{2}\tau_{2}+E_{3}\tau_{3})}\] \[\times D_{+-}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})D_{-+}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3}). \tag{150}\]
Then, we classify the terms in the above four integrals according to whether the adjacent two time variables are time-ordered (T) or factorized (F). Thus, we get the following eight different terms:
\[{\cal I}_{+++}^{\rm(FF)}= -{\rm i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3} \ell_{2}^{3}\int_{-\infty}^{0}{\rm d}\tau_{1}{\rm d}\tau_{2}{\rm d}\tau_{3}(- \tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{{\rm i}(E_{1}\tau_{1} +E_{2}\tau_{2}+E_{3}\tau_{3})}\] \[\times D_{+-}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2}) D_{-+}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3}), \tag{151}\]
\[{\cal I}_{+++}^{\rm(TF)}= -{\rm i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3}\ell_{2}^{3}\int_{-\infty}^{0}{\rm d}\tau_{1}{\rm d}\tau_{2}{\rm d}\tau_{3}(-\tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{{\rm i}(E_{1}\tau_{1}+E_{2}\tau_{2}+E_{3}\tau_{3})}\] \[\times\Big{(}D_{-+}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})-D_{+-}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})\Big{)}\theta(\tau_{1}-\tau_{2})D_{-+}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3}), \tag{152}\]
\[{\cal I}_{+++}^{\rm(FT)}= -{\rm i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3}\ell_{2}^{3}\int_{-\infty}^{0}{\rm d}\tau_{1}{\rm d}\tau_{2}{\rm d}\tau_{3}(-\tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{{\rm i}(E_{1}\tau_{1}+E_{2}\tau_{2}+E_{3}\tau_{3})}\] \[\times D_{+-}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})\Big{(}D_{+-}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3})-D_{-+}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3})\Big{)}\theta(\tau_{3}-\tau_{2}), \tag{153}\]
\[{\cal I}_{+++}^{\rm(TT)}= -{\rm i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3} \ell_{2}^{3}\int_{-\infty}^{0}{\rm d}\tau_{1}{\rm d}\tau_{2}{\rm d}\tau_{3}(- \tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{{\rm i}(E_{1}\tau_{1} +E_{2}\tau_{2}+E_{3}\tau_{3})}\] \[\times\Big{(}D_{-+}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1}, \tau_{2})-D_{+-}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})\Big{)} \theta(\tau_{1}-\tau_{2})\] \[\times\Big{(}D_{+-}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2}, \tau_{3})-D_{-+}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3})\Big{)} \theta(\tau_{3}-\tau_{2}), \tag{154}\]
\[{\cal I}_{++-}^{\rm(F)}= +{\rm i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3} \ell_{2}^{3}\int_{-\infty}^{0}{\rm d}\tau_{1}{\rm d}\tau_{2}{\rm d}\tau_{3}(- \tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{{\rm i}(E_{1}\tau_{1} +E_{2}\tau_{2}-E_{3}\tau_{3})}\] \[\times D_{+-}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})D_{ +-}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3}), \tag{155}\]
\[{\cal I}_{++-}^{\rm(T)}= +{\rm i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3} \ell_{2}^{3}\int_{-\infty}^{0}{\rm d}\tau_{1}{\rm d}\tau_{2}{\rm d}\tau_{3}(- \tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{{\rm i}(E_{1}\tau_{1} +E_{2}\tau_{2}-E_{3}\tau_{3})}\] \[\times\Big{(}D_{-+}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1}, \tau_{2})-D_{+-}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})\Big{)} \theta(\tau_{1}-\tau_{2})D_{+-}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_ {3}), \tag{156}\]
\[{\cal I}_{-++}^{\rm(F)}= +{\rm i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3}\ell_{2}^{3}\int_{-\infty}^{0}{\rm d}\tau_{1}{\rm d}\tau_{2}{\rm d}\tau_{3}(-\tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{{\rm i}(-E_{1}\tau_{1}+E_{2}\tau_{2}+E_{3}\tau_{3})}\] \[\times D_{-+}^{(\widetilde{\nu}_{1})}(\ell_{1};\tau_{1},\tau_{2})D_{-+}^{(\widetilde{\nu}_{2})}(\ell_{2};\tau_{2},\tau_{3}), \tag{157}\]
\[{\cal I}^{(\rm T)}_{-++}= +{\rm i}E_{1}^{p_{1}+1}E_{2}^{p_{2}+1}E_{3}^{p_{3}+1}\ell_{1}^{3} \ell_{2}^{3}\int_{-\infty}^{0}{\rm d}\tau_{1}{\rm d}\tau_{2}{\rm d}\tau_{3}(- \tau_{1})^{p_{1}}(-\tau_{2})^{p_{2}}(-\tau_{3})^{p_{3}}e^{{\rm i}(-E_{1}\tau_{1 }+E_{2}\tau_{2}+E_{3}\tau_{3})}\] \[\times D^{(\widetilde{\nu}_{1})}_{-+}(\ell_{1};\tau_{1},\tau_{2}) \Big{(}D^{(\widetilde{\nu}_{2})}_{+-}(\ell_{2};\tau_{2},\tau_{3})-D^{( \widetilde{\nu}_{2})}_{-+}(\ell_{2};\tau_{2},\tau_{3})\Big{)}\theta(\tau_{3}- \tau_{2}). \tag{158}\]
We then take the PMB representation for all these terms and regroup them according to (96)-(99). The time integrals can then be carried out using the method in Sec. 3. To write down the result after the time integral, we note that there is a common factor in all terms after taking the PMB representation:
\[{\mathbb{H}}(\{s\}) \equiv\frac{1}{(4\pi)^{2}}\Gamma\Big{[}s_{1}-\frac{{\rm i} \widetilde{\nu}_{1}}{2},s_{1}+\frac{{\rm i}\widetilde{\nu}_{1}}{2},s_{2}- \frac{{\rm i}\widetilde{\nu}_{1}}{2},s_{2}+\frac{{\rm i}\widetilde{\nu}_{1}} {2}\Big{]}\] \[\times\Gamma\Big{[}s_{3}-\frac{{\rm i}\widetilde{\nu}_{2}}{2},s_ {3}+\frac{{\rm i}\widetilde{\nu}_{2}}{2},s_{4}-\frac{{\rm i}\widetilde{\nu}_{ 2}}{2},s_{4}+\frac{{\rm i}\widetilde{\nu}_{2}}{2}\Big{]}. \tag{159}\]
Then, we can write down the results of all terms with time integrals finished:
\[{\cal I}^{(\rm FF)}_{+++} +{\cal I}^{(\rm F)}_{++-}+{\cal I}^{(\rm F)}_{-++}+{\cal I}_{+-+}\] \[=-{\rm i}\int_{s_{1},\cdots,s_{4}}\Big{[}-{\rm i}e^{{\rm i}\pi(2s _{23}-p_{123}/2)}-e^{{\rm i}\pi(2s_{2}-p_{12}/2+p_{3}/2)}-e^{{\rm i}\pi(2s_{3}+ p_{1}/2-p_{23}/2)}+{\rm i}e^{{\rm i}\pi(p_{2}/2-p_{13}/2)}\Big{]}\] \[\times{\mathbb{H}}(\{s\})(r_{1}r_{2}r_{3}r_{4})^{3/2}\Big{(}\frac {r_{1}}{2}\Big{)}^{-2s_{1}}\Big{(}\frac{r_{2}}{2}\Big{)}^{-2s_{2}}\Big{(}\frac {r_{3}}{2}\Big{)}^{-2s_{3}}\Big{(}\frac{r_{4}}{2}\Big{)}^{-2s_{4}}\] \[\times\Gamma\Big{[}p_{1}-2s_{1}+\tfrac{5}{2},p_{2}-2s_{23}+4,p_{3} -2s_{4}+\tfrac{5}{2}\Big{]}. \tag{160}\]
\[{\cal I}^{(\rm FT)}_{+++}+{\cal I}^{(\rm T)}_{-++}= -4{\rm i}\int_{s_{1},\cdots,s_{4}}\sin[\pi(s_{3}-s_{4})]\sin[\pi( s_{2}-p_{1}/2+3/4)]e^{-{\rm i}(p_{23}-2s_{234}+13/2)\pi/2}{\mathbb{H}}(\{s\})\] \[\times(r_{1}r_{2}r_{3}^{2})^{3/2}\Big{(}\frac{r_{1}}{2}\Big{)}^{- 2s_{1}}\Big{(}\frac{r_{2}}{2}\Big{)}^{-2s_{2}}\Big{(}\frac{r_{3}}{2}\Big{)}^{- 2s_{34}}\Big{(}\frac{r_{3}}{r_{4}}\Big{)}^{p_{3}+1}\] \[\times\Gamma(p_{1}-2s_{1}+5/2)_{2}{\cal F}_{1}\begin{bmatrix}p_{ 3}-2s_{4}+5/2,p_{23}-2s_{234}+13/2\\ p_{3}-2s_{4}+7/2\end{bmatrix}-\frac{r_{3}}{r_{4}}\Big{]}\,. \tag{161}\]
\[{\cal I}^{(\rm TF)}_{+++}+{\cal I}^{(\rm T)}_{++-}= -4{\rm i}\int_{s_{1},\cdots,s_{4}}\sin[\pi(s_{2}-s_{1})]\sin[ \pi(s_{3}-p_{3}/2+3/4)]e^{-{\rm i}(p_{12}-2s_{123}+13/2)\pi/2}{\mathbb{H}}(\{s\})\] \[\times(r_{2}^{2}r_{3}r_{4})^{3/2}\Big{(}\frac{r_{4}}{2}\Big{)}^{- 2s_{4}}\Big{(}\frac{r_{3}}{2}\Big{)}^{-2s_{3}}\Big{(}\frac{r_{2}}{2}\Big{)}^{- 2s_{12}}\Big{(}\frac{r_{2}}{r_{1}}\Big{)}^{p_{1}+1}\] \[\times\Gamma(p_{3}-2s_{4}+5/2)_{2}{\cal F}_{1}\begin{bmatrix}p_{ 1}-2s_{1}+5/2,p_{12}-2s_{123}+13/2\\ p_{1}-2s_{1}+7/2\end{bmatrix}-\frac{r_{2}}{r_{1}}\,\Big{]}\,. \tag{162}\]
\[{\cal I}^{(\rm TT)}_{+++}= -4\int_{s_{1},\cdots,s_{4}}\sin[\pi(s_{1}-s_{2})]\sin[\pi(s_{3}- s_{4})]{\mathbb{H}}(\{s\})e^{{\rm i}\pi(s_{1234}-p_{123}/2)}\] \[\times r_{2}^{3}r_{3}^{3}\Big{(}\frac{r_{2}}{r_{1}}\Big{)}^{p_{1}+ 1}\Big{(}\frac{r_{3}}{r_{4}}\Big{)}^{p_{3}+1}\Big{(}\frac{r_{2}}{2}\Big{)}^{-2s_ {12}}\Big{(}\frac{r_{3}}{2}\Big{)}^{-2s_{34}}\] \[\times{\cal F}_{2}\begin{bmatrix}p_{123}-2s_{1234}+9\Big{|}\!\! \begin{matrix}p_{1}-2s_{1}+5/2,p_{3}-2s_{4}+5/2\\ p_{1}-2s_{1}+7/2,p_{3}-2s_{4}+7/2\end{matrix}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
It then remains to finish the Mellin integrals. As indicated in the main text, we pick up all left poles given in (102). The residues of \(\mathbb{H}(\{s\})\) are given by
\[\underset{\{s\}=\{-n+\mathrm{i}\mathfrak{a}\widetilde{\nu}/2\}}{\mathrm{Res}}\mathbb{H}(\{s\})\] \[=\frac{1}{(4\pi)^{2}}\frac{(-1)^{n_{1}+n_{2}+n_{3}+n_{4}}}{n_{1}!n_{2}!n_{3}!n_{4}!}\Gamma\Big{[}-n_{1}+\mathrm{i}\mathfrak{a}_{1}\widetilde{\nu}_{1},-n_{2}+\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1},-n_{3}+\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2},-n_{4}+\mathrm{i}\mathfrak{a}_{4}\widetilde{\nu}_{2}\Big{]}. \tag{164}\]
Summing up all the left poles, we can finish the Mellin integral:
\[\mathcal{I}^{(\mathrm{FF})}_{+++} +\mathcal{I}^{(\mathrm{F})}_{++-}+\mathcal{I}^{(\mathrm{F})}_{-++}+\mathcal{I}_{+-+}\] \[=\sum_{\{n\},\{\mathfrak{a}\}}\frac{1}{(4\pi)^{2}}\Big{[}-e^{\mathrm{i}\pi(\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}+\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}-p_{123}/2)}+\mathrm{i}e^{\mathrm{i}\pi(\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}-p_{12}/2+p_{3}/2)}+\mathrm{i}e^{\mathrm{i}\pi(\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}+p_{1}/2-p_{23}/2)}+e^{\mathrm{i}\pi(p_{2}/2-p_{13}/2)}\Big{]}\] \[\times\frac{(-1)^{n_{1}+n_{2}+n_{3}+n_{4}}}{n_{1}!n_{2}!n_{3}!n_{4}!}(r_{1}r_{2}r_{3}r_{4})^{3/2}\Big{(}\frac{r_{1}}{2}\Big{)}^{2n_{1}-\mathrm{i}\mathfrak{a}_{1}\widetilde{\nu}_{1}}\Big{(}\frac{r_{2}}{2}\Big{)}^{2n_{2}-\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}}\Big{(}\frac{r_{3}}{2}\Big{)}^{2n_{3}-\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}}\Big{(}\frac{r_{4}}{2}\Big{)}^{2n_{4}-\mathrm{i}\mathfrak{a}_{4}\widetilde{\nu}_{2}}\] \[\times\Gamma\Big{[}p_{1}+2n_{1}-\mathrm{i}\mathfrak{a}_{1}\widetilde{\nu}_{1}+5/2,p_{2}+2n_{23}-\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}-\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}+4,p_{3}+2n_{4}-\mathrm{i}\mathfrak{a}_{4}\widetilde{\nu}_{2}+5/2\Big{]}\] \[\times\Gamma\Big{[}-n_{1}+\mathrm{i}\mathfrak{a}_{1}\widetilde{\nu}_{1},-n_{2}+\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1},-n_{3}+\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2},-n_{4}+\mathrm{i}\mathfrak{a}_{4}\widetilde{\nu}_{2}\Big{]}\] \[=\sum_{n_{3},\{\mathfrak{a}\}}\frac{1}{(4\pi)^{2}}\Big{[}-e^{\mathrm{i}\pi(\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}+\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}-p_{123}/2)}+\mathrm{i}e^{\mathrm{i}\pi(\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}-p_{12}/2+p_{3}/2)}+\mathrm{i}e^{\mathrm{i}\pi(\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}+p_{1}/2-p_{23}/2)}+e^{\mathrm{i}\pi(p_{2}/2-p_{13}/2)}\Big{]}\] \[\times\frac{(-1)^{n_{3}}}{n_{3}!}(r_{1}r_{2}r_{3}r_{4})^{3/2}\Big{(}\frac{r_{1}}{2}\Big{)}^{-\mathrm{i}\mathfrak{a}_{1}\widetilde{\nu}_{1}}\Big{(}\frac{r_{2}}{2}\Big{)}^{-\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}}\Big{(}\frac{r_{4}}{2}\Big{)}^{-\mathrm{i}\mathfrak{a}_{4}\widetilde{\nu}_{2}}\Big{(}\frac{r_{3}}{2}\Big{)}^{2n_{3}-\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}}\] \[\times\Gamma(-n_{3}+\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2})\mathbf{F}(p_{2}+2n_{3}-\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}-\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}+4,\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1},r_{2}^{2})\] \[=\!\sum_{\{\mathfrak{a}\}}\frac{-2^{p_{2}-1}}{\pi^{1/2}}\Big{[}-e^{\mathrm{i}\pi(\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}+\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}-p_{123}/2)}+\mathrm{i}e^{\mathrm{i}\pi(\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}-p_{12}/2+p_{3}/2)}+\mathrm{i}e^{\mathrm{i}\pi(\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}+p_{1}/2-p_{23}/2)}+e^{\mathrm{i}\pi(p_{2}/2-p_{13}/2)}\Big{]}\] \[\times\mathrm{csch}(\pi\mathfrak{a}_{2}\widetilde{\nu}_{1})\,\mathrm{csch}(\pi\mathfrak{a}_{3}\widetilde{\nu}_{2})(r_{1}r_{2}r_{3}r_{4})^{3/2}\Big{(}\frac{r_{1}}{2}\Big{)}^{-\mathrm{i}\mathfrak{a}_{1}\widetilde{\nu}_{1}}r_{2}^{-\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}}r_{3}^{-\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}}\Big{(}\frac{r_{4}}{2}\Big{)}^{-\mathrm{i}\mathfrak{a}_{4}\widetilde{\nu}_{2}}\] \[\times\mathbf{F}(p_{1}-\mathrm{i}\mathfrak{a}_{1}\widetilde{\nu}_{1}+5/2,\mathrm{i}\mathfrak{a}_{1}\widetilde{\nu}_{1},r_{1}^{2})\mathbf{F}(p_{3}-\mathrm{i}\mathfrak{a}_{4}\widetilde{\nu}_{2}+5/2,\mathrm{i}\mathfrak{a}_{4}\widetilde{\nu}_{2},r_{4}^{2})\] \[\times\mathcal{F}_{4}\left[\genfrac{.}{.}{0.0pt}{}{-\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}/2-\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}/2+2+p_{2}/2,\,-\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1}/2-\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}/2+5/2+p_{2}/2}{1-\mathrm{i}\mathfrak{a}_{2}\widetilde{\nu}_{1},1-\mathrm{i}\mathfrak{a}_{3}\widetilde{\nu}_{2}}\bigg{|}\,r_{2}^{2},r_{3}^{2}\right], \tag{165}\]
where
\[\mathbf{F}(a,b;z)\equiv\Gamma[a,b]\,{}_{2}F_{1}\left[\genfrac{.}{.}{0.0pt}{}{a/2,(1+a)/2}{1-b}\bigg{|}z\right]. \tag{166}\]
Similarly,
\[\mathcal{I}^{\rm(FT)}_{+++}+\mathcal{I}^{\rm(T)}_{-++}\] \[= \sum_{\{n\},\{\mathfrak{a}\}}\frac{-4{\rm i}}{(4\pi)^{2}}\sin[\pi({\rm i}\mathfrak{a}_{3}\widetilde{\nu}_{2}/2-{\rm i}\mathfrak{a}_{4}\widetilde{\nu}_{2}/2)]\sin[\pi({\rm i}\mathfrak{a}_{2}\widetilde{\nu}_{1}/2-p_{1}/2+3/4)]e^{-{\rm i}(p_{23}-{\rm i}(\mathfrak{a}\widetilde{\nu})_{234}+13/2)\pi/2}\] \[\times\frac{(-1)^{n_{1}+n_{2}+n_{3}+n_{4}}}{n_{1}!n_{2}!n_{3}!n_{4}!}(r_{1}r_{2}r_{3}^{2})^{3/2}\Big{(}\frac{r_{1}}{2}\Big{)}^{2n_{1}-{\rm i}\mathfrak{a}_{1}\widetilde{\nu}_{1}}\Big{(}\frac{r_{2}}{2}\Big{)}^{2n_{2}-{\rm i}\mathfrak{a}_{2}\widetilde{\nu}_{1}}\Big{(}\frac{r_{3}}{2}\Big{)}^{2n_{3}-{\rm i}\mathfrak{a}_{3}\widetilde{\nu}_{2}}\Big{(}\frac{r_{3}}{r_{4}}\Big{)}^{p_{3}+1}\] \[\times\Gamma(p_{1}+2n_{1}-{\rm i}\mathfrak{a}_{1}\widetilde{\nu}_{1}+5/2)\Gamma\Big{[}-n_{1}+{\rm i}\mathfrak{a}_{1}\widetilde{\nu}_{1},-n_{2}+{\rm i}\mathfrak{a}_{2}\widetilde{\nu}_{1},-n_{3}+{\rm i}\mathfrak{a}_{3}\widetilde{\nu}_{2},-n_{4}+{\rm i}\mathfrak{a}_{4}\widetilde{\nu}_{2}\Big{]}\] \[\times{}_{2}\mathcal{F}_{1}\left[\genfrac{.}{.}{0.0pt}{}{p_{3}+2n_{4}-{\rm i}\mathfrak{a}_{4}\widetilde{\nu}_{2}+5/2,\,p_{23}+2n_{234}-{\rm i}(\mathfrak{a}\widetilde{\nu})_{234}+13/2}{p_{3}+2n_{4}-{\rm i}\mathfrak{a}_{4}\widetilde{\nu}_{2}+7/2}\bigg{|}-\frac{r_{3}}{r_{4}}\right]\] \[= \sum_{m,n_{3},n_{4},\{\mathfrak{a}\}}\frac{-4{\rm i}}{(4\pi)^{2}}\sin[\pi({\rm i}\mathfrak{a}_{3}\widetilde{\nu}_{2}/2-{\rm i}\mathfrak{a}_{4}\widetilde{\nu}_{2}/2)]\sin[\pi({\rm i}\mathfrak{a}_{2}\widetilde{\nu}_{1}/2-p_{1}/2+3/4)]e^{-{\rm i}(p_{23}-{\rm i}(\mathfrak{a}\widetilde{\nu})_{234}+13/2)\pi/2}\] \[\times\frac{(-1)^{n_{3}+n_{4}}}{n_{3}!n_{4}!m!}(r_{1}r_{2}r_{3}^{2})^{3/2}\Big{(}\frac{r_{1}}{2}\Big{)}^{-{\rm i}\mathfrak{a}_{1}\widetilde{\nu}_{1}}\Big{(}\frac{r_{2}}{2}\Big{)}^{-{\rm i}\mathfrak{a}_{2}\widetilde{\nu}_{1}}\Big{(}\frac{r_{3}}{2}\Big{)}^{2n_{34}-{\rm i}\mathfrak{a}_{3}\widetilde{\nu}_{2}}\Big{(}-\frac{r_{3}}{r_{4}}\Big{)}^{m}\Big{(}\frac{r_{3}}{r_{4}}\Big{)}^{p_{3}+1}\] \[\times\frac{\Gamma[-n_{3}+{\rm i}\mathfrak{a}_{3}\widetilde{\nu}_{2},-n_{4}+{\rm i}\mathfrak{a}_{4}\widetilde{\nu}_{2}]}{m+p_{3}+2n_{4}-{\rm i}\mathfrak{a}_{4}\widetilde{\nu}_{2}+5/2}\mathbf{F}(p_{1}-{\rm i}\mathfrak{a}_{1}\widetilde{\nu}_{1}+5/2,{\rm i}\mathfrak{a}_{1}\widetilde{\nu}_{1},r_{1}^{2})\] \[\times\mathbf{F}(m+p_{23}+2n_{34}-{\rm i}(\mathfrak{a}\widetilde{\nu})_{234}+13/2,{\rm i}\mathfrak{a}_{2}\widetilde{\nu}_{1},r_{2}^{2}); \tag{167}\]
Here, in the last line, we have used the shorthand notation \((\mathfrak{a}\widetilde{\nu})_{234}\equiv\mathfrak{a}_{2}\widetilde{\nu}_{1}+\mathfrak{a}_{3}\widetilde{\nu}_{2}+\mathfrak{a}_{4}\widetilde{\nu}_{2}\).
\[\mathcal{I}^{\rm(TF)}_{++}+\mathcal{I}^{\rm(T)}_{++-}\\ = \sum_{m,n_{1},n_{2},\{\mathfrak{a}\}}\frac{-4{\rm i}}{(4\pi)^{2}}\sin[\pi({\rm i}\mathfrak{a}_{2}\widetilde{\nu}_{1}/2-{\rm i}\mathfrak{a}_{1}\widetilde{\nu}_{1}/2)]\sin[\pi({\rm i}\mathfrak{a}_{3}\widetilde{\nu}_{2}/2-p_{3}/2+3/4)]e^{-{\rm i}(p_{12}-{\rm i}(\mathfrak{a}\widetilde{\nu})_{123}+13/2)\pi/2}\\ \times\frac{(-1)^{n_{1}+n_{2}}}{n_{1}!n_{2}!m!}(r_{2}^{2}r_{3}r_{4})^{3/2}\Big{(}\frac{r_{4}}{2}\Big{)}^{-{\rm i}\mathfrak{a}_{4}\widetilde{\nu}_{2}}\Big{(}\frac{r_{3}}{2}\Big{)}^{-{\rm i}\mathfrak{a}_{3}\widetilde{\nu}_{2}}\Big{(}\frac{r_{2}}{2}\Big{)}^{2n_{12}-{\rm i}\mathfrak{a}_{12}\widetilde{\nu}_{1}}\Big{(}-\frac{r_{2}}{r_{1}}\Big{)}^{m}\Big{(}\frac{r_{2}}{r_{1}}\Big{)}^{p_{1}+1}\\ \times\frac{\Gamma[-n_{2}+{\rm i}\mathfrak{a}_{2}\widetilde{\nu}_{1},-n_{1}+{\rm i}\mathfrak{a}_{1}\widetilde{\nu}_{1}]}{m+p_{1}+2n_{1}-{\rm i}\mathfrak{a}_{1}\widetilde{\nu}_{1}+5/2}\mathbf{F}(p_{3}-{\rm i}\mathfrak{a}_{4}\widetilde{\nu}_{2}+5/2,{\rm i}\mathfrak{a}_{4}\widetilde{\nu}_{2},r_{4}^{2})\\ \times\mathbf{F}(m+p_{12}+2n_{12}-{\rm i}(\mathfrak{a}\widetilde{\nu})_{123}+13/2,{\rm i}\mathfrak{a}_{3}\widetilde{\nu}_{2},r_{3}^{2}). \tag{168}\]
Here, in the last line, we have used the shorthand notation \((\mathfrak{a}\widetilde{\nu})_{123}\equiv\mathfrak{a}_{1}\widetilde{\nu}_{1}+\mathfrak{a}_{2}\widetilde{\nu}_{1}+\mathfrak{a}_{3}\widetilde{\nu}_{2}\).
\[\mathcal{I}^{\rm(TT)}_{++}= \sum_{\{n\},\{\mathfrak{a}\}}\frac{-4}{(4\pi)^{2}}\sin[\pi({ \rm i}\mathfrak{a}_{1}\widetilde{\nu}_{1}/2-{\rm i}\mathfrak{a}_{2}\widetilde{\nu}_{1}/2)] \sin[\pi({\rm i}\mathfrak{a}_{3}\widetilde{\nu}_{2}/2-{\rm i}\mathfrak{a}_{4} \widetilde{\nu}_{2}/2)]e^{{\rm i}\pi({\rm i}(\mathfrak{a}\widetilde{\nu})_{1234} /2-p_{123}/2)}\] \[\times\frac{(-1)^{n_{1}+n_{2}+n_{3}+n_{4}}}{n_{1}!n_{2}!n_{3}!n_{ 4}!}r_{2}^{3}r_{3}^{3}\Big{(}\frac{r_{2}}{r_{1}}\Big{)}^{p_{1}+1}\Big{(} \frac{r_{3}}{r_{4}}\Big{)}^{p_{3}+1}\Big{(}\frac{r_{2}}{2}\Big{)}^{2n_{12}-{ \rm i}(\mathfrak{a}\widetilde{\nu})_{12}}\Big{(}\frac{r_{3}}{2}\Big{)}^{2n_{34}-{ \rm i}(\mathfrak{a}\widetilde{\nu})_{34}}\] \[\times\Gamma\Big{[}-n_{1}+{\rm i}\mathfrak{a}_{1}\widetilde{\nu} _{1},-n_{2}+{\rm i}\mathfrak{a}_{2}\widetilde{\nu}_{1},-n_{3}+{\rm i} \mathfrak{a}_{3}\widetilde{\nu}_{2},-n_{4}+{\rm i}\mathfrak{a}_{4}\widetilde{ \nu}_{2}\Big{]}\] \[\times\mathcal{F}_{2}\begin{bmatrix}p_{123}+2n_{1234}-{\rm i}( \mathfrak{a}\widetilde{\nu})_{1234}+9\Big{|}\!\begin{matrix}p_{1}+2n_{1}-{\rm i }\mathfrak{a}_{1}\widetilde{\nu}_{1}+\frac{5}{2},p_{3}+2n |
2306.00207 | Birational geometry of Calabi-Yau pairs and 3-dimensional Cremona
transformations | We develop a framework that allows one to describe the birational geometry of
Calabi-Yau pairs $(X,D)$. After establishing some general results for
Calabi-Yau pairs $(X,D)$ with mild singularities, we focus on the special case
when $X=\mathbb{P}^3$ and $D\subset \mathbb{P}^3$ is a quartic surface. We
investigate how the appearance of increasingly worse singularities on $D$
enriches the birational geometry of the pair $(\mathbb{P}^3, D)$. | Carolina Araujo, Alessio Corti, Alex Massarenti | 2023-05-31T21:57:35Z | http://arxiv.org/abs/2306.00207v2 | # Birational geometry of Calabi-Yau pairs and \(3\)-dimensional Cremona transformations
###### Abstract.
In this paper we develop a framework that allows one to describe the birational geometry of Calabi-Yau pairs \((X,D)\). After establishing some general results for Calabi-Yau pairs \((X,D)\) with mild singularities, we focus on the special case when \(X=\mathbb{P}^{3}\) and \(D\subset\mathbb{P}^{3}\) is a quartic surface. We investigate how the appearance of increasingly worse singularities in \(D\) enriches the birational geometry of the pair \((\mathbb{P}^{3},D)\), and leads to interesting subgroups of the Cremona group of \(\mathbb{P}^{3}\).
Key words and phrases: Sarkisov program, Calabi-Yau pairs, Cremona group. 2020 Mathematics Subject Classification: 14E30, 14E05, 14E07.
###### Contents
* 1 Introduction
* 2 The Sarkisov program for Mf CY pairs
* 3 Proof of Theorem A
* 4 Extremal contractions
* 5 Proof of Theorem B
* 6 Proof of Theorem C
## 1. Introduction
### Overview
In [1, 2], Oguiso addressed the following question, attributed to Gizatullin:
_Which automorphisms of a smooth quartic surface \(D\subset\mathbb{P}^{3}\) are induced by Cremona transformations of \(\mathbb{P}^{3}\)?_
He produced several interesting examples of smooth quartic surfaces in \(\mathbb{P}^{3}\), among which:
* A smooth quartic surface \(D\subset\mathbb{P}^{3}\) with \(\operatorname{Aut}(D)\cong\mathbb{Z}\), and such that no nontrivial automorphism of \(D\) is induced by a Cremona transformation of \(\mathbb{P}^{3}\), see [1, Theorem 1.2] and [1, Theorem 1.8].
* A smooth quartic surface \(D\subset\mathbb{P}^{3}\) with \(\operatorname{Aut}(D)\cong\mathbb{Z}_{2}*\mathbb{Z}_{2}*\mathbb{Z}_{2}\), and such that every automorphism of \(D\) is induced by a Cremona transformation of \(\mathbb{P}^{3}\), see [1, Theorem 1.7].
More recently, Paiva and Quedo produced examples of smooth quartic surfaces \(D\subset\mathbb{P}^{3}\) with \(\operatorname{Aut}(D)\cong\mathbb{Z}_{2}*\mathbb{Z}_{2}\), and such that no nontrivial automorphism of \(D\) is induced by a Cremona transformation of \(\mathbb{P}^{3}\), see [1, Theorem 17].
The pair \((\mathbb{P}^{3},D)\), where \(D\subset\mathbb{P}^{3}\) is a smooth quartic surface, is an example of a _Calabi-Yau (CY) pair_, that is, a pair \((X,D)\), consisting of a normal projective variety \(X\) and an effective Weil divisor \(D\) on \(X\) such that \(K_{X}+D\sim 0\). Oguiso's results can be interpreted as statements about the _birational geometry of the CY pair_\((\mathbb{P}^{3},D)\), which is the theme of this paper.
Given a CY pair \((X,D)\), there is a rational volume form \(\omega\) on \(X\), unique up to scaling by a nonzero constant, such that \(D+\operatorname{div}_{X}\omega=0\). Our goal is to understand birational self-maps of \(X\) that preserve the volume form \(\omega\), up to scaling. In particular, we are interested in the structure of the group \(\operatorname{Bir}(X,D)\) of volume preserving birational self-maps of the pair \((X,D)\).
A mildly singular CY pair \((X,D)\) together with a Mori fibre space structure \(X\to Z\) is called a _Mori fibered (Mf) CY pair_ (see Definition 1.5). The _pliability_ of a Mf CY pair \((X,D)\) is the set \(\mathcal{P}(X,D)\) of equivalence classes of Mf CY pairs that admit a volume preserving birational map to \((X,D)\) (see Definition 1.6 for the precise notion of equivalence in this setting). Our main theorems not only describe \(\operatorname{Bir}(X,D)\), but also determine the pliability \(\mathcal{P}(X,D)\) of the CY pairs in question.
Theorem A states that, if \(X\) is a Fano variety with \(\rho(X)=1\), \((X,D)\) is a CY pair with terminal singularities, and the class group of \(D\) is generated by the restriction of a divisor on \(X\), then \(\mathcal{P}(X,D)=\{(X,D)\to\operatorname{Spec}\mathbb{C}\}\) is a set with one element, and \(\operatorname{Bir}(X,D)=\operatorname{Aut}(X,D)\).
In the setting of Theorem B, let \(D\subset\mathbb{P}^{3}\) be a quartic surface with a single \(A_{1}\)-singularity \(z\in D\), and suppose that the class group of \(D\) is generated by \(\mathcal{O}_{D}(1)\). The blow-up \(X\to\mathbb{P}^{3}\) of \(z\in\mathbb{P}^{3}\), together with the strict transform \(D_{X}\subset X\) of \(D\), and the \(\mathbb{P}^{1}\)-bundle \(X\to\mathbb{P}^{2}\) induced by the projection from \(z\), is a Mf CY pair. Moreover, the birational morphism \((X,D_{X})\to(\mathbb{P}^{3},D)\) is volume preserving. Theorem B states that the pliability of \((\mathbb{P}^{3},D)\) is the set consisting of the two elements \((\mathbb{P}^{3},D)\to\operatorname{Spec}\mathbb{C}\) and \((X,D_{X})\to\mathbb{P}^{2}\).
Our Theorem C describes the pliability of the pair \((\mathbb{P}^{3},D)\) when \(D\subset\mathbb{P}^{3}\) is a quartic surface with a single \(A_{2}\)-singularity, and the class group of \(D\) is generated by \(\mathcal{O}_{D}(1)\). We were surprised to discover just how large this pliability is.
Throughout the paper we work over the field \(\mathbb{C}\) of complex numbers or, more generally, over any algebraically closed field of characteristic zero.
In the remainder of the introduction, we define all the relevant notions regarding CY pairs and state precisely our main results, Theorems A, B and C. In Section 2, we review the Sarkisov program for Mf CY pairs. In Sections 3, 5 and 6, we prove Theorems A, B and C, respectively.
### Calabi-Yau pairs
**Definition 1.1**.: A _Calabi-Yau (CY) pair_ is a pair \((X,D)\) consisting of a normal projective variety \(X\) and an effective integral Weil divisor \(D\) on \(X\) such that \(K_{X}+D\) is linearly equivalent to \(0\). There exists a top degree rational differential form \(\omega=\omega_{X,D}\) on \(X\), unique up to multiplication by a nonzero constant, such that \(D+\operatorname{div}_{X}\omega=0\). With a slight abuse of language, we call \(\omega\) the _volume form_ of the CY pair \((X,D)\).
Let \((X,D_{X})\) and \((Y,D_{Y})\) be CY pairs, with associated volume forms \(\omega_{X,D_{X}}\) and \(\omega_{Y,D_{Y}}\). A birational map \(\varphi\colon X\dashrightarrow Y\) is _volume preserving_ if there exists a nonzero constant \(\lambda\in\mathbb{C}^{\times}\) such that
\[\varphi^{*}(\omega_{Y,D_{Y}})=\lambda\omega_{X,D_{X}}.\]
We abuse language and say that \(\varphi\) is a _birational map of CY pairs_ to mean that it is a volume preserving map.
We denote by \(\operatorname{Bir}(X,D)\) the group of volume preserving birational self-maps of the CY pair \((X,D)\).
Volume preserving birational maps are called _crepant birational_ in [11].
Next we introduce several natural classes of singularities of pairs.
**Definition 1.2**.: A pair \((X,D)\) has _terminal singularities_ if for all \(z\in X\):
1. if \(\operatorname{codim}_{z}X\leq 2\), then the pair \((X,D)\) is smooth at \(z\);
2. if \(\operatorname{codim}_{z}X>2\), then for all divisorial valuations \(E\) of \(X\) with \(\operatorname{z}_{E}X=z\),\({}^{3}\) \(a(E,K_{X}+D)>0\).

A pair \((X,D)\) has _canonical singularities_ if for all divisorial valuations \(E\) with small centre on \(X\), \(a(E,K_{X}+D)\geq 0\). Following common usage, we say that a pair "is" terminal (respectively canonical) if it has terminal (respectively canonical) singularities.
Footnote 3: We denote by \(\operatorname{z}_{E}X\in X\) the _centre_ of \(E\) on \(X\) — which, for us, is a scheme-theoretic point of \(X\).
**Remark 1.3**.: (1) If the pair \((X,D)\) is terminal (respectively canonical) then both \(X\) and \(D\) are terminal (respectively canonical). In particular, \(D\) is normal.
(2) If \(X\) is smooth and \((X,D)\) is terminal, then for all \(z\in X\) with \(\operatorname{codim}_{z}X>2\), \(\operatorname{mult}_{z}(D)<-1+\operatorname{codim}_{z}X\).
3. In particular, if \(X\) is a smooth 3-fold, then \((X,D)\) is terminal if and only if \(D\) is smooth.
4. If \(X\) is smooth and \(z\in D\) is an isolated singular point of \(D\) of multiplicity \(\operatorname{mult}_{z}D<-1+\dim X\) with smooth projectivized tangent cone, then \((X,D)\) is terminal.
**Definition 1.4**.: We say that a pair \((X,D)\) is _(t, c)_ (respectively _(t, lc)_) if \(X\) has terminal singularities and the pair \((X,D)\) has canonical (respectively log canonical) singularities. We say that a pair \((X,D)\) is \(\mathbb{Q}\)_-factorial_ if \(X\) is \(\mathbb{Q}\)-factorial.
**Definition 1.5**.: A _Mori fibered (Mf) CY pair_ is a \(\mathbb{Q}\)-factorial _(t, lc)_ CY pair \((X,D)\) together with a Mori fiber space structure on \(X\), i.e., a morphism \(f\colon X\to S\) such that \(f_{*}\mathcal{O}_{X}=\mathcal{O}_{S}\), \(-K_{X}\) is \(f\)-ample, and \(\rho(X)-\rho(S)=1\).
**Definition 1.6**.: Let \((X,D_{X})\to S_{X}\) and \((Y,D_{Y})\to S_{Y}\) be Mf CY pairs. A volume preserving birational map \(f\colon(X,D_{X})\dashrightarrow(Y,D_{Y})\) is _square_ if there is a birational map \(g\colon S_{X}\dashrightarrow S_{Y}\) that, together with \(f\) and the two fibrations, forms a commutative square, and the induced birational map on generic fibers \(f_{L}\colon X_{L}\dashrightarrow Y_{L}\) is biregular. In this case, we say that the Mf CY pairs \((X,D_{X})\to S_{X}\) and \((Y,D_{Y})\to S_{Y}\) are _square equivalent_.
**Remark 1.7**.: For Mf CY pairs \((X,D_{X})\to S_{X}\) and \((Y,D_{Y})\to S_{Y}\) to be square equivalent, it is necessary that \(\dim S_{X}=\dim S_{Y}\).
If \(S_{X}=S_{Y}=\operatorname{Spec}\mathbb{C}\), then the Mf CY pairs are square equivalent if and only if \((X,D_{X})\) and \((Y,D_{Y})\) are isomorphic as CY pairs.
**Definition 1.8**.: The _pliability_ of the Mf CY pair \((X,D_{X})\) is the set
\[\mathcal{P}(X,D_{X})\ =\ \frac{\left\{\text{Mf CY pairs }(Y,D_{Y})\to S_{Y} \bigm{|}\operatorname{Bir}\big{(}(X,D_{X}),(Y,D_{Y})\big{)}\neq\emptyset \right\}}{\text{square equivalence}}.\]
We say that the Mf CY pair \((X,D_{X})\to S_{X}\) is _birationally rigid_ if \(\mathcal{P}(X,D_{X})\) consists of a single element (necessarily the element \((X,D_{X})\to S_{X}\)).
### Main results
**Theorem A**.: _Let \((X,D)\to\operatorname{Spec}\mathbb{C}\) be a Mf CY pair (so that \(X\) is a \(\mathbb{Q}\)-factorial terminal Fano variety with \(\rho=1\)) such that_
1. \((X,D)\) _is terminal, and_
2. \(\operatorname{Cl}D=\mathbb{Z}\cdot\mathcal{O}_{D}(A)\)_, where_ \(A\) _is a divisor on_ \(X\)_._
_Then \((X,D)\to\operatorname{Spec}\mathbb{C}\) is birationally rigid, and \(\operatorname{Bir}(X,D)=\operatorname{Aut}(X,D)\)._
**Remark 1.9**.:
1. Let \((X,D_{X})\) and \((Y,D_{Y})\) be _(t, c)_ CY pairs. A birational map \(\varphi\colon X\dashrightarrow Y\) is volume preserving if and only if \(\varphi\) restricts to a birational map \(D_{X}\dashrightarrow D_{Y}\)(Proposition 2.6). Thus, in the statement of Theorem A, we can take \(\operatorname{Bir}(X,D)\) naively to mean the group of birational maps \(\varphi\colon X\dashrightarrow X\) that stabilize \(D\). Similar considerations apply to the statement of Theorem B below.
2. Assumption (i) is needed for the statement to hold. Theorems B and C address the cases when \(X=\mathbb{P}^{3}\) and \(D\) is a generic quartic with one singular point of type \(A_{1}\) and \(A_{2}\), respectively. In both cases the pair \((X,D)\) is canonical but not terminal, and \(\operatorname{Bir}(X,D)\supset\operatorname{Aut}(X,D)\).
3. Assumption (ii) is also needed for the statement to hold. This is illustrated for instance by Oguiso's example (2) mentioned above.
4. Assumption (ii) is meaningful even when \(\dim X\geq 4\). If \(X=\mathbb{P}^{4}\) and \(D\) is a quintic 3-fold with ordinary quadratic singularities, then the pair \((X,D)\) is terminal but the restriction map \(r\colon\operatorname{Cl}X\to\operatorname{Cl}D\) is not necessarily an isomorphism. For instance, if \(D\) contains a plane, then in general \(D\) has 16 ordinary quadratic singularities and \(\operatorname{Cl}D\) has rank 2.
**Theorem B**.: _Let \(D\subset\mathbb{P}^{3}\) be a quartic surface. Assume the following:_
1. \(D\) _is smooth apart from a single_ \(A_{1}\)_-singularity_ \(z\in D\)_,_
2. \(\operatorname{Cl}D=\mathbb{Z}\cdot\mathcal{O}_{D}(1)\)_._
_In what follows denote by \(f\colon X\to\mathbb{P}^{3}\) the blow-up of \(z\in\mathbb{P}^{3}\), \(D_{X}\subset X\) the strict transform of \(D\), and \(\pi\colon X\to\mathbb{P}^{2}\) the \(\mathbb{P}^{1}\)-bundle induced by the projection from \(z\)._
_Then the following holds:_
1. _The pliability of the pair_ \((\mathbb{P}^{3},D)\) _is the set with two elements_ \[(\mathbb{P}^{3},D)\to\operatorname{Spec}\mathbb{C}\text{ and }\pi\colon(X,D_{X})\to\mathbb{P}^{2}.\]
2. _The restriction homomorphism_ \(r\colon\operatorname{Bir}(\mathbb{P}^{3},D)\ \to\ \operatorname{Bir}(D)\) _induces a split exact sequence of groups:_ \[1\to\mathbb{G}\to\operatorname{Bir}(\mathbb{P}^{3},D)\to\operatorname{Bir}D\to 1\] _where_ \(\mathbb{G}\) _is the twist of_ \(\mathbb{G}_{m}\) _corresponding to the quadratic extension_ \(\mathbb{C}(x,y)\subset\mathbb{C}(D)\)_. More precisely, the morphism_ \(\pi\colon D_{X}\to\mathbb{P}^{2}\) _is finite of degree 2 and identifies the function field_ \(\mathbb{C}(D_{X})\) _with a quadratic extension_ \(\mathbb{C}(x,y)(\sqrt{a})\) _where_ \(a=a_{6}(x,y)\in\mathbb{C}[x,y]\) _is a sextic polynomial. The group_ \(\mathbb{G}\) _is the form of_ \(\mathbb{G}_{m}\) _over_ \(\mathbb{C}(x,y)\) _whose_ \(\mathbb{C}(x,y)\)_-points are the solutions of the Pell equation_ \[U^{2}-aV^{2}=1.\]
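As an aside on the statement above: the group structure on \(\mathbb{G}\) is simply multiplication of \(U+V\sqrt{a}\) in the quadratic extension, since the norm form \(U^{2}-aV^{2}\) is multiplicative and hence the product of two solutions of the Pell equation is again a solution. A one-line `sympy` sanity check of the product formula, with all symbols generic:

```python
import sympy as sp

U1, V1, U2, V2, a = sp.symbols('U1 V1 U2 V2 a')

# (U1 + V1*sqrt(a)) * (U2 + V2*sqrt(a)) = U3 + V3*sqrt(a), with:
U3 = U1 * U2 + a * V1 * V2
V3 = U1 * V2 + U2 * V1

# The norm U^2 - a*V^2 is multiplicative:
print(sp.factor(U3**2 - a * V3**2))
# (U1**2 - a*V1**2)*(U2**2 - a*V2**2)
```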
**Remark 1.10**.: Assumptions (i) and (ii) are satisfied for a very general quartic surface \(D\) with an \(A_{1}\)-singularity. Indeed, consider the moduli space of lattice-polarized K3 surfaces containing the rank-2 lattice \(N=\mathbb{Z}^{2}\) with quadratic form
\[\begin{pmatrix}-2&0\\ 0&4\end{pmatrix}\]
in their Picard group. A very general element of this moduli space is the minimal resolution of a quartic surface \(D\) satisfying assumptions (i) and (ii).
**Theorem C**.: _Let \(D\subset\mathbb{P}^{3}\) be a quartic surface. Assume the following:_
1. \(D\) _is smooth apart from a single_ \(A_{2}\)_-singularity_ \(z\in D\)_,_
2. \(\operatorname{Cl}D=\mathbb{Z}\cdot\mathcal{O}_{D}(1)\)_._
_Then the pliability of the pair \((\mathbb{P}^{3},D)\) is the (infinite) set with elements: object \(1\), \(2\), \(3^{a}\), \(3^{b}\), the \(3\)-parameter family of objects \(4\), and the \(6\)-parameter family of objects \(5^{a}\) constructed and displayed in Table 1, modulo isomorphism. The paragraph following Remark 1.11 explains how to read Table 1._
**Remark 1.11**.: Assumptions (i) and (ii) are satisfied for a very general quartic surface \(D\) with an \(A_{2}\)-singularity. Indeed, consider the moduli space of lattice-polarized K3 surfaces containing the rank-3 lattice \(N=\mathbb{Z}^{3}\) with quadratic form
\[\begin{pmatrix}-2&1&0\\ 1&-2&0\\ 0&0&4\end{pmatrix}\]
in their Picard group. A very general element of this moduli space is the minimal resolution of a quartic surface \(D\) satisfying assumptions (i) and (ii).
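As a quick plausibility check on the lattices in Remarks 1.10 and 1.11, one can verify that both quadratic forms are of hyperbolic signature \((1,\rho-1)\), as the Picard lattice of a projective K3 surface must be. A minimal `sympy` sketch:

```python
import sympy as sp

# Rank-2 lattice of Remark 1.10 and rank-3 lattice of Remark 1.11
N2 = sp.Matrix([[-2, 0], [0, 4]])
N3 = sp.Matrix([[-2, 1, 0], [1, -2, 0], [0, 0, 4]])

for N in (N2, N3):
    evs = [ev for ev, mult in N.eigenvals().items() for _ in range(mult)]
    signature = (sum(1 for ev in evs if ev > 0), sum(1 for ev in evs if ev < 0))
    print(N.det(), signature)
# -8 (1, 1)
# 12 (1, 2)
```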
**How to read Table 1**. The rows of Table 1 display pairs \((X^{\dagger},D^{\dagger})\) birational to the pair \((\mathbb{P}^{3},D)\) of Theorem C.
The first row of Table 1 displays the pair \((\mathbb{P}^{3},D)\) itself. We have chosen homogeneous coordinates \(x_{0},\ldots,x_{3}\) on \(\mathbb{P}^{3}\) such that the singular point \(z\in D\) is the point \(z=[0:0:0:1]\), and the tangent cone to \(D\) at \(z\) is \((x_{0}x_{1}=0)\subset\mathbb{P}^{3}\). In these coordinates, the equation of \(D\) is written as
\[D=\Big{(}x_{0}x_{1}x_{3}^{2}+Bx_{3}+C=0\Big{)}\subset\mathbb{P}^{3},\]
where \(B=B_{3}(x_{0},x_{1},x_{2})\) and \(C=C_{4}(x_{0},x_{1},x_{2})\) are homogeneous forms of degree \(3\) and \(4\). The information displayed on the first row is self-explanatory: the second column states that the pair \((X^{\dagger},D^{\dagger})\) is contained in the ambient \(\mathbb{P}^{3}\), the third column names homogeneous coordinates on \(\mathbb{P}^{3}\) all of weight one, the fourth column displays the equation of \(X^{\dagger}\subset\mathbb{P}^{3}\) -- the zero "equation", since \(X^{\dagger}=\mathbb{P}^{3}\) in this case -- and the last column displays the equation of \(D^{\dagger}\) that we just explained.
For all integers \(k\geq 0\), we denote by \(\mathbb{F}_{k}^{3}\to\mathbb{P}^{2}\) the \(\mathbb{P}^{1}\)-bundle \(\mathbb{P}\big{(}\mathcal{O}\oplus\mathcal{O}(k)\big{)}\) over \(\mathbb{P}^{2}\) with homogeneous coordinates and weights
\[\begin{array}{ccccc}x_{0}&x_{1}&x_{2}&x_{3}&x\\ \hline 1&1&1&0&-k\\ 0&0&0&1&1\end{array}\]
The second row of the table displays object \(2\), that is, the pair \((\mathbb{F}_{1}^{3},D^{\dagger})\) where the equation of \(D^{\dagger}\) is as shown in the last column.
In the row that displays object \(4\), \(L=L(x_{0},x_{1},x_{2})\) is a homogeneous linear form; in the rows that display objects \(5^{a}\) and \(5^{b}\), \(Q=Q(x_{0},x_{1},x_{2})\) is a homogeneous quadratic form.
**Table 1.**

| Object | \(X^{\dagger}\subseteq\) Ambient | Ambient coords. & wts. | Eqn. of \(X^{\dagger}\) in Ambient | Eqn. of \(D^{\dagger}\) in \(X^{\dagger}\) |
| --- | --- | --- | --- | --- |
| \(1\) | \(X^{\dagger}=\mathbb{P}^{3}\) | \(x_{0},x_{1},x_{2},x_{3}\), all of weight \(1\) | \(0\) | \(x_{0}x_{1}x_{3}^{2}+Bx_{3}+C\) |
| \(2\) | \(X^{\dagger}=\mathbb{F}_{1}^{3}\) | \(x_{0},x_{1},x_{2},x_{3},x\) of weights \((1,1,1,0,-1)\) and \((0,0,0,1,1)\) | \(0\) | \(x_{0}x_{1}x_{3}^{2}+Bx_{3}x+Cx^{2}\) |
| \(2^{a}\) | \(X^{\dagger}=\mathbb{F}_{2}^{3}\) | \(x_{0},x_{1},x_{2},x_{3},x\) of weights \((1,1,1,0,-2)\) and \((0,0,0,1,1)\) | \(0\) | \(x_{0}x_{3}^{2}+Bx_{3}x+x_{1}Cx^{2}\) |
| \(2^{b}\) | \(X^{\dagger}=\mathbb{F}_{2}^{3}\) | \(x_{0},x_{1},x_{2},x_{3},x\) of weights \((1,1,1,0,-2)\) and \((0,0,0,1,1)\) | \(0\) | \(x_{1}x_{3}^{2}+Bx_{3}x+x_{0}Cx^{2}\) |
| \(3^{a}\) | \(X^{\dagger}=\mathbb{P}(1^{3},2)\) | \(x_{0},x_{1},x_{2},y\) of weights \((1,1,1,2)\) | \(0\) | \(x_{0}y^{2}+By+x_{1}C\) |
| \(3^{b}\) | \(X^{\dagger}=\mathbb{P}(1^{3},2)\) | \(x_{0},x_{1},x_{2},y\) of weights \((1,1,1,2)\) | \(0\) | \(x_{1}y^{2}+By+x_{0}C\) |
| \(4\) | \(X^{\dagger}\subset\mathbb{P}(1^{3},2^{2})\) | \(x_{0},x_{1},x_{2},y_{0},y_{1}\) of weights \((1,1,1,2,2)\) | \(y_{0}y_{1}+C-L(x_{0}y_{1}-x_{1}y_{0}-B)\) | \(x_{0}y_{1}-x_{1}y_{0}-B\) |
| \(5^{a}\) | \(X^{\dagger}\subset\mathbb{P}(1^{4},2)\) | \(x_{0},x_{1},x_{2},x_{3},y\) of weights \((1,1,1,1,2)\) | \(y(y+Q)-C+x_{3}\big{(}(x_{0}+x_{1})y+x_{1}Q+B\big{)}\) | \(y+x_{1}x_{3}\) |
| \(5^{b}\) | \(X^{\dagger}\subset\mathbb{P}(1^{4},2)\) | \(x_{0},x_{1},x_{2},x_{3},y\) of weights \((1,1,1,1,2)\) | \(y(y-Q)-C+x_{3}\big{(}(x_{0}+x_{1})y-x_{0}Q+B\big{)}\) | \(y+x_{0}x_{3}\) |
**Remark 1.12**.: At the end of Section 2, we exhibit explicit volume preserving birational maps between the Mf CY pairs in Table 1. These maps are constructed as Sarkisov links, and this is how they naturally appear in the proof of Theorem C. In particular, our constructions show that object \(2\) is square equivalent to objects \(2^{a}\) and \(2^{b}\). Moreover, in Example 2.17 we show that families \(5^{a}\) and \(5^{b}\) are isomorphic. This is why objects \(2^{a}\) and \(2^{b}\), and the family \(5^{b}\), appear in Table 1 while they are omitted in the statement of Theorem C.
**Remark 1.13**.: In order to prove Theorem B and Theorem C, we consider a Mf CY pair \((Y,D_{Y})/T\), and a volume preserving birational map \(\varphi\colon(\mathbb{P}^{3},D)\dashrightarrow(Y,D_{Y})\) that is not biregular. The proofs proceed by studying explicitly the links of a Sarkisov factorization of \(\varphi\). To control these links, and the divisorial contractions, flips, flops and antiflips that constitute them, we need some explicit classification results for divisorial contractions and analytic neighbourhoods of curves. We develop this material further than is strictly needed for the proof of Theorem B, to the extent required in the proof of Theorem C. These results, contained in Section 4, are of independent interest; they represent the first steps towards developing a technology for working explicitly with the volume preserving birational maps of \(3\)-fold Mf CY pairs.
**Acknowledgements**. Carolina Araujo was partially supported by grants from CNPq, Faperj and CAPES/COFECUB. Part of this work was developed during the authors' visit to ICTP, funded by Carolina Araujo's ICTP Simons Associateship. We thank ICTP for the great working conditions, and Simons Foundation for the financial support.
Alessio Corti was partially supported by EPSRC Programme Grant EP/N03189X/1 _Classification, computation and construction: new problems in geometry_. This research was started during a visit of Alessio Corti to the IMPA funded by CNPq Visiting Researcher grant.
Alex Massarenti is a member of the Gruppo Nazionale per le Strutture Algebriche, Geometriche e le loro Applicazioni of the Istituto Nazionale di Alta Matematica F. Severi (GNSAGA-INDAM).
## 2. The Sarkisov program for Mf CY pairs
In this section we review the factorization theorem for volume preserving birational maps between Mf CY pairs established in [1]. This is the main tool in the proof of our results. We start by recalling a valuative interpretation of the volume preserving condition.
**Proposition 2.1** ([1, Remark 1.7]).: _Let \((X,D_{X})\) and \((Y,D_{Y})\) be CY pairs, and \(f\colon X\dashrightarrow Y\) an arbitrary birational map. The following conditions are equivalent:_
1. _The map_ \(f\colon(X,D_{X})\dashrightarrow(Y,D_{Y})\) _is volume preserving._
2. _For all geometric valuations_ \(E\) _with center on both_ \(X\) _and_ \(Y\)_, the discrepancies of_ \(E\) _with respect to the pairs_ \((X,D_{X})\) _and_ \((Y,D_{Y})\) _are equal:_ \(a(E,K_{X}+D_{X})=a(E,K_{Y}+D_{Y})\)_._
3. _For some (equivalently, for every) common resolution \(p\colon W\to X\), \(q\colon W\to Y\) of \(f\) (so that \(q=f\circ p\)), one has_ \[p^{*}(K_{X}+D_{X})=q^{*}(K_{Y}+D_{Y}).\]
**2.2**.: A _Mori divisorial contraction_ is a divisorial contraction \(f\colon Z\to X\) from a \(\mathbb{Q}\)-factorial terminal variety \(Z\), associated to an extremal ray \(R\subset\overline{\operatorname{NE}}(Z)\) such that \(K_{Z}\cdot R<0\). In particular, \(X\) also has \(\mathbb{Q}\)-factorial terminal singularities.
If \((Z,D_{Z})\) and \((X,D_{X})\) are (t, lc) CY pairs, then a Mori divisorial contraction \(f\colon Z\to X\) is volume preserving as a map of CY pairs if and only if \(K_{Z}+D_{Z}=f^{*}(K_{X}+D_{X})\), in the sense of Proposition 2.1(3). In this case, we have \(D_{X}=f_{*}(D_{Z})\).
A _Mori flip_ is a flip \(\varphi\colon Z\dashrightarrow Z^{\prime}\) from a \(\mathbb{Q}\)-factorial terminal variety \(Z\), associated to an extremal ray \(R\subset\overline{\operatorname{NE}}(Z)\) such that \(K_{Z}\cdot R<0\). In particular, \(Z^{\prime}\) also has \(\mathbb{Q}\)-factorial terminal singularities. An _antiflip_ is the inverse of a Mori flip. A _Mori flop_ is a flop \(\varphi\colon Z\dashrightarrow Z^{\prime}\) between \(\mathbb{Q}\)-factorial terminal varieties, associated to an extremal ray \(R\subset\overline{\operatorname{NE}}(Z)\) such that \(K_{Z}\cdot R=0\).
Let \((Z,D_{Z})\) and \((Z^{\prime},D_{Z^{\prime}})\) be (t, lc) CY pairs, and \(\varphi\colon Z\dashrightarrow Z^{\prime}\) a Mori flip, flop or antiflip. Then \(\varphi\colon(Z,D_{Z})\dashrightarrow(Z^{\prime},D_{Z^{\prime}})\) is volume preserving if and only if \(D_{Z^{\prime}}=\varphi_{*}D_{Z}\).
**2.3** (Sarkisov links).: We recall the definition of the four types of Sarkisov links from [10]. In the following diagrams, \(X\to S\) and \(X^{\prime}\to S^{\prime}\) always stand for Mori fiber spaces.
1. A _Sarkisov link of type (I)_ is a commutative diagram where \(Z\to X\) is a Mori divisorial contraction, and \(Z\dashrightarrow X^{\prime}\) is a sequence of Mori flips, flops and antiflips.
2. A _Sarkisov link of type (II)_ is a commutative diagram where \(Z\to X\) and \(Z^{\prime}\to X^{\prime}\) are Mori divisorial contractions, and \(Z\dashrightarrow Z^{\prime}\) is a sequence of Mori flips, flops and antiflips.
3. A _Sarkisov link of type (III)_ is the inverse of a link of type (I).
4. A _Sarkisov link of type (IV)_ is a commutative diagram where \(X\dashrightarrow X^{\prime}\) is a sequence of Mori flips, flops and antiflips, and \(S\to T\) and \(S^{\prime}\to T\) are Mori contractions.
It is shown in [10] and [12] that every birational map between Mori fiber spaces is a composition of Sarkisov links. Next we explain the version of the Sarkisov program for volume preserving birational maps between Mf CY pairs established in [11].
**Definition 2.4**.: A _volume preserving Sarkisov link_ is a Sarkisov link as described in Paragraph 2.3 above with the following additional data and property: there are divisors \(D_{X}\) on \(X\), \(D_{X^{\prime}}\) on \(X^{\prime}\), \(D_{Z}\) on \(Z\), and \(D_{Z^{\prime}}\) on \(Z^{\prime}\), making \((X,D_{X})\), \((X^{\prime},D_{X^{\prime}})\), \((Z,D_{Z})\) and \((Z^{\prime},D_{Z^{\prime}})\) (t, lc)
CY pairs, and all the divisorial contractions, Mori flips, flops and antiflips that constitute the Sarkisov link are volume preserving for these CY pairs.
At the end of this section we will construct explicit volume preserving Sarkisov links between the Mf CY pairs displayed in Table 1 of the introduction.
**Theorem 2.5** ([16, Theorem 1.1]).: _A volume preserving birational map between Mf CY pairs is a composition of volume preserving Sarkisov links._
Theorem 2.5 provides an effective tool to investigate the group \(\operatorname{Bir}(X,D)\) when \((X,D)\) is a Mf CY pair. In this paper we restrict ourselves to canonical pairs. This is the case, for instance, when \(X\) is smooth and \(D\subset X\) is an irreducible hypersurface with canonical singularities. In this case, the theory is greatly simplified for the following reason.
**Proposition 2.6**.: _Let \((X,D_{X})\) and \((Y,D_{Y})\) be (t, c) CY pairs, and \(f\colon X\dashrightarrow Y\) an arbitrary birational map. Then \(f\colon(X,D_{X})\dashrightarrow(Y,D_{Y})\) is volume preserving if and only if \(f_{*}D_{X}=D_{Y}\) and \(f_{*}^{-1}D_{Y}=D_{X}\). (This condition is equivalent to asking that the restriction of \(f\) to each component of \(D_{X}\) is a birational map to a component of \(D_{Y}\), and the same for \(f^{-1}\))._
_In particular, if \((X,D)\) is a (t, c) CY pair with \(D\) irreducible, then \(\operatorname{Bir}(X,D)\) coincides with the group of birational self-maps \(f\colon X\dashrightarrow X\) such that \(f(D)=D\)._
**Remark 2.7**.:
1. In many cases of interest, the assumption that \((X,D)\) is (t, c) implies that \(D\) is irreducible. This is the case for instance when \(X\) is Fano with \(\dim(X)>1\). Indeed, if \((X,D)\) is (t, c), then \(D\) is normal, and thus irreducibility of \(D\) is equivalent to connectedness of \(D\). A key case when connectedness fails is \(X=\mathbb{P}^{1}\), \(D=\{0,\infty\}\), and one feels that all examples must in some way be related to this: see the main result of [11] for a statement along these lines.
2. The characterization of the volume preserving condition stated in Proposition 2.6 does not hold in general for birational maps between (t, lc) CY pairs. For example, consider the divisor \(D=L_{0}+L_{1}+L_{2}\) on \(\mathbb{P}^{2}\), where the \(L_{i}\) are the three coordinate lines. It is (t, lc) but not (t, c). One checks easily that the standard Cremona transformation \[\begin{array}{ccc}f\colon&(\mathbb{P}^{2},D)&\dasharrow&(\mathbb{P}^{2},D) \\ &(x_{0}:x_{1}:x_{2})&\longmapsto&(x_{1}x_{2}:x_{0}x_{2}:x_{0}x_{1})\end{array}\] is volume preserving, but it does not restrict to a birational self-map of \(D\). Indeed, \(f=f^{-1}\) contracts the lines \(L_{i}\) to the three coordinate points of \(\mathbb{P}^{2}\).
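To make item (2) concrete: in the affine chart \(x=x_{1}/x_{0}\), \(y=x_{2}/x_{0}\), the volume form of the pair \((\mathbb{P}^{2},D)\) is \(\omega=\frac{dx\wedge dy}{xy}\), and the standard Cremona transformation becomes \((x,y)\mapsto(1/x,1/y)\). A short `sympy` sketch confirming that \(f^{*}\omega=\omega\), so that \(f\) is volume preserving even though it does not restrict to a birational self-map of \(D\):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Standard Cremona map in the chart x = x1/x0, y = x2/x0: (x, y) -> (1/x, 1/y)
u, v = 1 / x, 1 / y

# Jacobian determinant of the map
J = sp.Matrix([[u.diff(x), u.diff(y)],
               [v.diff(x), v.diff(y)]]).det()

# f^*(du dv / (u v)) = (J / (u v)) dx dy; compare with dx dy / (x y)
print(sp.simplify(J / (u * v) - 1 / (x * y)))  # 0, so f^* omega = omega
```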
Proof of Proposition 2.6.: If \((X,D_{X})\) is a (t, c) pair, then any divisor \(E\) over \(X\) such that \(a(E,K_{X}+D_{X})=-1\) must be a component of \(D_{X}\). Therefore, if \(f\colon(X,D_{X})\dashrightarrow(Y,D_{Y})\) is a volume preserving birational map between (t, c) CY pairs, then \(f\) does not contract any component of \(D_{X}\), and \(f^{-1}\) does not contract any component of \(D_{Y}\). Hence, \(f_{*}D_{X}=D_{Y}\) and \(f_{*}^{-1}D_{Y}=D_{X}\).
To prove the converse, consider a common log resolution \(p\colon W\to X\), \(q\colon W\to Y\) of \(f\), so that \(q=f\circ p\), and let \(D_{W}\) be the strict transform of \(D_{X}\) on \(W\); since \(f_{*}D_{X}=D_{Y}\) and \(f_{*}^{-1}D_{Y}=D_{X}\), it is also the strict transform of \(D_{Y}\). Since the pairs \((X,D_{X})\) and \((Y,D_{Y})\) are canonical and \(K_{X}+D_{X}\sim 0\sim K_{Y}+D_{Y}\), we can write

\[K_{W}+D_{W}\sim\sum_{i}a_{i}E_{i}\sim\sum_{j}b_{j}F_{j}, \tag{i}\]

where the \(E_{i}\) are \(p\)-exceptional, the \(F_{j}\) are \(q\)-exceptional, and all \(a_{i},b_{j}\geq 0\).
Applying \(q_{*}\) to (i) yields
\[q_{*}(K_{W}+D_{W})=\sum_{i}a_{i}q_{*}E_{i}\sim 0.\]
This shows that, whenever \(a_{i}>0\), the divisor \(E_{i}\) is \(q\)-exceptional. Similarly, whenever \(b_{j}>0\), the divisor \(F_{j}\) is \(p\)-exceptional. So, up to relabelling, we may assume that \(E_{i}=F_{i}\), and we have
\[\sum_{i}a_{i}E_{i}\sim\sum_{i}b_{i}E_{i}.\]
The negativity lemma [13, Lemma 3.39] then implies that \(a_{i}=b_{i}\), and so \(p^{*}(K_{X}+D_{X})=q^{*}(K_{Y}+D_{Y})\).
The next lemma ensures that canonicity is preserved when we run a volume preserving Sarkisov program for canonical Mf CY pairs.
**Lemma 2.8**.: _Let \(f\colon(X,D_{X})\dashrightarrow(Y,D_{Y})\) be a volume preserving birational map between \((t,\ lc)\) CY pairs. Then \((X,D_{X})\) is canonical if and only if so is \((Y,D_{Y})\)._
Proof.: First of all, note that if \((X,D_{X})\) is canonical, then \(D_{X}\) is normal. Write
\[D_{X}=\sum D_{i},\]
where the \(D_{i}\) are the connected components of \(D_{X}\). The \(D_{i}\) are the only divisorial valuations with \(a(D_{i},K_{X}+D_{X})<0\). Since the map is volume preserving, for every divisorial valuation \(E\), \(a(E,K_{Y}+D_{Y})=a(E,K_{X}+D_{X})\). To show that the pair \((Y,D_{Y})\) is canonical, all we need to show is that no \(D_{i}\) is exceptional over \(Y\). Consider a common log resolution \(p\colon W\to X\), \(q\colon W\to Y\) of \(f\);
denote by \(\widetilde{D}_{i}\) the strict transform of \(D_{i}\), and by \(D_{W}=\sum\widetilde{D}_{i}\) the strict transform of \(D_{X}\). In order to show that \((Y,D_{Y})\) is canonical, we need to show that no \(\widetilde{D}_{i}\) is \(q\)-exceptional.
Since \((X,D_{X})\) is (t, c), we have
\[K_{W}+D_{W}=p^{*}(K_{X}+D_{X})+\sum a_{j}E_{j},\]
with \(a_{j}\geq 0\). If \(a_{j}>0\), then \(E_{j}\) is \(p\)-exceptional. The volume preserving condition says that \(p^{*}(K_{X}+D_{X})=q^{*}(K_{Y}+D_{Y})\), and hence
\[K_{W}+D_{W}=q^{*}(K_{Y}+D_{Y})+\sum a_{j}E_{j},\]
and if \(a_{j}>0\) then \(E_{j}\) is \(q\)-exceptional. By [13, Theorem 5.48], the support of \(D_{W}\) is connected in the neighbourhood of every fibre of \(q\). Since the \(\widetilde{D}_{i}\) are pairwise disjoint, connectedness implies that, if \(\widetilde{D}_{i}\) is \(q\)-exceptional, then it is not contained in the support of \(q^{*}D_{Y}\). Hence \(-1=a(\widetilde{D}_{i},K_{Y}+D_{Y})=a(\widetilde{D}_{i},K_{Y})\), contradicting the assumption that \(Y\) has terminal singularities. We conclude that no \(\widetilde{D}_{i}\) is \(q\)-exceptional, and thus \((Y,D_{Y})\) is canonical.
In the remainder of this section, we revisit the Mf CY pairs in Table 1, and construct explicit volume preserving Sarkisov links connecting them. These links will appear crucially in the proof of Theorem C. They can be summarized as follows:

* \(\sigma\colon\) obj. \(2\to\) obj. \(1\) (Example 2.9);
* \(\nu^{a}\colon\) obj. \(2^{a}\dashrightarrow\) obj. \(2\) and \(\nu^{b}\colon\) obj. \(2^{b}\dashrightarrow\) obj. \(2\) (Example 2.10);
* \(\epsilon_{a}\colon\) obj. \(1\dashrightarrow\) obj. \(3^{a}\) and \(\epsilon_{b}\colon\) obj. \(1\dashrightarrow\) obj. \(3^{b}\) (Example 2.11);
* \(\chi^{a}\colon\) obj. \(2^{a}\to\) obj. \(3^{a}\) and \(\chi^{b}\colon\) obj. \(2^{b}\to\) obj. \(3^{b}\) (Example 2.12);
* \(\phi^{a}\colon\) obj. \(3^{a}\dashrightarrow\) obj. \(4\) and \(\phi^{b}\colon\) obj. \(3^{b}\dashrightarrow\) obj. \(4\) (Example 2.13);
* \(\psi^{a}\colon\) obj. \(3^{a}\dashrightarrow\) obj. \(5^{a}\) (Example 2.18).
In Example 2.17 we produce an isomorphism between the family \(5^{a}\) and the family \(5^{b}\).
**Object 1**. The pair \((\mathbb{P}^{3},D_{4})\), where
\[D_{4}=\Big{(}x_{0}x_{1}x_{3}^{2}+Bx_{3}+C=0\Big{)}.\]
Here and in what follows, \(B=B_{3}(x_{0},x_{1},x_{2})\) and \(C=C_{4}(x_{0},x_{1},x_{2})\) are fixed homogeneous forms of the indicated degrees.
**Object 2**. The pair \(\left(\mathbb{F}_{1}^{3},D_{\binom{2}{2}}\right)\), where \(\mathbb{F}_{1}^{3}\) is the \(\mathbb{P}^{1}\)-bundle \(\mathbb{P}\big{(}\mathcal{O}\oplus\mathcal{O}(1)\big{)}\) over \(\mathbb{P}^{2}\) with homogeneous coordinates and weights
\[\begin{array}{ccccc}x_{0}&x_{1}&x_{2}&x_{3}&x\\ \hline 1&1&1&0&-1\\ 0&0&0&1&1\end{array},\]
\[\text{and}\quad D_{\binom{2}{2}}=\Big{(}x_{0}x_{1}x_{3}^{2}+Bx_{3}x+Cx^{2}=0 \Big{)}.\]
**Example 2.9** (Map from object 2 to object 1).: The blow-up \(\sigma\colon\left(\mathbb{F}_{1}^{3},D_{\binom{2}{2}}\right)\to(\mathbb{P}^{3},D_{4})\) of the point \([0:0:0:1]\in\mathbb{P}^{3}\) is a volume preserving Sarkisov link of type (I). In coordinates, \(\sigma\colon\mathbb{F}_{1(x_{0},x_{1},x_{2},x_{3},x)}^{3}\to\mathbb{P}^{3}_{(x_{0},x_{1},x_{2},x_{3})}\) is given by
\[(x_{0},x_{1},x_{2},x_{3},x)\mapsto\Big{(}x_{0},x_{1},x_{2},\frac{x_{3}}{x} \Big{)}\,.\]
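The effect of \(\sigma\) on the boundary can be checked directly in these coordinates. In the `sympy` sketch below, `B` and `C` are opaque symbols standing for the forms \(B_{3}(x_{0},x_{1},x_{2})\) and \(C_{4}(x_{0},x_{1},x_{2})\):

```python
import sympy as sp

x0, x1, x2, x3, x, B, C = sp.symbols('x0 x1 x2 x3 x B C')

# Pull back D_4 along sigma and clear the denominator x^2:
D4 = x0 * x1 * x3**2 + B * x3 + C
print(sp.expand(D4.subs(x3, x3 / x) * x**2))
# x0*x1*x3**2 + B*x*x3 + C*x**2  -- the equation of D_{(2,2)}
```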
**Object \(2^{a}\)**. The pair \(\left(\mathbb{F}_{2}^{3},D_{\binom{1}{2}}^{a}\right)\), where \(\mathbb{F}_{2}^{3}\) is the \(\mathbb{P}^{1}\)-bundle \(\mathbb{P}\big{(}\mathcal{O}\oplus\mathcal{O}(2)\big{)}\) over \(\mathbb{P}^{2}\) with homogeneous coordinates and weights
\[\begin{array}{ccccc}x_{0}&x_{1}&x_{2}&x_{3}&x\\ \hline 1&1&1&0&-2\\ 0&0&0&1&1\end{array},\]
\[\text{and}\quad D_{\binom{1}{2}}^{a}=\Big{(}x_{0}x_{3}^{2}+Bx_{3}x+x_{1}Cx^{2} =0\Big{)}.\]
**Example 2.10** (Map from object \(2^{a}\) to object 2).: Consider the rational map
\[\nu^{a}\colon\left(\mathbb{F}_{2}^{3},D_{\binom{1}{2}}^{a}\right)\dashrightarrow \left(\mathbb{F}_{1}^{3},D_{\binom{2}{2}}\right)\]
obtained by blowing-up the curve \((x_{1}=x_{3}=0)\subset D_{\binom{1}{2}}^{a}\), and then blowing-down the strict transform of the divisor \((x_{1}=0)\). In coordinates, \(\nu^{a}\colon\mathbb{F}_{2(x_{0},x_{1},x_{2},x_{3},x)}^{3}\dashrightarrow \mathbb{F}_{1(x_{0},x_{1},x_{2},x_{3},x)}^{3}\) is given by
\[(x_{0},x_{1},x_{2},x_{3},x)\mapsto(x_{0},x_{1},x_{2},x_{3},xx_{1}).\]
It is a volume preserving Sarkisov link of type (II).
Similarly, there is a volume preserving Sarkisov link of type (II) \(\nu^{b}\colon\) obj. \(2^{b}\dashrightarrow\) obj. 2.
It follows that the Mf CY pairs \(2^{a}\), \(2^{b}\) and \(2\) are all square equivalent.
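A `sympy` sketch (again with `B`, `C` opaque) showing that pulling \(D_{\binom{2}{2}}\) back along \(\nu^{a}\) gives the divisor \((x_{1}=0)\) plus \(D_{\binom{1}{2}}^{a}\), which is exactly why the link blows down the strict transform of \((x_{1}=0)\):

```python
import sympy as sp

x0, x1, x2, x3, x, B, C = sp.symbols('x0 x1 x2 x3 x B C')

D22 = x0 * x1 * x3**2 + B * x3 * x + C * x**2
print(sp.factor(D22.subs(x, x * x1)))
# x1*(x0*x3**2 + B*x*x3 + C*x1*x**2)  -- x1 times the equation of D^a_{(1,2)}
```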
**Object \(3^{a}\)**. The pair \((\mathbb{P}(1^{3},2),D_{5}^{a})\), where
\[D_{5}^{a}=\Big{(}x_{0}y^{2}+By+x_{1}C=0\Big{)}.\]
Note that \(D_{5}^{a}\subset\mathbb{P}(1^{3},2)\) is not a general degree five surface in \(\mathbb{P}(1^{3},2)\). Indeed \(D_{5}^{a}\) always contains the curve \(y=x_{1}=0\).
**Object \(3^{b}\)**. The pair \((\mathbb{P}(1^{3},2),D_{5}^{b})\), where
\[D_{5}^{b}=\Big{(}x_{1}y^{2}+By+x_{0}C=0\Big{)}.\]
Object \(3^{b}\) is object \(3^{a}\) with \(x_{0},x_{1}\) swapped in the same way as object \(2^{b}\) is object \(2^{a}\) with \(x_{0},x_{1}\) swapped. However, in general, object \(3^{b}\) is not square equivalent to object \(3^{a}\). Indeed, since \(X^{\dagger}=\mathbb{P}(1,1,1,2)\) is a Mori fibre space over \(\operatorname{Spec}\mathbb{C}\), the two objects are square equivalent if and only if they are isomorphic.
**Example 2.11** (Map from object \(1\) to object \(3^{b}\)).: Consider the birational map \(\epsilon_{b}\colon(\mathbb{P}^{3},D_{4})\dashrightarrow(\mathbb{P}(1^{3},2),D _{5}^{b})\) obtained by a weighted blow-up of \([0:0:0:1]\in\mathbb{P}^{3}\) with weights \((2,1,1)\), followed by the contraction of the strict transform of the divisor \((x_{0}=0)\). In coordinates, \(\epsilon_{b}\colon\mathbb{P}^{3}_{(x_{0},x_{1},x_{2},x_{3})}\dashrightarrow \mathbb{P}(1^{3},2)_{(x_{0},x_{1},x_{2},y)}\) is given by
\[(x_{0},x_{1},x_{2},x_{3})\mapsto(x_{0},x_{1},x_{2},x_{3}x_{0}).\]
It is a volume preserving Sarkisov link of type (II). Similarly, there is a volume preserving Sarkisov link of type (II), \(\epsilon_{a}\colon\) obj. \(1\dashrightarrow\) obj. \(3^{a}\).
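A one-line `sympy` check (with `B`, `C` opaque) that \(\epsilon_{b}\) pulls \(D_{5}^{b}\) back to \(D_{4}\) plus the contracted divisor \((x_{0}=0)\):

```python
import sympy as sp

x0, x1, x2, x3, B, C = sp.symbols('x0 x1 x2 x3 B C')

y = x3 * x0                       # the substitution defining epsilon_b
D5b = x1 * y**2 + B * y + x0 * C
print(sp.factor(D5b))
# x0*(x0*x1*x3**2 + B*x3 + C)  -- x0 times the equation of D_4
```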
**Example 2.12** (Maps from object \(2^{b}\) to object \(3^{b}\) and from object \(2^{a}\) to object \(3^{a}\)).: The blow-up \(\chi^{b}\colon\left(\mathbb{F}^{3}_{2},D_{\binom{1}{2}}^{b}\right)\to( \mathbb{P}(1^{3},2),D_{5}^{b})\) of the singular point \([0:0:0:1]\in\mathbb{P}(1^{3},2)\) is a volume preserving Sarkisov link of type (I). In coordinates, \(\chi^{b}\colon\mathbb{F}^{3}_{2(x_{0},x_{1},x_{2},x_{3},x)}\to\mathbb{P}(1^{3 },2)_{(x_{0},x_{1},x_{2},y)}\) is given by
\[(x_{0},x_{1},x_{2},x_{3},x)\mapsto\Big{(}x_{0},x_{1},x_{2},\frac{x_{3}}{x} \Big{)}\,.\]
From the description in coordinates, it is straightforward to check that \(\chi^{b}=\epsilon_{b}\circ\sigma\circ\nu^{b}\). Similarly, there is a volume preserving Sarkisov link of type (I), \(\chi^{a}\colon\) obj. \(2^{a}\to\) obj. \(3^{a}\), and \(\chi^{a}=\epsilon_{a}\circ\sigma\circ\nu^{a}\).
**3-parameter family of objects 4**. Let \(D_{3,4}\subset\mathbb{P}(1^{3},2^{2})_{(x_{0},x_{1},x_{2},y_{0},y_{1})}\) be the complete intersection given by equations:
\[D_{3,4}=\begin{cases}y_{0}y_{1}+C&=0,\\ x_{0}y_{1}-x_{1}y_{0}-B&=0.\end{cases}\]
For each fixed linear form \(L=L(x_{0},x_{1},x_{2})\), we consider the pair \((X_{4},D_{3,4})\), where \(X_{4}\) is the following quartic containing \(D_{3,4}\):
\[X_{4}=\Big{(}y_{0}y_{1}+C-L(x_{0}y_{1}-x_{1}y_{0}-B)=0\Big{)}.\]
**Example 2.13** (Map from object \(3^{b}\) to object \(4\)).: For each fixed linear form \(L=L(x_{0},x_{1},x_{2})\), we construct a volume preserving Sarkisov link of type (II), \(\phi^{b}\colon(\mathbb{P}(1^{3},2),D_{5}^{b})\dashrightarrow(X_{4},D_{3,4})\). In coordinates, the map \(\phi^{b}\colon\mathbb{P}(1^{3},2)_{(x_{0},x_{1},x_{2},y)}\dashrightarrow\mathbb{ P}(1^{3},2^{2})_{(x_{0},x_{1},x_{2},y_{0},y_{1})}\) is given by
\[(x_{0},x_{1},x_{2},y)\mapsto\left(x_{0},x_{1},x_{2},y,-x_{1}L-\frac{x_{0}x_{1} L^{2}+BL+C}{y-x_{0}L}\right). \tag{2.14}\]
This map is obtained by first blowing-up the curve \(\Gamma\subset\mathbb{P}(1,1,1,2)\) defined by:
\[\Gamma=\left\{\begin{array}{ll}Q=y-x_{0}L=0,\\ F=x_{0}x_{1}L^{2}+BL+C=0,\end{array}\right.\]
and then blowing-down the strict transform of the divisor \((y-x_{0}L=0)\).
In what follows, we describe this map in detail, and deduce expression (2.14) above.
First note that \(\Gamma\subset D_{5}^{b}\):
\[x_{1}y^{2}+By+x_{0}C=Q\left(x_{1}(y+x_{0}L)+B\right)+Fx_{0}. \tag{2.15}\]
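Identity (2.15) is elementary but easy to get wrong; it can be verified mechanically with `sympy`, treating \(B\), \(C\), \(L\) as opaque symbols:

```python
import sympy as sp

x0, x1, y, B, C, L = sp.symbols('x0 x1 y B C L')

Q = y - x0 * L
F = x0 * x1 * L**2 + B * L + C
lhs = x1 * y**2 + B * y + x0 * C
print(sp.expand(lhs - (Q * (x1 * (y + x0 * L) + B) + F * x0)))  # 0
```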
In order to describe the blow-up of \(\mathbb{P}(1,1,1,2)\) along \(\Gamma\), consider the toric variety \(\mathbb{F}\) with coordinates and weight matrix:
\[\begin{array}{cccccc}z_{0}&z_{1}&z_{2}&z_{3}&u&v\\ \hline 1&1&1&2&0&-2\\ 0&0&0&0&1&1\end{array}\]
and stability condition chosen so that the nef cone of \(\mathbb{F}\) is the span \(\langle z_{i},u\rangle_{+}\).
This choice of stability condition gives the irrelevant ideal \((z_{0},z_{1},z_{2},z_{3})(u,v)\) and yields a \(\mathbb{P}^{1}\)-bundle morphism \(\pi\colon\mathbb{F}\to\mathbb{P}(1,1,1,2)\). The other contraction of \(\mathbb{F}\) is the divisorial contraction \(\pi^{\prime}\colon\mathbb{F}\to\mathbb{P}(1,1,1,2,2)\) that maps the divisor \(v=0\) to the point \([0:0:0:0:1]\in\mathbb{P}(1,1,1,2,2)\). In coordinates, \(\pi^{\prime}\colon\mathbb{F}_{(z_{0},z_{1},z_{2},z_{3},u,v)}\to\mathbb{P}(1,1,1,2,2)_{(x_{0},x_{1},x_{2},y_{0},y_{1})}\) is given by
\[(z_{0},z_{1},z_{2},z_{3},u,v)\mapsto\left(z_{0},z_{1},z_{2},z_{3},\frac{u}{v} \right).\]
The blow-up \(Z\) of \(\mathbb{P}(1,1,1,2)\) along \(\Gamma\) is cut out in \(\mathbb{F}\) by the equation
\[uQ+vF=0.\]
Let us describe the equation of \(\pi^{\prime}(Z)\subset\mathbb{P}(1,1,1,2,2)\). From \(\frac{u}{v}=-\frac{F}{Q}\), we get \(y_{1}Q+F=0\) and hence \(\pi^{\prime}(Z)\) is cut out by the equation
\[y_{1}(y_{0}-x_{0}L)+x_{0}x_{1}L^{2}+BL+C=0.\]
Combining with Equation (2.15), we see that the strict transform of \(D_{5}^{b}\) in \(\pi^{\prime}(Z)\) is cut out by the equation
\[x_{0}y_{1}=x_{1}y_{0}+x_{0}x_{1}L+B.\]
In coordinates, the composition \(\pi^{\prime}\circ\pi^{-1}\colon\mathbb{P}(1^{3},2)_{(x_{0},x_{1},x_{2},y)}\dashrightarrow\pi^{\prime}(Z)\subset\mathbb{P}(1,1,1,2,2)_{(x_{0},x_{1},x_{2},y_{0},y_{1})}\) is given by
\[(x_{0},x_{1},x_{2},y)\mapsto\left(x_{0},x_{1},x_{2},y,-\frac{F}{Q}\right).\]
Next we compose it with the automorphism of \(\mathbb{P}(1,1,1,2,2)_{(x_{0},x_{1},x_{2},y_{0},y_{1})}\) given in coordinates by
\[(x_{0},x_{1},x_{2},y_{0},y_{1})\mapsto(x_{0},x_{1},x_{2},y_{0},y_{1}-x_{1}L).\]
It is immediate to check that, in coordinates, the composed map \(\phi^{b}\colon\mathbb{P}(1^{3},2)_{(x_{0},x_{1},x_{2},y)}\dashrightarrow \mathbb{P}(1^{3},2^{2})_{(x_{0},x_{1},x_{2},y_{0},y_{1})}\) is given by (2.14). The image of \(Z\) is given by the equation
\[0=(y_{1}+x_{1}L)(y_{0}-x_{0}L)+x_{0}x_{1}L^{2}+BL+C=y_{0}y_{1}+C-L(x_{0}y_{1}-x _{1}y_{0}-B),\]
which is precisely the equation of \(X_{4}\). The strict transform of \(D_{5}^{b}\) is the surface of \(\mathbb{P}(1^{3},2^{2})\) given by the equations
\[\begin{cases}y_{0}y_{1}+C-L(x_{0}y_{1}-x_{1}y_{0}-B)&=0,\\ x_{0}(y_{1}+x_{1}L)-x_{1}(y_{0}+x_{0}L)-B&=x_{0}y_{1}-x_{1}y_{0}-B=0,\end{cases}\]
which are precisely the equations of \(D_{3,4}\).
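Both computations above can be verified with `sympy` (with \(B\), \(C\), \(L\) treated as opaque symbols):

```python
import sympy as sp

x0, x1, y0, y1, B, C, L = sp.symbols('x0 x1 y0 y1 B C L')

# The image of Z equals the quartic X_4:
img = (y1 + x1 * L) * (y0 - x0 * L) + x0 * x1 * L**2 + B * L + C
X4 = y0 * y1 + C - L * (x0 * y1 - x1 * y0 - B)
print(sp.expand(img - X4))  # 0

# The second equation of the strict transform of D_5^b simplifies as claimed:
lhs = x0 * (y1 + x1 * L) - x1 * (y0 + x0 * L) - B
print(sp.expand(lhs - (x0 * y1 - x1 * y0 - B)))  # 0
```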
Similarly, one can construct a volume preserving Sarkisov link of type (II), \(\phi^{a}\colon\text{obj. }3^{a}\dashrightarrow\text{obj. }4\).
**6-parameter family of objects \(5^{a}.\)** For each fixed quadratic form \(Q=Q(x_{0},x_{1},x_{2})\), we consider the pair \((X_{4},D_{2,4})\), where \(X_{4}\subset\mathbb{P}(1^{4},2)_{(x_{0},x_{1},x_{2},x_{3},y)}\) is the hypersurface given by equation
\[y(y+Q)-C+x_{3}\big{(}(x_{0}+x_{1})y+x_{1}Q+B\big{)}=0,\]
and \(D_{2,4}\) is cut out in \(X_{4}\) by the equation \((y+x_{1}x_{3}=0)\).
**Remark 2.16**.: The variety \(X_{4}\subset\mathbb{P}(1^{4},2)\) has a \(cA_{2}\)-singularity at the point \([0:0:0:1:0]\) and \(D_{2,4}\) is isomorphic to the original \(D\subset\mathbb{P}^{3}\), as one can check by plugging in \(y=-x_{1}x_{3}\) and literally getting the equation of \(D\subset\mathbb{P}^{3}\) back.
**Example 2.17** (Isomorphism of objects \(5^{a}\) and \(5^{b}\)).: Table 1 also lists a \(6\)-parameter family of objects \(5^{b}\). The substitution
\[\widetilde{y}=-y-x_{3}(x_{0}+x_{1})\]
transforms object \(5^{a}\) isomorphically to object \(5^{b}\).
Indeed, rewrite the equation slightly and substitute:
\[y\Big{(}y+Q\Big{)}-C+x_{3} \Big{(}(x_{0}+x_{1})y+x_{1}Q+B\Big{)}=\] \[=y\Big{(}y+x_{3}(x_{0}+x_{1})\Big{)}+Q\Big{(}y+x_{1}x_{3}\Big{)}-C +x_{3}B=\] \[=\Big{(}\widetilde{y}+x_{3}(x_{0}+x_{1})\Big{)}\widetilde{y}-Q \Big{(}\widetilde{y}+x_{0}x_{3}\Big{)}-C+x_{3}B=\] \[=\widetilde{y}\Big{(}\widetilde{y}-Q\Big{)}-C+x_{3}\Big{(}(x_{0} +x_{1})\widetilde{y}-x_{0}Q+B\Big{)}\]
**Example 2.18** (Maps from objects \(3^{\bullet}\) to objects \(5^{\bullet}\)).: For each quadratic form \(Q=Q(x_{0},x_{1},x_{2})\), we construct a volume preserving Sarkisov link of type (II), \(\psi^{a}\colon(\mathbb{P}(1^{3},2),D_{5}^{a})\dashrightarrow(X_{4},D_{2,4})\). In coordinates, the map \(\psi^{a}\colon\mathbb{P}(1^{3},2)_{(x_{0},x_{1},x_{2},y)}\to \mathbb{P}(1^{4},2)_{(x_{0},x_{1},x_{2},x_{3},y)}\) is given by:
\[(x_{0},x_{1},x_{2},y)\mapsto\left(x_{0},x_{1},x_{2},-\frac{y(y+Q)-C}{x_{0}y+B +x_{1}(y+Q)},y\right). \tag{2.19}\]
This map is obtained by first blowing-up the curve \(\Gamma\subset\mathbb{P}(1^{3},2)\) defined by equations:
\[\left\{\begin{array}{l}F_{3}=x_{0}y+B+x_{1}(y+Q)=0,\\ G_{4}=y(y+Q)-C=0,\end{array}\right.\]
and then blowing-down the strict transform of the divisor \(\big{(}x_{0}y+B+x_{1}(y+Q)=0\big{)}\).
Let us describe this map in detail. First note that \(\Gamma\subset D_{5}^{a}\):
\[x_{0}y^{2}+By+x_{1}C\ =\ x_{0}y^{2}+By+x_{1}y(y+Q)-x_{1}y(y+Q)+x_{1}C\ =\\ =\ y(x_{0}y+B+x_{1}(y+Q))-x_{1}(y(y+Q)-C)\ =\ yF_{3}-x_{1}G_{4}. \tag{2.20}\]
In order to describe the blow-up of \(\mathbb{P}(1,1,1,2)\) along \(\Gamma\), consider the toric variety \(\mathbb{F}\) with coordinates and weight matrix:
\[\begin{array}{cccccc}x_{0}&x_{1}&x_{2}&y&u&v\\ \hline 1&1&1&2&0&-1\\ 0&0&0&0&1&1\end{array},\]
and stability condition chosen so that the nef cone of \(\mathbb{F}\) is the span \(\langle x_{i},u\rangle_{+}\).
This choice of stability condition gives the irrelevant ideal \((x_{0},x_{1},x_{2},y)(u,v)\), and yields a \(\mathbb{P}^{1}\)-bundle morphism \(\pi\colon\mathbb{F}\to\mathbb{P}(1,1,1,2)\). The other contraction of \(\mathbb{F}\) is the divisorial contraction \(\pi^{\prime}\colon\mathbb{F}\to\mathbb{P}(1,1,1,1,2)\) that maps the divisor \((v=0)\) to the point \([0:0:0:1:0]\in\mathbb{P}(1,1,1,1,2)\). In coordinates, \(\pi^{\prime}\colon\mathbb{F}_{(x_{0},x_{1},x_{2},y,u,v)}\to\mathbb{P}(1,1,1,1, 2)_{(x_{0},x_{1},x_{2},x_{3},y)}\) is given by
\[(x_{0},x_{1},x_{2},y,u,v)\mapsto\big{(}vx_{0},vx_{1},vx_{2},u,v^{2}y\big{)}. \tag{2.21}\]
The blow-up \(Z\) of \(\mathbb{P}(1,1,1,2)\) along \(\Gamma\) is cut out in \(\mathbb{F}\) by the equation
\[uF_{3}+vG_{4}=0.\]
Let us describe the equation of \(\pi^{\prime}(Z)\subset\mathbb{P}(1,1,1,1,2)\). From \(\frac{u}{v}=-\frac{G_{4}}{F_{3}}\), we get \(x_{3}F_{3}+G_{4}=0\), and hence \(\pi^{\prime}(Z)\) is cut out by the equation
\[y(y+Q)-C+x_{3}(x_{0}y+B+x_{1}(y+Q))=0,\]
which is precisely the equation of \(X_{4}\). Combining with Equation (2.20), and using that \(G_{4}=-x_{3}F_{3}\) on \(X_{4}\), so that \(yF_{3}-x_{1}G_{4}=(y+x_{1}x_{3})F_{3}\), we see that the strict transform of \(D_{5}^{a}\) in \(X_{4}\) is cut out by the equation
\[y+x_{1}x_{3}=0,\]
and so it is precisely the divisor \(D_{2,4}\).
In coordinates, the composed map \(\psi^{a}\colon\mathbb{P}(1^{3},2)_{(x_{0},x_{1},x_{2},y)}\to\mathbb{P}(1^{4},2)_{( x_{0},x_{1},x_{2},x_{3},y)}\) is given by:
\[(x_{0},x_{1},x_{2},y)\mapsto\left(x_{0},x_{1},x_{2},-\frac{G_{4}}{F_{3}},y \right),\]
which is precisely (2.19) above.
Similarly, there is a volume preserving Sarkisov link of type (II), \(\psi^{b}\colon\text{obj. }3^{b}\dashrightarrow\text{obj. }5^{b}\).
Finally we denote by \(\widetilde{\psi}^{a}\colon\text{obj. }3^{a}\dashrightarrow\text{obj. }5^{b}\) the composition of \(\psi^{a}\) with the isomorphism in Example 2.17, and similarly by \(\widetilde{\psi}^{b}\colon\text{obj. }3^{b}\dashrightarrow\text{obj. }5^{a}\) the composition of \(\psi^{b}\) with the inverse of the isomorphism in Example 2.17.
## 3. Proof of Theorem A
The prototype of CY pairs addressed in Theorem A is \((\mathbb{P}^{n},D_{n+1})\), where \(D_{n+1}\) is a hypersurface of degree \(n+1\) in \(\mathbb{P}^{n}\), \(n\geq 3\). The assumptions imply that \(D_{n+1}\) is factorial, has terminal singularities, and \(\operatorname{Pic}(D_{n+1})=\left\langle\mathcal{O}_{\mathbb{P}^{n}}(1)_{|D_{n+1}}\right\rangle\). Let us sketch the proof of Theorem A in this case.
Suppose that \(\psi\colon(\mathbb{P}^{n},D_{n+1})/\mathbb{C}\dashrightarrow(X,D)/T\) is a volume-preserving birational map between \(\operatorname{Mf}\) CY pairs that is not biregular. Then the first step of a Sarkisov factorization of \(\psi\) is a divisorial contraction \(\pi\colon Y\to\mathbb{P}^{n}\) with center \(Z\subset\mathbb{P}^{n}\). In Proposition 3.1, we show that \(Z\subset D_{n+1}\) and \(\operatorname{codim}_{\mathbb{P}^{n}}(Z)=2\). The assumption that \(\operatorname{Pic}(\mathbb{P}^{n})\to\operatorname{Cl}(D_{n+1})\) is an isomorphism implies that \(Z\) is a complete intersection \(Z=D_{n+1}\cap D_{d}\), where \(D_{d}\subset\mathbb{P}^{n}\) is a hypersurface of degree \(d\). Then we show in Lemma 3.2 that the cone of effective divisors of \(Y\) is
\[\operatorname{Eff}(Y)\ =\ \langle E,\widetilde{D}_{b}\rangle_{+}\,\]
where \(E\) denotes the exceptional divisor of \(\pi\), \(b=\min\{d,n+1\}\), and \(\widetilde{D}_{b}\) denotes the strict transform of \(D_{b}\) in \(Y\). Therefore, the first link in a Sarkisov factorization of \(\psi\) is:
\[\begin{array}{ccc}Y&\overset{\chi}{\dashrightarrow}&Y^{\prime}\\ \downarrow{\scriptstyle\pi}&&\downarrow{\scriptstyle\pi^{\prime}}\\ \mathbb{P}^{n}&&W\end{array}\]
where \(\chi\) is a composition of Mori flips, flops and antiflips, and \(\pi^{\prime}\colon Y^{\prime}\to W\) is either a divisorial contraction or a Mori fibre space that contracts the strict transform \(\chi_{*}\widetilde{D}_{b}\) of \(\widetilde{D}_{b}\). In any case, \(-K_{Y^{\prime}}\) is \(\pi^{\prime}\)-ample. Since \(-K_{Y^{\prime}}\sim\chi_{*}\widetilde{D}_{n+1}\), we must have \(b=d<n+1\), and \(\pi^{\prime}\colon Y^{\prime}\to W\) is a divisorial contraction with exceptional divisor \(\chi_{*}\widetilde{D}_{b}\). A straightforward computation in the proof of Proposition 3.3 shows that in this case \(W\) has worse than terminal singularities, which is not allowed in a Sarkisov factorization of \(\psi\). This contradiction shows that the original map \(\psi\colon\mathbb{P}^{n}\dashrightarrow X\) is an isomorphism.
**Proposition 3.1**.: _Let \((X,D_{X})\) be a (t, lc) CY pair, and \(f\colon(Y,D_{Y})\to(X,D_{X})\) a volume preserving divisorial contraction with center \(Z\subset X\). Then \(Z\subset D_{X}\)._
_Suppose moreover that \((X,D_{X})\) is canonical, and that \(D_{X}\) is terminal at the generic point of \(Z\). Then \(\operatorname{codim}_{X}(Z)=2\), and \(D_{Y}\) is the strict transform of \(D_{X}\) in \(Y\)._
Proof.: Denote by \(\widetilde{D}_{X}\) the strict transform of \(D_{X}\) in \(Y\), and by \(E\) the exceptional divisor of \(f\). Since \(f\colon(Y,D_{Y})\to(X,D_{X})\) is volume preserving, we have \(D_{Y}=\widetilde{D}_{X}+mE\), with \(m\in\{0,1\}\). Suppose that \(Z\not\subset D_{X}\). Then \(\widetilde{D}_{X}=f^{*}D_{X}\), and the equality \(K_{Y}+D_{Y}=f^{*}(K_{X}+D_{X})\) would imply that \(K_{Y}=f^{*}K_{X}-mE\), contradicting the fact that \(X\) is terminal. So we conclude that \(Z\subset D_{X}\).
Now suppose that \((X,D_{X})\) is canonical. It follows from Proposition 2.6 that \(m=0\), and \(D_{Y}=\widetilde{D}_{X}\). By Lemma 2.8, \((Y,D_{Y})\) is also canonical. In particular \(D_{Y}\) has normal support. Denote by \(\overline{f}\colon D_{Y}\to D_{X}\) the restriction of \(f\colon Y\to X\) to \(D_{Y}\). By restricting to \(D_{Y}\) the equality
\[K_{Y}+D_{Y}\ =\ f^{*}(K_{X}+D_{X})\]
and applying adjunction, we get that
\[K_{D_{Y}}\ =\overline{f}^{*}K_{D_{X}}.\]
If moreover \(D_{X}\) is terminal at the generic point of \(Z\), then this last equality implies that \(E\cap D_{Y}\) is not exceptional for \(\overline{f}\colon D_{Y}\to D_{X}\). It follows that \(\operatorname{codim}_{D_{X}}(Z)=1\) and thus \(\operatorname{codim}_{X}(Z)=2\).
**Lemma 3.2**.: _Let \(X\) be a \(\mathbb{Q}\)-factorial terminal Fano variety with \(\operatorname{Cl}(X)\cong\mathbb{Z}\), and denote by \(H_{X}\) the ample generator of \(\operatorname{Cl}(X)\). Let \(Z=H_{a}\cap H_{b}\) be an irreducible and generically reduced complete intersection of two hypersurfaces \(H_{a}\sim_{{}_{\mathbb{Q}}}aH_{X}\) and \(H_{b}\sim_{{}_{\mathbb{Q}}}bH_{X}\), with \(b\leq a\). Let \(\pi\colon Y\to X\) be a terminal divisorial contraction with center \(Z\). Then the cone of effective divisors of \(Y\) is_
\[\operatorname{Eff}(Y)\ =\ \langle E,\widetilde{H}_{b}\rangle_{+}\]
_where \(E\) denotes the exceptional divisor of \(\pi\), and \(\widetilde{H}_{b}\) the strict transform of \(H_{b}\) in \(Y\)._
Proof.: Set \(H:=\pi^{*}H_{X}\), \(\ell:=H^{n-1}\in N_{1}(Y)\), and \(\lambda=H^{n}>0\), where \(n=\dim(X)\). At the generic point of \(Z\), the divisorial contraction \(\pi\) coincides with the blow-up of \(Z\). In particular, \(\pi_{|E}\colon E\to Z\) is a \(\mathbb{P}^{1}\)-bundle over the generic point of \(Z\). Denote by \(e\subset E\) a general fiber of \(\pi_{|E}\). We have \(H\cdot\ell=\lambda\), \(E\cdot e=-1\), and \(H\cdot e=E\cdot\ell=0\). Since \(\rho(Y)=2\), the cone \(\operatorname{Eff}(Y)\) has two extremal rays, one of which is generated by \(E\).
We show that \(\widetilde{H}_{b}\) is dominated by a family of curves with class in the ray
\[\mathbb{R}_{\geq 0}\ [\ell-(a\lambda)e]\ \subset\ N_{1}(Y).\]
Let \(k\) be a positive integer such that \(kH_{X}\) is very ample, and let \(C\subset H_{b}\subset X\) be a curve cut out in \(H_{b}\) by \(n-2\) general members of \(\left|kH_{X}\right|\). Then \(C\cdot H_{a}=abk^{n-2}\lambda\) and, inside \(H_{b}\), \(C\) intersects \(Z\) transversally in \(abk^{n-2}\lambda\) smooth points. Therefore, the class of its strict transform \(\widetilde{C}\subset Y\) in \(N_{1}(Y)\) is precisely \(bk^{n-2}[\ell-(a\lambda)e]\).
Let \(D\subset Y\) be a prime divisor distinct from \(\widetilde{H}_{b}\) and \(E\), and write \(D\sim_{{}_{\mathbb{Q}}}dH-mE\). Since \(D\neq E\), it has non-negative intersection with \(e\), and thus \(m\geq 0\). Since \(D\neq\widetilde{H}_{b}\), it has non-negative intersection with \(\ell-(a\lambda)e\), and thus \(d\geq ma\geq mb\). So we can write
\[D\ \sim_{{}_{\mathbb{Q}}}\ (d-mb)H\ +\ m(bH-E)\ \sim_{{}_{\mathbb{Q}}}\ (d-mb)H\ +\ m \widetilde{H}_{b}\]
with \(d-mb\geq 0\). This shows that \(\operatorname{Eff}(Y)=\langle E,\widetilde{H}_{b}\rangle_{+}\).
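For instance, for \(X=\mathbb{P}^{n}\) with \(H_{X}\) a hyperplane (so \(\lambda=1\)) and \(Z=H_{a}\cap H_{b}\) with \(b\leq a\), the lemma says that the two boundary rays of \(\operatorname{Eff}(Y)\) are spanned by the exceptional divisor \(E\) and by \(\widetilde{H}_{b}\sim bH-E\); this is exactly the situation used in the sketch of the proof of Theorem A above.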
**Proposition 3.3**.: _Let \((X,D)/S\) be a \(\mathbb{Q}\)-factorial (t, lc) Mf CY pair, where_
1. \(X\) _is a Fano variety with_ \(\operatorname{Cl}(X)\cong\mathbb{Z}\)_; and_
2. \(D\subset X\) _is a normal hypersurface such that the restriction homomorphism_ \(\operatorname{Cl}(X)\to\operatorname{Cl}(D)\) _is an isomorphism._
_Let \(\psi\colon(X,D)/S\dashrightarrow(X^{\prime},D^{\prime})/S^{\prime}\) be a volume preserving birational map between Mf CY pairs. Then any Sarkisov factorization of \(\psi\) starts with a divisorial contraction with center \(Z\subset D\) such that \(\operatorname{codim}_{X}(Z)\geq 3\)._
Proof.: By Theorem 2.5, \(\psi\colon(X,D)\dashrightarrow(X^{\prime},D^{\prime})\) is a composition of volume preserving Sarkisov links. By Proposition 3.1 the first step of a Sarkisov factorization is a divisorial contraction \(\pi\colon Y\to X\) with center \(Z\subset D\). We must rule out the possibility that \(\operatorname{codim}_{X}(Z)=2\). Suppose that this is the case. Then \(\pi\) coincides with the blow-up of \(Z\) at the generic point of \(Z\), and so \((K_{Y}+D_{Y})=\pi^{*}(K_{X}+D)\), where \(D_{Y}\) is the strict transform of \(D\) in \(Y\).
Write \(H_{X}\) for the generator of \(\operatorname{Cl}(X)\), and let \(\iota\in\mathbb{Z}_{>0}\) be such that \(-K_{X}\sim_{{}_{\mathbb{Q}}}D\sim_{{}_{\mathbb{Q}}}\iota H_{X}\). Since the restriction homomorphism \(\operatorname{Cl}(X)\to\operatorname{Cl}(D)\) is an isomorphism, there exists an irreducible hypersurface \(H_{d}\sim_{{}_{\mathbb{Q}}}dH_{X}\) such that \(Z=D\cap H_{d}\) (that is, the intersection \(D\cap H_{d}\) is generically reduced and irreducible, and \(D\cap H_{d}=Z\) set-theoretically). We follow the notation of Lemma 3.2 and its proof, with \(\{a,b\}=\{\iota,d\}\). Consider the first link in a Sarkisov factorization of \(\psi\):
\[\begin{array}{ccc}Y&\overset{\chi}{\dashrightarrow}&Y^{\prime}\\ \downarrow{\scriptstyle\pi}&&\downarrow{\scriptstyle\pi^{\prime}}\\ X&&W\end{array}\]
where \(\chi\) is a (possibly trivial) composition of Mori flips, flops and antiflips, and \((Y^{\prime},D^{\prime}_{Y})\) is a CY pair, where \(D^{\prime}_{Y}=\chi_{*}D_{Y}\).
Next we show that \(\iota=a>b=d\). By Lemma 3.2, \(\operatorname{Eff}(Y^{\prime})=\langle E^{\prime},\widetilde{H}^{\prime}_{b}\rangle_{+}\), where \(E^{\prime}\) and \(\widetilde{H}^{\prime}_{b}\) denote the strict transforms in \(Y^{\prime}\) of \(E\) and \(\widetilde{H}_{b}\), respectively. This implies that either the morphism \(\pi^{\prime}\colon Y^{\prime}\to W\) is a divisorial contraction with exceptional divisor \(\widetilde{H}^{\prime}_{b}\), or it is a Mori fibration given by the linear system \(|m\widetilde{H}^{\prime}_{b}|\) for \(m\gg 0\). The latter occurs if and only if \(a=b\). Since \((Y^{\prime},D^{\prime}_{Y})\) is a CY pair, and \(-K_{Y^{\prime}}\sim D^{\prime}_{Y}\) is \(\pi^{\prime}\)-ample, we conclude that \(D^{\prime}_{Y}\neq\widetilde{H}^{\prime}_{b}\). So \(\iota=a>b=d\), \(D=H_{a}\), and \(\pi^{\prime}\colon Y^{\prime}\to W\) is a divisorial contraction with exceptional divisor \(\widetilde{H}^{\prime}_{b}\). Note also that \(W\) is a \(\mathbb{Q}\)-factorial terminal Fano variety with \(\rho(W)=1\).
Since \(\pi\) coincides with the blow-up of \(Z\) at the generic point of \(Z\), \(W\) is terminal, and \(\chi\) is a small birational map, we can write
\[\pi^{*}K_{X}+E\ =\ K_{Y}\ =\ \chi^{*}K_{Y^{\prime}}\ =\ \chi^{*}(\pi^{\prime*}K_ {W}+t\widetilde{H}^{\prime}_{b})\ =\ \chi^{*}\pi^{\prime*}K_{W}\ +\ t \widetilde{H}_{b}\]
for some positive rational number \(t\). Recall from the proof of Lemma 3.2 that \(\widetilde{H}_{b}\) is dominated by a family of curves with class in the ray
\[\mathbb{R}_{\geq 0}\ [\ell-(a\lambda)e]\ \subset\ N_{1}(Y)\,\]
where \(\lambda=H^{\dim(X)}_{X}>0\). By intersecting the divisors above with \([\ell-(a\lambda)e]\), we get
\[0\ =\ -\lambda\iota+\lambda a =\ (\pi^{*}K_{X}\ +\ E)\ \cdot\ [\ell-(a\lambda)e]\] \[=\ (\chi^{*}\pi^{\prime*}K_{W}\ +\ t\widetilde{H}_{b})\ \cdot\ [\ell-(a \lambda)e]\] \[=\ K_{W}\cdot(\pi^{\prime}\circ\chi)_{*}[\ell-(a\lambda)e]\ +\ t \lambda(b-a)<0.\]
This contradiction shows that \(\operatorname{codim}_{D}(Z)\geq 2\), and hence \(\operatorname{codim}_{X}(Z)\geq 3\).
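Spelled out, the pairings used in the last computation are \(\pi^{*}K_{X}\cdot\ell=K_{X}\cdot H_{X}^{n-1}=-\iota\lambda\), \(\pi^{*}K_{X}\cdot e=0\), \(E\cdot\ell=0\) and \(E\cdot e=-1\), together with \(\widetilde{H}_{b}\sim_{{}_{\mathbb{Q}}}bH-E\), so that
\[\widetilde{H}_{b}\cdot[\ell-(a\lambda)e]\ =\ b\lambda-a\lambda\ =\ \lambda(b-a).\]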
Proof of Theorem A.: The result follows directly from Propositions 3.1 and 3.3.
**Remark 3.4**.: It follows from Theorem A that a _very general_ quartic surface \(D\subset\mathbb{P}^{3}\) satisfies \(\operatorname{Bir}(\mathbb{P}^{3},D)=\operatorname{Aut}(\mathbb{P}^{3},D)\). By very general we mean that \(D\) is smooth and satisfies the Lefschetz condition: \(\operatorname{Cl}D=\mathbb{Z}\cdot\mathcal{O}_{\mathbb{P}^{3}}(1)_{|D}\). In this special case, a stronger result holds, and \(D\) can be taken to be only _general_. Indeed, to conclude that \(\operatorname{Bir}(\mathbb{P}^{3},D)=\operatorname{Aut}(\mathbb{P}^{3},D)\), it is enough to assume that \(D\) is smooth and any curve of degree \(<16\) in \(D\) is a complete intersection of \(D\) with another surface \(S\subset\mathbb{P}^{3}\), see [14, Theorem 2.3 and Remark 2.4] and [1, Theorem 3.1].
## 4. Extremal contractions
In this section we classify extremal contractions of different types in various contexts. These results are the toolkit that we need to prove Theorems B and C. Indeed, the proofs of these results centre on the classification of all possible volume preserving Sarkisov links
\[(X,D)/S\dashrightarrow(X^{\dagger},D^{\dagger})/S^{\dagger}\]
that start from a known and given Mf CY pair \((X,D)/S\). The links are themselves composed of elementary modifications that are naturally associated to contractions of extremal rays: volume preserving extremal divisorial contractions; flips, flops, antiflips; and Mf CY pairs. These elementary constituents are the topic of this section. The results are technical and we advise the reader to skip this section on a first reading and use it as a reference, coming back to it as needed.
### Divisorial contractions
**Lemma 4.1**.: _Let \(D\subset\mathbb{P}^{3}\) be a quartic surface with canonical singularities, having exactly one singular point \(z\in D\). Let \(f\colon(Y,D_{Y})\rightarrow(\mathbb{P}^{3},D)\) be a volume preserving terminal divisorial contraction mapping a divisor \(E\subset Y\) to a closed point \(x\in\mathbb{P}^{3}\). Then \(x=z\in D\), and \(D_{Y}\) is the strict transform of \(D\) in \(Y\)._
1. _If_ \(z\) _is a singularity of_ \(D\) _of type_ \(A_{1}\)_, then_ \(f\) _is the blow-up of_ \(\mathbb{P}^{3}\) _at_ \(z\)
2. _If_ \(z\) _is a singularity of_ \(D\) _of type_ \(A_{2}\)_, choose homogeneous coordinates such that_ \(D\) _is defined by the equation_ \[x_{0}x_{1}x_{3}^{2}+x_{3}B+C\ =\ 0,\] _where_ \(B=B_{3}(x_{0},x_{1},x_{2})\) _and_ \(C=C_{4}(x_{0},x_{1},x_{2})\) _are homogeneous polynomials of degree_ \(3\) _and_ \(4\)_, respectively. Then either_ \(f\) _is the blow-up of_ \(\mathbb{P}^{3}\) _at_ \(z\)_, or_ \(f\) _is the weighted blow-up of_ \(\mathbb{P}^{3}\) _at_ \(z\) _with weights_ \((2,1,1)\) _or_ \((1,2,1)\) _-- with respect to the affine coordinates_ \(x_{0},x_{1},x_{2}\) _in the open affine chart_ \((x_{3}=1)\)_._
Proof.: By Proposition 3.1, \(x=z\) is the singular point of \(D\). By Proposition 2.6, \(D_{Y}\) is the strict transform of \(D\) in \(Y\). Choose homogeneous coordinates on \(\mathbb{P}^{3}\) such that \(z=[0:0:0:1]\), and the equation of \(D\) has the form
\[x_{3}^{2}Q+x_{3}B+C=0\,\]
where \(Q,B,C\in\mathbb{C}[x_{0},x_{1},x_{2}]\) are homogeneous of degree \(2,3\) and \(4\), respectively. Then \(z\in D\) is an \(A_{1}\)-singularity if and only if \(Q\) is a quadratic form of rank \(3\). If \(z\in D\) is an \(A_{2}\)-singularity, then \(Q\) is a quadratic form of rank two, which -- possibly after changing homogeneous coordinates -- we may assume to be \(Q=x_{0}x_{1}\).
By [13, Theorem 1.1], in suitable _analytic_ coordinates at \(z\in\mathbb{P}^{3}\), the divisorial contraction \(f\colon Y\to\mathbb{P}^{3}\) is the weighted blow-up of \(z\) with weights \((1,a,b)\) where \(a\) and \(b\) are coprime integers. The difficulty in using this result is that at this point we do not know that these analytic coordinates at \(z\) are induced from homogeneous coordinates on \(\mathbb{P}^{3}\). Instead of using the result directly, we will use an equivalent statement that is coordinate-free.
In general, suppose that \(z\in Z\) is a nonsingular point on a \(3\)-fold \(Z\), and that there are analytic coordinates at \(z\in Z\) such that \(f\colon E\subset Y\to z\in Z\) is the weighted blow-up with weights \((1,a,b)\), where \(1\leq a\leq b\) and \(\operatorname{hcf}(a,b)=1\). The toric description of the weighted blow-up allows us to realize the valuation associated to \(E\) as follows. Construct inductively the tower of blow-ups:
\[\cdots\to Z_{i}\to Z_{i-1}\to\cdots\to Z_{1}\to Z_{0}=Z,\]
where \(Z_{i}\to Z_{i-1}\) is the blow-up of the centre \(\operatorname{z}_{i-1}=\operatorname{z}_{E}Z_{i-1}\) of the valuation \(E\) on \(Z_{i-1}\). Note that \(Z_{1}\to Z_{0}\) is the blow-up of \(\operatorname{z}_{0}=z\). For every \(i\), we denote by \(E_{i}\subset Z_{i}\) the exceptional divisor, and for \(j>i\) we denote by \(E_{i}^{j}\subset Z_{j}\) the strict transform of \(E_{i}\) in \(Z_{j}\). The following key properties follow directly from the toric description of the weighted blow-up:
1. For all \(0\leq j<a\), the centre \(\operatorname{z}_{j}\) is a closed point of \(Z_{j}\). If \(j\geq 1\), then \(\operatorname{z}_{j}\in E_{j}\subset Z_{j}\), and if \(j\geq 2\), then \[\operatorname{z}_{j}\in E_{j}\setminus E_{j-1}^{j}.\]
2. The centre \(\operatorname{z}_{a}\in Z_{a}\) is the generic point of a line \(L_{a}\subset E_{a}\cong\mathbb{P}^{2}\). If \(a\geq 2\), then \[L_{a}\not\subset E_{a-1}^{a}.\]
3. For all \(a+1\leq j<b\), the centre \(\operatorname{z}_{j}\in Z_{j}\) is a section \[L_{j}\subset E_{j}\setminus E_{j-1}^{j}\] of the projection \(E_{j}\to L_{j-1}\).
4. \(E_{b}=E\) (by this we mean that the exceptional divisors \(E_{b}\) and \(E\) induce the same valuation on \(Z\)).
We go back to our volume preserving terminal divisorial contraction \(f\colon(Y,D_{Y})\to(\mathbb{P}^{3},D)\) mapping \(E\subset Y\) to \(z\in\mathbb{P}^{3}\). Consider the tower above, starting with the blow-up \(\sigma_{1}\colon Z_{1}\to Z_{0}=\mathbb{P}^{3}\) of \(\operatorname{z}_{0}=z\in\mathbb{P}^{3}\), and denote by \(D_{i}\subset Z_{i}\) the strict transform of \(D\). We have that \(K_{Z_{1}}=\sigma_{1}^{*}(K_{Z_{0}})+2E_{1}\) and \(D_{1}\sim\sigma_{1}^{*}(D_{0})-2E_{1}\), hence
\[K_{Z_{1}}+D_{1}=\sigma_{1}^{*}(K_{Z_{0}}+D_{0}).\]
In other words, the birational morphism \((Z_{1},D_{1})\to(Z_{0},D_{0})\) is volume preserving, and
\[a(E,K_{Z_{1}}+D_{1})=0.\]
If \(z\) is a singularity of \(D\) of type \(A_{1}\) or \(A_{2}\) then \(D_{1}\subset Z_{1}\) is a smooth surface. Since \(a(E,K_{Z_{1}}+D_{1})=0\), either \(E=E_{1}\) -- in which case we are done -- or the centre \(\mathrm{z}_{E}\,Z_{1}\) is the generic point of a curve on \(D_{1}\cap E_{1}\). In any case, key property (1) above implies that \(a=1\).
Suppose that \(z\in D\) is an \(A_{1}\)-singularity. Then \(D_{1}\cap E_{1}\) is a nonsingular conic in \(E_{1}\cong\mathbb{P}^{2}\). On the other hand, if \(b>1\), then the centre \(\mathrm{z}_{E}\,Z_{1}\) is the generic point of a line on \(E_{1}\cong\mathbb{P}^{2}\) by key property (2) above. This implies that \(1=a=b\), i.e. \(f\) is the blow-up of \(\mathbb{P}^{3}\) at \(z\).
Suppose now that \(z\in D\) is an \(A_{2}\)-singularity. If \(E=E_{1}\), then we are done. Otherwise, by key property (2) above, the centre \(\mathrm{z}_{1}=\mathrm{z}_{E}\,Z_{1}\) is the generic point of one of the two lines (\(x_{0}=0\)), (\(x_{1}=0\)) in \(E_{1}\cong\mathbb{P}^{2}\). Assume \(z_{1}\) is the generic point of the line \(L_{1}=(x_{0}=0)\) -- the other case is similar. Write \(\sigma_{2}\colon Z_{2}\to Z_{1}\) for the blow-up of the line \(L_{1}\subset Z_{1}\). Note that \(K_{Z_{2}}=\sigma_{2}^{*}(K_{Z_{1}})+E_{2}\), and \(D_{2}=\sigma_{2}^{*}(D_{1})-E_{2}\). Hence
\[K_{Z_{2}}+D_{2}=\sigma_{2}^{*}(K_{Z_{1}}+D_{1}),\]
in other words, the composed birational morphism \((Z_{2},D_{2})\to(\mathbb{P}^{3},D)\) is volume preserving and \(a(E,K_{Z_{2}}+D_{2})=0\). If \(E=E_{2}\), then \(f\) is the weighted blow-up with weights \((2,1,1)\) in the native homogeneous coordinates of \(\mathbb{P}^{3}\), and we are done. We show that the assumption that \(E\neq E_{2}\) leads to a contradiction. Consider the centre \(\mathrm{z}_{2}=\mathrm{z}_{E}\,Z_{2}\).
(i) By key property (3), \(\mathrm{z}_{2}\) is the generic point of a section \(L_{2}\subset E_{2}\) of the projection \(E_{2}\to L_{1}\) disjoint from \(E_{1}^{2}\).
(ii) On the other hand, since \(D_{2}\subset Z_{2}\) is a smooth surface, \(\mathrm{z}_{2}\) is the generic point of a curve in \(D_{2}\cap E_{2}\) by Proposition 3.1. Since \(\Gamma=D_{2}\cap E_{2}\) is irreducible, \(\mathrm{z}_{2}\) is the generic point of \(\Gamma\).
Finally, since \(D_{1}\cap E_{1}\) consists of the union of the two lines \((x_{0}=0)\) and \((x_{1}=0)\) in \(E_{1}\cong\mathbb{P}^{2}\), the curves \(\Gamma=D_{2}\cap E_{2}\) and \(E_{1}^{2}\cap E_{2}\) intersect. This contradicts (i) and concludes the proof.
### Extremal neighbourhoods
**Definition 4.2**.: An _extremal neighbourhood_ is the analytic germ around a projective curve \(\Gamma\) in a 3-fold \(X\). In our situation the curve \(\Gamma\) is always contained in a given surface \(S\subset X\).
**Definition 4.3**.: Consider an extremal neighbourhood \(\Gamma\subset S\subset X\), where \(X\) is a 3-fold with terminal singularities, \(S\in|-K_{X}|\) is a surface with Du Val singularities, and \(\Gamma\cong\mathbb{P}^{1}\) is a smooth rational curve. We say that the extremal neighbourhood is _trivial_ if it is isomorphic to the analytic germ around \(\Gamma\) in its normal bundle.
Our first result describes the extremal neighbourhood \(\Gamma\subset S\subset X\) in the case when \(S\) and \(X\) are nonsingular, and \(K_{X}\cdot\Gamma=-S\cdot\Gamma>0\).
**Lemma 4.4**.: _Consider an extremal neighbourhood \(\Gamma\subset S\subset X\), where \(X\) is a smooth 3-fold, \(S\in|-K_{X}|\) is a smooth surface, and \(\Gamma\cong\mathbb{P}^{1}\) is a smooth rational curve. Suppose that \(k=K_{X}\cdot\Gamma=-S\cdot\Gamma\geq 1\). Then the following holds._
1. _The extremal neighbourhood_ \(\Gamma\subset S\subset X\) _is trivial._
2. _The antiflip_ \(X\dashrightarrow X^{-}\) _exists, and_ \(X^{-}\) _has terminal singularities if and only if_ \(k=1\)_._
Proof.: To prove (1), we use [12, Lemma 3.33], which in fact goes back to [13, Lemma 9]. It is enough to show that the analytic germ around \(S\subset X\) is trivial. Indeed, \(\Gamma\) is a \((-2)\)-curve on \(S\), and \(S\) itself is isomorphic to the analytic germ around \(\Gamma\) in its normal bundle \(N_{\Gamma/S}\cong\mathcal{O}_{\Gamma}(-2)\). We denote by \(\mathcal{O}_{S}(1)\) the unique line bundle on \(S\) such that \(\mathcal{O}_{S}(1)_{|\Gamma}=\mathcal{O}_{\Gamma}(1)\).
First we check that the second infinitesimal neighbourhood of \(S\) in \(X\) is trivial. Indeed, the second infinitesimal neighbourhood is an infinitesimal extension of \(\mathcal{O}_{S}\) by \(\mathcal{O}_{S}(-S)=\mathcal{O}_{S}(k)\):
\[0\to\mathcal{O}_{S}(-S)\to\mathcal{O}_{2S}\to\mathcal{O}_{S}\to 0\,\]
and these extensions are classified by
\[H^{1}(S,T_{S}(-S))=H^{1}\Big{(}\mathcal{O}_{S}(-2+k)\oplus\mathcal{O}_{S}(2+k) \Big{)}=0.\]
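Here \(\mathcal{O}_{S}(-2+k)\) and \(\mathcal{O}_{S}(2+k)\) come from twisting by \(\mathcal{O}_{S}(-S)=\mathcal{O}_{S}(k)\) the two terms of the relative tangent sequence \(0\to\pi^{*}\mathcal{O}_{\Gamma}(-2)\to T_{S}\to\pi^{*}T_{\Gamma}\to 0\) of the projection \(\pi\colon S\to\Gamma\), and one uses the vanishing
\[H^{1}\big{(}S,\mathcal{O}_{S}(m)\big{)}=H^{1}\Big{(}\Gamma,\bigoplus_{j\geq 0}\mathcal{O}_{\Gamma}(m+2j)\Big{)}=0\quad\text{for all }m\geq-1,\]
which applies since \(-2+k\geq-1\) for \(k\geq 1\); the same vanishing criterion is used again below.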
Now that we have shown that the second infinitesimal neighbourhood is trivial, we are ready to apply [12, Lemma 3.33], which states that the neighbourhood of \(S\subset X\) is formally trivial if
\(H^{1}\big{(}S,T_{X|S}\otimes\mathcal{O}_{S}(-nS)\big{)}=0\) for all \(n\geq 2\). The latter statement in turn follows from the exact sequence
\[0\to T_{S}\to T_{X|S}\to\mathcal{O}_{S}(S)\to 0\,\]
giving
\[H^{1}\Big{(}S,\mathcal{O}_{S}(-2+nk)\oplus\mathcal{O}_{S}(2+nk)\Big{)}\to H^{1} \Big{(}S,T_{X|S}\otimes\mathcal{O}_{S}(-nS)\Big{)}\to H^{1}\Big{(}S,\mathcal{O }_{S}\big{(}(n-1)k\big{)}\Big{)}\]
for all \(n\). We have shown that the neighbourhood of \(S\) in \(X\) is formally trivial. The neighbourhood is analytically trivial by the main result of [10].
Now let us prove (2). It follows from (1) that the extremal neighbourhood \(\Gamma\subset S\subset X\) is isomorphic to the neighbourhood of \(\mathbb{P}^{1}\) in the total space of the bundle \(\mathcal{O}(-2)\oplus\mathcal{O}(-k)\). Equivalently, \(X\) is isomorphic to the neighbourhood of \(\mathbb{P}^{1}=(y_{0}=y_{1}=0)\) in the geometric quotient \(\mathbb{C}^{4}/\!\!/\mathbb{C}^{\times}\) for the action given by the weights:
\[\begin{array}{cccc}x_{0}&x_{1}&y_{0}&y_{1}\\ \hline 1&1&-2&-k\,\end{array}\]
with \((>0)\) stability condition. Under this identification, \(S\) is given by the equation \((y_{1}=0)\).
The antiflip \(X\dashrightarrow X^{-}\) is obtained by changing the stability condition to \((<0)\). If \(k=1\), then \(X^{-}\) has terminal singularities. If \(k>1\), then \(X^{-}\) has a strictly canonical (and Gorenstein) singularity of type
\[\frac{1}{k}(1,1,-2).\]
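The last claim can be verified with the Reid--Tai criterion: writing the singularity as \(\frac{1}{k}(1,1,k-2)\), the age of the element \(j\in\{1,\dots,k-1\}\) is
\[\frac{j}{k}+\frac{j}{k}+\Big{\{}\frac{-2j}{k}\Big{\}}=\begin{cases}1&\text{if }2j\leq k,\\ 2&\text{if }2j>k,\end{cases}\]
so all ages are \(\geq 1\), with equality attained, and the singularity is canonical but not terminal. It is Gorenstein since \(1+1+(k-2)\equiv 0\pmod{k}\).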
**Remark 4.5**.: In Lemma 4.4 above, the existence of the surface \(S\) is needed for the triviality of the neighbourhood, as shown by simple counterexamples.
Next we want to describe the extremal neighbourhood \(\Gamma\subset S\subset X\) in the case when \(S\) is non-singular outside a single ordinary node, where \(X\) has a quotient singularity of type \(1/2(1,1,1)\), and \(K_{X}\cdot\Gamma=-S\cdot\Gamma>0\). In this case, we expect that the extremal neighbourhood is trivial. If this is true, then \(X\) is isomorphic to the neighbourhood of \(\mathbb{P}^{1}=(y_{0}=y_{1}=0)\) in the geometric quotient \(\mathbb{C}^{4}/\!\!/\mathbb{C}^{\times}\) for the action given by the weights:
\[\begin{array}{cccc}x_{0}&x_{1}&y_{0}&y_{1}\\ \hline 1&2&-3&-k\,\end{array}\]
where \(k\) is the integer such that \(K_{X}\cdot\Gamma=-S\cdot\Gamma=k/2\), and the stability condition is \((>0)\). As in the proof of Lemma 4.4 above, we could then construct the antiflip \(X\dashrightarrow X^{-}\) by changing the stability condition to \((<0)\), and easily check that \(X^{-}\) has terminal singularities if and only if \(k=1\). However, the presence of singularities makes it much harder to prove triviality of the extremal neighbourhood. Instead, we shall construct the antiflip by hand, and verify whether it has terminal singularities.
**Lemma 4.6**.: _Consider a \(3\)-dimensional extremal neighbourhood \(\Gamma\subset S\subset X\), where \(\Gamma\cong\mathbb{P}^{1}\) is a smooth rational curve, and \(S\in|-K_{X}|\) is nonsingular outside a single ordinary node, where \(X\) has a quotient singularity of type \(1/2(1,1,1)\). Suppose that \(K_{X}\cdot\Gamma=-S\cdot\Gamma=k/2>0\), with \(k\geq 3\) an odd integer. Then the antiflip \(X\dashrightarrow X^{-}\) exists, and \(X^{-}\) has worse than terminal singularities._
Proof.: Consider the blow-up \((E\subset Y)\to(\Gamma\subset X)\) of the curve \(\Gamma\subset X\), denote by \(S^{\prime}\subset Y\) the strict transform of \(S\), and set \(\Gamma^{\prime}=S^{\prime}\cap E\). It is easy to compute that:
\[E\cdot\Gamma^{\prime}=-\frac{2}{3},\quad\text{and}\quad S^{\prime}\cdot\Gamma^ {\prime}=-m+2,\text{ where }m=\frac{k}{2}+\frac{1}{2}\geq 2.\]
We claim that the extremal neighbourhood of \(\Gamma^{\prime}\subset Y\) is analytically isomorphic to the analytic germ around the curve \((y_{0}=y_{1}=0)\) in the geometric quotient \(\mathbb{C}^{4}/\!\!/\mathbb{C}^{\times}\) for the action given by the weights:
\[\begin{array}{cccc}x_{0}&x_{1}&y_{0}&y_{1}\\ \hline 1&2&-3&-2m+4\end{array},\]
where \(E=(y_{0}=0)\), \(S^{\prime}=(y_{1}=0)\), and the stability condition is \((>0)\).
To prove the claim, note that the curve \(\Gamma^{\prime}\) is the complete intersection of the surfaces \(S^{\prime}\) and \(E\). Denoting by \(P\in\Gamma^{\prime}\subset S^{\prime}\subset Y\) the singular point, choose a divisor \(B_{0}\) through \(P\) such that \(B_{0}\cdot\Gamma^{\prime}=\frac{1}{2}\), and a divisor \(B_{1}\) meeting \(\Gamma^{\prime}\) transversally at some other point \(Q\in\Gamma^{\prime}\). The divisors \(B_{0},B_{1},S^{\prime},E\) map the neighbourhood of \(\Gamma^{\prime}\subset Y\) isomorphically to the model toric quotient neighbourhood given in the claim, in such a way that they map to the divisors \((x_{0}=0),(x_{1}=0),(y_{0}=0),(y_{1}=0)\). There are two cases to discuss: \(m=2\) and \(m>2\).
If \(m>2\), then, in order to perform the antiflip \(Y\dashrightarrow Y^{-}\) of the curve \(\Gamma^{\prime}\), we change the stability condition to \((<0)\). Denote by \(S^{-}\) and \(E^{-}\subset Y^{-}\) the strict transforms of \(S^{\prime}\) and \(E\), respectively. Then \(S^{-}\) and \(E^{-}\) are disjoint in \(Y^{-}\), and the antiflip \(X^{-}\) of the original \(\Gamma\subset X\) is given by
\[X^{-}=\operatorname{Proj}R(Y^{-},S^{-}).\]
The divisor \(E^{-}\) is contracted to a point in \(X^{-}\), and this point is a strictly canonical singularity.
If \(m=2\), then \(E^{-}\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), and \(X^{-}\) is the contraction of \(E^{-}\) along the ruling other than the one contracted to \(X\). The image of \(E^{-}\) is a curve of \(A_{2}\)-singularities, and so \(X^{-}\) has worse than terminal singularities.
In the proof of Theorem C, we need to understand some neighbourhoods where the curve \(\Gamma\) is not irreducible.
**Lemma 4.7**.: _Consider an extremal neighbourhood \(\Gamma\subset S\subset X\), where \(X\) is a smooth 3-fold, \(S\in|-K_{X}|\) is a smooth surface, and \(\Gamma=\Gamma_{0}\cup\Gamma_{1}\subset S\) is a chain of two \((-2)\)-curves intersecting transversally. Suppose that \(-K_{X}\cdot\Gamma_{0}=S\cdot\Gamma_{0}=-a<0\) and \(-K_{X}\cdot\Gamma_{1}=S\cdot\Gamma_{1}=-b<0\). Then the extremal neighbourhood is isomorphic to the analytic germ around the curve_
\[\Gamma_{0}\cup\Gamma_{1}=(x_{0}=x_{2}=0)\cup(x_{0}=x_{3}=0)\]
_in the geometric quotient \(\mathbb{C}^{5}/\!\!/(\mathbb{C}^{\times})^{2}\) for the action given by the weights:_
\[\begin{array}{ccccc}x_{0}&x_{1}&x_{2}&x_{3}&x_{4}\\ \hline-a&1&1&0&-2\\ -b&-2&0&1&1\end{array},\]
_where the stability condition is taken in the quadrant \(\langle(1,0),(0,1)\rangle_{+}\). Under this identification, \(S\) is given by the equation \((x_{0}=0)\)._
Proof.: By Lemma 4.4, the neighbourhood is trivial around each of the two curves \(\Gamma_{0}\), \(\Gamma_{1}\). It follows from this that we can find divisors \(D_{0},\ldots,D_{4}\) on \(X\) as follows:
1. \(D_{0}=S\);
2. \(D_{0}\cap D_{1}=\Gamma_{1}\) scheme-theoretically. It follows from this that \(D_{1}\) intersects \(\Gamma_{0}\) transversally at the point where it intersects \(\Gamma_{1}\);
3. \(D_{2}\) intersects \(\Gamma_{0}\) transversally at one point and is disjoint from \(\Gamma_{1}\);
4. \(D_{3}\) intersects \(\Gamma_{1}\) transversally at one point and is disjoint from \(\Gamma_{0}\);
5. \(D_{0}\cap D_{4}=\Gamma_{0}\) scheme-theoretically. It follows from this that \(D_{4}\) intersects \(\Gamma_{1}\) transversally at the point where it intersects \(\Gamma_{0}\).
Note that the intersection multiplicities of these divisors with the curves \(\Gamma_{0}\) and \(\Gamma_{1}\) are as follows:
\[\begin{array}{ccccc}&D_{0}&D_{1}&D_{2}&D_{3}&D_{4}\\ \hline\cdot\Gamma_{0}&-a&1&1&0&-2\\ \cdot\Gamma_{1}&-b&-2&0&1&1\end{array}\]
and so the divisors \(D_{0},\ldots,D_{4}\) map the neighbourhood isomorphically to the model toric quotient neighbourhood given above.
### Fano and weak Fano \(\mathbb{P}^{1}\)-bundles over \(\mathbb{P}^{2}\)
We state the classification of rank two vector bundles \(\mathcal{E}\) on \(\mathbb{P}^{2}\) such that \(\mathbb{P}(\mathcal{E})\) is Fano or weak Fano. We also collect some elementary facts and formulas on projective space bundles for later use.
We begin by clarifying our conventions regarding vector bundles. In this section, a vector bundle on a scheme is a locally free sheaf on it. Following Grothendieck, if \(\mathcal{E}\) is a vector bundle on \(Y\) we denote by
\[\mathbb{P}(\mathcal{E})=\underline{\operatorname{Proj}}_{\mathcal{O}_{Y}} \bigoplus_{n\in\mathbb{N}}\mathit{Sym}^{n}\,\mathcal{E}\]
the space of \(1\)-dimensional _quotients_ of \(\mathcal{E}\), by
\[\pi\colon\mathbb{P}(\mathcal{E})\to Y\]
the natural projection, by \(\mathcal{O}_{\mathbb{P}(\mathcal{E})}(1)\) -- or simply \(\mathcal{O}(1)\) -- the tautological line bundle on \(\mathbb{P}(\mathcal{E})\), so that
\[\mathcal{E}=\pi_{*}\mathcal{O}(1),\]
and by \(\xi=c_{1}\left(\mathcal{O}(1)\right)\in A^{1}\left(\mathbb{P}(\mathcal{E})\right)\) its first Chern class. The Chow ring of \(\mathbb{P}(\mathcal{E})\) admits the following description:
\[A^{\bullet}\left(\mathbb{P}(\mathcal{E})\right)=\frac{A^{\bullet}(Y)[\xi]}{( \xi^{r}+\sum_{i=1}^{r}(-1)^{i}\xi^{r-i}c_{i}(\mathcal{E}))}\,\]
where \(r=\operatorname{rk}\mathcal{E}\), and the formula defines the \(i\)-th _Chern class_\(c_{i}(\mathcal{E})\in A^{i}(Y)\). The _Chern polynomial_ of \(\mathcal{E}\) is the polynomial
\[c_{t}(\mathcal{E})=1+\sum_{i=1}^{r}c_{i}(\mathcal{E})t^{i}.\]
In what follows, \(\mathcal{E}\) is a rank two vector bundle on \(\mathbb{P}^{2}\). We denote by \(\ell\in A^{1}(\mathbb{P}^{2})\) the class of a line and we abuse notation slightly and write the first and second Chern classes of \(\mathcal{E}\) as
\[c_{1}\ell,\quad c_{2}\ell^{2},\quad\text{where }c_{1},c_{2}\in\mathbb{Z}.\]
We say that \(\mathcal{E}\) is _normalised_ if \(c_{1}\in\{0,-1\}\). We can always achieve this by tensoring \(\mathcal{E}\) with a line bundle.
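For example, for a split bundle \(\mathcal{E}=\mathcal{O}_{\mathbb{P}^{2}}(a)\oplus\mathcal{O}_{\mathbb{P}^{2}}(b)\) one has \(c_{t}(\mathcal{E})=(1+a\ell t)(1+b\ell t)\), so \(c_{1}=a+b\) and \(c_{2}=ab\). Twisting any rank two bundle by \(\mathcal{O}_{\mathbb{P}^{2}}(e)\) changes \((c_{1},c_{2})\) into
\[(c_{1}+2e,\ c_{2}+ec_{1}+e^{2}),\]
so one can always reach \(c_{1}\in\{0,-1\}\), according to the parity of \(c_{1}\).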
**Theorem 4.8**.: _Let \(\mathcal{E}\) be a normalised rank two vector bundle on \(\mathbb{P}^{2}\). Then_
1. \(\mathbb{P}(\mathcal{E})\) _is Fano if and only if_ \(\mathcal{E}\) _is one of the bundles in List_ \(1\) _below_ _[_13, 14_]__._
2. \(\mathbb{P}(\mathcal{E})\) _is strictly weak Fano -- i.e.,_ \(-K_{\mathbb{P}(\mathcal{E})}\) _is nef and big but not ample -- if and only if_ \(\mathcal{E}\) _is one of the bundles in List_ \(2\) _below_ _[_15_, Theorem B]__,_ _[_16_, Theorem 3.4]__._
**List 1**.: **Rank \(2\) vector bundles \(\mathcal{E}\) on \(\mathbb{P}^{2}\) with \(c_{1}(\mathcal{E})\in\{0,-1\}\) such that \(\mathbb{P}(\mathcal{E})\) is Fano**
1. \(\mathcal{O}_{\mathbb{P}^{2}}\oplus\mathcal{O}_{\mathbb{P}^{2}}(-1),\ c_{1}=-1,c_{2}=0;\)
2. \(\mathcal{O}_{\mathbb{P}^{2}}(1)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-1),\ c_{1}=0,c_{2}=-1;\)
3. \(\mathcal{O}_{\mathbb{P}^{2}}\oplus\mathcal{O}_{\mathbb{P}^{2}},\ c_{1}=c_{2}=0;\)
4. \(T_{\mathbb{P}^{2}}(-2),\ c_{1}=-1,c_{2}=1;\)
5. \(\mathcal{E}\) is determined by the exact sequence \(0\to\mathcal{O}_{\mathbb{P}^{2}}\to\mathcal{E}\to\mathcal{I}_{p}\to 0\), where \(\mathcal{I}_{p}\) is the ideal sheaf of a point \(p\in\mathbb{P}^{2}\), \(c_{1}=0,c_{2}=1\) (see Remark 4.9 below);
6. \(\mathcal{E}\) is a stable bundle with \(c_{1}=0,c_{2}=2;\)
7. \(\mathcal{E}\) is a stable bundle with \(c_{1}=0,c_{2}=3.\)
**List 2**.: **Rank \(2\) vector bundles \(\mathcal{E}\) on \(\mathbb{P}^{2}\) with \(c_{1}(\mathcal{E})\in\{0,-1\}\) such that \(\mathbb{P}(\mathcal{E})\) is strictly weak Fano**
8. \(\mathcal{O}_{\mathbb{P}^{2}}(1)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-2),\ c_{1}=-1,c_{2}=-2;\)
9. \(\mathcal{E}\) is determined by the exact sequence \(0\to\mathcal{O}_{\mathbb{P}^{2}}\to\mathcal{E}\to\mathcal{I}_{p}(-1)\to 0\), where \(\mathcal{I}_{p}\) is the ideal sheaf of a point \(p\in\mathbb{P}^{2}\), \(c_{1}=-1,c_{2}=1\) (see Remark 4.9 below);
10. \(\mathcal{E}\) is a stable bundle with \(c_{1}=-1,2\leq c_{2}\leq 5;\)
11. \(\mathcal{E}\) is a stable bundle with \(c_{1}=0,4\leq c_{2}\leq 6.\)
**Remark 4.9**.: One may compute the Chern classes of the vector bundles (5) and (9) above as follows. For \(k\leq 2\), let \(\mathcal{E}\) be the unique vector bundle on \(\mathbb{P}^{2}\) that sits in the exact sequence
\[0\to\mathcal{O}_{\mathbb{P}^{2}}\to\mathcal{E}\to\mathcal{I}_{p}(k)\to 0.\]
(Uniqueness holds because a simple computation shows that
\[\text{for all }k\in\mathbb{Z},\quad\operatorname{Ext}^{1}_{\mathcal{O}_{\mathbb{P}^{2}}}(\mathcal{I}_{p}(k),\mathcal{O}_{\mathbb{P}^{2}})=\begin{cases}\mathbb{C}&\text{if }k\leq 2;\\ 0&\text{if }k\geq 3;\end{cases}\]
and the nontrivial extensions are also locally nontrivial.) Applying the Whitney sum formula to this sequence and to the Koszul resolution
\[0\to\mathcal{O}_{\mathbb{P}^{2}}(-2+k)\to\mathcal{O}_{\mathbb{P}^{2}}(-1+k)^{\oplus 2}\to\mathcal{I}_{p}(k)\to 0\]
yields that \(c_{t}(\mathcal{E})=c_{t}(\mathcal{I}_{p}(k))\) and \(c_{t}(\mathcal{I}_{p}(k))(1+(-2+k)t)=(1+(-1+k)t)^{2}\), and from this we conclude that \(c_{1}(\mathcal{E})=k\) and \(c_{2}(\mathcal{E})=1\).
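To spell out the last step: since classes on \(\mathbb{P}^{2}\) vanish in degrees \(\geq 3\), the relation can be expanded as
\[c_{t}(\mathcal{I}_{p}(k))=\frac{(1+(-1+k)t)^{2}}{1+(-2+k)t}\equiv 1+kt+t^{2}\pmod{t^{3}},\]
so \(c_{t}(\mathcal{E})=1+kt+t^{2}\), in agreement with the values listed in (5) and (9).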
**Lemma 4.10**.: _Let \(\mathcal{E}\) be a rank \(r+1\) vector bundle on \(Y\). Then the anticanonical class of \(\mathbb{P}(\mathcal{E})\) is_
\[-K_{\mathbb{P}(\mathcal{E})}=(r+1)\xi-\pi^{*}\big{(}c_{1}(\mathcal{E})+K_{Y} \big{)}\.\]
Proof.: This follows from the Euler sequence computing the relative tangent bundle:
\[0\to\mathcal{O}_{\mathbb{P}(\mathcal{E})}\to\big{(}\pi^{*}\mathcal{E}^{\vee} \big{)}\otimes\mathcal{O}_{\mathbb{P}(\mathcal{E})}(1)\to T_{\pi}\to 0\,\]
and the exact sequence \(0\to T_{\pi}\to T_{\mathbb{P}(\mathcal{E})}\to\pi^{*}T_{Y}\to 0\).
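In the case of interest below, where \(\mathcal{E}\) has rank two on \(Y=\mathbb{P}^{2}\) and \(c_{1}(\mathcal{E})=c_{1}\ell\), the formula reads
\[-K_{\mathbb{P}(\mathcal{E})}=2\xi+(3-c_{1})\,\pi^{*}\ell,\]
which is the expression used in the proof of Theorem B (there \(L^{\prime}=\pi^{*}\ell\)).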
## 5. Proof of Theorem B
In this section we prove Theorem B. Consider a Mf CY pair \((\mathbb{P}^{3},D)\), where \(D\subset\mathbb{P}^{3}\) is a quartic surface that is nonsingular outside a unique singular point \(z\in D\) of type \(A_{1}\), and such that the class group \(\text{Cl}(D)\cong\mathbb{Z}\) is generated by the class of a hyperplane section. In this case, birational rigidity fails. The following Mf CY pair \((X,D_{X})\to\mathbb{P}^{2}\) is a nontrivial element in \(\mathcal{P}(\mathbb{P}^{3},D)\). Let \(\sigma\colon X\to\mathbb{P}^{3}\) be the blow-up of \(z\), and \(D_{X}\subset X\) the strict transform of \(D\). It is a smooth \(K3\) surface. The projection \(\mathbb{P}^{3}\dashrightarrow\mathbb{P}^{2}\) from \(z\) induces a Mori fibration \(\pi\colon X\to\mathbb{P}^{2}\). One easily computes that
\[K_{X}+D_{X}\ =\ \sigma^{*}(K_{\mathbb{P}^{3}}+D).\]
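Indeed, \(K_{X}=\sigma^{*}K_{\mathbb{P}^{3}}+2E\) and \(D_{X}=\sigma^{*}D-2E\), the point \(z\in D\) being a double point.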
Theorem B states that the Mf CY pair \((X,D_{X})\to\mathbb{P}^{2}\) is the only nontrivial element in the pliability set \(\mathcal{P}(\mathbb{P}^{3},D)\).
In the course of the proof, we will need the following result.
**Proposition 5.1**.: _Let \(Y\) be a smooth projective variety of dimension \(n\), \(A\) a smooth semi-ample divisor on \(Y\), and \(\varphi\colon Y\to\mathbb{P}^{N}\) the morphism induced by the linear system \(|mA|\) for \(m\gg 0\). Suppose that:_
(i) _the generic fibre dimension of_ \(\varphi\) _is_ \(\leq n-3\);
(ii) _for all_ \(x\in\mathbb{P}^{N}\), \(\dim\big{(}\varphi^{-1}(x)\big{)}\leq n-2\).
_Then the cokernel of the restriction homomorphism \(H^{2}(Y;\mathbb{Z})\to H^{2}(A;\mathbb{Z})\) is torsion-free._
Proof.: Set \(\mathcal{U}=Y\setminus A\), and denote by \(i\colon\mathcal{U}\to Y\) and \(j\colon A\to Y\) the inclusions. Consider the long exact sequence
\[\ldots\ \to H^{2}_{c}(\mathcal{U},\mathbb{Z})\ \xrightarrow{i_{*}}\ H^{2}(Y, \mathbb{Z})\ \xrightarrow{j^{*}}\ H^{2}(A,\mathbb{Z})\ \to\ H^{3}_{c}(\mathcal{U},\mathbb{Z})\ \to\ \ldots\]
where \(H^{i}_{c}(\mathcal{U},\mathbb{Z})\) denotes the singular cohomology with compact support. It is enough to show that \(H^{3}_{c}(\mathcal{U},\mathbb{Z})\) is torsion free. By Poincare duality, \(H^{3}_{c}(\mathcal{U},\mathbb{Z})\cong H_{2n-3}(\mathcal{U},\mathbb{Z})\).
Let \(H\subset\mathbb{P}^{N}\) be the hyperplane such that \(\varphi^{*}H=mA\). Then \(\varphi\) restricts to a proper morphism \(\varphi_{\mathcal{U}}\colon\mathcal{U}\to\mathbb{P}^{N}\setminus H\). For each integer \(k\), we denote by \(\phi(k)\) the dimension of the set of points \(y\in\varphi(\mathcal{U})\) such that \(\dim\big{(}\varphi^{-1}(y)\big{)}=k\). If this set is empty, we set \(\phi(k)=-\infty\). By the main
result of [12, Part II, Chap. 1, Sec. 1.1\(\star\)] (Homotopy Dimension with Large Fibres), \(\mathcal{U}\) has the homotopy type of a CW complex of real dimension less than or equal to
\[n+\sup_{k}\left\{2k-n+\phi(k)+\inf\{\phi(k),0\}\right\}\ \leq\ 2n-3.\]
where the last inequality follows from the assumptions (i) and (ii). Finally, by [13, Theorem 3], \(H_{2n-3}(\mathcal{U},\mathbb{Z})\) is torsion free.
We are now ready to proceed with the proof of Theorem B. We start by collecting a few useful facts on the geometry of \(D_{X}\).
**5.2** (The geometry of \(D_{X}\)).: The surface \(D_{X}\subset X\) intersects the exceptional divisor \(E\cong\mathbb{P}^{2}\) of \(\sigma\colon X\to\mathbb{P}^{3}\) transversely along a smooth conic \(e\). Denote by \(h\) the pull-back of a general hyperplane under \(\sigma_{|D_{X}}\colon D_{X}\to\mathbb{P}^{3}\). Then \(\operatorname{Pic}(D_{X})=\mathbb{Z}[h]\oplus\mathbb{Z}[e]\), and the intersection matrix of \(\operatorname{Pic}(D_{X})\) with respect to the basis \(\left([h],[e]\right)\) is
\[\left(\begin{array}{cc}4&0\\ 0&-2\end{array}\right). \tag{5.3}\]
Here \(h^{2}=\deg D=4\), \(e^{2}=-2\) since \(e\) is a smooth rational curve on the \(K3\) surface \(D_{X}\), and \(h\cdot e=0\) because \(e\) is contracted by \(\sigma_{|D_{X}}\).
The condition that \(\operatorname{Cl}(D)\cong\mathbb{Z}\) is generated by the class of a hyperplane section guarantees that \(D\) contains no lines, and so the restriction \(\pi_{|D_{X}}\colon D_{X}\to\mathbb{P}^{2}\) is finite of degree \(2\). It ramifies over a sextic curve. The associated involution \(\tau\colon D_{X}\to D_{X}\) maps the \((-2)\)-curve \(e\) to another \((-2)\)-curve \(e^{\prime}\), the strict transform of the intersection of \(D\) with its tangent cone at \(z\). Note that \(\operatorname{NE}(D_{X})=\langle[e],[e^{\prime}]\rangle_{+}\), and \(e+e^{\prime}\sim(\pi_{|D_{X}})^{*}\mathcal{O}_{\mathbb{P}^{2}}(2)\sim 2h-2e\). Thus, \(e^{\prime}\sim 2h-3e\).
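As a quick consistency check against (5.3): \(e^{\prime 2}=(2h-3e)^{2}=4\cdot 4+9\cdot(-2)=-2\), so \(e^{\prime}\) is indeed a \((-2)\)-curve, and \(e\cdot e^{\prime}=e\cdot(2h-3e)=6\).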
**Lemma 5.4**.: _With the notation and assumptions of the preceding discussion 5.2, we have_
\[\operatorname{Bir}(D_{X})=G_{D}\rtimes\langle\tau\rangle,\]
_where \(G_{D}\subset\operatorname{PGL}_{4}(\mathbb{C})\) is the group of projective automorphisms of \(D\subset\mathbb{P}^{3}\)._
Proof.: Since \(D_{X}\) is the unique minimal model of \(D\), it follows that \(\operatorname{Bir}(D)=\operatorname{Aut}(D_{X})\). Let us denote this group by \(G\). It naturally acts on \(\operatorname{Pic}(D_{X})\) respecting the quadratic form and the Mori cone, thus we get a group homomorphism
\[\rho\colon G\to\{\pm 1\}\]
where \(\rho(g)=1\) if \(g\) fixes \(e\) and \(e^{\prime}\), and \(\rho(g)=-1\) if \(g\) exchanges \(e\) with \(e^{\prime}\). The involution \(\tau\) exchanges \(e\) and \(e^{\prime}\), thus \(G=G_{D}\rtimes\langle\tau\rangle\), where \(G_{D}=\ker(\rho)\) is the subgroup of \(\operatorname{Aut}(D_{X})\) of elements that act trivially on \(\operatorname{Pic}(D_{X})\) (fixing \(e\) and \(e^{\prime}=2h-3e\) fixes \(h\) as well). Thus \(G_{D}\) fixes the class \(h\in\operatorname{Pic}(D_{X})\) and hence it acts on the linear system \(|\mathcal{O}_{D}(1)|\) by projective linear transformations.
**Remark 5.5**.: It is easy to see that if \(D\) is general then the group \(G_{D}\) of projective automorphisms of \(D\subset\mathbb{P}^{3}\) is trivial.
We are now ready to prove the first conclusion of Theorem B, i.e., that the pliability of the pair \((\mathbb{P}^{3},D)\) is the set with two elements \(\{(\mathbb{P}^{3},D)/\operatorname{Spec}\mathbb{C},(X,D_{X})/\mathbb{P}^{2}\}\).
Proof of Theorem B(1).: Let \((\mathbb{P}^{3},D)\) be a Mf CY pair, where \(D\subset\mathbb{P}^{3}\) is a quartic surface having exactly one singular point \(z\in D\) of type \(A_{1}\), and such that \(\operatorname{Cl}(D)\cong\mathbb{Z}\) is generated by the class of a hyperplane section. Note that \((\mathbb{P}^{3},D)\) is canonical.
Let \((Y,D_{Y})\to S_{Y}\) be a Mf CY pair, and \(\varphi\colon(\mathbb{P}^{3},D)\dashrightarrow(Y,D_{Y})\) a volume preserving birational map that is not biregular. By Theorem 2.5, the map \(\varphi\) factors as the composition of volume preserving Sarkisov links. By Proposition 3.3, the first step of any Sarkisov factorization is a divisorial contraction with zero-dimensional center. By Lemma 4.1, this divisorial contraction \(\sigma\colon X\to\mathbb{P}^{3}\) is the blow-up of \(z\). Let \(\pi\colon X\to\mathbb{P}^{2}\) be the Mori fibration induced by the projection from \(z\), and assume that \(\varphi\circ\sigma\colon X\dashrightarrow Y\) is not biregular. Let \(D_{X}\subset X\) be the strict transform of \(D\). Since by assumption \(D\) does not contain any line, the restriction \(\pi_{|D_{X}}\colon D_{X}\to\mathbb{P}^{2}\) is a finite morphism of degree \(2\).
Note that \(X\) has Picard rank two, and the morphisms \(\sigma\) and \(\pi\) are the two extremal contractions of \(X\). Therefore, the Sarkisov factorization of \(\varphi\) must proceed with a volume preserving divisorial contraction \(g\colon(Z,D_{Z})\to(X,D_{X})\). Since \(D_{X}\subset X\) is smooth, Proposition 3.1 implies
that the center of the divisorial contraction \(g\colon Z\to X\) is a curve \(\mathcal{C}\subset D_{X}\), and \(D_{Z}\) is the strict transform of \(D_{X}\) in \(Z\). At the generic point of \(\mathcal{C}\), the morphism \(g\colon Z\to X\) coincides with the blow-up of \(\mathcal{C}\). We denote by \(F\subset Z\) the exceptional divisor of \(g\colon Z\to X\).
In order to describe the next link in a Sarkisov factorization of \(\varphi\), we describe the classes of irreducible curves that are contracted by \(\pi\circ g\). They are either contained in fibers of \(g_{|F}\colon F\to\mathcal{C}\), or they are strict transforms of fibers of \(\pi\colon X\to\mathbb{P}^{2}\). Set \(\Gamma=\pi(\mathcal{C})\subset\mathbb{P}^{2}\), and denote by \(d\) the degree of the finite morphism \(\pi_{|\mathcal{C}}\colon\mathcal{C}\to\Gamma\). We know that \(d\in\{1,2\}\) (as we just said, \(\mathcal{C}\subset D_{X}\)), but we will soon see that \(d=1\). For all \(q\in\mathbb{P}^{2}\setminus\Gamma\), \((\pi\circ g)^{-1}(q)\cong\mathbb{P}^{1}\). For a general point \(q\in\Gamma\), \((\pi\circ g)^{-1}(q)\) has \(d+1\) rational components: the strict transform of \(\pi^{-1}(q)\), and \(d\) fibers of \(g_{|F}\colon F\to\mathcal{C}\), corresponding to the \(d\) points of intersection \(\pi^{-1}(q)\cap\mathcal{C}\).
The next link in the Sarkisov factorization of \(\varphi\) is either of type (I) or of type (II). We show that it cannot be a Sarkisov link of type (I), as in the following diagram:
\[\begin{array}{ccc}Z&\dashrightarrow&X^{\prime}\\ \downarrow{\scriptstyle g}&&\downarrow{\scriptstyle\pi^{\prime}}\\ X&&S\\ \downarrow{\scriptstyle\pi}&&\downarrow{\scriptstyle r}\\ \mathbb{P}^{2}&=&\mathbb{P}^{2}\end{array}\]
Here \(Z\dashrightarrow X^{\prime}\) is a sequence of Mori flips, flops and antiflips, \(\pi^{\prime}\colon X^{\prime}\to S\) is a Mori fiber space, and \(r\colon S\to\mathbb{P}^{2}\) is a divisorial contraction with center a point \(q\in\mathbb{P}^{2}\). Note that \((r\circ\pi^{\prime})^{-1}(q)\) is a surface in \(X^{\prime}\), and so its strict transform in \(Z\) is also surface. The commutativity of the above diagram then implies that \((\pi\circ g)^{-1}(q)\) is a surface, which is impossible since the fibers of \(\pi\circ g\) are \(1\)-dimensional.
We conclude that the next link in a Sarkisov factorization of \(\varphi\) is of type (II), as in the following diagram:
\[\begin{array}{ccc}Z&\overset{\chi}{\dashrightarrow}&Z^{\prime}\\ \downarrow{\scriptstyle g}&&\downarrow{\scriptstyle g^{\prime}}\\ X&&X^{\prime}\\ \downarrow{\scriptstyle\pi}&&\downarrow{\scriptstyle\pi^{\prime}}\\ \mathbb{P}^{2}&=&\mathbb{P}^{2}\end{array}\]
Here \(\chi\colon Z\dashrightarrow Z^{\prime}\) is a sequence of Mori flips, flops and antiflips, and \(g^{\prime}\colon Z^{\prime}\to X^{\prime}\) is a divisorial contraction. We denote by \(D_{X^{\prime}}\) the strict transform of \(D_{X}\) in \(X^{\prime}\). It is normal by Lemma 2.8. We will show that \(\pi^{\prime}\colon X^{\prime}\to\mathbb{P}^{2}\) is a \(\mathbb{P}^{1}\)-bundle square equivalent to \(\pi\colon X\to\mathbb{P}^{2}\), and that \(\overline{g}=g^{\prime}\circ\chi\circ g^{-1}\colon X\dashrightarrow X^{\prime}\) restricts to an isomorphism between \(D_{X}\) and \(D_{X^{\prime}}\).
From the description of the curves contracted by \(\pi\circ g\) above, and the fact that \(\chi\colon Z\dashrightarrow Z^{\prime}\) is an isomorphism over the complement of a finite subset of \(\mathbb{P}^{2}\), we see that \(g^{\prime}\colon Z^{\prime}\to X^{\prime}\) contracts the strict transform of \(\pi^{-1}(\Gamma)\) in \(Z^{\prime}\) onto a curve \(\mathcal{C}^{\prime}\subset X^{\prime}\), which is mapped to \(\Gamma\) by \(\pi^{\prime}\). Recall that \(\pi_{|\mathcal{C}}\colon\mathcal{C}\to\Gamma\) is a finite morphism of degree \(d\in\{1,2\}\). If \(d=2\), then \(D_{X^{\prime}}\) would be singular along \(\mathcal{C}^{\prime}\), and hence not normal. So we conclude that \(d=1\), and the general fiber of \(\pi^{\prime}\colon X^{\prime}\to\mathbb{P}^{2}\) over \(\Gamma\) is irreducible, and thus isomorphic to \(\mathbb{P}^{1}\). Thus, \(\pi^{\prime}\colon X^{\prime}\to\mathbb{P}^{2}\) is a \(\mathbb{P}^{1}\)-bundle over the complement of a finite subset of \(\mathbb{P}^{2}\). It follows from [1, Theorem 5] that \(\pi^{\prime}\colon X^{\prime}\to\mathbb{P}^{2}\) is a \(\mathbb{P}^{1}\)-bundle. It is clearly square equivalent to \(\pi\colon X\to\mathbb{P}^{2}\) via \(\overline{g}\).
To show that the restricted map \(\overline{g}_{|D_{X}}\colon D_{X}\dashrightarrow D_{X^{\prime}}\) is an isomorphism, we first note that it does not contract any curve. This follows from the commutativity of the diagram above, and the fact that \(D_{X}\) does not contain any fiber of \(\pi\). By Zariski's Main Theorem, the birational inverse of \(\overline{g}_{|D_{X}}\) is a morphism. Adjunction yields that \(K_{D_{X}}\sim 0\) and \(K_{D_{X^{\prime}}}\sim 0\). Since \(D_{X}\) is smooth, in particular terminal, we conclude that \((\overline{g}_{|D_{X}})^{-1}\colon D_{X^{\prime}}\to D_{X}\) is an isomorphism.
The same argument as above shows that the next link in the Sarkisov factorization of \(\varphi\) cannot be of type (I). It also shows that, if it is of type (II), then it ends with a \(\mathbb{P}^{1}\)-bundle \(\pi^{\prime\prime}\colon X^{\prime\prime}\to\mathbb{P}^{2}\), square equivalent to \(\pi^{\prime}\colon X^{\prime}\to\mathbb{P}^{2}\), and the birational map \(X^{\prime}\dashrightarrow X^{\prime\prime}\) restricts
to an isomorphism between \(D_{X^{\prime}}\) and its strict transform \(D_{X^{\prime\prime}}\). So, after a finite number of Sarkisov links of type (II), we reach a \(\mathbb{P}^{1}\)-bundle, which we keep denoting by \(\pi^{\prime}\colon X^{\prime}\to\mathbb{P}^{2}\), square equivalent to \(\pi\colon X\to\mathbb{P}^{2}\), and either the Sarkisov factorization of \(\varphi\) is finished, or it must continue with a link of type (III) or (IV). Note moreover that the strict transform \(D_{X^{\prime}}\) of \(D_{X}\) in \(X^{\prime}\) is a smooth member of \(|-K_{X^{\prime}}|\) isomorphic to \(D_{X}\), and it does not contain any fiber of \(\pi^{\prime}\). In order to prove the theorem, assuming that the Sarkisov factorization of \(\varphi\) is not finished, we must show that \(X^{\prime}\) is isomorphic to the blow-up of \(\mathbb{P}^{3}\) at a point.
If the Sarkisov factorization of \(\varphi\) is not finished, then the next link starts with a birational map corresponding to an extremal ray \(R\subset\operatorname{NE}(X^{\prime})\). Let \(\gamma\subset X^{\prime}\) be a reduced and irreducible curve such that \(R=\mathbb{R}_{\geq 0}[\gamma]\). We shall show that \(-K_{X^{\prime}}\cdot\gamma\geq 0\).
Suppose for a contradiction that \(-K_{X^{\prime}}\cdot\gamma<0\). Then the contraction of the extremal ray \(R=\mathbb{R}_{\geq 0}[\gamma]\) is small, and the Sarkisov link starts with an antiflip \(X^{\prime}=Y^{+}\dashrightarrow Y^{-}\). We will show that \(Y^{-}\) has worse than terminal singularities, which is not allowed in the definition of volume preserving Sarkisov link (Definition 2.4).
The assumption that \(-K_{X^{\prime}}\cdot\gamma=D_{X^{\prime}}\cdot\gamma<0\) implies that \(\gamma\subset D_{X^{\prime}}\). Recall from Paragraph 5.2 that \(\operatorname{NE}(D_{X^{\prime}})=\langle e,e^{\prime}\rangle_{+}\), where \(e\) and \(e^{\prime}\) are \((-2)\)-curves in \(D_{X^{\prime}}\). Therefore, either \(\gamma=e\) or \(\gamma=e^{\prime}\). Set \(k:=-D_{X^{\prime}}\cdot\gamma>0\). It follows from (5.3) that \(k\) is even, and hence \(k\geq 2\). By Lemma 4.4, \(Y^{-}\) has worse than terminal singularities, as anticipated.
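Indeed, writing \(D_{X^{\prime}|D_{X^{\prime}}}=\alpha h+\beta e\) in \(\operatorname{Pic}(D_{X^{\prime}})\), the matrix (5.3) gives \((\alpha h+\beta e)\cdot e=-2\beta\) and \((\alpha h+\beta e)\cdot e^{\prime}=(\alpha h+\beta e)\cdot(2h-3e)=8\alpha+6\beta\), both even; this is the parity claim used above.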
This contradiction proves that \(-K_{X^{\prime}}\cdot\gamma\geq 0\). The other extremal ray of \(\operatorname{NE}(X^{\prime})\) is generated by the class of a fiber of the \(\mathbb{P}^{1}\)-bundle \(\pi^{\prime}\colon X^{\prime}\to\mathbb{P}^{2}\), which has positive intersection with \(-K_{X^{\prime}}\). Therefore, if \(-K_{X^{\prime}}\cdot\gamma>0\) then \(X^{\prime}\) is Fano. If \(-K_{X^{\prime}}\cdot\gamma=0\), then the contraction of the extremal ray \(R=\mathbb{R}_{\geq 0}[\gamma]\), which is induced by the linear system \(\big{|}-mK_{X^{\prime}}\big{|}\) for \(m\gg 0\), is small -- otherwise after the contraction we get a variety with strictly canonical singularities, which is not allowed -- and the Sarkisov link starts with a Mori flop. In this case \(X^{\prime}\) is weak Fano (i.e., \(-K_{X^{\prime}}\) is nef and big).
Theorem 4.8 and the two lists that accompany it show the rank \(2\) vector bundles \(\mathcal{E}\) on \(\mathbb{P}^{2}\) with \(c_{1}(\mathcal{E})\in\{0,-1\}\) such that \(\mathbb{P}(\mathcal{E})\) is Fano or weak Fano. Below we follow the conventions on vector bundles summarised in section 4.3.
Let \(\mathcal{E}\) be the rank two vector bundle on \(\mathbb{P}^{2}\) with \(c_{1}(\mathcal{E})\in\{0,-1\}\) such that \(X^{\prime}\cong\mathbb{P}(\mathcal{E})\), and denote by \(\pi^{\prime}\colon X^{\prime}\to\mathbb{P}^{2}\) the natural projection. In order to prove the theorem, we must show that \(\mathcal{E}\cong\mathcal{O}_{\mathbb{P}^{2}}\oplus\mathcal{O}_{\mathbb{P}^{2} }(-1)\). To do so, we compare the lattice \(\operatorname{Pic}(D_{X^{\prime}})\) with the sublattice obtained as the image of the restriction homomorphism
\[r\colon\operatorname{Pic}(X^{\prime})\ \to\ \operatorname{Pic}(D_{X^{\prime}}).\]
The Picard group \(\operatorname{Pic}(X^{\prime})\) is generated by \(L^{\prime}=\big{[}(\pi^{\prime})^{*}\big{(}\mathcal{O}_{\mathbb{P}^{2}}(1) \big{)}\big{]}\) and \(\xi=c_{1}\big{(}\mathcal{O}_{\mathbb{P}(\mathcal{E})}(1)\big{)}\). Working in \(A^{\bullet}(X^{\prime})\) and using that \(D_{X^{\prime}}\ \sim\ -K_{X^{\prime}}\ \sim 2\xi+\ (-c_{1}+3)L^{\prime}\) (by Lemma 4.10), we compute the intersection matrix of \(r\Big{(}\operatorname{Pic}\big{(}\mathbb{P}(\mathcal{E})\big{)}\Big{)}\subset \operatorname{Pic}(D_{X^{\prime}})\) in the basis \(r(L^{\prime}),r(\xi)\):
\[\left(\begin{array}{cc}L^{\prime 2}\cdot D_{X^{\prime}}&\xi\cdot L^{\prime} \cdot D_{X^{\prime}}\\ \xi\cdot L^{\prime}\cdot D_{X^{\prime}}&\xi^{2}\cdot D_{X^{\prime}}\end{array} \right)=\left(\begin{array}{cc}2&c_{1}+3\\ c_{1}+3&c_{1}^{2}+3c_{1}-2c_{2}\end{array}\right). \tag{5.6}\]
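For the reader's convenience, the entries of (5.6) can be recovered from the relation \(\xi^{2}-c_{1}\xi L^{\prime}+c_{2}L^{\prime 2}=0\) in \(A^{\bullet}(X^{\prime})\) (the form of the Grothendieck relation consistent with the conventions of Section 4.3 and with the entries below), together with \(L^{\prime 3}=0\) and \(\xi\cdot L^{\prime 2}=1\), which give \(\xi^{2}\cdot L^{\prime}=c_{1}\) and \(\xi^{3}=c_{1}^{2}-c_{2}\). For instance,

\[\xi^{2}\cdot D_{X^{\prime}}=\xi^{2}\cdot\big{(}2\xi+(3-c_{1})L^{\prime}\big{)}=2(c_{1}^{2}-c_{2})+(3-c_{1})c_{1}=c_{1}^{2}+3c_{1}-2c_{2}.\]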
Direct inspection shows that this matrix has rank two for all vector bundles in the two lists, except for the one in (8), namely \(\mathcal{O}_{\mathbb{P}^{2}}(1)\oplus\mathcal{O}_{\mathbb{P}^{2}}(-2)\). On the other hand, we cannot have \(X^{\prime}\cong\mathbb{P}\big{(}\mathcal{O}_{\mathbb{P}^{2}}(1)\oplus \mathcal{O}_{\mathbb{P}^{2}}(-2)\big{)}\) because the anti-canonical contraction of \(\mathbb{P}\big{(}\mathcal{O}_{\mathbb{P}^{2}}(1)\oplus\mathcal{O}_{\mathbb{P}^ {2}}(-2)\big{)}\) is divisorial and not small. In all other cases, the restriction homomorphism \(r\colon\operatorname{Pic}(X^{\prime})\to\operatorname{Pic}(D_{X^{\prime}})\) is injective with finite cokernel. We claim that it is also surjective. Pick \(\alpha\in\operatorname{Pic}(D_{X^{\prime}})\). By Proposition 5.1, the first Chern class \(c_{1}(\alpha)\in H^{2}(D_{X^{\prime}},\mathbb{Z})\) is in the image of \(H^{2}(X^{\prime},\mathbb{Z})\). By the Lefschetz theorem on \((1,1)\) classes, \(H^{2}(X^{\prime},\mathbb{Z})=\operatorname{Pic}(X^{\prime})\), hence there is a line bundle \(\widetilde{\alpha}\) on \(X^{\prime}\) such that
\[c_{1}\left(r(\widetilde{\alpha})\right)=c_{1}(\alpha).\]
But \(c_{1}\colon\operatorname{Pic}(D_{X^{\prime}})\to H^{2}(D_{X^{\prime}},\mathbb{Z})\) is injective, hence \(r(\widetilde{\alpha})=\alpha\). It follows that the matrices (5.3) and (5.6) must have the same determinant. One checks easily that this only happens in case (1), i.e., when \(X^{\prime}\cong\mathbb{P}\big{(}\mathcal{O}_{\mathbb{P}^{2}}\oplus\mathcal{O}_{ \mathbb{P}^{2}}(-1)\big{)}\) is the blow-up of \(\mathbb{P}^{3}\) at a point. Denote by \(D^{\prime}\) the image of
\(D_{X^{\prime}}\) in \(\mathbb{P}^{3}\). Since the blow-up \((X^{\prime},D_{X^{\prime}})\to(\mathbb{P}^{3},D^{\prime})\) is volume preserving, its restriction to \(D_{X^{\prime}}\) contracts one of the two \((-2)\)-curves \(e\) or \(e^{\prime}\). So we have \(D\cong D^{\prime}\), and \((\mathbb{P}^{3},D)\cong(\mathbb{P}^{3},D^{\prime})\).
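For the reader's convenience, we spell out the determinant comparison used above: the matrix (5.6) has determinant

\[2\big{(}c_{1}^{2}+3c_{1}-2c_{2}\big{)}-(c_{1}+3)^{2}=c_{1}^{2}-4c_{2}-9,\]

which vanishes for \((c_{1},c_{2})=(-1,-2)\), the bundle in (8), in accordance with the rank computation above, and equals \(-8\) for \((c_{1},c_{2})=(-1,0)\), the Chern classes of \(\mathcal{O}_{\mathbb{P}^{2}}\oplus\mathcal{O}_{\mathbb{P}^{2}}(-1)\). Matching these values against the determinant of (5.3), which appears earlier in the paper and is not reproduced here, singles out case (1).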
Next, we prove the second conclusion of Theorem B, that is, we describe the group \(\operatorname{Bir}(\mathbb{P}^{3},D)\) of volume preserving birational self-maps of \((\mathbb{P}^{3},D)\).
Proof of Theorem B(2).: By Proposition 2.6, since \((\mathbb{P}^{3},D)\) is canonical, there is a restriction homomorphism
\[r\colon\operatorname{Bir}(X,D)\ \to\ \operatorname{Bir}(D). \tag{5.7}\]
We use the same notation as above: \(\sigma\colon X\to\mathbb{P}^{3}\) denotes the blow-up of the singular point \(z\in D\), \(D_{X}\subset X\) the strict transform of \(D\), and \(\pi\colon X\to\mathbb{P}^{2}\) the fibration induced by the projection from \(z\). Recall from Paragraph 5.2 that the restriction \(\pi_{|D_{X}}\colon D_{X}\to\mathbb{P}^{2}\) is a double cover, and denote by \(\tau\colon D_{X}\to D_{X}\) the associated involution. Lemma 5.4 states that
\[\operatorname{Bir}(D)=\operatorname{Aut}(D_{X})\cong\operatorname{Aut}( \mathbb{P}^{3},D)\rtimes\langle\tau\rangle.\]
After a change of coordinates, we can write the equation of \(D\) in \(\mathbb{P}^{3}\) as
\[x_{3}^{2}A(x_{0},x_{1},x_{2})+x_{3}B(x_{0},x_{1},x_{2})+C(x_{0},x_{1},x_{2})=0,\]
where \(A=A(x_{0},x_{1},x_{2})\), \(B=B(x_{0},x_{1},x_{2})\) and \(C=C(x_{0},x_{1},x_{2})\) are homogeneous of degree \(2\), \(3\) and \(4\), respectively. The singular point of \(D\) has coordinates \(z=[0:0:0:1]\).
**Claim 5.8**.: The restriction homomorphism (5.7) is surjective and admits a splitting
\[\operatorname{Bir}(\mathbb{P}^{3},D)\ \stackrel{{\curvearrow}}{{ \rightarrow}}\ \operatorname{Bir}(D)=\operatorname{Aut}(\mathbb{P}^{3},D)\rtimes \langle\tau\rangle.\]
Proof of Claim 5.8.: To see that \(r\colon\operatorname{Bir}(\mathbb{P}^{3},D)\ \to\ \operatorname{Bir}(D)=\operatorname{Aut}(\mathbb{P}^{3},D)\rtimes\langle\tau\rangle\) is surjective, notice that the birational involution
\[\varphi(x_{0}:x_{1}:x_{2}:x_{3})=(Ax_{0}:Ax_{1}:Ax_{2}:-Ax_{3}-B), \tag{5.9}\]
restricts to the nontrivial birational involution \(\tau\).
Consider the splitting \(\operatorname{Bir}(D)\to\operatorname{Bir}(\mathbb{P}^{3},D)\) of \(r\) that is canonical on the normal subgroup \(\operatorname{Aut}(\mathbb{P}^{3},D)\), and sends \(\tau\) to \(\varphi\). To show that this is well-defined, we must check that, for any automorphism \(h\in\operatorname{Aut}(\mathbb{P}^{3},D)\), we have \(\varphi\circ h\circ\varphi\in\operatorname{Aut}(\mathbb{P}^{3},D)\). In order to prove this, we first describe a volume preserving Sarkisov factorization of \(\varphi\):
The factorization starts with the blow-up \(\sigma\colon X\to\mathbb{P}^{3}\) of \(z\). Denote by \(E\subset X\) its exceptional divisor. The base locus of \(\varphi\circ\sigma\) contains the curve \(e=E\cap D_{X}\). Note that \(\pi\colon X\to\mathbb{P}^{2}\) maps \(e\) isomorphically onto the conic \(\big{(}A=0\big{)}\subset\mathbb{P}^{2}\), and the cylinder \(\pi^{-1}\big{(}A=0\big{)}\subset X\) is precisely the strict transform of the tangent cone of \(D\subset\mathbb{P}^{3}\) at \(z\). The next link in the Sarkisov factorization is the composition \(\beta\circ\alpha^{-1}\), where \(\alpha\) and \(\beta\) are described as follows. The morphism \(\alpha:Z\to X\) is the blow-up of \(e=E\cap D_{X}\). Denote by \(F\subset Z\) its exceptional divisor. The base locus of \(\varphi\circ\sigma\circ\alpha\) contains the curve \(e_{F}=F\cap D_{Z}\). The morphism \(\beta:Z\to X^{\prime}\cong\mathbb{P}\big{(}\mathcal{O}_{\mathbb{P}^{2}}\oplus \mathcal{O}_{\mathbb{P}^{2}}(3)\big{)}\) contracts the rulings of the strict transform of the tangent cone of \(D\subset\mathbb{P}^{3}\) at \(z\), and maps \(F\subset Z\) isomorphically onto the cylinder \(F^{\prime}=(\pi^{\prime})^{*}\big{(}A=0\big{)}\subset X^{\prime}\). The base locus of \(\varphi\circ\sigma\circ\alpha\circ\beta^{-1}\) consists of the curve \(e_{F^{\prime}}=\beta(e_{F})\subset F^{\prime}\cap D_{X^{\prime}}\), which is mapped by \(\pi^{\prime}\) isomorphically onto the conic \(\big{(}A=0\big{)}\subset\mathbb{P}^{2}\). The next link in the Sarkisov factorization is the composition \(\delta\circ\gamma^{-1}\), where \(\gamma:Z^{\prime}\to X^{\prime}\) is the blow-up of \(e_{F^{\prime}}\), and \(\delta:Z^{\prime}\to X\) contracts the rulings of the strict
transform of the cylinder \(F^{\prime}\) on \(Z^{\prime}\). The factorization then ends with the blow-up \(\sigma\colon X\to\mathbb{P}^{3}\) of \(z\).
Any automorphism \(h\in\operatorname{Aut}(\mathbb{P}^{3},D)\) fixes \(z\) and stabilizes the tangent cone of \(D\) at \(z\). So it lifts to an automorphism of \(X\) that stabilizes \(E\), \(D_{X}\), and the rulings over \(\big{(}A=0\big{)}\). Since \(\alpha\) blows up \(E\cap D_{X}\), \(h\) lifts to an automorphism of \(Z\). On the other hand, \(\beta\) contracts the rulings of the strict transform of \(\pi^{-1}\big{(}A=0\big{)}\), and thus \(h\) descends to an automorphism of \(X^{\prime}\) that stabilizes \(D_{X^{\prime}}\), the ruling over \(\big{(}A=0\big{)}\), and each of the components of \(D_{X^{\prime}}\cap(\pi^{\prime})^{*}\big{(}A=0\big{)}\). The same argument shows that \(h\) lifts to \(Z^{\prime}\), and then descends to \(X\), always stabilizing the strict transform of \(E\). This shows that the birational map \(\varphi\circ h\) admits the same Sarkisov factorization as \(\varphi\), and therefore \(\varphi\circ h\circ\varphi\) is biregular.
It follows from Claim 5.8 that there is a split exact sequence:
\[1\to\mathbb{G}\to\operatorname{Bir}(\mathbb{P}^{3},D)\ \xrightarrow{\curvearrow} \operatorname{Bir}(D)\to 1, \tag{5.10}\]
where \(\mathbb{G}\) is the group of birational self-maps of \(\mathbb{P}^{3}\) fixing \(D\) pointwise. We saw in the proof of Theorem B(1) that any \(\psi\in\operatorname{Bir}(\mathbb{P}^{3},D)\) preserves the star of lines through \(z\). Therefore, we can identify \(\mathbb{G}\) with the group of birational self-maps of \(X\) over \(\mathbb{P}^{2}\) fixing \(D_{X}\) pointwise.
We view \(X\) as a model of \(\mathbb{P}^{1}\) over \(\mathbb{C}(x,y)\) with projective coordinates \((u:v)\). Setting \(a(x,y)=A(1,x,y)\), \(b(x,y)=B(1,x,y)\) and \(c(x,y)=C(1,x,y)\), \(\mathbb{G}\) becomes the identity component of the subgroup \(G_{Q}\) of \(PGL\big{(}2,\mathbb{C}(x,y)\big{)}\) of projective transformations preserving the quadratic form
\[Q(u,v)\ =\ a(x,y)u^{2}+b(x,y)uv+c(x,y)v^{2}\]
up to scaling.
**Lemma 5.11**.: _Let \(Q(u,v)=Au^{2}+Buv+Cv^{2}\) be a quadratic form with coefficients in a field \(K\), and let \(G_{Q}\) be the subgroup of \(PGL(2,K)\) of projective transformations preserving \(Q\) up to scalar:_
\[G_{Q}:=\left\{\phi=\left(\begin{array}{cc}\alpha&\beta\\ \gamma&\delta\end{array}\right)\in PGL(2,K)\,\Big{|}\,Q(\phi(u,v))=\lambda Q(u, v),\,\lambda\in K\setminus\{0\}\right\}.\]
_Then \(G_{Q}\) has two irreducible components, given by_
\[\left\{\begin{array}{l}\alpha=-\frac{B}{A}\gamma+\delta,\\ \beta=-\frac{C}{A}\gamma\end{array}\right.\quad\text{ and }\quad\left\{ \begin{array}{l}\alpha=-\delta,\\ \beta=\frac{C}{A}\gamma-\frac{B}{A}\delta.\end{array}\right. \tag{5.12}\]
Proof.: We have that
\[Q(\phi(u,v))=u^{2}(\alpha^{2}A+\alpha\gamma B+\gamma^{2}C)+uv(2\alpha\beta A+ \alpha\delta B+\beta\gamma B+2\gamma\delta C)+v^{2}(\beta^{2}A+\beta\delta B+ \delta^{2}C).\]
Therefore, \(\phi=\left(\begin{array}{cc}\alpha&\beta\\ \gamma&\delta\end{array}\right)\in G_{Q}\) if and only if
\[\operatorname{rank}\left(\begin{array}{cc}A&B&C\\ \alpha^{2}A+\alpha\gamma B+\gamma^{2}C&2\alpha\beta A+\alpha\delta B+\beta \gamma B+2\gamma\delta C&\beta^{2}A+\beta\delta B+\delta^{2}C\end{array} \right)\leq 1,\]
or, equivalently, if and only if
\[\left\{\begin{array}{l}\alpha^{2}AC+\alpha\gamma BC+\gamma^{2}C^{2}-\beta^{2 }A^{2}-\beta\delta AB-\delta^{2}AC=0,\\ \alpha^{2}AB+\alpha\gamma B^{2}+\gamma^{2}BC-2\alpha\beta A^{2}-\alpha\delta AB -\beta\gamma AB-2\gamma\delta AC=0.\end{array}\right. \tag{5.13}\]
If the 4-tuple \((\alpha,\beta,\gamma,\delta)\in K^{4}\) satisfies the pair of equations (5.13), then it satisfies one of the following pairs of equations:
\[\left\{\begin{array}{l}\alpha=-\delta,\\ \beta=\frac{C}{A}\gamma-\frac{B}{A}\delta,\end{array}\right.\ \left\{ \begin{array}{l}\alpha=-\frac{B}{A}\gamma+\delta,\\ \beta=-\frac{C}{A}\gamma,\end{array}\right.\ \left\{\begin{array}{l}\alpha=-\frac{B+ \varepsilon}{2A}\gamma,\\ \beta=\frac{B(B-\varepsilon)-4AC}{2A\varepsilon}\delta,\end{array}\right.\ \left\{ \begin{array}{l}\alpha=-\frac{B-\varepsilon}{2A}\gamma,\\ \beta=-\frac{B(B+\varepsilon)-4AC}{2A\varepsilon}\delta,\end{array}\right.\]
where \(\varepsilon\) is an element of \(K\), or of a quadratic extension of \(K\), such that \(\varepsilon^{2}=B^{2}-4AC\). On the other hand, if \((\alpha,\beta,\gamma,\delta)\) satisfies the third or the fourth pair of equations, then \(\det\left(\begin{array}{cc}\alpha&\beta\\ \gamma&\delta\end{array}\right)=0\). Therefore \(G_{Q}\) has two irreducible components, given by (5.12).
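As a direct check that the first component in (5.12) lies in \(G_{Q}\): substituting \(\alpha=-\frac{B}{A}\gamma+\delta\) and \(\beta=-\frac{C}{A}\gamma\) into the expansion of \(Q(\phi(u,v))\) above, the coefficient of \(u^{2}\) becomes \(A\delta^{2}-B\gamma\delta+C\gamma^{2}\), and the coefficients of \(uv\) and \(v^{2}\) become \(B/A\) and \(C/A\) times this same quantity, so that

\[Q(\phi(u,v))=\lambda\,Q(u,v)\quad\text{with}\quad\lambda=\frac{A\delta^{2}-B\gamma\delta+C\gamma^{2}}{A}=\frac{Q(\delta,-\gamma)}{A}.\]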
Lemma 5.11 shows that \(G_{Q}\) has two irreducible components over \(\mathbb{C}(x,y)\):
\[G_{Q}^{1}=\left\{\left(\begin{array}{cc}\left(-\frac{b(x,y)}{a(x,y)}\alpha+ \beta\right)&\left(-\frac{c(x,y)}{a(x,y)}\alpha\right)\\ \alpha&\beta\end{array}\right)\right\},\ \ G_{Q}^{2}=\left\{\left(\begin{array}{cc} -\beta&\left(\frac{c(x,y)}{a(x,y)}\alpha-\frac{b(x,y)}{a(x,y)}\beta\right)\\ \alpha&\beta\end{array}\right)\right\}. \tag{5.14}\]
The component \(G_{Q}^{1}\) contains the identity, and is precisely the group \(\mathbb{G}\) of (5.10). In order to describe \(\mathbb{G}\) as a form of \(\mathbb{G}_{m}\), set \(w\ =\ 2a(x,y)u+b(x,y)v\), and compute
\[4a(x,y)Q(u,v)\ =\ w^{2}-\delta(x,y)v^{2},\]
where \(\delta(x,y)=\Delta(1,x,y)\). Applying Lemma 5.11 with the new projective coordinates \((v:w)\), \(\mathbb{G}\) can be presented as the subgroup of \(PGL\big{(}2,\mathbb{C}(x,y)\big{)}\) of elements of the form
\[\left(\begin{array}{cc}U&\delta(x,y)V\\ V&U\end{array}\right)\]
where \(U,V\in\mathbb{C}(x,y)\) are such that \(U^{2}-\delta(x,y)V^{2}=1\). This is the form of \(\mathbb{G}_{m}\) described in [10, Chapter 2, Example 2.3.2 (c)].
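One can check directly that these matrices are closed under multiplication: writing \(\delta=\delta(x,y)\),

\[\left(\begin{array}{cc}U&\delta V\\ V&U\end{array}\right)\left(\begin{array}{cc}U^{\prime}&\delta V^{\prime}\\ V^{\prime}&U^{\prime}\end{array}\right)=\left(\begin{array}{cc}UU^{\prime}+\delta VV^{\prime}&\delta(UV^{\prime}+VU^{\prime})\\ UV^{\prime}+VU^{\prime}&UU^{\prime}+\delta VV^{\prime}\end{array}\right),\]

and \((UU^{\prime}+\delta VV^{\prime})^{2}-\delta(UV^{\prime}+VU^{\prime})^{2}=(U^{2}-\delta V^{2})\big{(}(U^{\prime})^{2}-\delta(V^{\prime})^{2}\big{)}=1\), which is the multiplicativity of the norm of the quadratic extension \(\mathbb{C}(x,y)\big{(}\sqrt{\delta}\big{)}\).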
**Remark 5.15**.: We can write down all the elements of \(\mathrm{Bir}(\mathbb{P}^{3},D)\) explicitly. It follows from (5.14) above that they can be written in one of the following two forms:
\[\varphi^{1}\colon\ (x_{0}:x_{1}:x_{2}:x_{3})\ \mapsto\ \big{(}A(Fx_{3}+G)x_{0}:A(Fx_{3}+G)x_{1}:A(Fx_{3}+G)x_{2}:(AG- BF)x_{3}-CF\big{)}\]
where either \(F=0\) and \(\deg(G)=0\), or \(F,G\in\mathbb{C}[x_{0},x_{1},x_{2}]\) are homogeneous with \(\deg(G)=\deg(F)+1\);
\[\varphi^{2}\colon\ (x_{0}:x_{1}:x_{2}:x_{3})\ \mapsto\ \big{(}A(Fx_{3}+G)x_{0}:A(Fx_{3}+G)x_{1}:A(Fx_{3}+G)x_{2}:-AGx_{3}+ CF-BG\big{)}\]
where either \(F=0\) and \(\deg(G)=0\), or \(F,G\in\mathbb{C}[x_{0},x_{1},x_{2}]\) are homogeneous with \(\deg(G)=\deg(F)+1\).
## 6. Proof of Theorem C
In this section, we determine the pliability of Mf CY pairs of the form \((\mathbb{P}^{3},D)\), where \(D\) is a quartic surface having exactly one singular point \(z\in D\) of type \(A_{2}\), and such that \(\mathrm{Cl}(D)\cong\mathbb{Z}\cdot\mathcal{O}_{D}(1)\) is generated by the class of a hyperplane section. This last condition implies in particular that \(D\) does not contain lines. After a change of coordinates, we may assume that the singular point is \(z=[0:0:0:1]\), and write the equation of \(D\) as
\[D=\Big{(}x_{0}x_{1}x_{3}^{2}+Bx_{3}+C=0\Big{)}\subset\mathbb{P}^{3},\]
where \(B=B(x_{0},x_{1},x_{2})\) and \(C=C(x_{0},x_{1},x_{2})\) are homogeneous polynomials of degree \(3\) and \(4\), respectively.
At the end of Section 2, we constructed volume preserving Sarkisov links between the Mf CY pairs from Table 1.
In order to prove Theorem C, we will show that these are all the Sarkisov links from these Mf CY pairs, except for chains of square equivalent Sarkisov links from objects \(2\), \(2^{a}\) and \(2^{b}\).
Proof of Theorem C.: Let \((Y,D_{Y})/T\) be a Mf CY pair, and
\[\Phi\colon(\mathbb{P}^{3},D)\dasharrow(Y,D_{Y})\]
a volume preserving birational map. The goal is to show that \((Y,D_{Y})\to T\) is square equivalent to one of the objects in the conclusion of Theorem C:
1. \((\mathbb{P}^{3},D)\) (object 1);
2. \((X,D_{X})=\left(\mathbb{F}_{1}^{3},D_{\binom{2}{2}}\right)\) (object 2);
3. \((\mathbb{P}(1^{3},2),D_{5}^{a})\) or \((\mathbb{P}(1^{3},2),D_{5}^{b})\) (object \(3^{a}\) or \(3^{b}\));
4. a member of the 3-parameter family \(\big{\{}(X_{4},D_{3,4})\big{\}}\), with \(X_{4}\subset\mathbb{P}(1^{3},2^{2})\) (object 4);
5. a member of the 6-parameter family \(\big{\{}(X_{4},D_{2,4})\big{\}}\), with \(X_{4}\subset\mathbb{P}(1^{4},2)\) (object \(5^{a}\)).
By the Sarkisov program for volume preserving birational maps of Mf CY pairs (Theorem 2.5), there is a sequence of Mf CY pairs
\[(\mathbb{P}^{3},D)/\operatorname{Spec}\mathbb{C}=(X_{0},D_{0})/S_{0},\ (X_{1},D_{1})/S_{1},\ \dots,\ (X_{n},D_{n})/S_{n}=(Y,D_{Y})/T,\]
and volume preserving Sarkisov links
\[\Phi_{i}\colon(X_{i-1},D_{i-1})/S_{i-1}\dashrightarrow(X_{i},D_{i})/S_{i},\]
\(i=1,\dots,n\), such that \(\Phi=\Phi_{n}\circ\dots\circ\Phi_{1}\).
We prove by increasing induction on \(i\) that each \((X_{i},D_{i})/S_{i}\) is either isomorphic to one of the objects \(1\), \(3^{a}\), \(3^{b}\), \(4\), and \(5^{a}\), or it is obtained from \((X,D_{X})/\mathbb{P}^{2}\) (object 2) after finitely many volume preserving Sarkisov links of type (II). In the latter case, it is in particular square equivalent to object \(2\). The base case \(i=0\) is clear since \((X_{0},D_{0})/S_{0}=(\mathbb{P}^{3},D)/\operatorname{Spec}\mathbb{C}\) is object \(1\).
For \(i>0\), we assume by induction that \((X_{i-1},D_{i-1})/S_{i-1}\) is either isomorphic to one of the objects \(1\), \(3^{a}\), \(3^{b}\), \(4\), and \(5^{a}\), or is obtained from \((X,D_{X})/\mathbb{P}^{2}\) after finitely many volume preserving Sarkisov links of type (II). We shall prove that the same holds for \((X_{i},D_{i})/S_{i}\). We discuss several cases depending on the nature of \((X_{i-1},D_{i-1})/S_{i-1}\).
**Case 1.** Suppose that \((X_{i-1},D_{i-1})/S_{i-1}\cong(\mathbb{P}^{3},D)/\operatorname{Spec}\mathbb{C}\) (object 1). We prove in Lemma 6.1 below that the only volume preserving Sarkisov links
\[(\mathbb{P}^{3},D)/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^{\dagger}, D^{\dagger})/S^{\dagger}\]
are the maps \(\sigma^{-1},\epsilon_{a},\epsilon_{b}\) to objects \(2\), \(3^{a}\) and \(3^{b}\), respectively. It follows that \((X_{i},D_{i})/S_{i}\) is isomorphic to one of the objects \(2\), \(3^{a}\) or \(3^{b}\).
**Case 2.** Suppose that \((X_{i-1},D_{i-1})/S_{i-1}\) is obtained from \((X,D_{X})/\mathbb{P}^{2}\) after finitely many volume preserving Sarkisov links of type (II). We show in Lemma 6.5 below that one of the following holds:
1. \(\Phi_{i}\) is a Sarkisov link of type (II);
2. \((X_{i-1},D_{i-1})/S_{i-1}\) is isomorphic to object \(2\), and \(\Phi_{i}=\sigma\);
3. \((X_{i-1},D_{i-1})/S_{i-1}\) is isomorphic to object \(2^{a}\), and \(\Phi_{i}=\chi^{a}\);
4. \((X_{i-1},D_{i-1})/S_{i-1}\) is isomorphic to object \(2^{b}\), and \(\Phi_{i}=\chi^{b}\).
In case (a), it follows that \((X_{i},D_{i})/S_{i}\) is obtained from \((X,D_{X})\) after finitely many volume preserving Sarkisov links of type (II). In cases (b), (c) and (d), it follows that \((X_{i},D_{i})/S_{i}\) is isomorphic to object \(1\), \(3^{a}\) and \(3^{b}\), respectively.
**Case 3.** Suppose that \((X_{i-1},D_{i-1})/S_{i-1}\) is isomorphic to object \(3^{a}\) (case \(3^{b}\) is similar). We prove in Lemma 6.6 below that the only volume preserving Sarkisov links
\[(\mathbb{P}(1^{3},2),D_{5}^{a})/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^{ \dagger},D^{\dagger})/S^{\dagger}\]
are the maps \(\epsilon_{a}^{-1},(\chi^{a})^{-1},\phi^{a},\psi^{a}\) to objects \(1\), \(2^{a}\), \(4\) and \(5^{a}\), respectively.
**Case 4.** Suppose that \((X_{i-1},D_{i-1})/S_{i-1}\) is isomorphic to object \(4\). We prove in Lemma 6.9 below that the only volume preserving Sarkisov links
\[(X_{4},D_{3,4})/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^{\dagger},D^{ \dagger})/S^{\dagger}\]
are the maps \((\phi^{a})^{-1}\), \((\phi^{b})^{-1}\), and thus \((X_{i},D_{i})/S_{i}\) is isomorphic to object \(3^{a}\) or \(3^{b}\).
**Case 5**.: Suppose that \((X_{i-1},D_{i-1})/S_{i-1}\) is isomorphic to object \(5^{a}\). We prove in Lemma 6.10 below that the only volume preserving Sarkisov links
\[(X_{4},D_{2,4})/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^{\dagger},D^{\dagger})/S^{\dagger}\]
are the maps \((\psi^{a})^{-1}\) and \((\widetilde{\psi}^{b})^{-1}\), and hence \((X_{i},D_{i})/S_{i}\) is isomorphic to object \(3^{a}\) or \(3^{b}\).
**Lemma 6.1**.: _The only volume preserving Sarkisov links_
\[\Psi\colon(\mathbb{P}^{3},D)/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^ {\dagger},D^{\dagger})/S^{\dagger}\]
_are the maps \(\sigma^{-1},\epsilon_{a}\), and \(\epsilon_{b}\) described in Examples 2.9 and 2.11._
Proof.: Since \(\mathbb{P}^{3}\) has Picard rank \(1\), the link \(\Psi\) begins with a volume preserving divisorial contraction \(f\colon(Z,D_{Z})\to(\mathbb{P}^{3},D)\). By Proposition 3.3 and Lemma 4.1, the contraction \(f\) is either the usual blow-up of the singular point \(z\), or the weighted blow-up at \(z\) with weights \((2,1,1)\) or \((1,2,1)\) with respect to the affine coordinates \(x_{0}\), \(x_{1}\) and \(x_{2}\) in the affine chart \((x_{3}=1)\). If \(f=\sigma\) is the blow-up of \(z\), then \(\Psi=\sigma^{-1}\). If \(f\) is the weighted blow-up at \(z\) with weights \((2,1,1)\) (respectively \((1,2,1)\)), then \(\Psi\) is the composition of \(f\) with the contraction of the strict transform of the divisor \((x_{0}=0)\) (respectively \((x_{1}=0)\)), as described in Example 2.11. Hence, \(\Psi=\epsilon_{b}\) and \((X^{\dagger},D^{\dagger})/S^{\dagger}\cong(\mathbb{P}(1^{3},2),D_{5}^{b})\) (respectively \(\Psi=\epsilon_{a}\) and \((X^{\dagger},D^{\dagger})/S^{\dagger}\cong(\mathbb{P}(1^{3},2),D_{5}^{a})\)).
As before, we denote by \(\sigma\colon X\to\mathbb{P}^{3}\) the blow-up of the singular point \(z\in D\), by \(D_{X}\) the (smooth) strict transform of \(D\) in \(X\), and by \(\pi\colon X\to\mathbb{P}^{2}\) the fibration induced by the projection from \(z\). In order to determine all the volume preserving Sarkisov links from \((X,D_{X})/\mathbb{P}^{2}\), we need a good understanding of the geometry of the smooth K3 surface \(D_{X}\).
**6.2** (The geometry of \(D_{X}\)).: Denote by \(E\cong\mathbb{P}^{2}\) the exceptional divisor of the blow-up \(\sigma:X\to\mathbb{P}^{3}\). The intersection of \(D_{X}\) with \(E\) is the union of two \((-2)\)-curves \(e_{0}\) and \(e_{1}\). These curves are mapped isomorphically via \(\pi\) to the lines \((x_{0}=0)\) and \((x_{1}=0)\) in \(\mathbb{P}^{2}\), respectively. Denote by \(h\) the pull-back of a general hyperplane under \(\sigma_{|D_{X}}\colon D_{X}\to\mathbb{P}^{3}\). Then \(\operatorname{Pic}(D_{X})=\mathbb{Z}[h]\oplus\mathbb{Z}[e_{0}]\oplus\mathbb{Z }[e_{1}]\), and the intersection matrix of \(\operatorname{Pic}(D_{X})\) with respect to the basis \(\big{(}[h],[e_{0}],[e_{1}]\big{)}\) is
\[\left(\begin{array}{ccc}4&0&0\\ 0&-2&1\\ 0&1&-2\end{array}\right).\]
By assumption, \(D\) does not contain lines. Hence, the morphism \(\pi_{|D_{X}}\colon D_{X}\to\mathbb{P}^{2}\) is finite of degree \(2\). Set \(\alpha=(\pi_{|D_{X}})^{*}\mathcal{O}_{\mathbb{P}^{2}}(1)\), and denote by \(\tau\colon D_{X}\to D_{X}\) the involution associated to \(\pi_{|D_{X}}\). Then \(\tau\colon D_{X}\to D_{X}\) maps the \((-2)\)-curves \(e_{0}\) and \(e_{1}\) to other \((-2)\)-curves \(e_{0}^{\prime}\) and \(e_{1}^{\prime}\), respectively. Note that
\[\alpha\ =\ (\pi_{|D_{X}})^{*}\mathcal{O}_{\mathbb{P}^{2}}(1)\ \sim\ h-e_{0}-e_{1}\ \sim\ e_{0}+e_{0}^{\prime}\ \sim\ e_{1}+e_{1}^{\prime}\.\]
Thus \(e_{0}^{\prime}\sim h-2e_{0}-e_{1}\), and \(e_{1}^{\prime}\sim h-2e_{1}-e_{0}\). The intersection matrix of \(\operatorname{Pic}(D_{X})\) with respect to the basis \(\big{(}[\alpha],[e_{0}],[e_{1}]\big{)}\) is
\[\left(\begin{array}{ccc}2&1&1\\ 1&-2&1\\ 1&1&-2\end{array}\right).\]
The following intersection numbers will be useful later on
\[\left\{\begin{array}{l}\alpha\cdot e_{0}=\alpha\cdot e_{1}=\alpha\cdot e_{0} ^{\prime}=\alpha\cdot e_{1}^{\prime}=1,\\ e_{0}\cdot e_{1}^{\prime}=e_{0}^{\prime}\cdot e_{1}=0,\\ e_{0}\cdot e_{0}^{\prime}=e_{1}\cdot e_{1}^{\prime}=3.\end{array}\right.\]
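These values follow from the relations \(\alpha\sim h-e_{0}-e_{1}\), \(e_{0}^{\prime}\sim h-2e_{0}-e_{1}\), \(e_{1}^{\prime}\sim h-2e_{1}-e_{0}\) and the first intersection matrix above; for instance

\[e_{0}\cdot e_{0}^{\prime}=e_{0}\cdot(h-2e_{0}-e_{1})=0+4-1=3,\qquad e_{0}\cdot e_{1}^{\prime}=e_{0}\cdot(h-2e_{1}-e_{0})=0-2+2=0.\]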
The \((-2)\)-curves \(e_{0}\), \(e_{1}\), \(e_{0}^{\prime}\) and \(e_{1}^{\prime}\) generate extremal rays of \(\operatorname{NE}(D_{X})\). Next we show that these are all:
\[\operatorname{NE}(D_{X})\ =\ \big{\langle}[e_{0}],[e_{1}],[e_{0}^{\prime}],[e_{1}^{ \prime}]\big{\rangle}_{+}. \tag{6.3}\]
Indeed, let \(C\subset D_{X}\) be an irreducible curve different from \(e_{0}\), \(e_{1}\), \(e^{\prime}_{0}\) and \(e^{\prime}_{1}\), and write \(C\sim dh-m_{0}e_{0}-m_{1}e_{1}\). By intersecting \(C\) with \(h\), \(e_{0}\), \(e_{1}\), \(e^{\prime}_{0}\) and \(e^{\prime}_{1}\), we get that
\[\left\{\begin{array}{l}d>0,\\ 0\leq m_{0}\leq\frac{4}{3}d,\\ 0\leq m_{1}\leq\frac{4}{3}d.\end{array}\right.\]
Therefore, we may write
\[C\ \equiv\ \frac{d}{2}e_{0}^{\prime}\ +\ \frac{d}{2}e_{1}^{\prime}\ +\ \left(\frac{3}{2}d-m_{0}\right)e_{0}\ +\ \left(\frac{3}{2}d-m_{1}\right)e_{1},\]
with \(\frac{3}{2}d-m_{0}>0\) and \(\frac{3}{2}d-m_{1}>0\). This gives (6.3).
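To verify the displayed decomposition and the strict positivity of its coefficients, note that \(e_{0}^{\prime}+e_{1}^{\prime}\sim 2h-3e_{0}-3e_{1}\), so that

\[\frac{d}{2}\big{(}e_{0}^{\prime}+e_{1}^{\prime}\big{)}+\Big{(}\frac{3}{2}d-m_{0}\Big{)}e_{0}+\Big{(}\frac{3}{2}d-m_{1}\Big{)}e_{1}\ \equiv\ dh-m_{0}e_{0}-m_{1}e_{1}\ \equiv\ C;\]

the bounds \(m_{i}\leq\frac{4}{3}d\) come from \(C\cdot e_{0}^{\prime}=4d-3m_{0}\geq 0\) and \(C\cdot e_{1}^{\prime}=4d-3m_{1}\geq 0\), and they guarantee \(\frac{3}{2}d-m_{i}\geq\frac{d}{6}>0\).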
**Remark 6.4**.: Fix homogeneous coordinates \(x_{0},x_{1},x_{2},x_{3},x\) on \(X\cong\mathbb{F}_{1}^{3}\) with weights:
\[\begin{array}{cccc}x_{0}&x_{1}&x_{2}&x_{3}&x\\ \hline 1&1&1&0&-1\\ 0&0&0&1&1\end{array}\]
as in Table 1. In these coordinates, \(\sigma\colon X\to\mathbb{P}^{3}\) is given by
\[(x_{0},x_{1},x_{2},x_{3},x)\mapsto(xx_{0},xx_{1},xx_{2},x_{3})\]
while \(\pi\colon X\to\mathbb{P}^{2}\) is given by \((x_{0},x_{1},x_{2},x_{3},x)\mapsto(x_{0},x_{1},x_{2})\). The equation of \(D_{X}\subset X\) is
\[x_{0}x_{1}x_{3}^{2}+Bx_{3}x+Cx^{2}=0\]
and the \((-2)\)-curves \(e_{0}\), \(e_{1}\), \(e^{\prime}_{0}\) and \(e^{\prime}_{1}\) of the discussion above are given by:
\[\begin{array}{l}e_{0}=\{x=x_{0}=0\},\\ e_{1}=\{x=x_{1}=0\},\\ e^{\prime}_{0}=\{x_{0}=Bx_{3}+Cx=0\},\\ e^{\prime}_{1}=\{x_{1}=Bx_{3}+Cx=0\}.\end{array}\]
We are ready to determine the volume preserving Sarkisov links from \((X,D_{X})/\mathbb{P}^{2}\), and, more generally, from any Mf CY pair obtained from \(\pi\colon(X,D_{X})\to\mathbb{P}^{2}\) after finitely many volume preserving Sarkisov links of type (II).
**Lemma 6.5**.: _Suppose that \(\pi^{\prime}\colon(X^{\prime},D^{\prime})\to\mathbb{P}^{2}\) is a Mf CY pair obtained from \(\pi\colon(X,D_{X})\to\mathbb{P}^{2}\) after finitely many volume preserving Sarkisov links of type (II). Let \(\Psi\colon(X^{\prime},D^{\prime})/\mathbb{P}^{2}\dashrightarrow(X^{\dagger},D^ {\dagger})/S^{\dagger}\) be a volume preserving Sarkisov link. Then one of the following holds:_
1. \(\Psi\) _is a Sarkisov link of type (II);_
2. \((X^{\prime},D^{\prime})/\mathbb{P}^{2}\) _is isomorphic to_ \((X,D_{X})/\mathbb{P}^{2}\) _and_ \(\Psi=\sigma\)_;_
3. \((X^{\prime},D^{\prime})/\mathbb{P}^{2}\) _is isomorphic to object_ \(2^{a}\) _and_ \(\Psi=\chi^{a}\)_;_
4. \((X^{\prime},D^{\prime})/\mathbb{P}^{2}\) _is isomorphic to object_ \(2^{b}\) _and_ \(\Psi=\chi^{b}\)_._
Proof.: Let \(\Phi\colon(X,D_{X})/\mathbb{P}^{2}\dashrightarrow(X^{\prime},D_{X^{\prime}})/\mathbb{P}^{2}\) be a composition of finitely many (possibly none) volume preserving Sarkisov links of type (II).
Here \(g_{i-1}\colon Z_{i-1}\to X_{i-1}\) and \(g_{i}\colon Z_{i}\to X_{i}\) are divisorial contractions centered at curves \(\mathcal{C}_{i-1}\subset D_{i-1}\) and \(\mathcal{C}_{i}\subset D_{i}\), and \(\varphi_{i}\colon Z_{i-1}\dashrightarrow Z_{i}\) is a sequence of flips, flops and antiflips. Notice that \(g_{i}\colon Z_{i}\to X_{i}\) contracts the strict transform of the surface \(\pi_{i-1}^{-1}(\pi_{i-1}(\mathcal{C}_{i-1}))\) onto \(\mathcal{C}_{i}\).
Step 1. We prove the following facts about \((X^{\prime},D_{X^{\prime}})/\mathbb{P}^{2}\):
1. \(\pi^{\prime}\colon X^{\prime}\to\mathbb{P}^{2}\) is a \(\mathbb{P}^{1}\)-bundle;
2. the induced birational map \(D_{X}\dashrightarrow D_{X^{\prime}}\) is an isomorphism;
3. no fibre of \(\pi^{\prime}\) is contained in \(D_{X^{\prime}}\).
To prove this, we proceed by induction on \(i\) as in the proof of Theorem B. Note that at each step, the curve \(\mathcal{C}_{i-1}\subset D_{i-1}\) is mapped birationally to its image under \(\pi_{i-1}\), since \(D_{i}\) is normal by Lemma 2.8. Thus, \(\pi_{i}\colon X_{i}\to\mathbb{P}^{2}\) is a \(\mathbb{P}^{1}\)-bundle over the complement of a finite subset of \(\mathbb{P}^{2}\). So (a) follows from [1, Theorem 5].
Next we show (b). Note that \(D_{i-1}\) does not contain any fiber of \(\pi_{i-1}\), and thus \(D_{i-1}\dashrightarrow D_{i}\) does not contract any curve. By Zariski's Main Theorem, \(D_{i}\to D_{i-1}\) is a morphism. Adjunction yields that \(K_{D_{i-1}}\sim 0\) and \(K_{D_{i}}\sim 0\). Since \(D_{i-1}\) is smooth, we conclude that \(D_{i}\to D_{i-1}\) is an isomorphism. So (b) is proven, and (c) follows from (b).
Step 2. Let \(\Psi\colon(X^{\prime},D_{X^{\prime}})/\mathbb{P}^{2}\dashrightarrow(X^{\dagger},D^{\dagger})/\mathbb{P}^{2}\) be a volume preserving Sarkisov link. The link \(\Psi\) cannot be a link of type (I) -- for the same reasons as in the proof of Theorem B(1) -- and we suppose it is not of type (II). So the link \(\Psi\) starts with a birational modification along an extremal ray \(R\subset\operatorname{NE}(X^{\prime})\). Let \(\Gamma\subset X^{\prime}\) be an irreducible curve such that \(R=\mathbb{R}_{\geq 0}[\Gamma]\). In this step, we show that \(-K_{X^{\prime}}\cdot\Gamma\geq 0\). This implies in particular that \(X^{\prime}\) is weak Fano.
Suppose for a contradiction that \(-K_{X^{\prime}}\cdot\Gamma=D_{X^{\prime}}\cdot\Gamma<0\). It follows that the extremal contraction \(f_{R}\colon X^{\prime}\to W\) is small, and \(\Gamma\subset D_{X^{\prime}}\) is contracted to a point. By Paragraph 6.2, \(\operatorname{NE}(D_{X^{\prime}})\ =\ \left\langle[e_{0}],[e_{1}],[e_{0}^{ \prime}],[e_{1}^{\prime}]\right\rangle_{+}\), and thus \(\Gamma\) must be one of the curves \(e_{0},e_{1},e_{0}^{\prime},e_{1}^{\prime}\). By relabelling these curves if necessary, we may assume that \(\Gamma=e_{0}\). We discuss two cases in turn:
1. \(\Gamma\) is not a connected component of the exceptional set of the contraction \(f_{R}\);
2. \(\Gamma\) is a connected component of the exceptional set of the contraction \(f_{R}\).
**Case (i): \(\Gamma=e_{0}\) is not a connected component of the exceptional set of \(f_{R}\).** The exceptional set of \(f_{R}\) must be \(e_{0}\cup e_{1}\). The class of \(e_{1}\) in \(H_{2}(X^{\prime})\) is proportional to that of \(e_{0}\), and in fact \([e_{1}]=[e_{0}]\in H_{2}(X^{\prime})\). It follows that
\[D_{X^{\prime}}\cdot e_{0}=D_{X^{\prime}}\cdot e_{1}=-a<0.\]
By Lemma 4.7, the extremal neighbourhood around \(e_{0}\cup e_{1}\subset D_{X^{\prime}}\subset X^{\prime}\) is isomorphic to the analytic germ around the curve
\[\Gamma_{0}\cup\Gamma_{1}=(x_{0}=x_{2}=0)\cup(x_{0}=x_{3}=0)\]
in the geometric quotient \(\mathbb{C}^{5}/\mathbb{C}^{\times}\) for the action given by the weights:
\[\begin{array}{ccccc}x_{0}&x_{1}&x_{2}&x_{3}&x_{4}\\ \hline-a&1&1&0&-2\\ -a&-2&0&1&1\end{array},\]
where the stability condition is taken in the quadrant \(\langle(1,0),(0,1)\rangle_{+}\). In these coordinates, \(D_{X^{\prime}}\) is given by \((x_{0}=0)\). To perform the antiflip \(X^{\prime}\dashrightarrow X^{-}\) we need to make \(D_{X^{\prime}}\) ample, that is, we change the stability condition to \(D_{X^{\prime}}=(-a,-a)\). The new irrelevant ideal is \((x_{0},x_{1}x_{4})\); thus \(X^{-}\) is covered by the charts \(\{x_{0}\neq 0\}\) and \(\{x_{1}\neq 0,x_{4}\neq 0\}\), and we see by looking at the chart \(\{x_{1}\neq 0,x_{4}\neq 0\}\) that the antiflip \(X^{-}\) has strictly canonical singularities of type \(\frac{1}{3}(0,1,2)\) (it has a curve of \(A_{2}\)-singularities), a contradiction.
**Case (ii): \(\Gamma=e_{0}\) is a connected component of the exceptional set of \(f_{R}\).** It follows from Lemma 4.4(2) that \(-K_{X^{\prime}}\cdot\Gamma=D_{X^{\prime}}\cdot e_{0}=-1\) and \(\Gamma=e_{0}\) has normal bundle \(N_{\Gamma/X^{\prime}}\cong\mathcal{O}(-2)\oplus\mathcal{O}(-1)\). It follows from Step 1 that \(X^{\prime}=\mathbb{P}(\mathcal{E})\) for some rank two vector bundle \(\mathcal{E}\) on \(\mathbb{P}^{2}\). After twisting \(\mathcal{E}\) with a line bundle if necessary, we may assume that \(c_{1}\in\{0,-1\}\). Write \(L^{\prime}=\left[(\pi^{\prime})^{*}\big{(}\mathcal{O}_{\mathbb{P}^{2}}(1) \big{)}\right]\) and \(\xi=\big{[}\mathcal{O}_{\mathbb{P}(\mathcal{E})}(1)\big{]}\). By Lemma 4.10,
\[D_{X^{\prime}}\ \sim\ -K_{X^{\prime}}\ \sim\ (3-c_{1})L^{\prime}+2\xi.\]
We compare the lattice \(\operatorname{Pic}(D_{X^{\prime}})\) with the sublattice obtained as the image of the restriction homomorphism
\[r\colon\ \operatorname{Pic}(X^{\prime})\ \to\ \operatorname{Pic}(D_{X^{\prime}}).\]
As in the proof of Theorem B(1), one computes that the intersection matrix of \(r\big{(}\operatorname{Pic}(X^{\prime})\big{)}\) in the restricted basis \((r(L^{\prime}),r(\xi))\) is
\[\left(\begin{array}{cc}2&c_{1}+3\\ c_{1}+3&c_{1}^{2}+3c_{1}-2c_{2}\end{array}\right).\]
We will show that this cannot be a sublattice of \(\operatorname{Pic}(D_{X^{\prime}})\cong\operatorname{Pic}(D_{X})\). Suppose otherwise, and write
\[r(\xi)=a\alpha+b_{0}e_{0}+b_{1}e_{1}\]
for some \(a,b_{0},b_{1}\in\mathbb{Z}\). Intersecting with \(r(L^{\prime})=\alpha\) we get:
\[c_{1}+3=r(L^{\prime})\cdot r(\xi)=2a+b_{0}+b_{1}\]
and hence
\[r(D_{X^{\prime}})=2r(\xi)+(3-c_{1})r(L^{\prime})=2r(\xi)+(3-c_{1})\alpha=(6-b_{0}-b_{1})\alpha+2b_{0}e_{0}+2b_{1}e_{1}.\]
From this we conclude that
\[D_{X^{\prime}}\cdot e_{0}=6-5b_{0}+b_{1}=-1.\]
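Here we use the intersection numbers \(\alpha\cdot e_{0}=1\), \(e_{0}^{2}=-2\) and \(e_{0}\cdot e_{1}=1\) from Paragraph 6.2:

\[D_{X^{\prime}}\cdot e_{0}=(6-b_{0}-b_{1})-4b_{0}+2b_{1}=6-5b_{0}+b_{1}.\]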
On the other hand, \(\operatorname{NE}(X^{\prime})=\langle e_{0},f\rangle_{+}\) where \(f\) is a fibre of \(\pi^{\prime}\), and \(e_{0},f\) form a basis of \(H_{2}(X^{\prime})\). It follows that in \(H_{2}(X^{\prime})\) we can write:
\[e_{1}=e_{0}+\lambda f\text{ and }e_{1}^{\prime}=e_{0}+\mu f,\text{ for some }\lambda,\mu\geq 0.\]
By computing intersection numbers, we get:
\[D_{X^{\prime}}\cdot e_{1} =6+b_{0}-5b_{1}\geq-1,\] \[D_{X^{\prime}}\cdot e_{1}^{\prime} =6-b_{0}+5b_{1}\geq-1.\]
Combining these equations we get
\[5b_{0}-b_{1} =7,\] \[-7\leq b_{0}-5b_{1} \leq 7,\]
which do not have common integer solutions, a contradiction. We conclude that \(-K_{X^{\prime}}\cdot\Gamma\geq 0\), and thus \(X^{\prime}=\mathbb{P}(\mathcal{E})\) is weak Fano.
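To spell out the final step of the contradiction: the equality gives \(b_{1}=5b_{0}-7\), so that

\[b_{0}-5b_{1}=b_{0}-5(5b_{0}-7)=35-24b_{0},\]

and \(-7\leq 35-24b_{0}\leq 7\) forces \(\frac{7}{6}\leq b_{0}\leq\frac{7}{4}\), which indeed has no integer solution.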
Step 3. We determine the rank 2 vector bundles \(\mathcal{E}\) on \(\mathbb{P}^{2}\) for which \(\mathbb{P}(\mathcal{E})\) is weak Fano and contains an anti-canonical divisor isomorphic to \(D_{X}\). Theorem 4.8 and the two lists that accompany it classify the rank 2 vector bundles \(\mathcal{E}\) on \(\mathbb{P}^{2}\) with \(c_{1}\in\{0,-1\}\) such that \(\mathbb{P}(\mathcal{E})\) is Fano or weak Fano. We continue with the set-up and notation of the proof of Case (ii) of Step 2 above. In order to determine the possible vector bundles \(\mathcal{E}\), we shall determine the possible values of \(c_{2}=\frac{1}{2}(c_{1}^{2}+3c_{1}-r(\xi)^{2})\). Write \(r(\xi)\) in terms of the basis \(\big{(}[\alpha],[e_{0}],[e_{1}]\big{)}\) of \(\operatorname{Pic}(D_{X})\):
\[r(\xi)=\ a\alpha+b_{0}e_{0}+b_{1}e_{1},\]
for some \(a,b_{0},b_{1}\in\mathbb{Z}\). Intersecting with \(r(L^{\prime})=\alpha\) we get:
\[c_{1}+3=2a+b_{0}+b_{1},\]
and hence
\[r(D_{X^{\prime}})=(6-b_{0}-b_{1})\alpha+2b_{0}e_{0}+2b_{1}e_{1}.\]
Since \(-K_{X^{\prime}}\sim D_{X^{\prime}}\) is nef we have:
\[D_{X^{\prime}}\cdot e_{0} =-5b_{0}+b_{1}+6\geq 0,\] \[D_{X^{\prime}}\cdot e_{0}^{\prime} =5b_{0}-b_{1}+6\geq 0,\] \[D_{X^{\prime}}\cdot e_{1} =b_{0}-5b_{1}+6\geq 0,\] \[D_{X^{\prime}}\cdot e_{1}^{\prime} =-b_{0}+5b_{1}+6\geq 0.\]
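These four values follow from \(D_{X^{\prime}}\cdot\alpha=2(6-b_{0}-b_{1})+2b_{0}+2b_{1}=12\) together with \(e_{0}^{\prime}\sim\alpha-e_{0}\) and \(e_{1}^{\prime}\sim\alpha-e_{1}\); for instance

\[D_{X^{\prime}}\cdot e_{0}^{\prime}=D_{X^{\prime}}\cdot\alpha-D_{X^{\prime}}\cdot e_{0}=12-(6-5b_{0}+b_{1})=5b_{0}-b_{1}+6.\]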
The region of the \((b_{0},b_{1})\)-plane defined by these inequalities is cut out by \(|5b_{0}-b_{1}|\leq 6\) and \(|b_{0}-5b_{1}|\leq 6\); its integral points are exactly the nine pairs \((b_{0},b_{1})\in\{-1,0,1\}^{2}\), which we now treat in turn.
If \((b_{0},b_{1})=(0,0)\), then one computes that \(a=1\), \(c_{1}=-1\) and \(c_{2}=-2\). In this case we must have \(\mathcal{E}\cong\mathcal{O}_{\mathbb{P}^{2}}(1)\oplus\mathcal{O}_{\mathbb{P}^{2 }}(-2)\), which is case (8) of List 2. This leads to a bad link: the nonfibering contraction of \(\mathbb{P}\big{(}\mathcal{O}_{\mathbb{P}^{2}}(1)\oplus\mathcal{O}_{\mathbb{P} ^{2}}(-2)\big{)}\) contracts a divisor to a strictly canonical singularity.
If \((b_{0},b_{1})\in\big{\{}(1,-1),(-1,1)\big{\}}\), then one computes that \(a=1\), \(c_{1}=-1\) and \(c_{2}=1\). In this case, \(-K_{X^{\prime}}\cdot e=0\) for some \(e\in\{e_{0},e_{1},e^{\prime}_{0},e^{\prime}_{1}\}\), and thus \(X^{\prime}\) is weak Fano but not Fano. So \(\mathcal{E}\) must be as in case (9) of List 2. Again this is not possible because the nonfibering contraction of this \(\mathbb{P}^{1}\)-bundle is divisorial and not small, so it cannot lead to a Sarkisov link.
If \((b_{0},b_{1})\in\big{\{}(1,1),(-1,-1)\big{\}}\), then one computes that \(r(\xi)=(e_{0}+e_{1})\) or \((e^{\prime}_{0}+e^{\prime}_{1})\), \(c_{1}=-1\) and \(c_{2}=0\). In this case, \(\mathcal{E}\cong\mathcal{O}_{\mathbb{P}^{2}}\oplus\mathcal{O}_{\mathbb{P}^{2 }}(-1)\), and \(X^{\prime}\) is the blow-up of \(\mathbb{P}^{3}\) at a point. The contraction \(X^{\prime}\to\mathbb{P}^{3}\) maps \(D_{X^{\prime}}\) to \(D\), contracting either \(e_{0}\cup e_{1}\) or \(e^{\prime}_{0}\cup e^{\prime}_{1}\) to the \(A_{2}\) singular point of \(D\).
If \((b_{0},b_{1})\in\big{\{}(0,1),(0,-1),(1,0),(-1,0)\big{\}}\), then one computes that \(r(\xi)=\alpha+e\) for some \(e\in\{e_{0},e_{1},e^{\prime}_{0},e^{\prime}_{1}\}\), \(c_{1}=0\) and \(c_{2}=-1\). In this case, \(\mathcal{E}\cong\mathcal{O}_{\mathbb{P}^{2}}(1)\oplus\mathcal{O}_{\mathbb{P}^{ 2}}(-1)\), and \(X^{\prime}\) is the blow-up of \(\mathbb{P}(1,1,1,2)\) at its singular point \(q=[0:0:0:1]\). This contraction \(\chi\colon X^{\prime}\to\mathbb{P}(1,1,1,2)\) is induced by the linear system
\[\big{|}L^{\prime}+\xi\big{|}\ =\ \big{|}(\chi)^{*}\mathcal{O}_{\mathbb{P}}(2) \big{|}.\]
It contracts the section \(E^{\prime}\sim\xi-L^{\prime}\) of \(\pi^{\prime}\) containing \(e\) to the singular point \(q\in\mathbb{P}(1,1,1,2)\). The image of \(D_{X^{\prime}}\) is a quintic hypersurface \(D_{5}\subset\mathbb{P}(1,1,1,2)\). The restriction \(\chi_{|D_{X^{\prime}}}\colon D_{X^{\prime}}\to D_{5}\) contracts \(e\) to \(q\), and is an isomorphism elsewhere. We now compute the equation of \(D_{5}\). By assumption, \((X^{\prime},D^{\prime})/\mathbb{P}^{2}\) is obtained from \((X,D_{X})/\mathbb{P}^{2}\) after finitely many volume preserving Sarkisov links of type (II).
Hence, the composed birational map \(\psi=\chi\circ\phi\circ(\sigma^{-1})\colon\mathbb{P}^{3}\dashrightarrow\mathbb{ P}(1,1,1,2)\) is given in coordinates by \(\psi(x_{0},x_{1},x_{2},x_{3})=(x_{0},x_{1},x_{2},x_{3}L+Q)\), where \(L\in\mathbb{C}[x_{0},x_{1},x_{2}]\) is a linear form and \(Q\in\mathbb{C}[x_{0},x_{1},x_{2}]\) is a quadratic form. After a change of variables in \(\mathbb{P}(1,1,1,2)\), we may assume that \(\psi\) is given by \(\psi(x_{0},x_{1},x_{2},x_{3})=(x_{0},x_{1},x_{2},x_{3}L)\), and so \(\psi^{-1}\colon\mathbb{P}(1,1,1,2)\to\mathbb{P}^{3}\) is given in coordinates by \(\psi^{-1}(x_{0},x_{1},x_{2},y)=(x_{0}L,x_{1}L,x_{2}L,y)\).
The strict transform of \(D\subset\mathbb{P}^{3}\) has equation dividing
\[(x_{0}x_{1}y^{2}+yLB+L^{2}C=0)\subset\mathbb{P}(1,1,1,2).\]
For it to be a quintic, there are just two possibilities: either \(L=x_{0}\) and \(D_{5}=D_{5}^{b}\), or \(L=x_{1}\) and \(D_{5}=D_{5}^{a}\).
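Indeed, substituting \(x_{3}=y/L\) into the quartic \(x_{0}x_{1}x_{3}^{2}+Bx_{3}+C\) and clearing denominators gives the displayed sextic; for \(L=x_{1}\) it factors as

\[x_{0}x_{1}y^{2}+yx_{1}B+x_{1}^{2}C=x_{1}\big{(}x_{0}y^{2}+By+x_{1}C\big{)},\]

and the quintic factor is the equation of \(D_{5}^{a}\) appearing in Lemma 6.6 below.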
**Lemma 6.6**.: _The only volume preserving Sarkisov links_
\[\Psi\colon(\mathbb{P}(1,1,1,2),D_{5}^{a})/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^{\dagger},D^{\dagger})/S^{\dagger}\]
_are the maps \(\epsilon_{a}^{-1},(\chi^{a})^{-1},\phi^{a},\psi^{a}\) described in Examples 2.11, 2.12, 2.13 and 2.18._
_Similarly, the only volume preserving Sarkisov links_
\[\Psi\colon(\mathbb{P}(1,1,1,2),D_{5}^{b})/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^{\dagger},D^{\dagger})/S^{\dagger}\]
_are the maps \(\epsilon_{b}^{-1},(\chi^{b})^{-1},\phi^{b},\psi^{b}\) described in Examples 2.11, 2.12, 2.13 and 2.18._
Proof.: We prove the statement for the pair \((\mathbb{P}(1,1,1,2),D_{5}^{a})/\operatorname{Spec}\mathbb{C}\). The other case is similar. The volume preserving Sarkisov link
\[\Psi\colon(\mathbb{P}(1,1,1,2),D_{5}^{a})/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^{\dagger},D^{\dagger})/S^{\dagger}\]
must begin with a divisorial contraction \(g\colon Y\to\mathbb{P}(1,1,1,2)\). By Proposition 3.1, the center \(Z\) of this divisorial contraction is either a curve \(\Gamma\subset D_{5}^{a}\), or the singular point \([0:0:0:1]\in\mathbb{P}(1,1,1,2)\), which is also the singular point of \(D_{5}^{a}\). By [13, Theorem 5], either \(g\colon Y\to\mathbb{P}(1,1,1,2)\) is the blow-up of \([0:0:0:1]\) with weights \(\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\), or \(Z=\Gamma\subset D_{5}^{a}\) is a curve not passing through \([0:0:0:1]\). In the first case, if we view \(\mathbb{P}(1,1,1,2)\) as the cone over a Veronese surface, then \(g\colon Y\to\mathbb{P}(1,1,1,2)\) is the standard blow-up of the vertex, and it leads to the link \((\chi^{a})^{-1}\). From now on we assume that \(Z=\Gamma\subset D_{5}^{a}\) is a curve not passing through \([0:0:0:1]\).
Recall that \(D_{5}^{a}\subset\mathbb{P}(1,1,1,2)_{(x_{0},x_{1},x_{2},y)}\) is given by the equation
\[x_{0}y^{2}+B_{3}(x_{0},x_{1},x_{2})y+x_{1}C_{4}(x_{0},x_{1},x_{2})=0,\]
where \(B_{3}\) and \(C_{4}\) are homogeneous polynomials of degree three and four, respectively. The point \([0:0:0:1]\) is the unique singular point of \(D_{5}^{a}\). It is a singularity of type \(A_{1}\). The divisor class group of \(D_{5}^{a}\) is generated by the curves \(e_{1}=\{x_{1}=x_{0}y+B_{3}=0\}\), \(\overline{e}_{1}=\{y=x_{1}=0\}\), with intersection matrix
\[\left(\begin{array}{cc}-\frac{3}{2}&3\\ 3&-2\end{array}\right).\]
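As a consistency check, \(\mathcal{O}_{\mathbb{P}(1,1,1,2)}(1)_{|D_{5}^{a}}\sim e_{1}+\overline{e}_{1}\) (this relation is used again below), and the matrix gives

\[(e_{1}+\overline{e}_{1})^{2}=-\tfrac{3}{2}+2\cdot 3-2=\tfrac{5}{2},\]

which agrees with \(\mathcal{O}_{\mathbb{P}(1,1,1,2)}(1)^{2}\cdot D_{5}^{a}=\frac{5}{1\cdot 1\cdot 1\cdot 2}\).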
Let \(\Gamma\subset D_{5}^{a}\subset\mathbb{P}(1,1,1,2)\) be a reduced and irreducible curve not passing through the singularity of \(D_{5}^{a}\), and write its class in \(\operatorname{Cl}(D_{5}^{a})\) as \([\Gamma]=a\overline{e}_{1}+2be_{1}\), with \(a,b\in\mathbb{N}\). Let \(\pi:Y\to\mathbb{P}(1,1,1,2)\) be the blow-up of \(\mathbb{P}(1,1,1,2)\) along \(\Gamma\) with exceptional divisor \(E\). Then \(\operatorname{NE}(Y)\) has two extremal rays. One of them is generated by a curve \(e\subset E\) that is contracted by \(\pi\), and we denote the other one by \(R\). Since
\[K_{Y}=\pi^{*}K_{\mathbb{P}(1,1,1,2)}+E\,\]
we have \(-K_{Y}\cdot e=1\).
Suppose that \(-K_{Y}\cdot R<0\). Then the curve generating \(R\) is contained in the strict transform \(\widehat{D}_{5}^{a}\) of \(D_{5}^{a}\), which is mapped isomorphically to \(D_{5}^{a}\). Hence, \(R\) is generated either by \(\overline{e}_{1}\) or by \(e_{1}\). (Here we denote by the same symbols the strict transforms of \(\overline{e}_{1}\) and \(e_{1}\) in \(\widehat{D}_{5}^{a}\subset Y\).) Suppose that \(R\) is generated by \(\overline{e}_{1}\). By Lemma 4.4, \(-K_{Y}\cdot R=-1\). Then \(a=3b-3\) and \(\overline{e}_{1}=(4a-8b)e+\frac{2}{3}e_{1}\). So we must have \(b<3\), that is
\[(a,b)\in\{(0,1),(3,2)\}. \tag{6.7}\]
Suppose that \(R\) is generated by \(e_{1}\). Then
\[\widehat{D}_{5}^{a}\cdot e_{1}=\frac{15}{2}+3b-3a=-\frac{k}{2}\,\]
where \(k=-15+6(a-b)\) is a positive integer, and hence \(k\geq 3\). By Lemma 4.6, the antiflip \(Y^{-}\) of \(R\) has worse than terminal singularities, a contradiction.
Now suppose that \(-K_{Y}\cdot R\geq 0\), and so \(-K_{Y}\) is nef. In particular, \(-K_{Y}\cdot\bar{e}_{1}\geq 0\), \(-K_{Y}\cdot e_{1}\geq 0\), and these two inequalities translate into the following system
\[\left\{\begin{array}{l}5+2a-6b\geq 0,\\ \frac{15}{2}+3b-3a\geq 0.\end{array}\right. \tag{6.8}\]
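A sketch of the computation behind (6.8), using \(-K_{Y}=\pi^{*}\mathcal{O}_{\mathbb{P}(1,1,1,2)}(5)-E\), the relation \(\mathcal{O}(1)_{|D_{5}^{a}}\sim e_{1}+\overline{e}_{1}\), and the intersection matrix above (here \(\overline{e}_{1}\) and \(e_{1}\) denote strict transforms, and \(E\cdot\widetilde{c}=\Gamma\cdot c\) for a curve \(c\subset D_{5}^{a}\)):

\[-K_{Y}\cdot\overline{e}_{1}=5(e_{1}+\overline{e}_{1})\cdot\overline{e}_{1}-\Gamma\cdot\overline{e}_{1}=5(3-2)-(6b-2a)=5+2a-6b,\]
\[-K_{Y}\cdot e_{1}=5(e_{1}+\overline{e}_{1})\cdot e_{1}-\Gamma\cdot e_{1}=5\big{(}3-\tfrac{3}{2}\big{)}-(3a-3b)=\tfrac{15}{2}+3b-3a.\]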
The area in the \((a,b)\)-plane delimited by (6.8) and the inequalities \(a\geq 0\), \(b\geq 0\) contains exactly the integral points \((0,0)\), \((1,0)\), \((2,0)\), \((1,1)\), \((2,1)\), \((3,1)\) and \((4,2)\).
Taking into account these integral points, together with (6.7), we are left with the following possibilities for the class \([\Gamma]=a\overline{e}_{1}+2be_{1}\):
\[(a,b)\in\{(1,0),(1,1),(3,1),(0,1),(2,0),(2,1),(4,2),(3,2)\}.\]
We shall show that \((a,b)=(1,0)\) leads to the link \(\epsilon_{a}^{-1}\), \((a,b)=(1,1)\) leads to \(\phi^{a}\), \((a,b)=(3,1)\) leads to \(\psi^{a}\), while all the other cases will be excluded. We discuss first the cases that occur.
Case \((a,b)=(1,0)\). In this case, \(\Gamma\sim\overline{e}_{1}\). Blowing it up we get back to the weighted blow-up of \(\mathbb{P}^{3}\) at \([0:0:0:1]\) with weights \((1,2,1)\). This is the link \(\epsilon_{a}^{-1}\).
Case \((a,b)=(1,1)\). In this case \(\Gamma\sim\overline{e}_{1}+2e_{1}\). Let \(P\) be a divisor on \(\Gamma\) associated to \(\mathcal{O}_{\Gamma}(2)\). Then \(\deg(P)=8\) and, since \(\deg(K_{\Gamma})=4\), Riemann-Roch yields \(h^{0}(\Gamma,P)=6\). Since \(h^{0}(\mathbb{P}(1,1,1,2),\mathcal{O}_{\mathbb{P}(1,1,1,2)}(2))=7\), we have \(h^{0}(\mathcal{I}_{\Gamma}(2))\geq 1\). Let \(Q\subset\mathbb{P}(1,1,1,2)\) be a quadric containing \(\Gamma\). Then \(Q\cap D_{5}^{a}\) has class \(2\overline{e}_{1}+2e_{1}=\overline{e}_{1}+(\overline{e}_{1}+2e_{1})\). So \(Q\cap D_{5}^{a}\) is the union of \(\Gamma\) and a residual curve of class \(\overline{e}_{1}\), which must be \(\overline{e}_{1}\) itself since it is rigid in \(D_{5}^{a}\).
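The dimension counts for \(\mathcal{O}_{\mathbb{P}(1,1,1,2)}(n)\) used here and in the cases below amount to counting monomials \(x_{0}^{i}x_{1}^{j}x_{2}^{k}y^{l}\) with \(i+j+k+2l=n\):

\[h^{0}\big{(}\mathcal{O}_{\mathbb{P}(1,1,1,2)}(2)\big{)}=6+1=7,\quad h^{0}\big{(}\mathcal{O}_{\mathbb{P}(1,1,1,2)}(3)\big{)}=10+3=13,\quad h^{0}\big{(}\mathcal{O}_{\mathbb{P}(1,1,1,2)}(4)\big{)}=15+6+1=22.\]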
We may write \(Q=\{y-x_{1}L(x_{0},x_{1},x_{2})=0\}\) where \(L\) is a linear form. Substituting \(y=x_{1}L\) in the equation of \(D_{5}^{a}\) we get
\[\Gamma=\left\{\begin{array}{l}Q=y-x_{1}L=0,\\ F=x_{0}x_{1}L^{2}+BL+C=0.\end{array}\right.\]
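As a check, substituting \(y=x_{1}L\) into the equation of \(D_{5}^{a}\) gives

\[x_{0}(x_{1}L)^{2}+B(x_{1}L)+x_{1}C=x_{1}\big{(}x_{0}x_{1}L^{2}+BL+C\big{)}=x_{1}F,\]

so \(Q\cap D_{5}^{a}\) is indeed the union of \(\Gamma=\{Q=F=0\}\) and the residual curve \(\overline{e}_{1}=\{y=x_{1}=0\}\).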
We see from Lemma 3.2 that blowing-up \(\Gamma\) leads to the link \(\phi^{a}\) described in Example 2.13.
Case \((a,b)=(3,1)\). In this case \(\Gamma\sim 3\overline{e}_{1}+2e_{1}\). Let \(P\) be a divisor on \(\Gamma\) associated to \(\mathcal{O}_{\Gamma}(3)\). Then \(\deg(P)=18\) and, since \(\deg(K_{\Gamma})=12\), Riemann-Roch yields \(h^{0}(\Gamma,P)=12\). Since \(h^{0}(\mathbb{P}(1,1,1,2),\mathcal{O}_{\mathbb{P}(1,1,1,2)}(3))=13\), we have \(h^{0}(\mathcal{I}_{\Gamma}(3))\geq 1\). Let \(S\subset\mathbb{P}(1,1,1,2)\) be a cubic containing \(\Gamma\). Then \(S\cap D_{5}^{a}\) has class \(3\overline{e}_{1}+3e_{1}=e_{1}+(3\overline{e}_{1}+2e_{1})\). So \(S\cap D_{5}^{a}\) is the union of \(\Gamma\) and a residual curve of class \(e_{1}\), which must be \(e_{1}\) itself since it is rigid in \(D_{5}^{a}\).
We may write \(S=\{x_{0}y+B+x_{1}(y+Q)=0\}\), where \(Q=Q(x_{0},x_{1},x_{2})\) is a quadratic polynomial. Then \(S\cap D_{5}^{a}=\Gamma\cup e_{1}\) and \(\Gamma\subset\mathbb{P}(1,1,1,2)\) is defined by
\[\left\{\begin{array}{l}F_{3}=x_{0}y+B+x_{1}(y+Q)=0,\\ G_{4}=y(y+Q)-C=0.\end{array}\right.\]
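As a check that \(\Gamma\subset D_{5}^{a}\): modulo \(G_{4}\) we have \(C=y(y+Q)\), and then

\[x_{0}y^{2}+By+x_{1}C\ \equiv\ y(x_{0}y+B)+x_{1}y(y+Q)\ =\ y\big{(}x_{0}y+B+x_{1}(y+Q)\big{)}\ =\ y\,F_{3}\pmod{G_{4}},\]

so the equation of \(D_{5}^{a}\) vanishes along \(\{F_{3}=G_{4}=0\}\), consistent with \(S\cap D_{5}^{a}=\Gamma\cup e_{1}\).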
By Lemma 3.2, blowing-up \(\Gamma\) leads to the link \(\psi^{a}\) described in Example 2.18.
Easy cases. Next we exclude all other cases except \((a,b)=(3,2)\). This case is more difficult and we deal with it at the end.
First, note that, since \(e_{1}\) is rigid inside \(D_{5}^{a}\), if \(\Gamma\sim 2e_{1}\) then \(\Gamma=2e_{1}\) and \(\Gamma\) is not reduced: this rules out the case \((a,b)=(0,1)\). Similarly, we rule out the case \((a,b)=(2,0)\), that is, \(\Gamma\sim 2\overline{e}_{1}\).
Note that \(\mathcal{O}_{\mathbb{P}(1,1,1,2)}(1)_{|D_{5}^{a}}\sim e_{1}+\overline{e}_{1}\). If \((a,b)=(2,1)\) then \(\Gamma\sim 2(e_{1}+\overline{e}_{1})\sim\mathcal{O}_{\mathbb{P}(1,1,1,2)}(2)_{|D_{5} ^{a}}\). Consider the exact sequence
\[0\to\mathcal{I}_{\Gamma}(2)\to\mathcal{O}_{\mathbb{P}(1,1,1,2)}(2)\to\mathcal{O}_{\Gamma}(2)\to 0.\]
The degree of the divisor \(P\) associated to \(\mathcal{O}_{\Gamma}(2)\) on \(\Gamma\) is given by \(\deg(P)=2(e_{1}+\overline{e}_{1})\cdot\Gamma=4(e_{1}+\overline{e}_{1})^{2}=10\). By adjunction, \(K_{\Gamma}=(K_{D_{5}^{a}}+\Gamma)_{|\Gamma}\). Since \(K_{D_{5}^{a}}\) is trivial, \(\deg(K_{\Gamma})=\Gamma^{2}=10\). (Notice that this holds if \(\Gamma\) is reduced and irreducible, but not necessarily smooth [1, Chapter II, Theorem 1.1].) Then \(\deg(K_{\Gamma}-P)=0\) and hence \(h^{0}(\Gamma,K_{\Gamma}-P)\leq 1\). By Riemann-Roch, we get that \(h^{0}(\Gamma,P)\in\{5,6\}\). Since \(h^{0}(\mathbb{P}(1,1,1,2),\mathcal{O}_{\mathbb{P}(1,1,1,2)}(2))=7\), we have \(h^{0}(\mathcal{I}_{\Gamma}(2))\geq 1\). Therefore there is a quadric \(Q\subset\mathbb{P}(1,1,1,2)\) containing \(\Gamma\), and hence \(\Gamma=D_{5}^{a}\cap Q\) is a complete intersection. By the same argument as in the proof of Proposition 3.3, the extraction of \(\Gamma\) leads to a bad link.
If \((a,b)=(4,2)\), then \(\Gamma\sim 4(e_{1}+\overline{e}_{1})\sim\mathcal{O}_{\mathbb{P}(1,1,1,2)}(4)_{|D_{ 5}^{a}}\). In this case, consider the exact sequence
\[0\to\mathcal{I}_{\Gamma}(4)\to\mathcal{O}_{\mathbb{P}(1,1,1,2)}(4)\to\mathcal{O}_{\Gamma}(4)\to 0.\]
The degree of the divisor \(P\) associated to \(\mathcal{O}_{\Gamma}(4)\) on \(\Gamma\) is given by \(\deg(P)=4(e_{1}+\overline{e}_{1})\cdot\Gamma=16(e_{1}+\overline{e}_{1})^{2}=40\), while \(\deg(K_{\Gamma})=\Gamma^{2}=40\). Then \(\deg(K_{\Gamma}-P)=0\), and hence \(h^{0}(\Gamma,K_{\Gamma}-P)\leq 1\). By Riemann-Roch, we get that \(h^{0}(\Gamma,P)\in\{20,21\}\). Since \(h^{0}(\mathbb{P}(1,1,1,2),\mathcal{O}_{\mathbb{P}(1,1,1,2)}(4))=22\), we have \(h^{0}(\mathcal{I}_{\Gamma}(4))\geq 1\). Therefore there is a quartic \(S\subset\mathbb{P}(1,1,1,2)\) containing \(\Gamma\), and hence \(\Gamma=D_{5}^{a}\cap S\) is a complete intersection. Again, by the argument in the proof of Proposition 3.3, the extraction of \(\Gamma\) leads to a bad link.
Case \((a,b)=(3,2)\). We will show that this case leads to a bad link. In short: we will blow-up \(\Gamma\), analyze the resulting \(2\)-ray game with the method of the proof of Lemma 3.2, and find that it leads to a bad link.
We have \(\Gamma\sim 3\overline{e}_{1}+4e_{1}\). Let \(P\) be a divisor on \(\Gamma\) associated to \(\mathcal{O}_{\Gamma}(4)\). Then \(\deg(P)=36\) and, since \(\deg(K_{\Gamma})=30\), Riemann-Roch yields \(h^{0}(\Gamma,P)=21\). Since \(h^{0}(\mathbb{P}(1,1,1,2),\mathcal{O}_{\mathbb{P}(1,1,1,2)}(4))=22\), we have \(h^{0}(\mathcal{I}_{\Gamma}(4))\geq 1\). Let \(S\subset\mathbb{P}(1,1,1,2)\) be a quartic containing \(\Gamma\). Then \(S\cap D_{5}^{a}\) has class \(4\overline{e}_{1}+4e_{1}=\overline{e}_{1}+(3\overline{e}_{1}+4e_{1})\). So \(S\cap D_{5}^{a}\) is the union of \(\Gamma\) and a residual curve of class \(\overline{e}_{1}\), which then must be \(\overline{e}_{1}\) itself.
We may write \(S=\{yQ-x_{1}F_{3}=0\}\), where \(Q=Q(x_{0},x_{1},x_{2})\) is a quadratic polynomial and \(F_{3}=F_{3}(x_{0},x_{1},x_{2})\) is a cubic polynomial. Then \(S\cap D_{5}^{a}=\Gamma\cup\overline{e}_{1}\) and \(\Gamma\subset\mathbb{P}(1,1,1,2)\) is defined by
\[\operatorname{rank}\left(\begin{array}{ccc}C_{4}&F_{3}&y\\ x_{0}y+B_{3}&Q&x_{1}\end{array}\right)<2.\]
Since \(\Gamma\) cannot pass through the singular point, the monomial \(y\) must appear in \(Q\), and hence we may assume that \(Q=y+A_{2}(x_{0},x_{1},x_{2})\), where \(A_{2}\) is a quadratic polynomial.
Consider the toric variety \(\mathbb{F}\) with coordinates and weight matrix
\[\begin{array}{ccccccc}x_{0}&x_{1}&x_{2}&y&u_{0}&u_{1}&u_{2}\\ \hline 1&1&1&2&0&-1&-2\\ 0&0&0&0&1&1&1\end{array}\]
and stability condition chosen so that the nef cone of \(\mathbb{F}\) is the span \(\langle x_{i},u_{0}\rangle_{+}\). This choice gives the irrelevant ideal \((x_{0},x_{1},x_{2},y)(u_{0},u_{1},u_{2})\), and ensures that we have a \(\mathbb{P}^{2}\)-bundle morphism \(\pi\colon\mathbb{F}\to\mathbb{P}(1,1,1,2)\). Consider the variety \(Z\subset\mathbb{F}\) cut out by the equations
\[\left(\begin{array}{ccc}C_{4}&F_{3}&y\\ x_{0}y+B_{3}&Q&x_{1}\end{array}\right)\left(\begin{array}{c}u_{2}\\ u_{1}\\ u_{0}\end{array}\right)=\left(\begin{array}{c}0\\ 0\end{array}\right).\]
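As a quick check, both rows are homogeneous for the weight matrix above: in the first row

\[\deg(C_{4}u_{2})=(4,0)+(-2,1)=(2,1),\quad\deg(F_{3}u_{1})=(3,0)+(-1,1)=(2,1),\quad\deg(yu_{0})=(2,0)+(0,1)=(2,1),\]

and similarly each term of the second row has bidegree \((1,1)\).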
It is not hard to see that \(Z\) has cDV singularities, that \(\pi_{|Z}\colon Z\to\mathbb{P}(1,1,1,2)\) is a birational morphism with exceptional set a divisor \(E\) mapping to \(\Gamma\subset\mathbb{P}(1,1,1,2)\), and that \(-K_{Z}\) is \(\pi_{|Z}\)-ample. It follows from all this that \(\pi_{|Z}\colon E\subset Z\to\Gamma\subset\mathbb{P}(1,1,1,2)\) is the unique divisorial contraction that generically blows up \(\Gamma\subset\mathbb{P}(1,1,1,2)\). 7
Footnote 7: In general, if \(W\) is a normal variety and \(\pi\colon E\subset Z\to\Gamma\subset W\) is a proper birational morphism with exceptional set a prime divisor \(E\) and \(-K_{Z}\)\(\mathbb{Q}\)-Cartier and \(\pi\)-ample, then
\[Z=\operatorname{Proj}_{\mathcal{O}_{W}}\bigoplus_{n\geq 0}\pi_{*}\mathcal{O}_{Z}(-nK_{Z}).\]
It follows from this characterization that, if \(\pi^{\prime}\colon E^{\prime}\subset Z^{\prime}\to\Gamma\subset W\) has the same properties and \(E=E^{\prime}\) as valuations of the fraction field \(\mathbb{C}(W)\), then \(Z=Z^{\prime}\). In other words, in the situation of our proof, there is at most one extremal divisorial contraction \(E\subset Z\to\Gamma\subset\mathbb{P}(1,1,1,2)\).
We now run the 2-ray game on \(\mathbb{F}\) by crossing the walls of its Mori chamber decomposition.
The first wall-crossing \(\mathbb{F}\dashrightarrow\mathbb{F}^{\prime}\) is the flip of \(\{u_{1}=u_{2}=0\}\), whose restriction to \(Z\) is the flip \(Z\dashrightarrow Z^{\prime}\) of the strict transform of \(\overline{e}_{1}\). The next wall-crossing corresponds to a divisorial contraction \(\pi^{\prime}\colon\mathbb{F}^{\prime}\to\mathbb{P}(1,1,1,1,2,2)\).
We fix homogeneous coordinates \((\xi_{0},\xi_{1},\xi_{2},\xi_{3},w_{0},w_{1})\) on \(\mathbb{P}(1,1,1,1,2,2)\). The composed birational map \(\mathbb{F}_{(x_{0},x_{1},x_{2},y,u_{0},u_{1},u_{2})}\dashrightarrow\mathbb{P}(1,1,1,1,2,2)_{(\xi_{0},\xi_{1},\xi_{2},\xi_{3},w_{0},w_{1})}\) is given by
\[(x_{0},x_{1},x_{2},y,u_{0},u_{1},u_{2})\mapsto(x_{0}u_{2},x_{1}u_{2},x_{2}u_{ 2},u_{1},yu_{2}^{2},u_{0}u_{2}),\]
and the image \(X^{\prime}\) of \(Z\) in \(\mathbb{P}(1,1,1,1,2,2)\) is given by
\[\left\{\begin{array}{l}C_{4}+\xi_{3}F_{3}+w_{0}w_{1}=0,\\ \xi_{0}w_{0}+B_{3}+\xi_{3}Q_{2}+\xi_{1}w_{1}=0,\end{array}\right.\]
where \(Q_{2}=w_{0}+A_{2}(\xi_{0},\xi_{1},\xi_{2})\). This is a bad link: the point \([0:0:0:1:0:0]\in X^{\prime}\) is a hypersurface singularity of multiplicity at least \(3\), and hence it is not terminal.
**Lemma 6.9**.: _Let \((X_{4},D_{3,4})/S\) be a Mf CY pair of the family of objects \(4\) described in the end of Section 2. The only volume preserving Sarkisov links_
\[\Psi\colon(X_{4},D_{3,4})/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^{ \dagger},D^{\dagger})/S^{\dagger}\]
_are the maps \((\phi^{a})^{-1},(\phi^{b})^{-1}\) described in Example 2.13._
Proof.: Since \(X_{4}\) has Picard rank \(1\), any link \(\Psi\colon(X_{4},D_{3,4})/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^{ \dagger},S^{\dagger})\) must start with a divisorial contraction \(\pi\colon(Z,D_{Z})\to(X_{4},D_{3,4})\).
The key observation is this: \(X_{4}\) has two singular points of type \(\frac{1}{2}(1,1,1)\) on the line \(\mathbb{P}^{1}_{y_{0},y_{1}}\subset\mathbb{P}(1,1,1,2,2)\), and \(D_{3,4}\) contains these two points as \(A_{1}\)-singularities. It follows from this that \(\operatorname{Cl}(X_{4})\to\operatorname{Cl}(D_{3,4})\) is an isomorphism.
By Propositions 3.1 and 3.3, \(\pi\colon Z\to X_{4}\) contracts the unique exceptional divisor to one of the two singular points. By the main result of [10], the only divisorial contraction to a singular point of type \(\frac{1}{2}(1,1,1)\) is the weighted blow-up with weights \(\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\). The two extremal contractions give the links \((\phi^{a})^{-1}\) and \((\phi^{b})^{-1}\).
**Lemma 6.10**.: _Let \((X_{4},D_{2,4})/S\) be a Mf CY pair of the family of objects \(5^{a}\) described in the end of Section 2. The only volume preserving Sarkisov links_
\[\Psi\colon(X_{4},D_{2,4})/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^{ \dagger},D^{\dagger})/S^{\dagger}\]
_are the maps \((\psi^{a})^{-1},(\widetilde{\psi}^{b})^{-1}\) described in Example 2.18._
Proof.: Since \(X_{4}\) has Picard rank \(1\), any link \(\Psi\colon(X_{4},D_{2,4})/\operatorname{Spec}\mathbb{C}\dashrightarrow(X^{ \dagger},S^{\dagger})\) must start with a divisorial contraction \(\pi\colon(Z,D_{Z})\to(X_{4},D_{2,4})\).
The key observation is this: \(X_{4}\) has a unique singular point \([0:0:0:1:0]\in X_{4}\), analytically isomorphic to the germ at the origin of the hypersurface
\[xy+z^{3}+t^{3}=0 \tag{6.11}\]
in \(\mathbb{C}^{4}\). The surface \(D_{2,4}\) passes through this point, and has an \(A_{2}\)-singularity there. In fact, upon substituting \(y=-x_{1}x_{3}\), one sees that the surface \(D_{2,4}\) is the original surface \(D\subset\mathbb{P}^{3}\). It follows from this that \(\operatorname{Cl}(X_{4})\to\operatorname{Cl}(D_{2,4})\) is an isomorphism.
By Propositions 3.1 and 3.3, \(\pi\colon Z\to X_{4}\) contracts the unique exceptional divisor to the singular point. By [10], up to isomorphism, there are precisely two divisorial contractions to a singular point as in Equation 6.11, given by the weighted blow-ups with weights \((2,1,1,1)\) and \((1,2,1,1)\). These two extremal contractions give the links \((\psi^{a})^{-1}\) and \((\widetilde{\psi}^{b})^{-1}\).
Indeed, consider the link \(\psi^{a}\colon(\mathbb{P}(1^{3},2),D_{5}^{a})\dashrightarrow(X_{4},D_{2,4})\), described in detail in Example 2.18. The link terminates with \(\pi^{\prime}\colon\mathbb{F}\to\mathbb{P}(1^{4},2)\), and it is clear from Equation 2.21 that \(\pi^{\prime}\) is the weighted blow-up of the point \(x=[0:0:0:1:0]\in\mathbb{P}(1^{4},2)\) with weights \((1,1,1,2)\). The tangent cone of \(x\in X_{4}\) is \(y(y+x_{0}+x_{1})\), hence \(\pi^{\prime}\) induces the extremal divisorial contraction to \(x\in X_{4}\) where \(y\) has weight \(2\). The change of coordinates that transforms object \(5^{a}\) into \(5^{b}\) (Example 2.17) sets \(-\widetilde{y}=y+x_{3}(x_{0}+x_{1})\); this shows that \(\widetilde{\psi}^{b}\) terminates with the blow-up where \(\widetilde{y}\) has weight two, that is, the "other" extremal divisorial contraction to \(x\in X_{4}\).
|
2309.15931 | The outer dusty edge of accretion disks in active galactic nuclei | Recent models for the inner structure of active galactic nuclei (AGN) aim at
connecting the outer region of the accretion disk with the broad-line region
and dusty torus through a radiatively accelerated, dusty outflow. Such an
outflow not only requires the outer disk to be dusty and so predicts disk sizes
beyond the self-gravity limit but requires the presence of nuclear dust with
favourable properties. Here we investigate a large sample of type 1 AGN with
near-infrared (near-IR) cross-dispersed spectroscopy with the aim to constrain
the astrochemistry, location and geometry of the nuclear hot dust region.
Assuming thermal equilibrium for optically thin dust, we derive the
luminosity-based dust radius for different grain properties using our
measurement of the temperature. We combine our results with independent dust
radius measurements from reverberation mapping and interferometry and show that
large dust grains that can provide the necessary opacity for the outflow are
ubiquitous in AGN. Using our estimates of the dust covering factor, we
investigate the dust geometry using the effects of the accretion disk
anisotropy. A flared disk-like structure for the hot dust is favoured. Finally,
we discuss the implication of our results for the dust radius-luminosity plane. | Hermine Landt | 2023-09-27T18:13:27Z | http://arxiv.org/abs/2309.15931v1 | # The outer dusty edge of accretion disks in active galactic nuclei
###### Abstract
Recent models for the inner structure of active galactic nuclei (AGN) aim at connecting the outer region of the accretion disk with the broad-line region and dusty torus through a radiatively accelerated, dusty outflow. Such an outflow not only requires the outer disk to be dusty and so predicts disk sizes beyond the self-gravity limit but requires the presence of nuclear dust with favourable properties. Here we investigate a large sample of type 1 AGN with near-infrared (near-IR) cross-dispersed spectroscopy with the aim to constrain the astrochemistry, location and geometry of the nuclear hot dust region. Assuming thermal equilibrium for optically thin dust, we derive the luminosity-based dust radius for different grain properties using our measurement of the temperature. We combine our results with independent dust radius measurements from reverberation mapping and interferometry and show that large dust grains that can provide the necessary opacity for the outflow are ubiquitous in AGN. Using our estimates of the dust covering factor, we investigate the dust geometry using the effects of the accretion disk anisotropy. A flared disk-like structure for the hot dust is favoured. Finally, we discuss the implication of our results for the dust radius-luminosity plane.
Active galactic nuclei, Quasars, Dust continuum emission, Dust physics, Near-infrared astronomy
## 1 Introduction
A prominent feature in the multi-wavelength spectral energy distributions (SEDs) of active galactic nuclei (AGN) is strong infrared (IR) continuum emission, which originates from thermal dust radiation. The structure of the dust producing it is commonly assumed to be an optically thick toroid, which is aligned with the plane of the accretion disk. The accretion disk then provides the UV/optical photons that heat the dust. An alternative to the dusty torus was already proposed early on by Phinney (1989), namely, an extended and warped dusty disk. Such a structure is attractive since it could present a natural transition between the accretion disk and the central dust, without the stability problem of the torus (Krolik, 2007).
The extended dust structure emits over a large IR wavelength range, with the near-IR believed to be dominated by the hottest dust located closest to the central supermassive black hole. Therefore, AGN are most suitable to investigate the chemical composition and grain properties of astrophysical dust. Their UV/optical luminosities are usually high enough to heat the central dust to sublimation temperatures, a property which they have in common with protoplanetary disks around young stars. If we can observe
these highest temperatures, we can in principle constrain the chemistry since different species condense out of the gas phase in different environmental conditions. Previously, dust temperatures were measured in a handful of AGN by obtaining simultaneous photometry at several near-IR wavelengths (Clavel et al., 1989; Glass, 2004; Schnulle et al., 2013, 2015), but this approach has only come of age with the availability of efficient near-IR cross-dispersed spectrographs. Landt et al. (2011) and Landt et al. (2014) derived dust temperatures from such spectroscopy for the largest sample of type 1 AGN so far (\(\sim 30\) sources). They found a very narrow dust temperature distribution with an average value of \(T\sim 1400\) K. This result, which is similar to what has been found in protoplanetary disks (Monnier et al., 2005), either indicates dust at sublimation but composed mainly of silicate grains, and so an oxygen-rich environment from which the dust formed, or, if carbon dominates the composition, dust that is _not_ heated to close to sublimation, since carbonaceous dust, e.g. graphite, can survive up to \(T\sim 2000\) K (Salpeter, 1977).
The first near-IR _spectroscopic_ monitoring campaign of the hot dust in an AGN, namely, NGC 5548, found that a single component dominated both the mean and variable emission, with the dust response time and luminosity-based dust radius being consistent with each other if the emissivity of a pure blackbody was assumed (Landt et al., 2019). Thus, the dust grain size of the hot dust in this AGN could be constrained to relatively large values (of a few \(\mu\)m). From the estimated dust temperature and its variability, they concluded that the dust composition was predominantly carbonaceous and well below the sublimation threshold. The reverberation signal of the dust was then mainly due to a heating and cooling process in response to the variable UV/optical accretion disk flux irradiating it. Most importantly, the dust reverberation signal showed tentative evidence for a second hot dust component most likely associated with the accretion disk. The existence of dust in the outer regions of the accretion disk is a prerequisite for the recent models of the AGN structure proposed by Czerny et al. (2017) and Baskin and Laor (2018). These authors explain both the broad emission line region (BLR) and the dusty torus as part of the same outflow launched from the outer accretion disk by radiation pressure on dust. Since carbonaceous dust has a higher opacity than silicate dust, the former is preferred in this scenario. Most recently, Landt et al. (2023) presented results from a near-IR spectroscopic monitoring campaign on the high-luminosity AGN Mrk 876. The comparison of the mean and variable (rms) spectrum clearly showed that at least two hot dust components are present in AGN, with the second component possibly originating in the accretion disk. However, contrary to the case of NGC 5548, the independent measure of the dust radius via reverberation mapping yielded a value a factor of \(\sim 2\) lower than the luminosity-based dust radius estimate, indicating that the anisotropy effects of the accretion disk illumination and through it the dust geometry can be detected and studied in this way. For Mrk 876, it was concluded that the geometry is most likely a flared, dusty disk with an enlarged 'inner hole', which is very similar to the paradigm commonly assumed for protoplanetary disks.
Given the high potential that a comparison between luminosity-based dust radii and those obtained by independent methods, e.g. by reverberation mapping, carries for revealing the astrochemistry and geometry of the hot dust in AGN, this study sets out to apply it to a large sample of AGN. In Section 2, we discuss the selection of the sample, whereas Section 3 gives details of the measurements required for the estimation of the luminosity-based dust radius. In Section 4, we discuss our main results and finally present the conclusions in Section 5.
## 2 The sample selection
As discussed by Landt et al. (2019, 2023), combining the assumption of radiative equilibrium of optically thin dust, which encodes a dependence on the dust chemical species and grain size through the dust emissivity parameter (see eq. 1), with an independent measurement of the dust radius can constrain the
dust astrochemistry. Alternative dust radius measurements can come from, e.g., the dust response time estimated by reverberation mapping or the geometric distance of the dust measured through near-IR interferometry. A promising new method is also the estimate of the location of the scattering region through optical spectropolarimetry of the BLR (Shablovinskaya et al., 2020). Therefore, this study required a sample of type 1 AGN with an optical/near-IR continuum dominated by the AGN rather than the host galaxy starlight. Furthermore, the available near-IR spectrum should cover a sufficiently large wavelength range to allow for both an estimate of the accretion disk flux level and a measurement of the hot dust temperature in order to be able to calculate the luminosity-based dust radius. This requirement is usually met only by cross-dispersed near-IR spectra (Landt et al., 2011b). Finally, the type 1 AGN with available cross-dispersed near-IR spectra were required to have a published near-IR dust radius based on an alternative method. The total sample comprises 39 objects and its properties are listed in Table 1. The independent dust radius measurement was mostly from near-IR (photometric) dust reverberation mapping campaigns covering the wavelength region of up to a few \(\mu\)m and so sampling the SED of the hot dust, and in only 4/39 sources the dust radius measurement was based on near-IR interferometry.
## 3 The luminosity-based dust radius
The calculation of the luminosity-based dust radius requires a measurement of the dust temperature and an estimate of the UV/optical (accretion disk) luminosity that heats the dust. Cross-dispersed near-IR spectra with their relatively large wavelength range cover about half the hot dust SED in low-redshift AGN and additionally a considerable part of the accretion disk spectrum. The latter is expected to be the dominant contributor to the continuum flux up to \(\sim 1\)\(\mu\)m (Landt et al., 2011a,b). In order to obtain the necessary measurements, we decomposed the spectral continuum into these two components, accretion disk and dust emission, following the approach described in Landt et al. (2019). As already noted in Landt et al. (2019), due to the relatively small spectral aperture usually used for near-IR spectroscopy, the contribution of host galaxy light to the total observed continuum flux is expected to be negligible in most AGN. In short, we have first approximated the rest-frame wavelength range of \(\lesssim 1\)\(\mu\)m with the spectrum of a standard accretion disk, which we have subsequently subtracted from the total spectrum. For the calculation of the accretion disk spectrum we adopted the black hole mass listed in Bentz and Katz (2015) for a geometrical scaling factor of \(f=4.3\), where available, and otherwise we estimated it based on the relationship between black hole mass and near-IR virial product presented in Landt et al. (2013). The scaling of the accretion disk spectrum to the near-IR spectrum then readily gives the accretion rate. Furthermore, we assumed that the disk is relatively large and extends out to \(r_{\rm out}=10^{4}r_{\rm g}\). We then fitted the resultant hot dust spectrum at wavelengths \(>1\)\(\mu\)m with a blackbody, representing emission by large dust grains. Table 1 lists the physical parameters derived from the near-IR spectral decomposition. As in Landt et al. (2019), we then calculated luminosity-based dust radii, \(R_{\rm d,lum}\), from the best-fit dust temperatures assuming radiative equilibrium between the luminosity of the irradiating source and the dust:
\[\frac{L_{\rm uv}}{4\pi R_{\rm d,lum}^{2}}=4\sigma T^{4}\langle Q^{\rm em}\rangle, \tag{1}\]
where \(\sigma\) is the Stefan-Boltzmann constant and \(\langle Q^{\rm em}\rangle\) is the Planck-averaged emission efficiency. We have approximated \(L_{\rm uv}\) with the accretion disk luminosity, as given in Table 1, and have used \(\langle Q^{\rm em}\rangle=1\), which is appropriate for the case of a blackbody. The blackbody case is reached for grain sizes of \(a\gtrsim 0.4\)\(\mu\)m and \(\gtrsim 2\)\(\mu\)m for carbon and silicate dust, respectively (see Fig. 8 of Landt et al., 2019). As Landt et al. (2019,
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline Object name & \(z\) & log \(L_{\rm uv}\) & \(T_{\rm d}\) & log \(L_{\rm d}\) & \(R_{\rm d,lum}\) & Ref. & \(R_{\rm d,rev}\) & Ref. \\ & & (erg s\({}^{-1}\)) & (K) & (erg s\({}^{-1}\)) & (lt-days) & & (lt-days) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline
3C 351 & 0.372 & 46.63 & 1373 & 46.78 & 792 & R06 & 1203\(\pm\)542 & Lyu19 \\ PG 2233\(+\)134 & 0.326 & 46.85 & 1413 & 45.65 & 964 & This work & 343\(\pm\)29 & Lyu19 \\ PG 0953\(+\)414 & 0.234 & 46.45 & 1391 & 45.75 & 627 & This work & 566\({}^{+50}_{-38}\) & M19 \\ PG 0947\(+\)396 & 0.206 & 45.22 & 1406 & 44.96 & 149 & This work & 1629\(\pm\)23 & Lyu19 \\ PDS 456 & 0.184 & 47.41 & 1425 & 46.83 & 1806 & L08 & 1599\(\pm\)213* & G20 \\
3C 273 & 0.158 & 47.59 & 1443 & 46.76 & 2166 & L08 & 1656\(\pm\)59 & Lyu19 \\ PG 0052\(+\)251 & 0.155 & 46.07 & 1198 & 45.29 & 546 & L13 & 347\(\pm\)37 & Lyu19 \\ PG 1307\(+\)085 & 0.155 & 46.22 & 1307 & 45.25 & 545 & L13 & 310\(\pm\)40 & Lyu19 \\ PG 0026\(+\)129 & 0.145 & 46.41 & 1127 & 45.15 & 913 & L13 & 487\(\pm\)36 & Lyu19 \\ PG 1519\(+\)226 & 0.137 & 45.85 & 1538 & 45.24 & 257 & R06 & 170\(\pm\)90 & Lyu19 \\ PG 1612\(+\)261 & 0.131 & 45.67 & 1549 & 45.25 & 206 & R06 & 555\(\pm\)35 & Lyu19 \\ Mrk 876 & 0.129 & 46.09 & 1306 & 45.55 & 469 & L23 & 334\({}^{+42}_{-37}\) & M19 \\ PG 1415\(+\)451 & 0.114 & 45.92 & 1461 & 45.07 & 309 & R06 & 269\(\pm\)208 & Lyu19 \\ PG 0804\(+\)761 & 0.100 & 45.97 & 1314 & 45.70 & 405 & L13 & 600\(\pm\)21 & Lyu19 \\ PG 1211\(+\)143 & 0.081 & 46.22 & 1337 & 45.10 & 521 & L13 & 338\(\pm\)83 & Lyu19 \\ Mrk 478 & 0.079 & 46.11 & 1547 & 45.34 & 343 & R06 & 237\(\pm\)37 & Lyu19 \\ PG 1448\(+\)273 & 0.065 & 45.31 & 1477 & 44.55 & 150 & R06 & 263\(\pm\)30 & Lyu19 \\ PG 0844\(+\)349 & 0.064 & 46.18 & 1190 & 44.76 & 628 & L11 & 99\({}^{+13}_{-10}\) & M19 \\ Mrk 1513 & 0.063 & 46.33 & 1356 & 45.04 & 575 & L13 & 494\(\pm\)42 & Lyu19 \\ I Zw 1 & 0.060 & 45.58 & 1412 & 45.21 & 224 & G12 & 274\(\pm\)41 & Lyu19 \\ PG 1126\(-\)041 & 0.060 & 45.65 & 1515 & 45.09 & 211 & R06 & 523\(\pm\)26 & Lyu19 \\ Mrk 734 & 0.050 & 45.70 & 1597 & 44.48 & 201 & This work & 103\(\pm\)7 & Lyu19 \\ Mrk 231 & 0.041 & 45.67 & 1529 & 45.71 & 212 & This work & 393\(\pm\)83* & K09 \\ Mrk 841 & 0.036 & 45.22 & 1437 & 44.23 & 143 & P22 & 110\(\pm\)15 & Lyu19 \\ Mrk 110 & 0.035 & 46.00 & 1452 & 44.56 & 343 & L11 & 117\(\pm\)6 & K14 \\ Mrk 509 & 0.034 & 45.77 & 1398 & 44.79 & 284 & L08 & 121\(\pm\)2 & K14 \\ Ark 120 & 0.033 & 45.45 & 1102 & 45.16 & 316 & L08 & 138\(\pm\)18 & K14 \\
3C 120 & 0.033 & 45.28 & 1389 & 44.59 & 164 & L13 & 94\({}^{+4}_{-7}\) & R18 \\ Mrk 817 & 0.031 & 45.65 & 1401 & 44.53 & 246 & L11 & 93\({}^{+9}_{-9}\) & K14 \\ Mrk 290 & 0.030 & 45.36 & 1353 & 44.26 & 189 & L08 & 124\(\pm\)3 & Lyu19 \\ H 2106\(-\)099 & 0.027 & 45.28 & 1342 & 44.27 & 175 & L08 & 303\(\pm\)36* & G23 \\ Mrk 335 & 0.026 & 45.63 & 1308 & 44.40 & 276 & L08 & 168\(\pm\)6 & K14 \\ Mrk 79 & 0.022 & 44.82 & 1364 & 44.21 & 100 & L11 & 68\(\pm\)5 & K14 \\ Mrk 1239 & 0.019 & 44.85 & 1443 & 44.64 & 92 & R06 & 189\(\pm\)30* & G23 \\ NGC 5548 & 0.017 & 44.50 & 1450 & 44.02 & 61 & L19 & 75\(\pm\)8 & L19 \\ NGC 7469 & 0.016 & 45.28 & 1551 & 44.19 & 131 & L08 & 85\(\pm\)1 & K14 \\ NGC 3783 & 0.010 & 45.23 & 1472 & 44.01 & 138 & P22 & 131\({}^{+25}_{-50}\) & E23 \\ NGC 4593 & 0.009 & 44.72 & 1380 & 43.71 & 87 & L08 & 42\(\pm\)1 & K14 \\ NGC 4151 & 0.003 & 43.24 & 1328 & 43.06 & 17 & L08 & 46\(\pm\)1 & K14 \\ \hline \end{tabular} The columns are: (1) object name; (2) redshift; (3) total accretion disk luminosity; for a blackbody emissivity (4) dust temperature; (5) total dust luminosity and (6) dust radius; (7) reference for the cross-dispersed near-IR spectral data used for the continuum fits; (8) near-IR dust reverberation lag time in the rest-frame taken from reference in (9). References are E23: Esser et al. (2023), G12: Garcia-Rissmann et al. (2012), G20: Gravity Collaboration et al. (2020), G23: Gravity Collaboration et al. (2023), K09: Kishimoto et al. (2009), K14: Koshida et al. (2014), L08: Landt et al. (2008), L11: Landt et al. (2011b), L13: Landt et al. (2013), L19: Landt et al. (2019), L23: Landt et al. (2023), Lyu19: Lyu et al. (2019), M19: Minezaki et al. (2019), P22: Prieto et al. (2022), R06: Riffel et al. (2006) and R18: Ramolla et al. (2018).
\end{tabular}
\end{table}
Table 1: Sample properties and dust radius measurements. An asterisk (*) marks a dust radius measured by near-IR interferometry rather than reverberation mapping.
2023) have shown, radii for dust composed of small grains (\(a\lesssim 0.1\)\(\mu\)m) are a factor of \(\sim 6\) and \(\sim 10\) larger if the composition is mainly carbonaceous or silicates, respectively. The error on the luminosity-based dust radius is determined by the error on the temperature and the error on the total accretion disk luminosity. Due to the relatively large wavelength coverage of the cross-dispersed near-IR spectra, which are also of relatively high signal-to-noise, the error on the temperature is small (\(\sim 10-30\) K; Landt et al., 2019, 2023). The accretion disk luminosity is mostly determined by the accretion rate, which is set by the flux level of the near-IR spectrum. As discussed in Landt et al. (2019, 2023), since the telluric standard star is observed close in time with the science target, photometric correction factors are on average \(\sim 10-15\%\). Therefore, errors on the luminosity-based dust radius are assumed to be \(\sim 10\%\).
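To make the numbers above concrete, the following minimal Python sketch (our illustration, not part of the published analysis; the constants and the function name are ours) evaluates eq. (1) for the blackbody case \(\langle Q^{\rm em}\rangle=1\), using the NGC 5548 entry of Table 1 as input:

```python
import numpy as np

SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
LT_DAY_CM = 2.59e15    # one light-day [cm]

def dust_radius_lum(L_uv, T, Q_em=1.0):
    """Luminosity-based dust radius from eq. (1):
    L_uv / (4 pi R^2) = 4 sigma T^4 <Q^em>, i.e.
    R = sqrt(L_uv / (16 pi sigma T^4 <Q^em>))."""
    return np.sqrt(L_uv / (16.0 * np.pi * SIGMA_SB * T**4 * Q_em))

# NGC 5548 entry of Table 1: log L_uv = 44.50, T_d = 1450 K
R = dust_radius_lum(10**44.50, 1450.0)
print(f"R_d,lum = {R / LT_DAY_CM:.0f} lt-days")  # ~61 lt-days, matching column (6)
# Small grains (<Q^em> < 1) enlarge R by a factor ~6 (carbon) or ~10 (silicates).
```

Recovering the tabulated \(R_{\rm d,lum}\) for NGC 5548 provides a simple consistency check of column (6) of Table 1.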
## 4 Results and Discussion
Based on two well-studied sources, namely, NGC 5548 and Mrk 876, Landt et al. (2023) have recently suggested that there are considerable similarities between the hot dust in AGN and that present in protoplanetary disks around young stars. In particular, these similarities are: (i) the prevalence of large dust grain sizes, which in protoplanetary disks might have formed in the midplane of the disk by differential settling; (ii) the presence of an 'inner hole' beyond the dust-free zone expected due to dust sublimation, which in protoplanetary disks is thought to be a cavity filled with gaseous disk material that can change its optical thickness and thus influence the location of the puffed-up so-called 'wall' or 'inner rim'; and (iii) a general temperature-radius relationship consistent with that for a dusty, flared and passively illuminated protoplanetary disk. Here we will test and expand on this proposition by using a sample of \(\sim 40\) AGN.
### The prevalence of large grains in the hot dust of AGN
As discussed in Landt et al. (2023), the assumption of dust, which is optically thick to the UV/optical radiation heating it and optically thin to its own IR radiation, allows us to constrain the astrochemistry of the dust through the dust emissivity parameter \(Q^{\rm em}\) (see eq. 1), _if the dust radius can be measured by an independent method._ Fortunately, several such methods exist, including, e.g., the dust response time measured through reverberation mapping and a geometric dust radius measurement based on near-IR interferometry. Here we have compiled dust radius measurements from these two independent methods for a sample of AGN with available near-IR spectra that were suitable to derive luminosity-based dust radii. Fig. 1 plots the comparison between the luminosity-based dust radius and that obtained by an independent method, which was mostly near-IR photometric reverberation mapping. We note that the two dust radii are generally not contemporaneous and so could be affected by the variability effects of the irradiating luminosity, although Landt et al. (2019) argued that the hot dust radius is not set by sublimation and is probably luminosity-invariant.
Fig. 1 shows that the large majority of the data points cluster around the line of equality for a blackbody emissivity, which corresponds to the largest dust grains. The mean ratio is \(R_{\rm d,lum}/R_{\rm d,rev}=1\), with a dispersion around the mean of a factor of \(\sim 2\). We note that for most sources the luminosity-based dust radius is _larger_ than the dust radius measured by the independent method. Assuming instead an emissivity corresponding to small dust grains of predominantly carbonaceous or silicate composition would move the line of equality to luminosity-based dust radii larger by a factor of \(\sim 6\) (green solid line) and \(\sim 10\) (blue solid line), respectively. Although in a handful of sources this possibility cannot be excluded, in general small grains do not seem to dominate the hot dust composition.
### A flared, passively illuminated (hot) dusty disk with an 'inner hole' in AGN
It is in general of high interest to understand the origin of the scatter in Fig. 1 and whether it is caused by AGN physics rather than being mainly due to variability and/or measurement uncertainties. Such an investigation is particularly timely, since considerable research is under way to help establish the hot dust radius as a suitable standard candle for cosmological studies using AGN (e.g. Honig et al., 2017). A first indication that a considerable inconsistency can be present between the luminosity-based hot dust radius and that obtained by other means was found for the high-luminosity AGN Mrk 876 by Landt et al. (2023). The difference between the two values of a factor of \(\sim 2\) could be well explained if the dust was assembled in a flared, disk-like geometry, which is naturally expected to be carved out by the anisotropy of the accretion disk illumination (Kawaguchi and Mori, 2010, 2011). Then, the dust is expected to be illuminated by the UV/optical accretion disk luminosity reduced by a factor \(\cos\theta\), with \(\theta\) the angle between the accretion disk rotation axis and the location of the dust. Since such a pronounced difference was not found for the low-luminosity AGN NGC 5548 by the study of Landt et al. (2019), there is a strong possibility that the anisotropy effect depends on AGN luminosity. An indication that the accretion disk illumination anisotropy considered by Kawaguchi and Mori (2010, 2011) affects low- and high-luminosity AGN differently was also found by Minezaki et al. (2019). Their sample showed a best-fit slope for the logarithmic reverberation dust radius versus optical luminosity relationship of \(\sim 0.4\), i.e. shallower than the slope of 0.5 predicted by radiative equilibrium considerations (see eq. 1).
Figure 1: The logarithmic near-IR dust reverberation lag time versus the logarithmic luminosity-based dust radius for large dust grain sizes. The solid black line indicates the line of equality, which corresponds to the mean ratio between the two radii, whereas the dashed black lines mark the \(1\)\(\sigma\) region around it, corresponding to a factor of 2. Luminosity-based dust radii corresponding to small-grain (\(a\lesssim 0.1\)\(\mu\)m) carbon and silicate dust are expected to be larger than the values for large dust grains by a factor of \(\sim 6\) (solid green line) and \(\sim 10\) (solid blue line), respectively.
Fig. 2 (left panel) investigates this conjecture, where we plot the total accretion disk luminosity estimated from the fits to the near-IR spectroscopy versus the ratio between the luminosity-based dust radius and the dust radius measured by reverberation mapping (or near-IR interferometry). There is clearly a trend for the dust radius ratio to increase with luminosity and so for the disk structure to be more pronounced in high-luminosity sources. The significance of a linear correlation is \(P=95.5\%\). Fig. 2 (right panel) plots the dependence of the dust radius ratio on the dust temperature. The dust temperatures reach values well above the sublimation temperature for silicates (of \(\sim 1400\) K) and, therefore, the hot dust in AGN is most likely carbonaceous, as already argued previously (e.g. Mor et al., 2009; Landt et al., 2011a; Landt et al., 2019). A trend is present also in Fig. 2 (right panel) that indicates that the 'inner hole' of the disk is more enlarged in high-luminosity sources, if we assume a single chemical species for the hot dust and so a unique sublimation temperature (e.g. \(\sim 1900\) K, corresponding to carbonaceous dust). The significance of a linear correlation is \(P=93.5\%\). However, we note that the scatter is considerable in both relationships displayed in Fig. 2.
### The outer dusty edge of accretion disks in AGN
A considerably tighter relationship is present between the hot dust covering factor, defined as the ratio between the total dust luminosity and the total accretion disk luminosity, and the dust radius ratio (see Fig. 3). The significance of a linear correlation is \(P>99.999\%\) for a best-fit relationship of \(\log(L_{\rm d}/L_{\rm UV})=(-0.72\pm 0.05)-(0.84\pm 0.15)\cdot\log(R_{\rm d,lum}/R_{ \rm d,rev})\). This relationship could be interpreted as evidence that the ratio between the luminosity-based dust radius and that measured by reverberation mapping (or near-IR interferometry) is a suitable indicator of the dust geometry. Then, the larger this ratio, the more pronounced the flared, dusty disk structure and so the lower the expected dust covering factor. In other words, as the ratio increases, the dust geometry moves from almost circular to a short disky component with a large flare and eventually to a long disky component with a smaller flare. A sketch of this possible
Figure 2: The logarithmic total accretion disk luminosity (left panel) and the dust temperature (right panel) versus the logarithm of the ratio between the luminosity-based dust radius and the dust radius measured by reverberation mapping (or near-IR interferometry).
change in dust geometry is exemplified in Fig. 4. Baskin and Laor (2018) investigated the geometry of a dusty, inflated accretion disk structure. Whereas they assumed largely varying grain sizes (spanning about two orders of magnitude), which resulted in the sublimation process occurring over a transition zone rather than at a single location, the scale height of this geometry could be related to the change in dust geometry proposed here. In their model, the disk height is set by the balance between the effects of gravity and radiation pressure and depends only on the accretion rate and the opacity of the material (see their eq. 22). Therefore, it predicts that more luminous AGN have higher dust covering factors, which is contrary to the observed relationship presented in Fig. 2.
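For orientation, a minimal Python sketch (ours) that evaluates the central values of the best-fit relation quoted above, ignoring the quoted uncertainties on the fit coefficients:

```python
import numpy as np

def covering_factor(radius_ratio):
    """Best-fit relation quoted above (central values only):
    log(L_d/L_UV) = -0.72 - 0.84 * log(R_d,lum / R_d,rev)."""
    return 10.0**(-0.72 - 0.84 * np.log10(radius_ratio))

for r in (0.5, 1.0, 2.0):
    print(f"R_d,lum/R_d,rev = {r}: L_d/L_UV ~ {covering_factor(r):.2f}")
# 0.5 -> ~0.34, 1.0 -> ~0.19, 2.0 -> ~0.11
```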
Recently, Landt et al. (2023) presented the comparison of the mean and variable (rms) spectrum in the high-luminosity AGN Mrk 876, which showed clearly that at least two hot dust components are present in AGN. They found that the total dust emission was dominated (at the \(\sim 70\%\) level) by the non-variable component, which could have its origin in the outer (flared) part of the dusty disk or the outer dusty edge of the accretion disk or both. How would such a second hot dust component manifest itself in the relationship shown in Fig. 3? We would need to know the viewing angle, \(\theta\), in order to estimate the accretion disk luminosity as seen by the dust. Then, the true dust covering factor should be calculated relative to this (reduced) \(L_{\rm UV}\). In other words, if the main origin of the second dust component is the flare, then our assumed accretion disk luminosity is close to the correct value. However, if the main origin of the second dust component is instead in the accretion disk, we should expect to have _overestimated_ the accretion disk luminosity as seen by the dust and, therefore, _underestimated_ the dust covering factor. If indeed the dust radius ratio is a suitable indicator of the dust geometry as described above, as this value increases, the flare becomes less important and the accretion disk dust more important, thus apparently reducing the
Figure 3: The logarithmic hot dust covering factor versus the logarithm of the ratio between the luminosity-based dust radius and the dust radius measured by reverberation mapping (or near-IR interferometry). The best-fit relationship is shown as the black solid line. See text for more details.
dust covering factor and tightening the relationship displayed in Fig. 3. We note that although the dust in the accretion disk will not necessarily be externally illuminated, if it is composed of \(\mu\)m-sized grains, it will be well-coupled to the accretion disk gas and so affected by its (transmitted) total luminosity in a similar way. Stalevski et al. (2016) investigated in detail the effects of the anisotropy of the accretion disk luminosity on the covering factor for the torus dust. They found that for type 1 AGN low and high covering factors will be underestimated and overestimated, respectively, with the effect being more pronounced for the former than for the latter (see their Fig. 10). Their results could partly explain the steep slope we find for the relationship in Fig. 3 and also the fact that we find dust covering factors exceeding unity as well as extremely low values (of only a few per cent).
## 5 Conclusions
This study used a sample of \(\sim 40\) AGN with available cross-dispersed near-IR spectroscopy to perform a comparison between luminosity-based dust radii and those obtained by independent methods, e.g. by reverberation mapping. The main aim was to reveal the astrochemistry and geometry of the hot dust in AGN. The main results can be summarized as follows.
(i) It could be shown that luminosity-based dust radii obtained with the assumption of a blackbody emissivity were consistent within a factor of \(\sim 2\) with dust radii obtained by independent methods. Therefore, large dust grains (with sizes of a few \(\mu\)m) appear to be ubiquitous in the hot dust of AGN. Assuming instead an emissivity corresponding to small dust grains implied much larger luminosity-based dust radii (by factors of \(\sim 6-10\)).
(ii) We found a tight relationship between the dust covering factor and the ratio between the luminosity-based dust radius and the dust radius measured by reverberation mapping (or near-IR interferometry). This new relationship can be understood if one assumes that the dust radius ratio is a suitable indicator of the dust geometry and that hot dust is present in the accretion disk in addition to that assembled in a flared, dusty disk, as proposed by Landt et al. (2023).
(iii) The effects of the anisotropy of the accretion disk illumination might be different for low- and high-luminosity AGN, as suggested by the flattening of the slope at high luminosities in the dust radius-luminosity plane of Minezaki et al. (2019). We find a trend that the dust radius ratio increases with AGN
Figure 4: Sketch of a possible change in the dust geometry as the ratio between the luminosity-based dust radius and the dust radius measured by reverberation mapping (or near-IR interferometry) increases and thus the dust covering factor decreases.
luminosity, indicating a change in dust geometry whereby the disk structure becomes more pronounced in high-luminosity AGN.
## Conflict of Interest Statement
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## Author Contributions
The first author has performed all original measurements and analysis of the data, which were not already reported elsewhere. The first author has written the entire manuscript.
## Funding
HL acknowledges a Daphne Jackson Fellowship sponsored by the Science and Technology Facilities Council (STFC), UK, and support from STFC grants ST/P000541/1, ST/T000244/1 and ST/X001075/1.
## Data Availability Statement
The near-IR spectra used for the first time in this work are available on request from the first author.
|
2309.12086 | Outflow energy and black-hole spin evolution in collapsar scenarios | We explore the collapsar scenario for long gamma-ray bursts by performing
axisymmetric neutrino-radiation magnetohydrodynamics simulations in full
general relativity for the first time. In this paper, we pay particular
attention to the outflow energy and the evolution of the black-hole spin. We
show that for a strong magnetic field with an aligned field configuration
initially given, a jet is launched by magnetohydrodynamical effects before the
formation of a disk and a torus, and after the jet launch, the matter accretion
onto the black hole is halted by the strong magnetic pressure, leading to the
spin-down of the black hole due to the Blandford-Znajek mechanism. The
spin-down timescale depends strongly on the magnetic-field strength initially
given because the magnetic-field strength on the black-hole horizon, which is
determined by the mass infall rate at the jet launch, depends strongly on the
initial condition, although the total jet-outflow energy appears to be huge
$>10^{53}$ erg depending only weakly on the initial field strength and
configuration. For the models in which the magnetic-field configuration is not
suitable for quick jet launch, a torus is formed and after a long-term
magnetic-field amplification, a jet can be launched. For this case, the matter
accretion onto the black hole continues even after the jet launch and
black-hole spin-down is not found. We also find that the jet launch is often
accompanied by a powerful explosion of the entire star with the explosion
energy of order $10^{52}$ erg by magnetohydrodynamical effects. We discuss an
issue of the overproduced energy for the early-jet-launch models. | Masaru Shibata, Sho Fujibayashi, Alan Tsz-Lok Lam, Kunihito Ioka, Yuichiro Sekiguchi | 2023-09-21T13:57:25Z | http://arxiv.org/abs/2309.12086v1 | # Outflow energy and black-hole spin evolution in collapsar scenarios
###### Abstract
We explore the collapsar scenario for long gamma-ray bursts by performing axisymmetric neutrino-radiation magnetohydrodynamics simulations in full general relativity for the first time. In this paper, we pay particular attention to the outflow energy and the evolution of the black-hole spin. We show that for a strong magnetic field with an aligned field configuration initially given, a jet is launched by magnetohydrodynamical effects before the formation of a disk and a torus, and after the jet launch, the matter accretion onto the black hole is halted by the strong magnetic pressure, leading to the spin-down of the black hole due to the Blandford-Znajek mechanism. The spin-down timescale depends strongly on the magnetic-field strength initially given because the magnetic-field strength on the black-hole horizon, which is determined by the mass infall rate at the jet launch, depends strongly on the initial condition, although the total jet-outflow energy appears to be huge \(>10^{53}\,\mathrm{erg}\) depending only weakly on the initial field strength and configuration. For the models in which the magnetic-field configuration is not suitable for quick jet launch, a torus is formed and after a long-term magnetic-field amplification, a jet can be launched. For this case, the matter accretion onto the black hole continues even after the jet launch and black-hole spin-down is not found. We also find that the jet launch is often accompanied by a powerful explosion of the entire star with the explosion energy of order \(10^{52}\,\mathrm{erg}\) by magnetohydrodynamical effects. We discuss an issue of the overproduced energy for the early-jet-launch models.
## I Introduction
The collapsar model [1; 2] is the widely accepted model for explaining the central engine of long gamma-ray bursts. In this model, one supposes a massive, rotating, and magnetised progenitor star that collapses into a black hole. After the formation of a spinning black hole, one assumes that the black hole is penetrated by a poloidal magnetic field strong enough that the Poynting luminosity due to the Blandford-Znajek mechanism [3] is sufficiently high. Motivated by this idea, a number of general relativistic magnetohydrodynamics simulations (in a fixed black-hole spacetime) have been performed in the last two decades (e.g., Refs. [4; 5; 6; 7; 8; 9; 10]), and they indicated that jets are indeed launched in the presence of hypothetically assumed strong poloidal magnetic fields that penetrate a spinning black hole.
In the force-free approximation, the Poynting luminosity associated with the Blandford-Znajek mechanism is approximately written as (e.g., Ref. [11])
\[\frac{dE}{dt}\approx\frac{4}{3}(B^{r})^{2}M_{\mathrm{BH}}^{4}\hat{r}_{+}^{2 }(\hat{r}_{+}+2)\omega(\Omega_{\mathrm{BH}}-\omega), \tag{1}\]
where \(B^{r}\) is the typical value of the (lab-frame) radial magnetic-field strength on the black-hole horizon, \(M_{\mathrm{BH}}\) is the black-hole mass, \(\hat{r}_{+}=1+\sqrt{1-\chi^{2}}\) with \(\chi\) being the black-hole spin and \(M_{\mathrm{BH}}\hat{r}_{+}\) being the radius of the black-hole horizon in the Boyer-Lindquist coordinates (e.g., Ref. [12]), \(\omega\) is the angular velocity of the magnetic field lines, and \(\Omega_{\mathrm{BH}}\) is the angular velocity of the black hole written as (e.g., Refs. [12; 13])
\[\Omega_{\mathrm{BH}}=\frac{\chi}{2M_{\mathrm{BH}}\hat{r}_{+}}. \tag{2}\]
To derive Eq. (1) we assume that \(B^{r}\) and \(\omega\) are constant on the black-hole horizon. Throughout this paper we use the geometrical units in which \(c=1=G\) where \(c\) and \(G\) are the speed of light and gravitational constant, respectively. We note that \(dM_{\mathrm{BH}}/dt=-dE/dt\) in the absence of matter accretion onto the black hole, and that the source of the Poynting luminosity is the rotational kinetic energy of the black hole. We also note that Eq. (1) is valid only when the poloidal magnetic field penetrates the entire surface of the black-hole horizon. If the poloidal magnetic field penetrates a part of the surface, the luminosity is lower.
Although \(\omega\) is a function of spatial coordinates determined by the detailed magnetic-field profile, we assume it as a constant for simplicity and set it as \(\omega=f\Omega_{\mathrm{BH}}\) where \(f\) is assumed to be a constant as well because previous numerical studies often showed that \(f\) is \(\sim 1/2\) (see, e.g., Ref. [11]). Then, Eq. (1) is written as
\[\frac{dM_{\mathrm{BH}}}{dt} \approx -\frac{f(1-f)}{3}\left(B^{r}M_{\mathrm{BH}}\chi\right)^{2}(\hat{r} _{+}+2) \tag{3}\] \[\approx -1.1\times 10^{50}\,f_{1/2}\frac{1-f}{1/2}\left(\frac{M_{\mathrm{BH}} }{10M_{\odot}}\right)^{2}\] \[\times\left(\frac{B^{r}}{10^{14}\,\mathrm{G}}\right)^{2}\left( \frac{\chi}{0.7}\right)^{2}\left(\frac{\hat{r}_{+}+2}{4}\right)\,\mathrm{erg/s},\]
where \(f_{1/2}=f/(1/2)\). In the following we suppose the typical values of \(B^{r}\), \(M_{\mathrm{BH}}\), and \(\chi\) as \(10^{14}\,\mathrm{G}\), \(\sim 10M_{\odot}\), and
\(\gtrsim 0.5\) because with these values, the typical luminosity of long gamma-ray bursts can be reproduced assuming that the conversion efficiency of the Poynting luminosity to gamma-ray luminosity is of order 10% and an opening angle of the jet is \(5^{\circ}\)-\(10^{\circ}\).
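As a numerical cross-check of eq. (3), the following Python sketch (ours; only standard unit conversions are used, and the function name is an assumption) converts \(B^{r}\) and \(M_{\rm BH}\) to geometrical units via \(B\sqrt{G}/c^{2}\) and \(GM/c^{2}\) and converts the luminosity back to CGS units via \(c^{5}/G\):

```python
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33  # CGS

def bz_luminosity(B_gauss, M_bh_msun, chi, f=0.5):
    """Poynting luminosity of eq. (3) [erg/s], assuming omega = f * Omega_BH
    and a uniform radial field B^r on the horizon."""
    M = G * M_bh_msun * Msun / c**2        # mass in cm (geometrical units)
    B = B_gauss * np.sqrt(G) / c**2        # field in 1/cm (geometrical units)
    r_plus = 1.0 + np.sqrt(1.0 - chi**2)   # horizon radius in units of M
    L_geo = (f * (1.0 - f) / 3.0) * (B * M * chi)**2 * (r_plus + 2.0)
    return L_geo * c**5 / G                # back to erg/s

print(f"{bz_luminosity(1e14, 10.0, 0.7):.2e} erg/s")
# ~1.0e50 erg/s; eq. (3) quotes 1.1e50 with (r_+ + 2)/4 normalized to unity.
```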
Associated with the energy extraction, the angular momentum of the black hole is also extracted with the rate (e.g., Ref. [11])
\[\frac{dJ_{\rm BH}}{dt} \approx -\frac{4}{3}(B^{r})^{2}M_{\rm BH}^{4}\hat{r}_{+}^{2}(\hat{r}_{+}+2 )(\Omega_{\rm BH}-\omega) \tag{4}\] \[= -\frac{2(1-f)}{3}(B^{r})^{2}(\hat{r}_{+}+2)\hat{r}_{+}M_{\rm BH}^{ 3}\chi.\]
Before proceeding, a relation between the loss of the angular momentum and mass of the black hole is derived. From Eqs. (3) and (4), we obtain
\[J_{\rm BH}\frac{dJ_{\rm BH}}{dt}=\frac{2M_{\rm BH}^{3}\hat{r}_{+}}{f}\frac{dM_{\rm BH}}{dt}, \tag{5}\]
where we used \(J_{\rm BH}=M_{\rm BH}^{2}\chi\). Because \(\hat{r}_{+}\) depends only weakly on \(M_{\rm BH}\) and \(J_{\rm BH}\) for moderately spinning black holes, we here approximate that it is constant and integrate Eq. (5) in time, giving
\[\Delta J_{\rm BH}^{2}\approx\frac{\hat{r}_{+}}{f}\Delta M_{\rm BH}^{4}, \tag{6}\]
where \(\Delta J_{\rm BH}^{2}\) and \(\Delta M_{\rm BH}^{4}\) are the total changes of \(J_{\rm BH}^{2}\) and \(M_{\rm BH}^{4}\) during the black-hole evolution by the Blandford-Znajek mechanism. Setting \(M_{\rm BH}=M_{0}+\Delta M\) where \(M_{0}\) is the initial black-hole mass and \(M_{0}\gg|\Delta M|\) with \(\Delta M<0\) is always satisfied, we obtain \(\Delta M_{\rm BH}^{4}\approx 4M_{0}^{3}\Delta M\).
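The consistency of eq. (5) with eqs. (3) and (4) is easily checked symbolically; in the following sketch (ours) the common factor \((1-f)(B^{r})^{2}(\hat{r}_{+}+2)/3\) is dropped from both rates because it cancels in the ratio:

```python
import sympy as sp

f, M, chi, rp = sp.symbols('f M chi r_+', positive=True)
# eqs. (3) and (4) up to the common factor (1-f)/3 * (B^r)^2 * (r_+ + 2)
dMdt = -f * (M * chi)**2
dJdt = -2 * rp * M**3 * chi
J = M**2 * chi
# eq. (5): J dJ/dt = (2 M^3 r_+ / f) dM/dt
print(sp.simplify(J * dJdt / dMdt - 2 * M**3 * rp / f))  # -> 0
```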
If a substantial amount of the angular momentum is extracted from the black hole, \(\Delta J_{\rm BH}^{2}\) may be approximated by \(J_{0}^{2}\) where \(J_{0}\) is the initial value of \(J_{\rm BH}\). Then, we obtain
\[|\Delta M| \approx \frac{f}{4\hat{r}_{+}}\left(\frac{J_{0}}{M_{0}^{2}}\right)^{2}M_{0} \approx 5.5\times 10^{53}\,f_{1/2}\left(\frac{\hat{r}_{+}}{2}\right)^{-1}\left(\frac{\chi_{0}}{0.7}\right)^{2}\left(\frac{M_{0}}{10M_{\odot}}\right)\,{\rm erg}, \tag{7}\]
where \(\chi_{0}=J_{0}/M_{0}^{2}\). Thus, the total energy budget for the spinning black holes with typical mass of \(M_{0}\gtrsim 4M_{\odot}\) is larger than \(10^{53}\,{\rm erg}\) for \(\chi_{0}\gtrsim 0.5\) if \(f_{1/2}\sim 1\). The total energy of gamma-ray bursts (including the afterglow and associated supernova) is less than \(10^{53}\,{\rm erg}\) (typically \(\lesssim 10^{52}\,{\rm erg}\)) for the majority [14], so that the spin angular momentum of the black hole should not be entirely transferred to the matter surrounding the black hole during the stages of the prompt gamma-ray emission, its afterglow, and associated supernova (unless the factor \(f\) is extremely small); otherwise, they would have had to be extremely bright.
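Numerically, eq. (7) gives (a minimal Python sketch of ours; the exact \(\hat{r}_{+}\) for \(\chi_{0}=0.7\) is used, so the result is slightly larger than the value quoted with \(\hat{r}_{+}\) normalized to 2):

```python
import numpy as np

c, Msun = 2.998e10, 1.989e33  # CGS

def bz_energy_budget(M0_msun, chi0, f=0.5):
    """Total extractable energy of eq. (7): |Delta M| ~ (f / 4 r_+) chi0^2 M0 c^2."""
    r_plus = 1.0 + np.sqrt(1.0 - chi0**2)
    return (f / (4.0 * r_plus)) * chi0**2 * M0_msun * Msun * c**2

print(f"{bz_energy_budget(10.0, 0.7):.1e} erg")  # ~6.4e53 erg (5.5e53 for r_+ -> 2)
```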
From the spin angular momentum of the black hole, \(J_{\rm BH}=M_{\rm BH}^{2}\chi\), and the angular-momentum extraction rate of Eq. (4), we can estimate the timescale of the spin-down as
\[\tau := \frac{J_{\rm BH}}{|dJ_{\rm BH}/dt|}=\frac{3}{2(1-f)(B^{r})^{2}\hat {r}_{+}(\hat{r}_{+}+2)M_{\rm BH}} \tag{8}\] \[\approx 1.0\times 10^{4}\,\left(\frac{1-f}{1/2}\right)^{-1}\left(\frac{B^{r }}{10^{14}\,{\rm G}}\right)^{-2}\] \[\quad\times\left(\frac{M_{\rm BH}}{10M_{\odot}}\right)^{-1}\left( \frac{\hat{r}_{+}(\hat{r}_{+}+2)}{8}\right)^{-1}\,{\rm s}.\]
For the duration of a gamma-ray burst of \(\Delta t\), the spin angular momentum of the black hole decreases to \(J_{0}\exp(-\Delta t/\tau)\), and thus, for \(\Delta t\ll\tau\),
\[|\Delta J_{\rm BH}^{2}|=J_{0}^{2}\left[1-\exp(-2\Delta t/\tau)\right]\approx 2 J_{0}^{2}(\Delta t/\tau), \tag{9}\]
where we assumed that \(\tau\) is approximately constant. Hence,
\[|\Delta M| \approx \frac{f_{1/2}\Delta t}{4\hat{r}_{+}\tau}\left(\frac{J_{0}}{M_{0} ^{2}}\right)^{2}M_{0} \tag{10}\] \[\approx 1.1\times 10^{52}\,f_{1/2}\left(\frac{1-f}{1/2}\right)\left( \frac{\Delta t}{10^{2}\,{\rm s}}\right)\left(\frac{B^{r}}{10^{14}\,{\rm G}} \right)^{2}\] \[\quad\times\left(\frac{\hat{r}_{+}+2}{4}\right)\left(\frac{\chi_ {0}}{0.7}\right)^{2}\left(\frac{M_{0}}{10M_{\odot}}\right)^{2}\,{\rm erg},\]
yielding the typically required magnitude for the long gamma-ray burst energy with the typical duration of \(\Delta t=10\)-\(100\,{\rm s}\). This analysis suggests that if a few percent (i.e., \(\Delta t/\tau\)) of the rotational kinetic energy of a spinning black hole is liberated, the total energy of long gamma-ray bursts can be explained assuming that the Blandford-Znajek mechanism is the primary mechanism of the central engine.
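The following self-contained Python sketch (ours) evaluates eqs. (8) and (10) for the fiducial parameters; again, the exact \(\hat{r}_{+}\) is used instead of the normalized values, so the numbers differ slightly from those quoted above:

```python
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33  # CGS

def spin_down_timescale(B_gauss, M_msun, chi, f=0.5):
    """Spin-down timescale of eq. (8) [s]."""
    M = G * M_msun * Msun / c**2           # mass in cm
    B = B_gauss * np.sqrt(G) / c**2        # field in 1/cm
    r_plus = 1.0 + np.sqrt(1.0 - chi**2)
    return 3.0 / (2.0 * (1.0 - f) * B**2 * r_plus * (r_plus + 2.0) * M) / c

B, M0, chi0, f, dt = 1e14, 10.0, 0.7, 0.5, 100.0
tau = spin_down_timescale(B, M0, chi0, f)
r_plus = 1.0 + np.sqrt(1.0 - chi0**2)
# eq. (10): |Delta M| = (f chi0^2 M0 / 2 r_+) (dt / tau) for dt << tau
E = (f * chi0**2 / (2.0 * r_plus)) * (dt / tau) * M0 * Msun * c**2
print(f"tau ~ {tau:.1e} s, E(100 s) ~ {E:.1e} erg")  # ~1.3e4 s, ~1e52 erg
```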
In recent papers [15; 16], the authors suggest that the black hole may spin down significantly within a timescale of order \(10\,{\rm s}\) in the context of the collapsar scenario. However, as we illustrated above, if the black hole is formed with an appreciable spin magnitude of \(\chi\gtrsim 0.5\), the total rotational kinetic energy available for the extraction by the Blandford-Znajek mechanism is \(\gtrsim 10^{53}\,{\rm erg}\), which is too large to explain the observed energy of long gamma-ray bursts (and afterglows) with the typical luminosity of \(dE/dt\sim 10^{50}\,{\rm erg/s}\). Our analysis suggests that only a fraction of the rotational kinetic energy and angular momentum of a black hole should be extracted to reproduce typical long gamma-ray bursts and afterglows.
To examine how much spin angular momentum is extracted from spinning black holes during stellar collapse, we perform a neutrino-radiation magnetohydrodynamics simulation in full general relativity. For the magnetohydrodynamics simulations we employ an axisymmetric numerical-relativity code developed in Refs. [17; 18] with a modification by which the spin angular momentum of black holes is better resolved (see Appendix B of
Ref. [19]). For the initial condition, we employ a model from the stellar evolution calculation of Ref. [20] that results in a rapidly rotating progenitor star, and construct initial data composed of a spinning black hole and infalling matter with weak poloidal magnetic fields by using the method developed in our previous paper [19]. We will show that the black-hole spin-down timescale becomes very short (\(\leq 100\,\mathrm{s}\)) only when the initial magnetic-field strength is high in the vicinity of the black hole and the field is well aligned with its spin axis; for a reasonable choice of the initial field strength, the spin-down timescale is much longer than the typical duration of long gamma-ray bursts, or the spin-up by the matter accretion onto the black hole overcomes the spin-down by the Blandford-Znajek mechanism. We also show that the magnetic-field strength on the horizon at the jet launch is determined by the mass infall rate (i.e., the ram pressure) at the launch time, and thus, for the later jet-launch models, the magnetic-field strength on the black-hole horizon is lower and the spin-down timescale becomes longer.
The paper is organized as follows. In Sec. II, we summarize the setup in the present numerical simulation. In Sec. III, numerical results are presented focusing on the mechanism of the jet launch in the present setting, on the outflow energy, and on the evolution of the black-hole spin by the Blandford-Znajek mechanism. Section IV is devoted to a summary and discussion, in particular on the problem of the overproduced energy by the Blandford-Znajek mechanism. Throughout this paper, \(k_{\mathrm{B}}\) denotes Boltzmann's constant.
## II Simulation setup
We employ the same formulation and simulation code as in Refs. [17; 18] for the present neutrino-radiation magnetohydrodynamics study. Specifically, we numerically solve neutrino-radiation resistive magnetohydrodynamics equations in full general relativity in this code. A tabulated equation of state referred to as DD2 [21] is employed, with the extension of the table down to low-density (\(\rho\approx 0.17\,\mathrm{g/cm^{3}}\)) and low-temperature (\(k_{\mathrm{B}}T=10^{-3}\,\mathrm{MeV}\)) region; see Ref. [22] for the procedure. In this paper, we take the ideal magnetohydrodynamics limit by setting a high conductivity \(\sigma_{c}\) with which the resistive dissipation timescale is much longer than the simulation time (\(\gg 10\,\mathrm{s}\)).
In the present work, the key ingredient is to accurately evolve the mass and angular momentum of black holes. For this purpose, we have modified the treatment inside black-hole horizons for our Einstein's equation solver (a test result for evolving a vacuum black hole with a dimensionless spin parameter of \(0.8\) is presented in Appendix B of Ref. [19]). Specifically, in the current setting (grid spacing \(\Delta x\leq 0.016M_{\mathrm{BH}}\); see below), the numerical error for the mass and dimensionless spin is within \(1.5\%\) and \(0.5\%\), respectively, for the time evolution of \(5\,\mathrm{s}\). For the dimensionless spin, the error size is much smaller than the spin-down fraction shown in Sec. III.4.
Following our recent work [19], we prepare a system of a spinning black hole with matter infalling to the central region instead of using the original progenitor star model. This is partly motivated to save computational costs but is mainly from the physical consideration. As described in Eq. (3), the typical magnetic-field strength required on the horizon is \(B\sim 10^{14}\,\mathrm{G}\) for the long gamma-ray burst models. The magnetic pressure for such a field strength is \(B^{2}/8\pi=O(10^{26})\,\mathrm{dyn/cm^{2}}\). On the other hand, the ram pressure of the infalling matter for given values of the rest-mass density \(\rho\) and the infall velocity \(v_{\mathrm{infall}}\) is
\[\rho v_{\mathrm{infall}}^{2} \approx 2.2\times 10^{26}\left(\frac{\rho}{10^{6}\,\mathrm{g/cm^{3}}} \right)\left(\frac{v_{\mathrm{infall}}}{c/2}\right)^{2}\,\mathrm{dyn/cm^{2}}. \tag{11}\]
This suggests that until the density of the infalling matter decreases below \(\sim 10^{6}\,\mathrm{g/cm^{3}}\), the magnetic pressure cannot overcome the ram pressure to launch a jet or outflow by the Blandford-Znajek mechanism (hereafter, we refer to a small-opening angle outflow along the \(z\)-axis as a jet even if it is not very relativistic inside the star). In the early stage of the stellar core collapse and black-hole evolution, the rest-mass density near the black hole is much higher than \(10^{6}\,\mathrm{g/cm^{3}}\). For this reason, we start the simulations from a black hole and infalling matter. We note that a jet could be launched earlier in the presence of an extremely strong fossil magnetic field, but we do not consider this possibility in this paper.
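The balance behind this argument, magnetic pressure \(B^{2}/8\pi\) against the ram pressure \(\rho v_{\rm infall}^{2}\) of eq. (11), translates into a critical rest-mass density below which a jet can be launched; a minimal Python sketch (ours):

```python
import numpy as np

c = 2.998e10  # speed of light [cm/s]

def critical_density(B_gauss, v_infall):
    """Rest-mass density below which the magnetic pressure B^2/(8 pi)
    exceeds the ram pressure rho v^2 of the infalling matter [g/cm^3]."""
    return B_gauss**2 / (8.0 * np.pi * v_infall**2)

print(f"rho_crit ~ {critical_density(1e14, 0.5 * c):.1e} g/cm^3")
# ~1.8e6 g/cm^3 for B = 1e14 G and v = c/2, cf. eq. (11)
```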
To obtain the initial data, we first take the progenitor models from a stellar evolution calculation of Ref. [20] for which the black hole is likely to be formed in a short timescale after core bounce and be evolved simply by the accretion from the outer region without forming an accretion disk in an early stage [19]. We then construct the initial data by solving the constraint equations of general relativity under the hypothesis that in the early stage of the black-hole evolution, the system is composed of a spinning black hole and nearly free-falling matter. In this paper, we employ the model for which the zero-age main-sequence mass of the progenitor is \(M_{\mathrm{ZAMS}}=35M_{\odot}\)[20] (i.e., AD35 model of Ref. [19]). This progenitor star is very compact at the onset of the collapse, and hence, it is reasonable to assume that a black hole is formed in a short timescale after the onset of the collapse [23]. We set up the initial data at a stage just prior to the formation of a disk. For such a choice, the mass and dimensionless spin of the black hole are \(M_{\mathrm{BH},0}=15M_{\odot}\) and \(\chi_{0}=0.66\) (see Ref. [19] for details), and the rest mass and angular momentum of the matter outside the black hole are \(M_{\mathrm{mat}}=10.5M_{\odot}\) and \(J_{\mathrm{mat}}=4.32J_{\mathrm{BH},0}\) at the initial stage.
relation of
\[M_{\rm BH}=\frac{C_{e}}{4\pi}, \tag{12}\]
and the dimensionless spin is determined from \(C_{p}/C_{e}\), which is a monotonic function of the dimensionless spin, \(\chi\), for Kerr black holes and can be used to identify the value of \(\chi\). We also check that the mass and spin obtained by them satisfy the relation of the area, \(A_{\rm AH}=8\pi M_{\rm BH}^{2}(1+\sqrt{1-\chi^{2}})\), with high accuracy (the error is less than \(0.1\%\)). We cut out the matter outside \(10^{5}\,\)km because the computational domain in our simulation is \(10^{5}\times 10^{5}\,\)km for \(\varpi\) and \(z\) where \(\varpi\) is the cylindrical coordinate. As shown in our paper [19], the matter infall onto the black hole with no disk formation proceeds for the first \(\approx 2\,\)s for this model, illustrating that our assumption is valid.
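A minimal Python sketch (ours) of this horizon diagnostic, assuming the standard Kerr horizon geometry; the closed-form integrand for \(C_{p}\) below is our reconstruction of the \(C_{p}/C_{e}\) relation and not code from the simulation:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def cp_over_ce(chi):
    """Ratio of polar to equatorial horizon circumferences of a Kerr black
    hole (C_e = 4 pi M_BH, eq. (12)); monotonically decreasing in chi."""
    r_plus = 1.0 + np.sqrt(1.0 - chi**2)   # in units of M, with a = chi M
    integrand = lambda th: np.sqrt(r_plus**2 + chi**2 * np.cos(th)**2)
    C_p = 2.0 * quad(integrand, 0.0, np.pi)[0]
    return C_p / (4.0 * np.pi)

def spin_from_ratio(ratio):
    """Invert C_p/C_e to recover the dimensionless spin chi."""
    return brentq(lambda chi: cp_over_ce(chi) - ratio, 1e-8, 1.0 - 1e-12)

r = cp_over_ce(0.66)                       # initial spin of the AD35 model
print(f"C_p/C_e = {r:.4f} -> chi = {spin_from_ratio(r):.3f}")
```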
We also performed several simulations employing the \(M_{\rm ZAMS}=20M_{\odot}\) model of Ref. [20] and found that the results are qualitatively very similar to those for the \(M_{\rm ZAMS}=35M_{\odot}\) model. For the \(M_{\rm ZAMS}=20M_{\odot}\) model, the matter infall rate is lower than that for \(M_{\rm ZAMS}=35M_{\odot}\), and hence, a jet can be launched with a lower magnetic-field strength. Besides this quantitative difference, we do not find a significant modification about the conclusion of this paper.
We superimpose on the spinning black hole and infalling matter a poloidal magnetic field for which the electromagnetic energy density is initially much smaller than the rest-mass density. Because it is not clear what kind of magnetic-field profile develops in the infalling matter around a massive black hole during stellar core collapse, we choose a rather ad hoc poloidal field configuration in the present numerical experiment, although initially imposing an aligned poloidal field is a strong assumption. We primarily prepare a magnetic field with only the \(z\) component, for which \(\sqrt{\gamma}B^{z}\) is a function only of \(\varpi\), where \(\gamma\) is the determinant of the three-metric \(\gamma_{ij}\). Here, we set \((B^{z})^{2}(\varpi)\) to be approximately proportional to the pressure on the equatorial plane, which results in
\[B^{z}=\frac{B_{0}}{\varpi\sqrt{\gamma}}\frac{d}{d\varpi}\left(\sqrt{\frac{ \varpi_{0}^{2}}{\varpi^{2}+\varpi_{0}^{2}}}\varpi^{2}\right), \tag{13}\]
where \(\varpi_{0}\) is a constant with a fiducial value of \(10^{3}\,\)km, which is \(\approx 45M_{\rm BH,0}\), and \(B_{0}\) is a constant which determines the magnetic-field strength. With this setting, the divergence-free condition of the magnetic field is automatically satisfied. Because the magnetic-field lines are aligned with the spin axis of the black hole and the magnetic-field strength does not decrease with \(z\), this setting is quite favorable for launching a jet along the spin axis; we intentionally choose this setting to study a jet launch, subsequent spin-down of black holes, dependence of the spin-down rate on the initial magnetic-field strength, and Poynting luminosity by the Blandford-Znajek mechanism.
For several models, we also choose
\[B^{\varpi} = -\frac{B_{0}}{\varpi\sqrt{\gamma}}\frac{\partial}{\partial z} \left(\sqrt{\frac{\varpi_{0}^{2}}{r^{2}+\varpi_{0}^{2}}}\varpi^{2}\right),\] \[B^{z} = \frac{B_{0}}{\varpi\sqrt{\gamma}}\frac{\partial}{\partial\varpi} \left(\sqrt{\frac{\varpi_{0}^{2}}{r^{2}+\varpi_{0}^{2}}}\varpi^{2}\right), \tag{14}\]
and
\[B^{\varpi} = -\frac{B_{0}}{\varpi\sqrt{\gamma}}\frac{\partial}{\partial z} \left(\frac{\varpi_{0}^{2}}{r^{2}+\varpi_{0}^{2}}\varpi^{2}\right),\] \[B^{z} = \frac{B_{0}}{\varpi\sqrt{\gamma}}\frac{\partial}{\partial\varpi} \left(\frac{\varpi_{0}^{2}}{r^{2}+\varpi_{0}^{2}}\varpi^{2}\right), \tag{15}\]
where \(r=\sqrt{\varpi^{2}+z^{2}}\). With these settings, the magnetic-field strength on the horizon can be set to be initially identical with that with Eq. (13), but the field strength in the outer region becomes weaker. Specifically, the magnetic field strength along the \(z\) axis for the distant region is \(\propto z^{0}\), \(\propto z^{-1}\), and \(\propto z^{-2}\) with Eqs. (13), (14), and (15), respectively. We will illustrate that the evolution of the magnetic-field strength on the horizon depends strongly on the initial field configurations. In particular, in the initial condition of Eq. (15) with \(\varpi_{0}=10^{3}\,\)km, the magnetic-field strength on the horizon does not increase significantly with time due to the matter accretion in the central region, and hence, even for an initially high magnetic-field strength, a jet is not quickly launched. For Eq. (15), the magnetic pressure along the \(z\)-axis is proportional to \(z^{-4}\) for the distant region, which is steeper than that for the gas pressure. This is also disadvantageous for launching a jet along the \(z\)-axis. For the choice of Eq. (15), we also perform simulations varying the value of \(\varpi_{0}\) to confirm that higher values of \(\varpi_{0}\) are advantageous for the jet launch.
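The quoted axial scalings follow directly from the flux functions that generate Eqs. (13)-(15); the following Python sketch (ours) assumes the flat-space limit \(\sqrt{\gamma}\to 1\) and evaluates \(B^{z}\) on the axis from its small-\(\varpi\) limit (\(B_{0}\) is set to unity):

```python
import numpy as np

w0 = 1.0e3  # varpi_0 [km]

# Poloidal flux functions Phi(varpi, z) generating eqs. (13)-(15) via
# B^z = (B0 / varpi) dPhi/dvarpi, in the flat-space limit sqrt(gamma) -> 1
def phi13(w, z): return np.sqrt(w0**2 / (w**2 + w0**2)) * w**2
def phi14(w, z): return np.sqrt(w0**2 / (w**2 + z**2 + w0**2)) * w**2
def phi15(w, z): return (w0**2 / (w**2 + z**2 + w0**2)) * w**2

def Bz_axis(phi, z, eps=1e-3):
    """B^z on the axis: since Phi ~ varpi^2 near the axis, (1/varpi) dPhi/dvarpi
    tends to 2 Phi / varpi^2."""
    return 2.0 * phi(eps, z) / eps**2

for name, phi in (("eq.(13)", phi13), ("eq.(14)", phi14), ("eq.(15)", phi15)):
    ratios = [Bz_axis(phi, z) / Bz_axis(phi, 0.0) for z in (10 * w0, 100 * w0)]
    print(name, [f"{x:.1e}" for x in ratios])
# eq.(13): flat; eq.(14): falls off as 1/z; eq.(15): falls off as 1/z^2
```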
We specify the models by the maximum magnetic-field strength, \(B_{\rm max}\). We choose it as \(B_{\rm max}=3\times 10^{11}\), \(2\times 10^{11}\)
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & \(B_{\rm max}\,\)(G) & Config & \(\varpi_{0,3}\) & Jet? & Spin down? \\ \hline B11.5 & \(3\times 10^{11}\) & Eq. (13) & 1 & Yes & Yes \\ B11.3 & \(2\times 10^{11}\) & Eq. (13) & 1 & Yes & Yes \\ B11.0 & \(1\times 10^{11}\) & Eq. (13) & 1 & Yes & Yes \\ B10.5 & \(3\times 10^{10}\) & Eq. (13) & 1 & Yes & No \\ B10.0 & \(1\times 10^{10}\) & Eq. (13) & 1 & No & No \\ Br11.0 & \(1\times 10^{11}\) & Eq. (14) & 1 & Yes & Yes \\ Br10.5 & \(3\times 10^{10}\) & Eq. (14) & 1 & No & No \\ Bq12.5 & \(3\times 10^{12}\) & Eq. (15) & 1 & Yes & No \\ Bq12.0 & \(1\times 10^{12}\) & Eq. (15) & 1 & No & No \\ Bq11.0 & \(1\times 10^{11}\) & Eq. (15) & 1 & No & No \\ Bq11.0b & \(1\times 10^{11}\) & Eq. (15) & 5 & Yes & No \\ Bq11.0c & \(1\times 10^{11}\) & Eq. (15) & 10 & Yes & Yes \\ \hline \end{tabular}
\end{table}
Table 1: List of the initial settings: the model name, the maximum magnetic-field strength, the type of the initial magnetic-field configuration, and the value of \(\varpi_{0}\) in units of \(10^{3}\,\)km (\(\varpi_{0,3}\)). The last two columns show whether a jet launch and the spin-down of the black hole are found in the simulation time, typically \(\sim 10\,\)s.
\(1\times 10^{11}\), \(3\times 10^{10}\), and \(1\times 10^{10}\,\mathrm{G}\) for the magnetic field of Eq. (13), and each model is referred to as models B11.5, B11.3, B11.0, B10.5, and B10.0, respectively. For Eq. (14), we choose \(B_{\mathrm{max}}=1\times 10^{11}\) and \(3\times 10^{10}\,\mathrm{G}\), and refer to each model as Br11.0 and Br10.5. For Eq. (15), we choose \(B_{\mathrm{max}}=3\times 10^{12}\), \(1\times 10^{12}\), and \(1\times 10^{11}\,\mathrm{G}\), and refer to each model as \(\mathrm{Bq12.5}\), \(\mathrm{Bq12.0}\), and \(\mathrm{Bq11.0}\) for \(\varpi_{0}=10^{3}\,\mathrm{km}\). For Eq. (15) with \(B_{\mathrm{max}}=10^{11}\,\mathrm{G}\), we also prepare models with \(\varpi_{0,3}=\varpi_{0}/10^{3}\,\mathrm{km}=5\) and \(10\), which are referred to as \(\mathrm{Bq11.0b}\) and \(\mathrm{Bq11.0c}\), respectively. Table 1 summarizes the models and their parameters.
Since the initial electromagnetic pressure is weaker than the gas pressure and the ram pressure of the infalling matter for these choices, the effect of the magnetic field is always negligible in the early stage of the simulations; in other words, the total electromagnetic energy is much smaller than the internal and kinetic energies. The choice of \(B_{\mathrm{max}}\leq 3\times 10^{12}\,\mathrm{G}\) is likely to be reasonable, because the maximum magnetic-field strength for neutron stars is typically \(10^{11}\)-\(10^{13}\,\mathrm{G}\)[25] and the black hole is likely to be formed through a short-lived protoneutron-star stage, although we have to keep in mind that it is not very clear whether an aligned poloidal magnetic field is established during the evolution of the black hole by the matter accretion. We also note that in the presence of a strong magnetic field, the explosion may take place in the protoneutron-star stage [26; 27; 28; 29] or in an early evolution stage of a new-born black hole. The present choice of relatively weak magnetic fields is made partly to exclude this possibility.
For \(B_{\mathrm{max}}\gtrsim 1\times 10^{11}\,\mathrm{G}\) with Eqs. (13) and (14) or with Eq. (15) and \(\varpi_{0,3}=10\), a jet is generated by the Blandford-Znajek mechanism before a torus (geometrically thick disk) is developed around the black hole. For this case, the initially-given magnetic field is amplified by the winding associated with the black-hole spin and by the compression of the magnetic field associated with the infalling matter motion. Thus, these are models in which a sufficiently strong fossil magnetic field induces the jet. This scenario is possible only in the presence of a strong fossil poloidal magnetic field in the progenitor star. For some of the other models (B10.5, \(\mathrm{Bq12.5}\), and \(\mathrm{Bq11.0b}\)), a jet is launched after the formation of a disk and a torus. For this case, the evolution of the torus partly plays a role in enhancing the strength of the magnetic fields that penetrate the horizon. These models indicate the importance of the co-evolution of the torus and the black-hole magnetosphere, which is the key to an eventual jet launch.
For an even smaller initial field strength or with the initial condition of Eq. (15) with \(B_{\mathrm{max}}\leq 10^{12}\,\mathrm{G}\) and \(\varpi_{0,3}=1\), we do not find the launch of a jet/outflow in the simulation time, although it may be driven after long-term torus evolution in reality (see the discussion in Sec. III.1.2). Since our simulation is carried out assuming axisymmetry and thus cannot fully follow the magnetorotational-instability (MRI) turbulence [30] due to the anti-dynamo theorem [31], the enhancement of the magnetic-field strength on the horizon is limited. It is natural to suppose that, in reality, turbulence develops after the disk/torus formation, the magnetic-field strength is quickly amplified in the disk/torus, and eventually a strong poloidal magnetic field, which penetrates the black hole and can be the source of the Blandford-Znajek mechanism, is developed (see, e.g., Refs. [32; 33; 22] for related issues). This scenario may be more realistic, but we cannot study it in the present setting.
## III Numerical results
As in our recent paper [19], the simulation is performed on a two-dimensional domain of \(\varpi\) and \(z\) (see also Refs. [34; 35]). For the \(\varpi\) and \(z\) directions, the following non-uniform grid is employed for the present numerical simulations: For \(x\lesssim 7GM_{\mathrm{BH},0}/4c^{2}\) (\(x=\varpi\) or \(z\)), a uniform grid is used, while outside this region, the grid spacing \(\Delta x_{i}\) is increased uniformly as \(\Delta x_{i+1}=1.01\Delta x_{i}\), where the subscript \(i\) denotes the \(i\)-th grid. The black-hole horizon (apparent horizon) is always located in the uniform grid zone, and the outer boundaries along the \(\varpi\) and \(z\) axes are located at \(\approx 10^{5}\,\mathrm{km}\). The grid resolution of the uniform grid zone is \(\Delta x=360\,\mathrm{m}\approx 0.016GM_{\mathrm{BH},0}/c^{2}\), which is chosen to derive a reliable result for the black-hole spin evolution (see Appendix B of Ref. [19]). For two models (B11.5 and B11.3) we perform higher-resolution runs with \(\Delta x=300\,\mathrm{m}\approx 0.0135GM_{\mathrm{BH},0}/c^{2}\) to confirm that the spin-down rate of the black hole is computed with fair accuracy irrespective of the grid resolution. We refer to these models as B11.5hi and B11.3hi, respectively.
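A minimal sketch of this grid construction (our illustration, taking \(M_{\mathrm{BH},0}=15M_{\odot}\) as in Eq. (16); the treatment of the boundaries in the actual code may differ):

```python
# Build the non-uniform 1D grid described above: uniform 360 m cells out to
# ~7 G M_BH,0 / (4 c^2), then cell widths growing by 1% out to ~1e5 km.
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33    # CGS units
M_bh0 = 15.0 * Msun                          # assumed initial black-hole mass
dx = 3.6e4                                   # 360 m in cm
x_uni = 7.0 * G * M_bh0 / (4.0 * c**2)       # end of the uniform zone (~39 km)
x_out = 1.0e10                               # outer boundary, 1e5 km in cm

x = [0.0]
while x[-1] < x_out:
    x.append(x[-1] + dx)
    if x[-1] > x_uni:                        # stretch outside the uniform zone
        dx *= 1.01
print(f"{len(x) - 1} cells; uniform zone ends at {x_uni / 1e5:.0f} km, "
      f"outer boundary at {x[-1] / 1e5:.0f} km")
```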
### Jet launch or not
For our present setting, a mildly relativistic jet is found except for (i) the models for which the initial magnetic-field strength is too weak (B10.0 and Br10.5) and the field amplification is not large enough in the simulation time (\(\sim 10\,\mathrm{s}\)), and (ii) the models with the initial field configuration of Eq. (15) with \(B_{\mathrm{max}}\leq 1\times 10^{12}\,\mathrm{G}\) and \(\varpi_{0}=10^{3}\,\mathrm{km}\). The mechanism for launching the jet depends on the initial magnetic-field strength and configuration (see Ref. [36] for a related topic). Thus, we describe the two cases separately below.
#### iii.1.1 Strong initial field cases
For the initial conditions with strong magnetic fields aligned well with the black-hole spin axis, a jet is launched in a short timescale after the magnetic-field amplification near the black hole by the winding associated with the black-hole spin and by the compression due to the matter infall onto the black hole.
Figure 1 displays snapshots of the rest-mass density, temperature, entropy per baryon, and electron fraction at six selected time slices for model B11.5, for which the initial magnetic-field strength is high enough to launch a jet in a short timescale (\(\sim 0.4\) s). We note that the displayed range is wider for the later-stage snapshots in this figure.
For this model, the magnetic field is quickly amplified by the winding associated with the black-hole spin far before the formation of a disk around the black hole (note that the pancake structure for the rest-mass density at the second panel of Fig. 1 does not imply the formation of the orbiting disk, because it is compact enough to be subsequently swallowed by the black hole). Since the angular velocity of the black hole is approximately
\[\Omega_{\rm BH} \approx 2.8\times 10^{3}\left(\frac{\chi}{0.7}\right)\left(\frac{M_{ \rm BH}}{15M_{\odot}}\right)^{-1}\left(\frac{\hat{r}_{+}}{1.7}\right)^{-1}\,{\rm rad /s}, \tag{16}\]
and the magnetic-field strength can increase approximately in proportion to \(\Omega_{\rm BH}t\) near the black hole, the maximum field strength can increase by \(10^{3}\) times in \(\sim 0.4\) s. Indeed, the maximum magnetic-field strength in the polar region of the black hole exceeds \(10^{14}\) G at \(t\approx 0.4\) s, leading to a high magnetic pressure of \(\approx 4\times 10^{26}(B/10^{14}\,{\rm G})^{2}\) dyn/cm\({}^{2}\). On the other hand, the ram pressure of the infalling matter is \(\rho v_{\rm infall}^{2}=4\times 10^{26}\) dyn/cm\({}^{2}\) for \(\rho=10^{6}\) g/cm\({}^{3}\) and \(v_{\rm infall}=2c/3\approx 2\times 10^{5}\) km/s (see Eq. (11)). Thus, when the density of the infalling matter decreases below \(\sim 10^{6}\) g/cm\({}^{3}\), the magnetic pressure overcomes the ram pressure in this case. Indeed, for model B11.5, the rest-mass density along the \(z\)-axis is a few times \(10^{6}\) g/cm\({}^{3}\) at the launch of the jet (cf. the second panel of Fig. 1).
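These order-of-magnitude estimates can be reproduced in a few lines; the sketch below (ours, using the standard Kerr horizon angular velocity behind Eq. (16) and the fiducial values quoted above) is for illustration only:

```python
# Winding amplification (Eq. (16)) and the magnetic vs. ram pressure balance.
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33      # CGS units
chi, M_BH = 0.7, 15.0 * Msun
r_hat_plus = 1.0 + np.sqrt(1.0 - chi**2)       # ~1.71 for chi = 0.7
Omega_BH = chi * c**3 / (2.0 * G * M_BH * r_hat_plus)   # ~2.8e3 rad/s
print(f"Omega_BH = {Omega_BH:.2e} rad/s; "
      f"amplification factor in 0.4 s ~ {Omega_BH * 0.4:.0f}")   # ~1e3

B = 1.0e14                                     # G, amplified polar field
p_mag = B**2 / (8.0 * np.pi)                   # magnetic pressure, ~4e26
rho, v_in = 1.0e6, 2.0 * c / 3.0               # g/cm^3; v_infall = 2c/3
p_ram = rho * v_in**2                          # ram pressure, ~4e26
print(f"p_mag = {p_mag:.1e}, p_ram = {p_ram:.1e} dyn/cm^2")
```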
Once the jet is launched, it quickly propagates outward because the ram pressure decreases with the radius while the magnetic-field strength does not steeply decrease for this model. The magnetic-field strength near the black-hole horizon also becomes strong enough to halt the mass accretion from the equatorial direction as well as from the polar direction. Thus, a magnetically arrested disk (MAD) [37; 38; 8] structure is established after the jet launch. Indeed, the dimensionless MAD parameter
Figure 1: Snapshots of the rest-mass density (top-left), entropy per baryon (top-right), temperature (bottom-left), and electron fraction (bottom-right) on the \(\varpi\)-\(z\) plane are shown at selected time slices for model B11.5. Note that for each panel (except for the first two panels), the regions displayed are different. The black filled circles in the first two panels denote the region inside the black hole. An animation for this model is found at [https://www2.yukawa.kyoto-u.ac.jp/~sho.fujibayashi/share/B11.5-multiscale.mp4](https://www2.yukawa.kyoto-u.ac.jp/~sho.fujibayashi/share/B11.5-multiscale.mp4)
defined by
\[\phi_{\rm AH}:=\frac{\Phi_{\rm AH}}{\sqrt{4\pi G^{2}c^{-3}\dot{M}_{\rm BH,*}M_{\rm BH }^{2}}}, \tag{17}\]
is high, \(\phi_{\rm AH}\gtrsim 20\) up to \(\gtrsim 10^{3}\), after the jet launch (cf. Fig. 7). Here, \(\Phi_{\rm AH}\) is the magnetic flux that penetrates the black-hole horizon (practically the apparent horizon) and \(\dot{M}_{\rm BH,*}\) denotes the rest-mass infall rate across the horizon. In later stages, \(\dot{M}_{\rm BH,*}\) becomes quite low, \(\lesssim 10^{-4}M_{\odot}/\)s, for models B11.5, B11.3, and Bq11.0c.
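In code, Eq. (17) is a one-liner; the sketch below (our hypothetical helper with illustrative input values consistent with the text, not a routine of the simulation code) shows how strongly \(\phi_{\rm AH}\) grows as the accretion rate drops:

```python
# Evaluate the MAD parameter of Eq. (17) in CGS units.
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33    # CGS units

def mad_parameter(Phi_AH, Mdot, M_BH):
    """Eq. (17): phi_AH = Phi_AH / sqrt(4 pi G^2 c^-3 Mdot M_BH^2)."""
    return Phi_AH / np.sqrt(4.0 * np.pi * G**2 / c**3 * Mdot * M_BH**2)

Phi_AH, M_BH = 1.0e29, 15.0 * Msun           # flux in G cm^2 (cf. Fig. 7)
for Mdot in (1.0e-1 * Msun, 1.0e-4 * Msun):  # rest-mass infall rate, g/s
    print(f"Mdot = {Mdot / Msun:.0e} Msun/s -> "
          f"phi_AH ~ {mad_parameter(Phi_AH, Mdot, M_BH):.0f}")
# phi_AH ~ 2 for 0.1 Msun/s but ~50 for 1e-4 Msun/s: low late-time accretion
# rates push the system deep into the MAD regime.
```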
We also find a similar jet generation mechanism for model B11.3, for which \(|\Phi_{\rm AH}|\) is only slightly smaller than that for model B11.5 (cf. Fig. 7). Since the magnetic-field strength near the horizon at a given time is higher for models with higher initial field strength, the jet launch is earlier for higher values of \(B_{\rm max}\). In other words, the jet launch is delayed until the formation of a disk at \(t\sim 2\) s if \(B_{\rm max}\) is smaller than a threshold value.
Models B11.0 and Br11.0 have values of \(B_{\rm max}\) which
Figure 3: The magnetic-field lines and field strength in an inner region of 300 km\(\times\)300 km are shown for the stage at which the outgoing jet is established for models B11.5 (left), B11.0 (middle), and B10.5 (right) on the \(\varpi\)-\(z\) plane.
Figure 2: The same as Fig. 1 but for model B10.5. An animation is found at [https://www2.yukawa.kyoto-u.ac.jp/~sho.fujibayashi/share/B10.5-multiscale.mp4](https://www2.yukawa.kyoto-u.ac.jp/~sho.fujibayashi/share/B10.5-multiscale.mp4)
are close to such a threshold value, and hence, the jet launch times (\(t\sim 2\,\mathrm{s}\)) are appreciably later than those for models B11.5 and B11.3. However, a jet is launched before the disk formation for these models. It is worth emphasizing again that for higher values of \(B_{\mathrm{max}}\), the magnetic-field strength on the horizon is higher after the jet launch (cf. Fig. 3). This stems from the fact that the ram pressure at the jet launch is higher for the earlier jet-launch case (i.e., for larger values of \(B_{\mathrm{max}}\)). This results in higher Poynting luminosity by the Blandford-Znajek mechanism during the jet propagation for larger values of \(B_{\mathrm{max}}\) (see below).
For models B11.3, B11.0, and Br11.0, we followed the evolution of the collapsing envelope for a long timescale and found that the star entirely explodes together with the jet propagation (see, e.g., Refs. [6, 39, 40, 41] for related issues). For these models, poloidal magnetic-field lines that penetrate the spinning black hole are present not only along the polar region but also in the other regions. In such a situation, the magneto-centrifugal force associated with the black-hole spin plays an important role in transporting the angular momentum from the inner to the outer region, which can be an engine of the stellar explosion. Also, the strong toroidal magnetic fields enhanced by the winding associated with the black-hole spin can be the source of the Tayler instability [42; 43; 44]. The Tayler instability can induce a convective motion that redistributes the entropy and the angular momentum of the fluid elements [45], and hence, it may also contribute to the stellar explosion. The Tayler instability appears to play a more important role for the models with an initially large cylindrical component of the magnetic fields, e.g., for models Br11.0, Br10.5, Bq12.5, and Bq11.0b (see below).
In the presence of efficient neutrino cooling, the jet propagation may be decelerated by the reduction of the thermal pressure. However, for the early jet-launch models considered in this subsection, the maximum neutrino luminosity is of order \(10^{50}\,\mathrm{erg/s}\), which is smaller than the Poynting luminosity associated with the Blandford-Znajek mechanism (cf. Sec. III.3). Hence, it is unlikely that the neutrino cooling has a serious negative effect on the jet launch. It should also be mentioned that the jets are driven by the magnetohydrodynamical effect, and hence, the thermal pressure does not play a primary role.
#### iii.1.2 Weak initial field cases
For the models with initially weak magnetic fields such as models B10.5, B10.0, Br10.5, Bq12.5, Bq12.0,
Figure 4: The first 5 panels are the same as Fig. 1 and the panel in the bottom right is the same as Fig. 3 but for model Bq11.0b. An animation is found at [https://www2.yukawa.kyoto-u.ac.jp/~sho.fujibayashi/share/Bq11.0b-multiscale.mp4](https://www2.yukawa.kyoto-u.ac.jp/~sho.fujibayashi/share/Bq11.0b-multiscale.mp4)
and Bq11.0b, a jet is generated after the formation of a disk/torus or no jet formation is found in the simulation time. Also, the evolution process is qualitatively different from that for the initially strong-field cases discussed in the previous subsection.
Figure 2 displays the same plots as Fig. 1 but for model B10.5. For this case, a jet is not driven before the formation of a disk around the black hole, and in an early stage, the formation of a disk and a torus proceeds (see the first panel). Because the disk/torus evolves only quasi-steadily and orbits the black hole with an angular velocity smaller than \(\Omega_{\rm BH}\), the magnetic stress is enhanced by the winding of the magnetic-field lines connecting the black hole and the orbiting matter, and the angular momentum is transferred from the black hole to the matter1. At the formation of the torus (see the first panel of Fig. 2), in addition, an oblique shock is formed around its surface and enhances the matter flow toward the polar region. This also enhances the magnetic-flux inflow toward the black hole, and consequently, the magnetic-field strength near the polar region of the black hole is increased. When the magnetic pressure exceeds the ram pressure near the horizon at \(t\gtrsim 2.6\,\)s, a jet is driven from the vicinity of the black hole toward the polar region (cf. the second and third panels of Fig. 2).
Footnote 1: For slowly spinning black holes with \(\chi\lesssim 0.36\), the angular velocity of the matter orbiting the black holes can be larger than \(\Omega_{\rm BH}\), and hence, the angular momentum may not be transported outward. In this case, the orbiting matter may contribute to spin-up of the black holes.
However, for this case, the magnetic-field strength on the horizon achieved until the jet launch is not as high as those for models B11.5, B11.3, B11.0, Br11.0, and Bq11.0c (cf. Fig. 3), and hence, the jet is temporarily decelerated during its outward propagation (cf. the third and fourth panels of Fig. 2). During this stage, the opening angle of the jet is widened toward the equatorial region and the magnetic flux on the horizon decreases (see Fig. 7). Nevertheless, the winding of the magnetic-field lines by the black-hole spin continuously enhances the magnetic pressure near the polar region and, at the same time, the ram pressure of the infalling matter decreases with time. This eventually causes the revival of the jet (cf. the fifth and sixth panels of Fig. 2), although the propagation speed is much lower than those for models B11.5, B11.3, B11.0, Br11.0, and Bq11.0c.
The situation for models Bq12.5 and Bq11.0b is similar to that of model B10.5 although the disk/torus evolution stage is longer (see Fig. 4 for model Bq11.0b). For these models, due to the angular momentum transport associ
Figure 5: The same as Fig. 4 but for model Br10.5, for which the jet launch was not found in the simulation time of 12 s. An animation is found at [https://www2.yukawa.kyoto-u.ac.jp/~sho.fujibayashi/share/Br10.5-multiscale.mp4](https://www2.yukawa.kyoto-u.ac.jp/~sho.fujibayashi/share/Br10.5-multiscale.mp4)
ated with magneto-centrifugal effects by the black-hole spin (around the equatorial plane) and orbital motion, the torus expands gradually with time, in particular toward the equatorial direction. Also, due to the matter infall, the black-hole mass and spin increase with time. For models B10.5, Bq12.5, and Bq11.0b, the MAD parameter is \(\sim 5\)-\(10\) in the late stage of the jet propagation (see Fig. 7) because the magnetic flux on the horizon is one order of magnitude smaller than those for models B11.5 and B11.3. The evolution processes for models B10.5, Bq12.5, and Bq11.0b indicate that in the presence of a poloidal magnetic field that penetrates a spinning black hole, a jet may always be generated after long-term winding of the magnetic-field lines even if the initial magnetic-field strength is not very strong.
Figure 3 displays the magnetic-field lines and field strength in the vicinity of the black hole on the \(\varpi\)-\(z\) plane for the stages at which a jet was already launched for models B11.5, B11.0, and B10.5 (see also the bottom-right panel of Fig. 4). This clearly shows that for the models with higher initial magnetic fields (i.e., the earlier jet-launch models), the magnetic fields around the black hole are stronger, reflecting the higher ram pressure at the jet launch. For model B10.5, the magnetic-field strength is highest around the equatorial plane at the selected time slice because a torus is present there and the magnetic field is amplified by the winding and partly by the MRI (note that due to the anti-dynamo nature of the axisymmetric simulation [31], the MRI dynamo cannot develop in this simulation). For model B11.0, an orbiting disk is not formed around the black hole before the jet launch, but mass accretion proceeds from the equatorial region, gradually increasing the black-hole mass (cf. Fig. 10).
For models B10.0, Br10.5, and Bq12.0, neither a jet nor an outflow is launched in the simulation time of \(\sim 10\,\mathrm{s}\). For these models, the magnetic-field strength on the black-hole horizon is not enhanced enough to launch a jet during the torus formation, and the torus simply evolves around the black hole (see Fig. 5 for model Br10.5). In particular, for the initial condition of Eq. (15) with \(\varpi_{0}\leq 5\times 10^{3}\,\mathrm{km}\), the magnetic-field strength on the black hole does not significantly increase with time in the early stage before the torus formation even for very high values of \(B_{\mathrm{max}}\) (cf. Figs. 6 and 7). Only for a high value of \(\varpi_{0}\), e.g., \(10^{4}\,\mathrm{km}\), is a jet quickly launched for Eq. (15) with \(B_{\mathrm{max}}\gtrsim 10^{11}\,\mathrm{G}\) (this is likely also the case for very high values of \(B_{\mathrm{max}}\) with \(\varpi_{0}=10^{3}\,\mathrm{km}\)). This indicates that the magnetic-field strength on the horizon before the formation of a disk/torus depends strongly on the field profile in the progenitor star. In the present axisymmetric simulation, the dynamo mechanism does not work, and hence, the poloidal magnetic fields are not amplified sufficiently in the torus and on the black hole in a short timescale. As a consequence, the strong poloidal magnetic field that penetrates the black hole and launches a jet is not developed quickly in the absence of initially strong fields (compare the last panel of Fig. 5 with Fig. 3). In these cases, the poloidal magnetic field is not well aligned near the rotation axis.
However, in reality (i.e., in non-axisymmetric simulations in which dynamo and resultant turbulence can be modelled), a strong poloidal field could be developed after the formation of a disk/torus within a certain timescale and we may expect that a jet is launched eventually (see, e.g., Refs. [22; 33; 32] for recent relevant works). An accretion disk/torus for which the equipartition is established in the turbulent state would have the relation (e.g., Ref. [22])
\[\frac{B_{\mathrm{disk}}^{2}}{8\pi}\sim f_{\mathrm{eq}}\rho_{\mathrm{disk}}c_{ s}^{2}, \tag{18}\]
where \(B_{\mathrm{disk}}\) is the typical magnetic-field strength inside the disk/torus, \(c_{s}\) is the typical sound speed, and \(f_{\mathrm{eq}}\) is approximately constant, \(0.02\)-\(0.05\), for disks/tori in equipartition. Hence, for the typical values at the inner region of the disk/torus, we expect the magnetic-field strength
\[B_{\mathrm{disk}} \sim\ 1\times 10^{14}\left(\frac{f_{\mathrm{eq}}}{0.04}\right)^{1/2} \left(\frac{\rho_{\mathrm{disk}}}{10^{10}\,\mathrm{g/cm^{3}}}\right)^{1/2}\] \[\times\left(\frac{c_{s}}{10^{9}\,\mathrm{cm/s}}\right)\,\mathrm{G}. \tag{19}\]
Thus, by the accretion of the turbulent matter onto the black hole with a coherent magnetic-field polarity, a poloidal magnetic field that penetrates the black hole with \(B\sim B_{\mathrm{disk}}\sim 10^{14}\,\mathrm{G}\) can be formed as illustrated in recent simulation works [32; 33; 22]. This can be strong enough to launch a jet if the density of the infalling matter is low enough, satisfying \(B^{2}/8\pi>\rho_{\mathrm{infall}}v_{\mathrm{infall}}^{2}\), near the black hole, i.e.,
\[\rho_{\mathrm{infall}}<f_{\mathrm{eq}}\rho_{\mathrm{disk}}\left(\frac{c_{s}}{v _{\mathrm{infall}}}\right)^{2}. \tag{20}\]
Indeed, this condition is satisfied in the late stage, e.g., for models Br10.5 and B10.0 if \(f_{\mathrm{eq}}=O(0.01)\), i.e., \(\rho_{\mathrm{infall}}\lesssim 10^{-4}\rho_{\mathrm{disk}}\) assuming \(c_{s}/v_{\mathrm{infall}}\sim 0.1\). Therefore, for the initially weak field cases, a jet may be launched after a turbulent state is established in the disk/torus.
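The following sketch evaluates Eqs. (18)–(20) for the fiducial numbers quoted above (our illustration with assumed values, not simulation data):

```python
# Equipartition disk field (Eqs. (18)-(19)) and the jet-launch density
# threshold (Eq. (20)).
import numpy as np

f_eq, rho_disk, c_s = 0.04, 1.0e10, 1.0e9   # parameter, g/cm^3, cm/s

B_disk = np.sqrt(8.0 * np.pi * f_eq * rho_disk * c_s**2)  # Eq. (18) for B
print(f"B_disk ~ {B_disk:.1e} G")                         # ~1e14 G

v_infall = 10.0 * c_s                       # so that c_s / v_infall ~ 0.1
rho_max = f_eq * rho_disk * (c_s / v_infall)**2           # Eq. (20)
print(f"jet launch requires rho_infall < {rho_max:.0e} g/cm^3"
      f" ~ {rho_max / rho_disk:.0e} rho_disk")
```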
Models Bq12.5 and Bq11.0b show not only a jet launch but also an explosion of the entire star (cf. Fig. 4). As in the torus-formation models such as B10.0, Br10.5, and Bq12.0, initially a disk and subsequently a torus are developed in the early stage of the evolution for these models. Then, the magneto-centrifugal force associated with the black-hole spin around the equatorial plane appears to play an important role in developing a gradually expanding torus because the magnetic-field strength is high around the equatorial plane. Subsequently, the toroidal magnetic field is amplified by the winding associated with the black-hole spin and inside the torus. A convective motion resulting from the Tayler instability [42; 45] is also seen. As a result of these effects, the torus starts exploding approximately simultaneously
with the jet launch. This explosive motion is accelerated with the decrease of the ram pressure of the infalling matter. This result suggests that, although the explosion was not observed for models such as B10.0, Br10.5, and Bq12.0, these models may also explode eventually in the longer-term evolution through the long-term winding of the magnetic-field lines.
For all the massive disk/torus-formation models considered in this subsection, the neutrino luminosity is enhanced to \(\sim 2\times 10^{52}\,\mathrm{erg/s}\), and in the presence of the stellar explosion, it subsequently decreases with time. The energy source for this neutrino emission is the shock heating on the shock surface around the torus. This neutrino cooling may not play an important role in the jet launch, which is driven primarily by the magnetohydrodynamical effect. However, it can decelerate the stellar explosion because the neutrino luminosity is much higher than the Poynting luminosity by the Blandford-Znajek mechanism, which is smaller than \(10^{51}\,\mathrm{erg/s}\) for the disk/torus-formation models (see Fig. 8). This fact indicates that, for the stellar explosion found in the current study, not the thermal pressure but the magnetohydrodynamical effect associated with the extraction of the rotational kinetic energy of the black hole plays a major role.
### Magnetic-field energy and magnetic flux on the horizon
Figure 6 shows the evolution of the electromagnetic energy \(E_{\mathrm{B}}\). Here the electromagnetic energy is defined in the same way as in Ref. [18]. For all the models with the initial magnetic-field configuration of Eq. (13) or (14), \(E_{\mathrm{B}}\) monotonically increases in an early stage with the compression due to the matter infall and with the winding. For the initially strong and well-aligned magnetic-field models (B11.5, B11.3, B11.0, Br11.0, and Bq11.0c), a jet is launched along the \(z\)-axis before the formation of a disk/torus by this magnetic-field amplification. The jet subsequently makes a cocoon around it and a convective motion is developed. Associated with this motion, the magnetic fields are wound and compressed, and hence, the magnetic energy is quickly enhanced until a saturation is reached. At the saturation, the electromagnetic energy becomes comparable to the rotational kinetic energy of the matter, \(10^{50}\)-\(10^{51}\,\mathrm{erg}\). The saturated values of \(E_{\mathrm{B}}\) are slightly larger for higher initial field strengths, reflecting the higher ram pressure at the jet launch.
For the initially weaker magnetic-field models (B10.5, Br10.5, and B10.0) with Eq. (13) or (14) and the models with Eq. (15) and \(\varpi_{0}\leq 5\times 10^{3}\,\mathrm{km}\), a significant amplification of the electromagnetic energy takes place at \(t\gtrsim 2\,\mathrm{s}\) due to the formation of a disk and a torus, in which the magnetic fields are amplified by the compression and winding. When the torus (geometrically thick disk) is formed, the matter velocity converges toward the spin axis, and hence, the magnetic flux also converges, leading to the enhancement of the magnetic-field strength. Also, the magnetic field in the torus has a substantial fraction of the \(\varpi\) component, which plays an important role in the magnetic-field amplification by the winding. In addition, the MRI may partly play a role in the magnetic-field amplification after the \(z\)-component of the magnetic field becomes high enough to resolve the fastest growing mode of the MRI in the limited grid resolution. As already mentioned, a torus is developed from a geometrically thin disk and the oblique shock on the shock surface around the torus enhances the matter and magnetic-flux accretions onto the black hole. After the magnetic flux that penetrates the black hole becomes high enough, a slowly-expanding jet can eventually be launched from the vicinity of the black hole, although this is found only for models B10.5,
Figure 6: Evolution of the electromagnetic energy for the models with Eq. (13) (left) and with Eqs. (14) and (15) (right). We stopped the simulation for model Bq11.0 at \(t\approx 4.3\,\mathrm{s}\) because the evolution path looks similar to that for model Bq12.0 after the disk formation.
Bq12.5, and Bq11.0b. After the jet launch, a cocoon and the associated convective motion are developed until a saturation at which the electromagnetic energy relaxes to \(\sim 10^{50}\,\mathrm{erg}\), which is again comparable to the rotational kinetic energy of the matter.
For the initial magnetic field of Eq. (15), a steep increase of the electromagnetic energy is not found in an early stage. For this case, the magnetic-field strength rather decreases with time in the vicinity of the black hole because the magnetic flux decreases with the matter infall due to the magnetic-field configuration initially given (see also Fig. 7). Only for model Bq11.0c, for which \(\varpi_{0}\) is large (\(10^{4}\,\mathrm{km}\)), a steep increase of the electromagnetic energy takes place, leading to an early jet launch. This result illustrates that the timing and mechanism of the jet launch depend strongly on the initial magnetic-field configuration.
For the initially weak magnetic-field models with the initial configuration of Eq. (13) or (14) and for most of the models with Eq. (15) (except for model Bq11.0c), the electromagnetic energy in the torus contributes substantially to the total. It is developed mainly by the magnetic winding. Because of the anti-dynamo nature of the axisymmetric simulation, the poloidal magnetic field in the torus does not increase significantly with time.
Figure 7 displays the evolution of the magnetic flux, \(\Phi_{\mathrm{AH}}\), on apparent horizons and the resultant MAD parameter, \(\phi_{\mathrm{AH}}\), as functions of time. It is found that for the models with the initial magnetic-field profiles given by Eqs. (13) and (14), the magnetic flux on the horizon steeply increases soon after the onset of the simulations and eventually approaches a relaxed value. Only if the magnetic flux exceeds \(\approx 1\times 10^{29}\,\mathrm{G}\,\mathrm{cm}^{2}\) is a jet launched before the disk formation in the present stellar model. For most of the models with the initial field configuration of Eq. (15) (except for model Bq11.0c), the magnetic flux relaxes to low values. In particular, for model Bq12.5, the magnetic flux appreciably decreases for \(t\lesssim 3\,\mathrm{s}\). This is the reason why we do not find a quick jet launch for this model in spite of the large initial field strength on the horizon.
The condition for the jet launch can also be discussed in terms of the MAD parameter [8]. In the present study, jets are launched only for the models with \(|\phi_{\mathrm{AH}}|\gtrsim 5\). The MAD parameter is high for the models with the initial field configuration of Eq. (13), and for models B11.5, B11.3, and Bq11.0c, it can be extremely high, \(\gtrsim 10^{3}\), reflecting that the mass accretion onto the black hole is sig
Figure 7: Evolution of the magnetic flux on apparent horizons (top) and resultant MAD parameter (bottom) as functions of time. The left and right panels show the results for the models with Eq. (13) (left) and with Eqs. (14) and (15) (right), respectively. To see the trend clearly, moving averages are taken with the time interval \(0.2\) s. Note that for model Bq11.0 \(|\phi_{\mathrm{AH}}|\) is smaller than \(0.3\).
nificantly halted. For models B11.0 and Br11.0, in which the initial magnetic-field strength is weaker, the MAD parameter is typically 10-100 and jets are steadily generated from a relatively early stage. For model B10.5, by contrast, the jet (or outflow along the rotation axis) is launched in the early stage but stalls in the middle of the outward propagation. This may be interpreted as a consequence of the insufficient MAD parameter, \(\sim 1\), at that stage. In the later stage of this model, the MAD parameter increases to \(\sim 10\), at which the outward propagation of the jet/outflow is observed. The situation is similar for models Bq12.5 and Bq11.0b, in which a jet is launched after the MAD parameter increases beyond \(\sim 5\).
As we find from Fig. 7, the magnetic flux on the horizon and the MAD parameter are good indicators of whether a jet can be launched or not. This clearly shows that the magnetic flux on the horizon is one of the crucial quantities. To obtain a high value of the magnetic flux on the horizon in the collapsar model, we may need a suitable initial magnetic-field profile. However, this does not give us a comprehensive scenario for generating jets from black holes, because the magnetic-field strength and configuration should vary widely among progenitor stars. The other possibility for universally generating the strong poloidal magnetic field that penetrates black holes is through the enhancement of the magnetic-field strength in the torus due to the MRI turbulence and the accretion of the strong magnetic flux onto the horizon. In the axisymmetric study, we cannot explore this possibility due to the anti-dynamo nature, and thus, we obviously need a simulation in which the dynamo effect is taken into account (a high-resolution three-dimensional simulation or a phenomenological simulation; e.g., Ref. [18]) to confirm this possibility.
For the higher-resolution runs of models B11.5 and B11.3, the magnetic-field energy and the field strength on the horizon are larger in later stages than those for the corresponding lower-resolution runs. Our interpretation is that for the lower grid resolution, the numerical dissipation and diffusion of the magnetic field are stronger. This results in lower Poynting luminosity and slower spin-down of the black hole in the lower-resolution runs (see below).
### Poynting luminosity, ejecta mass, and outflow energy
Figure 8 shows the Poynting luminosity, \(L_{\rm BZ}\), extracted from the spinning black hole by the Blandford-Znajek mechanism. The surface integral of the Poynting flux is performed on apparent horizons. Only the portion where the outgoing energy flux, including the matter energy flux, is positive (i.e., net energy is extracted from the black hole) contributes to the surface integral (see Appendix A for the definition of \(L_{\rm BZ}\)). \(L_{\rm BZ}\) is naturally higher for higher values of \(|\Phi_{\rm AH}|\) (see Fig. 7) for the models in which a jet is launched.
Figure 8 illustrates that, broadly speaking, the Poynting flux is steadily generated for the early-jet-launch models with the luminosity of \(\gtrsim 10^{51}\,\rm erg/s\). This reflects that the poloidal magnetic field that penetrates the black hole is in a quasi-steady state during the jet generation. It is also found that the Poynting luminosity is higher for the higher initial field strength, which results in the higher strength of the magnetic field that penetrates the black hole during the jet generation: For models B11.5, B11.3, and Bq11.0c, for which a jet is launched in early stages of the evolution, the Poynting luminosity can be much higher than \(10^{51}\,\rm erg/s\) after the jet launch, and for models B11.0 and Br11.0, \(L_{\rm BZ}\sim 10^{51}\,\rm erg/s\).
However, the total energy carried away by the Poynting flux seems to depend only weakly on the initial field strength, because the spin-down timescale of the black hole is shorter for the higher initial field strength (see Sec. III.4). Irrespective of the initial condition, the predicted total energy emitted by the Poynting flux, i.e., \(\Delta E=\)(the average Poynting luminosity)\(\times\)(spin-down
Figure 8: Poynting luminosity measured on apparent horizons as a function of time for selected models. The left and right panels show the results of the models with the initial magnetic field given by Eq. (13) (left) and by Eqs. (14) and (15) (right), respectively. We note that for models Bq12.0 and Bq11.0, \(L_{\rm BZ}\) is smaller than \(10^{48}\,\rm erg/s\).
timescale of Sec. III.4), is \(1.4\)-\(2.1\times 10^{53}\,\mathrm{erg}\), if the Poynting luminosity is assumed to be approximately constant over the spin-down timescale (see Table 2). Taking into account the uncertainty in the value of \(f\) and in the validity of the force-free approximation, the order of magnitude is consistent with Eq. (7), indicating that the estimate carried out in Sec. I is good.
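As a quick consistency check, \(\Delta E\) can be recomputed directly from the values listed in Table 2:

```python
# Delta-E = <L_BZ> * tau for two representative models (numbers from Table 2).
for model, L_BZ, tau in [("B11.5hi", 5.0e51, 30.0),
                         ("B11.0",   1.1e51, 175.0)]:
    print(f"{model}: Delta-E ~ {L_BZ * tau:.1e} erg")  # ~1.5e53 and ~1.9e53 erg
```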
The estimated values of \(\Delta E\) are about 20-30% of the result from Eq. (7). The primary reason for this is that the poloidal magnetic field coherently penetrates only a
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & \(\langle\dot{M}_{\mathrm{ej}}\rangle\) (\(M_{\odot}\)/s) & \(\langle\dot{E}_{\mathrm{exp}}\rangle\) (\(10^{51}\) erg/s) & \(\langle L_{\mathrm{BZ}}\rangle\) (\(10^{51}\) erg/s) & \(\langle L_{\mathrm{BZ}}\rangle/\langle\dot{E}_{\mathrm{exp}}\rangle\) & \(\tau\) (s) & \(\langle L_{\mathrm{BZ}}\rangle\,\tau\,(10^{53}\,\mathrm{erg})\) \\ \hline B11.5hi & 0.90 & 5.2 & 5.0 & 1.0 & 30 & 1.5 \\ B11.3hi & 0.57 & 4.2 & 3.5 & 0.8 & 60 & 2.1 \\ B11.5 & 0.86 & 4.2 & 3.2 & 0.8 & 50 & 1.6 \\ B11.3 & 0.88 & 4.0 & 2.4 & 0.6 & 70 & 1.8 \\ B11.0 & 0.60 & 1.3 & 1.1 & 0.9 & 175 & 2.0 \\ B10.5\(\dagger\) & 0.34 & 1.1 & 0.07 & 0.1 & – & – \\ Br11.0 & 0.68 & 1.4 & 0.7 & 0.5 & 250 & 1.7 \\ Bq12.5\(\dagger\) & 0.49 & 1.5 & 0.20 & 0.1 & – & – \\ Bq11.0b\(\dagger\) & 0.62 & 0.41 & 0.10 & 0.2 & – & – \\ Bq11.0c & 0.76 & 2.4 & 1.9 & 0.8 & 75 & 1.4 \\ \hline \end{tabular}
\end{table}
Table 2: Average increase rates of the ejecta mass, \(\dot{M}_{\mathrm{ej}}\), and explosion energy, \(\dot{E}_{\mathrm{exp}}\), the Poynting luminosity, \(L_{\mathrm{BZ}}\), and the ratio, \(L_{\mathrm{BZ}}/\dot{E}_{\mathrm{exp}}\), for the models in which a jet is launched. The quantities are averaged over the last 5 seconds of each simulation. The last two columns list the approximate spin-down timescale and expected total electromagnetic energy carried by the Blandford-Znajek mechanism for models in which the spin-down is found. \(\dagger\) specifies the models for which a jet is launched after the formation of a disk/torus and the spin-down is not found.
portion of the black-hole horizon (see, e.g., Fig. 3), so that the Poynting luminosity should be smaller than that of Eq. (3). Near the equatorial plane, the magnetic-field lines are not very coherently aligned, and moreover, the force-free condition is not well satisfied because of the presence of infalling matter. Even with such magnetic-field lines, the angular momentum of the black hole can be extracted because the angular velocity of the black hole is larger than that of the matter around the black hole, while the energy extraction may be less efficient in the presence of dense matter (see, e.g., Fig. 9 of Ref. [11] and Ref. [46] for a discussion of the matter effect). It should also be pointed out that \(L_{\rm BZ}\) is appreciably smaller than \(L_{\rm BZ}^{\rm full}\) (see Appendix A): On a large portion of the surface of the apparent horizon, the total energy flux (matter plus electromagnetic energy flux) outgoing from the horizon is negative. Thus, the matter infall plays a significantly negative role in the extraction of the rotational kinetic energy of the black hole in the collapsar scenario.
However, the total amount of the outgoing energy is still larger than \(10^{53}\,\)erg, which is much larger than the typical energy of gamma-ray bursts (including the afterglow and associated supernova). This suggests that the energy injection from the black hole has to be stopped before the complete spin-down of the black hole (see the discussion in Sec. IV).
Figure 9 shows the evolution of the ejecta mass, \(M_{\rm ej}\), and outflow energy (including the explosion energy of the star), \(E_{\rm exp}\), as well as the increase rates of the ejecta mass and outflow energy for selected models. The ejecta mass and outflow energy are calculated using formulae similar to those in Ref. [47] with the extraction radius of \(1\times 10^{5}\,\)km (see Appendix A for the formulae). In the present context, the contribution from the jet is appreciable in the ejecta mass and outflow energy. Since these quantities increase monotonically with time until the end of the simulations in the present study (i.e., they do not relax to constants), we also plot their time derivatives, \(\dot{M}_{\rm ej}\) and \(\dot{E}_{\rm exp}\), in Fig. 9. Table 2 also shows average values of \(\dot{M}_{\rm ej}\) and \(\dot{E}_{\rm exp}\) as well as of \(L_{\rm BZ}\).
It is found that \(\dot{E}_{\rm exp}\) is of the same order of magnitude as \(L_{\rm BZ}\) for models B11.5, B11.3, B11.0, Br11.0, and Bq11.0c, for which a jet is launched before the disk formation. Thus, for these models, the Blandford-Znajek mechanism is the major central engine for the jet launch. For models B11.3, B11.0, and Br11.0, we confirmed that the entire star explodes, indicating that the Blandford-Znajek mechanism can also be the engine of the stellar explosion (but see the discussion in Sec. IV). It is also found that the mass ejection rate is as high as \(\sim M_{\odot}/\)s for these models. Since the total mass outside the black hole in the present models is \(M_{\rm env}\sim 10M_{\odot}\), \(M_{\rm env}/\dot{M}_{\rm ej}\sim 10\,\)s is much shorter than the spin-down timescale by the Blandford-Znajek mechanism (see Sec. III.4 for the spin-down timescale). Thus, in the late stage of the evolution of the system, the Poynting flux will be used to accelerate the ejected matter if the Blandford-Znajek mechanism works until the complete spin-down of the black hole.
For models B10.5, Bq12.5, and Bq11.0b, for which a jet is launched after the formation of the disk and torus, \(L_{\rm BZ}\sim 10^{50}\,\)erg/s, which is one or two orders of magnitude lower than those for the early-jet-launch models such as B11.5 and B11.3. For these models, the magnetic-field strength is weaker around the polar region on the horizon (see Fig. 7), and thus, this result is quite reasonable. However, \(\dot{E}_{\rm exp}\) for these models is not very small, and thus, the ratio \(L_{\rm BZ}/\dot{E}_{\rm exp}\) is much lower than those for the early-jet-launch models. The reason for this is that for these models the magnetohydrodynamical effects such as the magneto-centrifugal effect and the Tayler instability play an important role not only in the jet launch but also in the explosion of a substantial fraction of the stellar envelope. Specifically, the angular momentum transport from the black hole to the matter around the equatorial region through the winding of the magnetic-field lines associated with the black-hole spin (like the propeller effect of a rotating neutron star [48]) appears to play an important role in extracting the angular momentum (and rotational kinetic energy) of the black hole (see, e.g., Ref. [49] for the related issue).
As already mentioned, the dynamo effect is not taken into account in the present axisymmetric modelling. In reality, the dynamo and the resulting turbulence that effectively generate the viscous effect will contribute to the activity of the torus, likely leading to more efficient mass ejection and a more energetic explosion [19]. In addition, an enhanced magnetic-field strength on the horizon would increase the Poynting luminosity. Thus, for models B10.5, Bq12.5, and Bq11.0b, the values of \(L_{\rm BZ}\) and \(E_{\rm exp}\) may be even higher in reality. Since \(E_{\rm exp}\) was already much higher than the typical supernova energy (\(\sim 10^{51}\,\)erg) at \(t=10\,\)s, these can be models for powerful supernovae with explosion energies \(\gtrsim 10^{52}\,\)erg (but the overproduction of energy by the extraction of the huge rotational kinetic energy of spinning black holes can be an issue as well; see the discussion in Sec. IV).
### Spin evolution of black holes
Figure 10 shows the evolution of the mass and dimensionless spin of black holes for the models with the initial magnetic-field configuration of Eq. (13) (top left) and with those of Eqs. (14) and (15) (top right). For comparison, we plot the results of viscous-hydrodynamics simulations with the alpha parameters of 0.03 and 0.10 of Ref. [19]. For a better view of the spin-down, we also plot the evolution of the dimensionless spin focusing only on the models for which the spin eventually decreases with time (bottom panel).
For all the models, both the mass and dimensionless spin initially increase with time due to the matter accretion onto the black hole. For models B11.5 and B11.3 for which the magnetic-field lines are well aligned with
the black-hole spin axis and its strength is very high initially, a MAD state is quickly established as a result of the amplification of the magnetic field that penetrates the black hole and launches a jet (cf. Fig. 7). The evolution process of the black hole for model Bq11.0c is similar to that for these models. For this model, the cylindrical component of the magnetic field is present from the beginning, and thus, the strong magneto-centrifugal force also plays a role in halting the matter accretion from the equatorial direction. After the jet launch for these models, the mass accretion onto the black hole essentially ceases except for an intermittent accretion from the equatorial direction, leading to a state of \(dM_{\rm BH}/dt<0.1M_{\odot}\)/s. In this stage, the black hole evolves primarily by the Blandford-Znajek mechanism, and the dimensionless spin decreases with time (see the bottom panel of Fig. 10 for the zoom-up view). We note that for model B11.3 (standard-resolution run), an intermittent spin-up stage is seen at \(t\sim 4.7\)-\(5.0\,\)s, during which the MAD state was disrupted for a while. This is due to an accidental large-mass accretion from the equatorial region. However, for \(t\gtrsim 5\,\)s, the MAD state is recovered and the dimensionless spin steadily decreases again.
The timescale of the spin-down is evaluated using Eq. (9). For the present numerical results, \(\Delta J=J_{0}-J_{\rm BH}\) (\(\Delta J>0\)) is a small fraction of \(J_{0}\). Thus, \(|\Delta(J_{\rm BH}^{2})|\approx 2J_{0}\Delta J\), and hence, \(\tau\) is determined approxi
Figure 10: Top left: Evolution of the mass (upper panel) and dimensionless spin (lower panel) of spinning black holes for all the models with the initial conditions of Eq. (13) as well as a viscous-hydrodynamics model with the alpha parameter of 0.03 (solid curve) and 0.10 (dotted curve) of Ref. [19]. Top right: The same as top left panel but for the selected models with the initial configuration of Eqs. (14) and (15). The curves for models Br10.5 and Bq12.0 are accidentally very similar. Bottom: Zoom-up of the spin evolution for the selected models as a function of \(t-t_{\rm sd}\) where \(t_{\rm sd}\) denotes the approximate time at which the spin starts decreasing.
mately by
\[\tau=\Delta t\frac{J_{0}}{\Delta J}\approx\Delta t\frac{\chi_{0}}{\Delta\chi}, \tag{21}\]
where \(\Delta t\) denotes a time duration in the spin-down stage and \(\Delta\chi=\chi_{0}-\chi\).
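For reference, Eq. (21) applied to numbers read off Fig. 10 and Table 2 (the values of \(\Delta\chi\) and \(\Delta t\) below are illustrative, not exact measurements):

```python
# Approximate spin-down timescale, Eq. (21), valid for dchi << chi0.
def spindown_timescale(chi0, dchi, dt):
    return dt * chi0 / dchi

chi0 = 0.7
for model, dchi, dt in [("B11.5hi-like", 0.070, 3.0),
                        ("B11.0-like",   0.014, 3.5)]:
    print(f"{model}: tau ~ {spindown_timescale(chi0, dchi, dt):.0f} s")
# -> ~30 s and ~175 s, consistent with Table 2.
```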
For models B11.5, B11.3, and Bq11.0c, for which the spin-down sets in soon after the jet launch, we find \(\tau<100\,\)s from Fig. 10 (see Table 2 for the results). These results illustrate that a short-term spin-down, i.e., the case in which the spin-down timescale is comparable to or shorter than the typical duration of long gamma-ray bursts, \(\lesssim 100\,\)s, is possible if a MAD state is developed in an early stage of the gravitational collapse. For the lower-resolution runs of B11.5 and B11.3, the spin-down timescale is longer than for the corresponding higher-resolution runs. This is due to the numerical dissipation and diffusion of the magnetic fields.
For models B11.0 and Br11.0, for which the initial field strength is high enough to launch a jet before the disk formation, the mass accretion onto the black hole is also suppressed by the strong magnetic pressure near the horizon after the jet launch. However, the mass accretion still proceeds for a while, and associated with it, the dimensionless spin increases with time, overcoming the spin-down by the Blandford-Znajek mechanism in an early stage. Only in late stages does the spin-down by the Blandford-Znajek mechanism overcome the spin-up by the mass accretion. For models B11.0 and Br11.0, the MAD state appears to be developed only for \(t\gtrsim 6\,\)s and \(4\,\)s, respectively, after which \(\dot{M}_{\rm BH}\) is less than \(0.1M_{\odot}\)/s, although a strong jet is launched earlier. The spin-down rate is low, \(\Delta\chi\sim 0.01\) for \(\Delta t\sim 3\)-\(4\,\)s, and the estimated spin-down timescale is longer than \(100\,\)s, much longer than those for models B11.5, B11.3, and Bq11.0c. This is due to the weaker magnetic-field strength achieved around the black hole after the jet launch.
For the initially weak magnetic-field models (B10.5, B10.0, and Br10.5) as well as for most of the models with the initial magnetic-field configuration of Eq. (15), for which a MAD state is not achieved or only weakly achieved in the simulation time, the black-hole mass increases with the mass accretion from the equatorial direction and the dimensionless spin does not decrease even after the jet launch (for models B10.5, Bq12.5, and Bq11.0b). Even for these models, after the mass accretion onto the black hole ceases, the black-hole spin may eventually decrease with time due to the Blandford-Znajek mechanism. However, for these models the magnetic-field strength on the horizon is much weaker than those for models B11.5, B11.3, B11.0, Br11.0, and Bq11.0c, for which the jet is launched after the sufficient enhancement of the magnetic-field strength before the disk formation. In reality, the magnetic-field strength on the horizon can be increased if the MRI turbulence in the disk after its formation is fully resolved. As we estimated in Eq. (19), however, the magnetic-field strength would not be as high as those for models B11.5, B11.3, B11.0, Br11.0, and Bq11.0c for such cases. Hence, the spin-down timescale would be \(\gg 100\,\)s.
As we discussed in Sec. I, supposing that the fossil magnetic field in the progenitor star is not extremely strong, it is natural to consider that the strong poloidal magnetic field that penetrates the black hole and is responsible for the Blandford-Znajek mechanism should be developed after the disk formation, in which the magnetic-field amplification takes place and from which the magnetic flux is provided to the black hole. Our present numerical results indicate that the rapid spin-down is achieved only for the case in which the magnetic-field strength is high even in the absence of the disk. This suggests that the spin-down effect of the black hole might be minor in the typical collapsar scenario during the typical duration of long gamma-ray bursts of 10-100 s.
For models Br10.5, B10.0, Bq12.5, Bq12.0, and Bq11.0b, the evolution process of the black hole is similar to that for the viscous-hydrodynamics models with different viscous efficiencies. For these models, the dimensionless spin achieved after the evolution is slightly smaller than that for the viscous-hydrodynamics models, indicating that the outward angular momentum transport in the disk/torus is more efficient by the magnetohydrodynamical effect, e.g., by the magneto-centrifugal effect associated with the black-hole spin, than by the viscous effect. This is in particular the case for model Bq12.5. Nevertheless, the evolution path of the black hole in the magnetohydrodynamics models is similar to those in the viscous-hydrodynamics models. This suggests that in the absence of jets by the Blandford-Znajek mechanism, the evolution of the system is similar irrespective of the physical mechanism of the angular momentum transport. On the other hand, in the presence of a strong jet, the growth of the black hole by the mass accretion could be suppressed, and hence, the amount of matter outside the black hole, which could be the ejecta and the energy source of electromagnetic signals, may be larger.
## IV Summary and discussion
We performed neutrino-radiation magnetohydrodynamics simulations in full general relativity in the context of the collapsar scenario. The simulations were started from a system of a moderately rapidly spinning black hole and infalling matter, which are prepared based on a stellar evolution model [20]. Poloidal magnetic fields with a variety of field strengths and configurations are superimposed initially. Axial symmetry is assumed to achieve long-term evolution with \(\gtrsim 10\,\)s duration.
We found that the evolution process of the system depends strongly on the magnetic-field strength and configuration initially given. For the models with initially strong magnetic fields of \(B_{\rm max}\geq 10^{11}\,\)G and with the field aligned well with the black-hole spin, a jet is launched due to the Blandford-Znajek mechanism in a short timescale after the magnetic-field amplification
by the winding associated primarily with the black-hole spin. For this case, the jet is launched before the formation of a disk around the black hole and a MAD state is eventually established after the jet launch due to the strong magnetic field achieved, which halts the matter accretion onto the black hole. The black hole subsequently evolves primarily by the Blandford-Znajek mechanism with subdominant matter accretion, and its dimensionless spin decreases with time. The timescale of the spin-down depends on the field strength initially given, because the magnetic-field strength at the jet launch, which is determined by the ram pressure of the infalling matter, is higher for the stronger initial field strength. For a sufficiently strong field strength, the timescale of the spin-down is shorter than \(100\,\mathrm{s}\) in the present models, i.e., shorter than or comparable to the typical duration of long gamma-ray bursts. However, for the models initially with a lower magnetic-field strength, the timescale is longer than \(100\,\mathrm{s}\).
The expected total energy emitted by the Blandford-Znajek mechanism depends weakly on the initial profiles of the magnetic field, because for models with shorter spin-down timescales the Poynting luminosity is higher. The expected total energy is about 20-30% of the rotational kinetic energy of black holes that can be liberated by the Blandford-Znajek mechanism (see Eq. (7) with \(f_{1/2}=1\)). The most likely reason for this reduction is that although the outgoing Poynting flux is generated on the horizon, the matter infall onto the horizon (primarily from the equatorial region) prevents the outward emission of electromagnetic waves.
For several models (B10.5, Bq12.5, and Bq11.0b), a jet is launched after the formation of a disk/torus. For these models, the strength of the magnetic field that penetrates the black hole increases by the winding associated with the black-hole spin and by the accumulation of the magnetic flux from the torus over a long timescale of \(\sim 10\,\mathrm{s}\). Because the matter accretion from the equatorial region continues even after the jet launch, the spin-down of the black hole is not found for these models. For models Bq12.5 and Bq11.0b, an entire stellar explosion after the expansion of the torus is found together with the jet launch. This results from the winding of the magnetic-field lines around the equatorial region associated with the black-hole spin, which enhances the toroidal magnetic-field strength and the magneto-centrifugal force on the torus. Because the toroidal magnetic fields become very strong, the Tayler instability appears to play an important role in inducing a convective motion in the torus and infalling matter, which appears to contribute to the stellar explosion.
For the models with initially weak magnetic-field strengths or with the field configuration of Eq. (15), \(B_{\mathrm{max}}\leq 10^{12}\,\mathrm{G}\), and \(\varpi_{0}=10^{3}\,\mathrm{km}\), jets are not launched in the simulation time. For these models, a disk/torus is formed and its size increases gradually with time due to the matter infall and the magneto-centrifugal effect associated with the black-hole spin. The mass and dimensionless spin of the black hole also simply increase with time. For a very long-term run, a jet launch and stellar explosion may occur for these models due to the continuous injection of the energy and angular momentum from the black hole. However, this behavior is likely to be a consequence of the axial symmetry imposed in this work. In non-axisymmetric simulations, the MRI and the associated turbulence would be developed, enhancing the angular momentum transport, mass ejection, and mass accretion onto the black hole. Associated with the mass accretion onto the black hole, a strong poloidal magnetic field that penetrates the black hole is likely to be developed, as previous simulation works demonstrated (e.g., Refs. [32; 33; 22]). If this is the case, a jet may be driven after the evolution of the disk/torus in a relatively early stage. However, in this scenario, the ram pressure at the jet launch should be weaker than those in the earlier jet-launch models (models B11.5, B11.3, B11.0, Br11.0, and Bq11.0c), and hence, the spin-down timescale will be \(\gg 100\,\mathrm{s}\) (or spin-down may not be found, as in models B10.5, Bq12.5, and Bq11.0b).
As discussed in Sec. I, the gamma-ray burst energy would have to be much larger than the observed values if a substantial fraction of the black-hole spin were extracted by the Blandford-Znajek mechanism and the corresponding rotational kinetic energy were distributed to the matter surrounding the black hole. Taking into account the non-observation of such extremely energetic gamma-ray bursts, afterglows, and supernovae, the initially strong magnetic-field models with the short spin-down timescales are not suitable as models of long gamma-ray bursts. This suggests that in long gamma-ray bursts the magnetic field which penetrates the black hole and is the source of the Blandford-Znajek mechanism is likely to be generated after the disk/torus formation, its evolution, and the subsequent amplification of the magnetic field in it. In this scenario, the degree of the black-hole spin-down during the generation of gamma-ray bursts should not be appreciable.

Figure 11: The same as Fig. 3 but for the evolution of the magnetic-field profile for model B11.0.
However, we still have an issue. Although the spin-down timescale of the black hole is likely to be longer than the typical duration of long gamma-ray bursts, the spin-down should proceed for a long timescale as long as the poloidal magnetic field that penetrates the black hole is present. If a substantial fraction of the rotational kinetic energy of the black hole were transported to the matter surrounding the black hole, an extremely bright electromagnetic signal, which has not been observed, would have to be emitted. To avoid this possibility, the magnetic field has to be dissipated within the spin-down timescale. One possible mechanism to reduce the poloidal magnetic-field strength on the horizon is reconnection around the equatorial plane [50]. During the jet generation inside the funnel region, the magnetic pressure balances the gas pressure of the infalling matter or torus. In the late stage of the evolution of the system, the density and pressure of these matter fields decrease. Then, the opening angle of the magnetic-field lines around the rotational axis should increase. This can take place after most of the progenitor-star matter falls onto the central region and the torus matter is ejected outward or accreted onto the black hole by an (effectively) viscous process. The infall process proceeds approximately on the dynamical timescale of the system as
\[t_{\rm ff}=\sqrt{\frac{R_{*}^{3}}{GM_{*}}}, \tag{22}\]
where \(R_{*}\) and \(M_{*}\) denote the stellar radius and mass of the progenitor star at the onset of the collapse. In the current model, they are approximately \(3\times 10^{5}\,\)km and \(27M_{\odot}\), and hence, \(t_{\rm ff}\sim 90\,\)s. The viscous timescale is much shorter than this timescale assuming that the viscous alpha parameter is of order \(10^{-2}\) and the torus radius is smaller than \(100M_{\rm BH}\) (see, e.g., Ref. [19]). Thus, in \(\sim 100\,\)s, the matter density in the vicinity of the black hole is likely to become low and the opening angle of the poloidal magnetic field becomes wide. Indeed, in some of our present models for which the initial field strength is high, the widening of the poloidal magnetic field configuration is seen (see Fig. 11). This widening could eventually lead to the magnetic-field configuration similar to the split monopole and to a magnetic reconnection near the equatorial plane (see, e.g., Refs. [51, 52, 4]). Exploring this possibility for the very late stage of stellar collapses is one of the issues in our future work.
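As a quick consistency check of Eq. (22), the quoted values \(R_{*}\approx 3\times 10^{5}\,\)km and \(M_{*}\approx 27M_{\odot}\) indeed reproduce \(t_{\rm ff}\approx 90\,\)s; the following minimal Python sketch (not part of the simulation code) evaluates it:

```python
# Free-fall timescale t_ff = sqrt(R*^3 / (G M*)) for the quoted progenitor.
import math

G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30         # solar mass [kg]

R_star = 3.0e5 * 1.0e3   # stellar radius at the onset of collapse [m]
M_star = 27.0 * M_SUN    # stellar mass at the onset of collapse [kg]

t_ff = math.sqrt(R_star**3 / (G * M_star))
print(f"t_ff ~ {t_ff:.0f} s")   # ~ 87 s, consistent with t_ff ~ 90 s
```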
The other possible mechanism is the reconnection resulting from the interaction between the aligned magnetic fields along the black-hole spin axis and the magnetic loop ejected from the accretion torus that is in a turbulent state. General relativistic magnetohydrodynamics simulations have shown that accretion disks/tori around spinning black holes are in a turbulent state as a result of the MRI (see, e.g., Ref. [53] for the latest investigation). From such a turbulent disk/torus, the matter and magnetic loop are ejected, and some of the magnetic loops move toward the spin axis of the black hole. Here, the polarity of the magnetic loops should be quite random. Hence, if an ejected magnetic loop has a polarity different from that of the aligned magnetic field along the spin axis of the black hole, the magnetic-field strength becomes weaker by the reconnection. If this process continuously occurs, the Poynting luminosity associated with the Blandford-Znajek mechanism may decrease with time. Indeed, this process is often observed in a magnetohydrodynamics simulation with a phenomenological dynamo term [18].
We note that the same problem (the energy-overproduction problem) is present for the short gamma-ray burst scenario from neutron-star mergers. For the case of binary neutron star mergers, the formed black hole is likely to have a mass between \(2.5M_{\odot}\) and \(3M_{\odot}\) with a dimensionless spin of 0.6-0.8 (e.g., Ref. [24]), while for black hole-neutron star mergers, the black-hole mass and spin are likely to be similar to those in the collapsar scenario. In both cases, the total rotational kinetic energy of the black hole available is larger than \(10^{53}\,\)erg (see Eq. (7)), which is much larger than the typical energy of short gamma-ray bursts (\(\sim 10^{49}\)-\(10^{50}\,\)erg [54]). The latest neutrino-radiation magnetohydrodynamics simulations have shown that a strong magnetic field is developed by magnetohydrodynamics instabilities such as MRI for the merger remnants irrespective of the binary type [53, 55, 22]. As Eq. (8) shows, the spin-down timescale of the black hole is of order \(10^{3}\)-\(10^{4}\,\)s, much longer than the typical timescale of short gamma-ray bursts. This suggests that to explain the short timescale (\(\lesssim 2\,\)s) of short gamma-ray bursts, we need a mechanism to stop the emission toward the observer direction associated with the Blandford-Znajek mechanism (e.g., Ref. [53]), and in addition, we need a dissipation process of the magnetic field that penetrates a black hole within a timescale much shorter than the spin-down timescale of \(10^{3}\)-\(10^{4}\,\)s. In other words, it might not be particularly strange that long gamma-ray bursts took place after neutron-star mergers [56, 57, 58, 59, 60, 61, 62] if the dissipation timescale of magnetic fields cannot be very short in a class of neutron-star merger remnants.
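For orientation, the rotational energy budget quoted above follows from the standard expression for the rotational kinetic energy of a Kerr black hole, \(E_{\rm rot}=M_{\rm BH}c^{2}\,[1-\sqrt{(1+\sqrt{1-\chi^{2}})/2}\,]\), which we assume here is the content of Eq. (7) (with the factor \(f_{1/2}\) set to unity); a minimal sketch:

```python
# Rotational kinetic energy of a Kerr black hole (assumed form of Eq. (7)).
import math

C_LIGHT = 2.998e8   # speed of light [m/s]
M_SUN = 1.989e30    # solar mass [kg]

def e_rot_erg(m_bh_msun: float, chi: float) -> float:
    """E_rot = M c^2 [1 - sqrt((1 + sqrt(1 - chi^2))/2)], in erg."""
    frac = 1.0 - math.sqrt((1.0 + math.sqrt(1.0 - chi**2)) / 2.0)
    return m_bh_msun * M_SUN * C_LIGHT**2 * frac * 1.0e7   # 1 J = 1e7 erg

# binary-neutron-star remnant example from the text: 3 Msun, chi = 0.8
print(f"E_rot ~ {e_rot_erg(3.0, 0.8):.1e} erg")   # ~ 6e53 erg > 1e53 erg
```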
The present work is based on an axisymmetric simulation, and as a result, we cannot follow the turbulence, which should be developed by the MRI in the formed disk/torus. In its presence, the magnetic-field strength is likely to be amplified further in the disk/torus, the magnetic-flux supply onto the black hole may be more efficient, and a jet may be launched earlier. It is also likely that the turbulence activity develops the effective viscosity in the disk/torus, which can contribute to the explosion of the entire star as found in our viscous hydrodynamics simulation [19]. These are the issues to be pursued in the next step.
###### Acknowledgements.
We thank David Aguilera-Dena for providing their stellar evolution models. Numerical computation was performed on Sakura, Momiji, Cobra, and Raven clusters at Max Planck Computing and Data Facility. This work was in part supported by Grant-in-Aid for Scientific Research (grant Nos. 20H00158, 22H00130, 23H04900, and 23H05430) of Japanese MEXT/JSPS.
## Appendix A Definition of \(L_{\rm BZ}\) and explosion energy
In the following, the Greek and Latin indices denote the spacetime and spatial components, respectively.
Our (approximate) definition of the Poynting flux as well as of the total energy flux on the horizon is based on the energy equation in the lab frame (see, e.g., Eq. (4.144) of Ref. [24]). Combining it with the continuity equation for the rest-mass density \(\rho\), we have
\[\partial_{t}\bar{S}_{0}+\partial_{i}F^{i}_{0}=\alpha\sqrt{\gamma}(T_{ij}K^{ij}-\gamma^{ij}J_{i}\partial_{j}\ln\alpha), \tag{A1}\]
where \(\bar{S}_{0}=(\alpha^{2}T^{tt}-\rho\alpha u^{t})\sqrt{\gamma}\), \(J_{i}=-\alpha T^{t}_{i}\), \(\alpha\) is the lapse function, \(K_{ij}\) is the extrinsic curvature, \(T_{\mu\nu}\) is the energy-momentum tensor, \(u^{\mu}\) is the four velocity of the fluid, and
\[F^{i}_{0} = \bar{S}_{0}v^{i}+\left(P-\frac{E^{2}+B^{2}}{8\pi}\right)(v^{i}+\beta^{i})\sqrt{\gamma} \tag{A2}\] \[+\frac{\alpha\sqrt{\gamma}}{4\pi}\epsilon^{i}_{\ jk}E^{j}B^{k}\] \[= \sqrt{\gamma}\rho w(hw-1)v^{i}+\sqrt{\gamma}\bigg{(}P-\frac{E^{2}+B^{2}}{8\pi}\bigg{)}\beta^{i}\] \[+\frac{\alpha\sqrt{\gamma}}{4\pi}\epsilon^{i}_{\ jk}E^{j}B^{k}.\]
Here, \(v^{i}=u^{i}/u^{t}\) is the three velocity, \(w=\alpha u^{t}\), \(P\) is the pressure, \(h\) is the specific enthalpy, \(E^{i}\) and \(B^{i}\) are an electric field and a magnetic field in the lab frame, \(\beta^{i}\) is the shift vector, and \(\epsilon_{ijk}\) is the completely antisymmetric tensor in three dimensions. Note that \(E^{i}\) and \(B^{i}\) are defined from the electromagnetic tensor \(F^{\mu\nu}\) by
\[E^{i}=-\alpha F^{it}\ \ \text{and}\ \ B^{i}=\frac{1}{2}\epsilon^{i}{}_{jk}F^{jk}. \tag{A3}\]
The last term of Eq. (A2) denotes the Poynting flux, and hence, the Poynting luminosity on the horizon is defined by
\[L_{\rm BZ}^{\rm full}=\oint_{\rm horizon}\frac{\alpha\sqrt{\gamma}}{4\pi}\epsilon^{i}_{\ jk}E^{j}B^{k}dS_{i}, \tag{A4}\]
where \(dS_{i}\) denotes an area element on the horizon. Throughout this paper, the surface integral for the Poynting luminosity is performed on apparent horizons.
For a wide portion of the black-hole horizon, in particular near the equatorial plane, the net extracted energy can be negative if the matter inflow onto the black hole is present. For such a situation, it would not be appropriate to consider that the energy is extracted from the horizon. Thus, in this paper, we practically define a Poynting luminosity by
\[L_{\rm BZ}=\oint_{\rm horizon}\frac{\alpha\sqrt{\gamma}}{4\pi}\epsilon^{i}_{\ jk}E^{j}B^{k}\Theta(F^{l}_{0}n_{l})dS_{i}, \tag{A5}\]
where \(\Theta\) is the Heaviside step function and \(n_{l}\) is the spatial unit vector normal to horizons. With the factor of \(\Theta(F^{l}_{0}n_{l})\), we integrate the Poynting flux only if the total energy flux on the portion of the surface of apparent horizon is positive. Thus, \(L_{\rm BZ}\) defined in this paper does not give the entire Poynting luminosity extracted from the black hole (i.e., \(L_{\rm BZ}<L_{\rm BZ}^{\rm full}\)) but a net one that is emitted outward.
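Schematically, Eq. (A5) amounts to the following masked surface sum on a discretized horizon; the sketch below uses toy flux profiles (outgoing in the funnel, ingoing near the equator) rather than actual simulation data:

```python
# Masked Poynting luminosity, Eq. (A5): sum the normal Poynting flux only
# over horizon patches where the total outgoing energy flux is positive.
import numpy as np

def l_bz(dA, poynting_n, total_n):
    """Surface sum of poynting_n * dA restricted by Theta(F^l_0 n_l) > 0."""
    mask = total_n > 0.0
    return np.sum(poynting_n[mask] * dA[mask])

# toy horizon discretization in the polar angle theta
theta = np.linspace(0.0, np.pi, 181)
dA = np.sin(theta)                 # schematic area weights
poynting_n = np.cos(theta) ** 2    # schematic funnel-dominated Poynting flux
total_n = poynting_n - 0.3         # matter infall dominates near the equator
print(l_bz(dA, poynting_n, total_n))  # < np.sum(poynting_n * dA) = L_BZ^full
```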
On the other hand, the outflow energy (explosion energy) is computed from the quantities in the outer region. Assuming that \((\partial_{t})^{\mu}\) is a timelike Killing vector, with which the energy-momentum tensor satisfies \(\partial_{\mu}(\sqrt{-g}T^{\mu}{}_{\nu}(\partial_{t})^{\nu})=\partial_{\mu}(\sqrt{-g}T^{\mu}{}_{t})=0\), the outflow energy is defined by
\[E_{\rm exp} =\int^{t}\oint_{r=r_{\rm ext}}\sqrt{-g}\left[-T^{k}{}_{t}-h_{\rm min }\rho u^{k}\right]\Theta(e_{\rm bind})dS_{k}dt\] \[+\int_{r<r_{\rm ext}}\sqrt{-g}\left[-T^{t}{}_{t}-h_{\rm min}\rho u ^{t}\right]\Theta(e_{\rm bind})d^{3}x,\]
where \(r_{\rm ext}\) denotes an extraction radius, which is chosen to be \(\approx 10^{5}\,\)km, \(e_{\rm bind}\) is the specific binding energy of a fluid element defined by
\[e_{\rm bind}=\frac{-T^{t}{}_{t}}{\rho u^{t}}-h_{\rm min}, \tag{A6}\]
and \(h_{\rm min}=c^{2}+\varepsilon_{\rm min}\) is the minimum value of the specific enthalpy for a given equation-of-state table. For the DD2 equation of state [21], which we employ in this paper, \(\varepsilon_{\rm min}\approx-0.0013c^{2}\). In electromagnetohydrodynamics, the energy and momentum flux densities conserved in a stationary spacetime, \(-\sqrt{-g}T^{\mu}{}_{t}\), are written as
\[-\sqrt{-g}T^{t}{}_{t} =\sqrt{\gamma}\rho u^{t}\left[\alpha(hw-P/(\rho w))-\beta^{k}hu_{k}\right]\] \[+\sqrt{\gamma}\left[\alpha\frac{E^{2}+B^{2}}{8\pi}+\beta^{k}\frac{1}{4\pi}\epsilon_{klm}E^{l}B^{m}\right], \tag{A7}\]
\[-\sqrt{-g}T^{i}{}_{t} =\sqrt{\gamma}\rho wv^{i}\bigg{(}\alpha hw-hu_{k}\beta^{k}\bigg{)}\] \[+\sqrt{\gamma}\bigg{[}-\alpha\frac{E^{2}+B^{2}}{8\pi}\beta^{i}+\beta^{k}(E_{k}E^{i}+B_{k}B^{i})\] \[\qquad\qquad+\alpha\bigg{(}\frac{\beta^{i}\beta^{k}}{\alpha^{2}}+\gamma^{ik}\bigg{)}\frac{1}{4\pi}\epsilon_{klm}E^{l}B^{m}\bigg{]}, \tag{A8}\]
where we used \(V_{t}=\beta^{k}V_{k}\) for a spatial vector \(V_{\mu}\) that satisfies \(V_{\mu}n^{\mu}=0\); i.e., for \(E_{\mu}\), \(B_{\mu}\), and \(\epsilon_{\mu kl}E^{k}B^{l}\). The expressions found in the first lines of Eqs. (A7) and (A8) correspond to \(-\sqrt{-g}T^{\mu}{}_{t}\) of an ideal fluid [63].
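For illustration, the selection \(\Theta(e_{\rm bind})\) entering the outflow integrals reduces to the following (a minimal sketch in units with \(c=1\), using the DD2 value of \(\varepsilon_{\rm min}\) quoted above):

```python
# Unbound-matter criterion of Eq. (A6): Theta(e_bind) with
# e_bind = -T^t_t / (rho u^t) - h_min and h_min = c^2 + eps_min (c = 1).
import numpy as np

EPS_MIN = -0.0013        # minimum specific internal energy of the DD2 table
H_MIN = 1.0 + EPS_MIN    # minimum specific enthalpy

def unbound_mask(minus_T_t_t, rho, u_t):
    """Boolean mask selecting unbound fluid elements (e_bind > 0)."""
    e_bind = minus_T_t_t / (rho * u_t) - H_MIN
    return e_bind > 0.0

# toy fluid elements: the first is (marginally) unbound, the second is bound
print(unbound_mask(np.array([1.05, 0.98]), np.ones(2), np.ones(2)))
```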
|
2310.20631 | Hybrid Hadronization of Jet Showers from $e^++e^-$ to $A+A$ with
JETSCAPE | In this talk we review jet production in a large variety of collision systems
using the JETSCAPE event generator and Hybrid Hadronization. Hybrid
Hadronization combines quark recombination, applicable when distances between
partons in phase space are small, and string fragmentation appropriate for
dilute parton systems. It can therefore smoothly describe the transition from
very dilute parton systems like $e^++e^-$ to full $A+A$ collisions. We test
this picture by using JETSCAPE to generate jets in various systems. Comparison
to experimental data in $e^++e^-$ and $p+p$ collisions allows for a precise
tuning of vacuum baseline parameters in JETSCAPE and Hybrid Hadronization.
Proceeding to systems with jets embedded in a medium, we study in-medium
hadronization for jet showers. We quantify the effects of an ambient medium,
focusing in particular on the dependence on the collective flow and size of the
medium. Our results clarify the effects we expect from in-medium hadronization
of jets on observables like fragmentation functions, hadron chemistry and jet
shape. | Cameron Parker, Aaron Angerami, Ritu Arora, Steffen Bass, Shanshan Cao, Yi Chen, Raymond Ehlers, Hannah Elfner, Wenkai Fan, Rainer J. Fries, Charles Gale, Yayun He, Ulrich Heinz, Barbara Jacak, Peter Jacobs, Sangyong Jeon, Yi Ji, Lauren Kasper, Michael Kordell II, Amit Kumar, Joseph Latessa, Yen-Jie Lee, Roy Lemmon, Dananjaya Liyanage, Arthur Lopez, Matt Luzum, Abhijit Majumder, Simon Mak, Andi Mankolli, Christal Martin, Haydar Mehryar, Tanner Mengel, James Mulligan, Christine Nattrass, Jaime Norman, Jean-Francois Paquet, Joern H. Putschke, Gunther Roland, Bjoern Schenke, Loren Schwiebert, Arjun Sengupta, Chun Shen, Chathuranga Sirimanna, Ron A. Soltz, Ismail Soudi, Michael Strickland, Yasuki Tachibana, Julia Velkovska, Gojko Vujanovic, Xin-Nian Wang, Wenbin Zhao | 2023-10-31T17:00:01Z | http://arxiv.org/abs/2310.20631v3 | # Hybrid Hadronization of Jet Showers from \(e^{+}+e^{-}\) to \(A+A\) with Jetscape
###### Abstract:
In this talk we review jet production in a large variety of collision systems using the JETSCAPE event generator and Hybrid Hadronization. Hybrid Hadronization combines quark recombination, applicable when distances between partons in phase space are small, and string fragmentation appropriate for dilute parton systems. It can therefore smoothly describe the transition from very dilute parton systems like \(e^{+}+e^{-}\) to full \(A+A\) collisions. We test this picture by using JETSCAPE to generate jets in various systems. Comparison to experimental data in \(e^{+}+e^{-}\) and \(p+p\) collisions allows for a precise tuning of vacuum baseline parameters in JETSCAPE and Hybrid Hadronization. Proceeding to systems with jets embedded in a medium, we study in-medium hadronization for jet showers. We quantify the effects of an ambient medium, focusing in particular on the dependence on the collective flow and size of the medium. Our results clarify the effects we expect from in-medium hadronization of jets on observables like fragmentation functions, hadron chemistry and jet shape.
## 1 Introduction
JETSCAPE is a modular, task-based framework for simulating all aspects of heavy-ion collisions [1]. We first concern ourselves with tuning JETSCAPE using a novel hadronization module: Hybrid Hadronization [2, 3]. This method first applies Monte Carlo recombination [4, 5] to the partons after the shower stage and then hadronizes the rest with the Lund string model [6]. We use Bayesian analysis to tune JETSCAPE with Hybrid Hadronization to CMS and PHENIX data in vacuum proton-proton systems. Although JETSCAPE is primarily intended to simulate heavy-ion collisions, a solid vacuum baseline is needed.
We then examine medium effects on hadronization by modeling how a single jet hadronizes in a brick of quark gluon plasma. Hybrid Hadronization is unique among hadronization models for shower Monte Carlos in that it can take into account medium effects on hadronization through recombination of shower partons with thermal partons, and by allowing thermal partons to become part of strings connecting to shower partons. Both the presence of the medium and the flow of the medium are expected to have pronounced effects on the final-state hadrons produced. Medium flow, both in the direction of the jet and transverse to it, should leave an imprint on the softer jet hadrons.
## 2 Vacuum Systems
To examine vacuum systems, we model complete proton-proton collisions in JETSCAPE. The initial hard scattering is handled by PYTHIA 8 [7]. The partons produced are then showered with MATTER until their virtuality is below a cutoff \(Q_{0}\)[8, 9]. All partons below that cutoff are then hadronized with Hybrid Hadronization. We generate events for \(\sqrt{s}=2.76\) TeV and \(\sqrt{s}=200\) GeV to compare to data from LHC and RHIC, respectively. For LHC events we consider data for jets clustered with the anti-\(k_{T}\) algorithm with various jet radii \(R\) as well as total charged hadrons at high \(p_{T}\)[10, 11]. For RHIC we consider high-\(p_{T}\) pions [12].
We tune eight parameters with our setup. In MATTER we adjust the lower virtuality cutoff \(Q_{0}\), the virtuality factor \(f=Q_{\rm max}^{2}/p_{T}^{2}\), which determines the upper limit for the virtuality of a particle with transverse momentum \(p_{T}\), and \(\Lambda_{QCD}\). In the recombination part of Hybrid Hadronization we can rescale the size of the pion, kaon and proton wave functions with parameters called pion width, kaon width, and proton width, respectively. In PYTHIA 8 string fragmentation we vary the strange-to-up-down ratio and the diquark-to-quark ratio.
The Bayesian analysis begins with a starting set of design points within the parameter space given by the prior ranges for each parameter. Our prior distribution is assumed flat within this space, and we use a Latin hypercube to generate the points, at which JETSCAPE is then run. A Gaussian process emulator is utilized to interpolate the observables between the design points in parameter space. The emulated observables are then compared to data, and a Markov chain Monte Carlo determines new sets of points that improve the description. This process is repeated until convergence, giving us the posterior distribution. The posterior distributions for our set of parameters are shown in Fig. 1 together with correlations between pairs of parameters. The observables for the posterior distribution are shown in Fig. 2.
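The workflow just described can be condensed into a short sketch; the simulator, the two stand-in parameters, and the pseudo-data below are placeholders, not the actual JETSCAPE setup or analysis tooling:

```python
# Latin-hypercube design -> Gaussian-process emulator -> MCMC posterior.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
lo, hi = np.array([1.0, 0.1]), np.array([4.0, 1.0])   # flat prior box

# 1) design points from a Latin hypercube over the prior
design = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(40), lo, hi)

def simulator(x):          # stand-in for a JETSCAPE run at a design point
    return np.sin(x[0]) + x[1] ** 2

y = np.array([simulator(x) for x in design])

# 2) emulate the observable between the design points
gp = GaussianProcessRegressor(normalize_y=True).fit(design, y)

# 3) random-walk Metropolis on the emulated log-likelihood
data, sigma = 1.2, 0.1     # pseudo-measurement and its uncertainty
def loglike(x):
    if np.any(x < lo) or np.any(x > hi):
        return -np.inf     # flat prior: zero density outside the box
    return -0.5 * ((gp.predict(x[None, :])[0] - data) / sigma) ** 2

chain = []
x = 0.5 * (lo + hi)
lp = loglike(x)
for _ in range(5000):
    prop = x + 0.05 * (hi - lo) * rng.standard_normal(2)
    lp_prop = loglike(prop)
    if np.log(rng.random()) < lp_prop - lp:
        x, lp = prop, lp_prop
    chain.append(x.copy())
print(np.mean(chain, axis=0))   # posterior means of the two toy parameters
```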
The most likely values we find are \(Q_{0}=2.29\) GeV, a virtuality factor \(f=0.478\) and \(\Lambda_{QCD}=0.292\) GeV for MATTER. The scale factors for the pion, kaon and proton widths are …, respectively. The strange to up-down ratio is 0.206, and the diquark to quark ratio is 0.114. Note that this preliminary result is for a limited scope of observables. In particular, without identified kaon and proton spectra included, the posterior distributions of the last four parameters are rather broad, as expected. We intend to include identified hadrons as well as hadron spectra at lower \(p_{T}\), for both energies, in the future. We are also building the appropriate event-generation infrastructure for \(e^{+}+e^{-}\) collisions and intend to include those observables in the tune as well.
Figure 1: Posterior parameter distributions with highlighted maximum values. Most parameter distributions exhibit solid peaks away from the boundaries of their parameter ranges.
## 3 Medium Effects
In this section we use a simplified event pipeline. Since we are not interested in the entire event, only how the medium affects hadronization, we only examine a single jet in a quark gluon plasma medium. This is done by firing a single parton in the \(x\)-direction through a medium with a set length and temperature (a "brick"), showering the parton with MATTER and LBT, and then hadronizing the shower with Hybrid Hadronization. Flow is emulated by adding a set velocity to the thermal partons at the hadronization stage. In the following, longitudinal is defined as in the direction of the jet (\(x\)) and transverse is perpendicular to the jet.
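Concretely, the flow emulation amounts to a Lorentz boost of each thermal parton's four-momentum by the prescribed flow velocity; a minimal sketch (units with \(c=1\), momenta in GeV; an illustration, not the actual Hybrid Hadronization code):

```python
# Boost a four-momentum p4 = (E, px, py, pz) by a flow velocity v.
import numpy as np

def boost(p4, v):
    """Standard Lorentz boost adding velocity v to the particle."""
    v = np.asarray(v, dtype=float)
    v2 = v @ v
    if v2 == 0.0:
        return np.asarray(p4, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - v2)
    E, p = p4[0], np.asarray(p4[1:], dtype=float)
    vp = v @ p
    E_new = gamma * (E + vp)
    p_new = p + ((gamma - 1.0) * vp / v2 + gamma * E) * v
    return np.concatenate(([E_new], p_new))

# thermal parton at rest (E = 0.3 GeV) given a transverse (y) flow of 0.5c
print(boost(np.array([0.3, 0.0, 0.0, 0.0]), [0.0, 0.5, 0.0]))
```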
We first examine the effects of the presence of thermal partons during hadronization. We plot the proton-to-pion ratio as a function of hadron momentum \(p_{x}\) for a variety of different brick lengths from 0 (vacuum) to 8 fm. A large proton-to-pion ratio is known as a signature of quark recombination in hadronization. As shown in the left panel of Fig. 3, the proton-to-pion ratio around 1 GeV indeed increases in magnitude as the brick increases in size. This is consistent with the idea that recombination with thermal partons increases in a larger medium.

Figure 2: Generated data from the posterior compared to observables. We see a very strong agreement with the jet data and an agreement with hadron data that improves as \(p_{T}\) increases. We compare to CMS data [10, 11] for LHC energy and PHENIX data [12] for RHIC energy.

Figure 3: Proton to pion ratios for a jet in bricks of various lengths without flow (left panel) versus with flow (right panel). We see an enhancement in proton production in larger bricks. Flow in the direction of the jet pushes the proton peak to higher \(p_{T}\).
When a homogeneous longitudinal flow in the jet direction is added, the peaks in the proton-to-pion ratios grow and are shifted to 2-2.5 GeV. This can be understood as flowing thermal partons adding more momentum to the hadrons they recombine into. Similar effects can be seen in the \(\Lambda\)-to-\(K\) ratio, not shown here.
We then examine the effects of transverse flow of the brick partons. We make a scatter plot of the momentum components perpendicular to the jet (\(p_{y}\) and \(p_{z}\)) for each hadron to showcase the effects of the flow. Soft hadrons, defined as 2 GeV \(<p_{x}<\) 10 GeV, demonstrate a significant deflection in the direction of the flow, as shown in Fig. 4. On the other hand, leading hadrons demonstrate no noticeable deflection. This is consistent with expectation: leading partons are distant from thermal partons in phase space, and therefore have a negligible chance to recombine with them.
## 4 Conclusion
We have found a tune for \(p+p\) collisions using MATTER and Hybrid Hadronization which gives acceptable results for jet and hadron spectra at LHC and RHIC energies, as long as the hadron transverse momenta are not too small. Moving forward we will be incorporating soft hadron spectra, more identified hadron spectra, additional collision energies, and \(e^{+}+e^{-}\) collisions. This will allow us to build a more comprehensive baseline.
Our study of medium effects in jet hadronization is similarly promising. Using a brick with flow we reproduce all the expected effects, including baryon enhancement increasing with medium size, a shift of the baryon/meson peak in momentum with longitudinal flow, and sideways deflection of soft hadrons with transverse flow. We intend to progress on to heavy flavor jets and simulations of jets in full \(A\) + \(A\) collisions.
Figure 4: Transverse momentum scatter plot of soft hadrons with no flow (left panel) versus transverse flow (right panel) in the brick. Soft hadrons are noticeably deflected in the direction of the flow due to partons from the medium recombining with shower partons.
This work was supported by the U.S. National Science Foundation under awards 1812431 and 2111568, and under award 2004571 through a subcontract with Wayne State University.
|
2309.12092 | Jung-type Inequalities and Blaschke-Santaló Diagrams for Different
Diameter Variants | We study geometric inequalities for the circumradius and diameter with
respect to general gauges, partly also involving the inradius and the Minkowski
asymmetry. There are a number of options for defining the diameter of a convex
body that fall apart when we consider non-symmetric gauges. These definitions
correspond to different symmetrizations of the gauge, i.e. means of the gauge
$C$ and its origin reflection $-C$. | René Brandenberg, Mia Runge | 2023-09-21T14:06:09Z | http://arxiv.org/abs/2309.12092v2 | # Jung-type inequalities and Blaschke-Santalo diagrams for different diameter variants
###### Abstract.
We study geometric inequalities for the circumradius and diameter with respect to general gauges, partly also involving the inradius and the Minkowski asymmetry. There are a number of options for defining the diameter of a convex body that fall apart when we consider non-symmetric gauges. These definitions correspond to different symmetrizations of the gauge, i.e. means of the gauge \(C\) and its origin reflection \(-C\).
Key words and phrases: Diameter, Blaschke-Santaló diagram, Symmetrizations, Jung-type inequalities, Completion, Asymmetric gauges
## 1. Introduction

The study of the geometry of the \((r,R,D)\)-diagram of the […]

[…] to consider in general only "somehow" centered gauges and the exact definition of the "somehow" should fit the theory.
The _Minkowski asymmetry_ of \(C\in\mathcal{C}^{n}\) is defined as \(s(C):=R(C,-C)\) and we say that \(C\) is _Minkowski-centered_ if \(C\subset^{\mathrm{opt}}-s(C)C\). For \(C\in\bar{\mathcal{C}}^{n}\), the range of the Minkowski asymmetry is \([1,n]\), where \(s(C)=1\) if and only if \(C\) is symmetric and \(s(C)=n\) if and only if \(C\) is an \(n\)-dimensional simplex [26]. The symmetrizations can be ordered as follows, and the first and third containments are always optimal [6, 20].
**Proposition 1.2**.: _Let \(C\in\mathcal{C}^{n}_{0}\). Then,_
\[C_{\mathrm{MIN}}\subset^{\mathrm{opt}}C_{\mathrm{HM}}\subset C_{\mathrm{AM}} \subset^{\mathrm{opt}}C_{\mathrm{MAX}}.\]
**Remark**.: From now on, unless otherwise specified, we always assume that \(K\in\mathcal{C}^{n}\), that the gauge \(C\in\mathcal{C}^{n}_{0}\) is a Minkowski-centered fulldimensional convex body and \(s\in\mathds{R}^{n}\setminus\{0\}\).
## 2. Diameter Definitions
There are several ways to interpret the diameter in the symmetric case and we extend these ideas to the non-symmetric case. We will see that these correspond to different symmetrizations of the gauge. First, one can try to define the diameter of \(K\in\mathcal{C}^{n}\) by finding a "maximal" segment. But how should this "maximality" be defined? We could measure the distance between two points \(x,y\in K\) using the gauge function \(\|x-y\|_{C}\) and define the diameter as the maximal such distance \(\max_{x,y\in K}\|x-y\|_{C}\). Or, we define it as the maximal circumradius of segments in \(K\): \(\max_{x,y\in K}2R([x,y],C)\). However, the diameter could also be defined as the maximal distance between two parallel supporting hyperplanes of \(K\).
\[\max_{s\in\mathrm{bd}(C^{\circ})}h_{K}(s)+h_{K}(-s)=\max_{s\in\mathds{R}^{n} \setminus\{0\}}\frac{h_{K}(s)+h_{K}(-s)}{h_{C}(s)}.\]
As already mentioned, if the gauge \(C\) is symmetric, all these definitions lead to the same diameter.
\[\max_{x,y\in K}\|x-y\|_{C}=\max_{x,y\in K}2R([x,y],C)=\max_{s\in\mathds{R}^{n }\setminus\{0\}}\frac{h_{K}(s)+h_{K}(-s)}{h_{C}(s)}.\]
The most common diameter corresponds to the segment-radius definition and is therefore equal to two times the first core-radius of the set \(K\)[11]. We call it the _arithmetic diameter_ (or standard diameter).
**Definition 2.1**.:
1. The \(s\)-length \(l_{s,\mathrm{AM}}\) is defined as \[l_{s,\mathrm{AM}}(K,C):=\max_{x-y\in(K-K)\cap\mathrm{lin}(s)}2R([x,y],C).\]
2. The \(s\)-breadth \(b_{s,\mathrm{AM}}\) is defined as \[b_{s,\mathrm{AM}}(K,C):=2\cdot\frac{h_{K}(s)+h_{K}(-s)}{h_{C}(s)+h_{C}(-s)}.\]
3. The arithmetic diameter is defined as the maximal \(s\)-length: \[D_{\mathrm{AM}}(K,C):=\max_{s\in\mathrm{R}^{n}\setminus\{0\}}l_{s,\mathrm{AM}}(K,C).\]

Figure 1. The equilateral triangle (black) and its symmetrizations: minimum (blue), harmonic mean (purple), arithmetic mean (red), maximum (orange) (cf. [6]).
In [12] the following properties of \(D_{\mathrm{AM}}\), which are well known for symmetric gauges (cf. [25]), are proven.
**Proposition 2.2**.:
1. \[D_{\mathrm{AM}}(K,C)=\max_{s\in\mathrm{R}^{n}\setminus\{0\}}b_{s,\mathrm{AM}}( K,C)\]
2. \[l_{s,\mathrm{AM}}(K,C)=l_{s,\mathrm{AM}}\left(\frac{K-K}{2},C\right)=l_{s, \mathrm{AM}}\left(K,\frac{C-C}{2}\right)=l_{s,\mathrm{AM}}\left(\frac{K-K}{2},\frac{C-C}{2}\right)\]
3. \[b_{s,\mathrm{AM}}(K,C)=b_{s,\mathrm{AM}}\left(\frac{K-K}{2},C\right)=b_{s, \mathrm{AM}}\left(K,\frac{C-C}{2}\right)=b_{s,\mathrm{AM}}\left(\frac{K-K}{2},\frac{C-C}{2}\right)\]
4. \[D_{\mathrm{AM}}(K,C) =D_{\mathrm{AM}}\left(\frac{K-K}{2},C\right)=D_{\mathrm{AM}} \left(K,\frac{C-C}{2}\right)\] \[=D_{\mathrm{AM}}\left(\frac{K-K}{2},\frac{C-C}{2}\right)=2R\left( \frac{K-K}{2},\frac{C-C}{2}\right)\]
5. \[\min_{s\in\mathrm{R}^{n}\setminus\{0\}}l_{s,\mathrm{AM}}(K,C)=\min_{s\in \mathrm{R}^{n}\setminus\{0\}}b_{s,\mathrm{AM}}(K,C)=2r\left(\frac{K-K}{2}, \frac{C-C}{2}\right)\]
The fact that we can replace \(C\) by its symmetrization \(\frac{C-C}{2}\) is the reason why this diameter is called \(D_{\mathrm{AM}}\).
Arguably the most intuitive way to measure a diameter is using the gauge function \(\|x-y\|_{C}\). This diameter has been studied by Leichtweiss in [35] for non-symmetric gauges.
**Definition 2.3**.:
1. The asymmetric \(s\)_-length_\(l^{\prime}_{s,\mathrm{MIN}}\) is defined as \[l^{\prime}_{s,\mathrm{MIN}}(K,C):=\max_{x-y\in(K-K)\cap\mathrm{pos}(s)}\|x-y \|_{C}.\]
2. The symmetric \(s\)_-length_\(l_{s,\mathrm{MIN}}\) is defined as \[l_{s,\mathrm{MIN}}(K,C):=\max_{x-y\in(K-K)\cap\mathrm{lin}(s)}\|x-y\|_{C}.\]
3. The _minimum diameter_ is defined as the maximal symmetric \(s\)-length: \[D_{\mathrm{MIN}}(K,C):=\max_{s\in\mathrm{R}^{n}\setminus\{0\}}l_{s,\mathrm{ MIN}}(K,C).\]
As we maximize over the \(s\)-lengths to obtain the diameter, both definitions of the \(s\)-length lead to the same diameter: \(D_{\mathrm{MIN}}(K,C)=\max_{s\in\mathrm{R}^{n}\setminus\{0\}}l_{s,\mathrm{ MIN}}(K,C)=\max_{s\in\mathrm{R}^{n}\setminus\{0\}}l^{\prime}_{s,\mathrm{MIN}}(K,C)\).
**Lemma 2.4**.:
1. \[l_{s,\mathrm{MIN}}(K,C) =\max_{x-y\in(K-K)\cap\mathrm{pos}(s)}\max(\|x-y\|_{C},\|x-y\|_{- C})\] \[=l_{s,\mathrm{MIN}}(K,C\cap(-C))=l_{s,\mathrm{AM}}(K,C\cap(-C))\] \[=l_{s,\mathrm{MIN}}\left(\frac{K-K}{2},C\cap(-C)\right)=l_{s, \mathrm{MIN}}\left(\frac{K-K}{2},C\right)\]
_ii)_
\[D_{\mathrm{MIN}}(K,C) =D_{\mathrm{MIN}}(K,C\cap(-C))=D_{\mathrm{MIN}}\left(\frac{K-K}{2},C \cap(-C)\right)\] \[=D_{\mathrm{MIN}}\left(\frac{K-K}{2},C\right)=D_{\mathrm{AM}}(K,C \cap(-C))\]
Proof.: Since \(\|v\|_{C\cap-C}=\max(\|v\|_{C},\|-v\|_{C})\) for every \(v\in\mathds{R}^{n}\), we obtain
\[\max_{x-y\in(K-K)\cap\mathrm{pos}(s)}\max(\|x-y\|_{C},\|y-x\|_{C}) =\max_{x-y\in(K-K)\cap\mathrm{lin}(s)}\|x-y\|_{C}\] \[=\max_{x-y\in(K-K)\cap\mathrm{lin}(s)}\|x-y\|_{C\cap-C} =\max_{x-y\in(K-K)\cap\mathrm{lin}(s)}2R([x,y],C\cap-C).\]
This proves i) and ii) follows obviously.
The next two diameters were first introduced in [7, Appendix]. For the first, instead of taking the maximum of \(\|x-y\|_{C}\) and \(\|x-y\|_{-C}\) one takes the arithmetic mean of these two values in the definition of the \(s\)-length. This diameter definition corresponds to the harmonic mean of the gauge.
**Definition 2.5**.:
1. The _\(s\)-length_\(l_{s,\mathrm{HM}}\) is defined as \[l_{s,\mathrm{HM}}(K,C):=\max_{x-y\in(K-K)\cap\mathrm{lin}(s)}\frac{1}{2}(\|x- y\|_{C}+\|x-y\|_{-C}).\]
2. The _harmonic diameter_ is defined as the maximal \(s\)-length: \[D_{\mathrm{HM}}(K,C):=\max_{s\in\mathds{R}^{n}\setminus\{0\}}l_{s,\mathrm{HM }}(K,C).\]
**Lemma 2.6**.:
1. \[l_{s,\mathrm{HM}}(K,C) =\max_{x-y\in(K-K)\cap\mathrm{lin}(s)}\|x-y\|_{\left(\frac{C^{ \circ}-C^{\circ}}{2}\right)^{\circ}}\] \[=l_{s,\mathrm{HM}}\left(K,\left(\frac{C^{\circ}-C^{\circ}}{2} \right)^{\circ}\right)=l_{s,\mathrm{AM}}\left(K,\left(\frac{C^{\circ}-C^{ \circ}}{2}\right)^{\circ}\right)\] \[=l_{s,\mathrm{HM}}\left(\frac{K-K}{2},\left(\frac{C^{\circ}-C^{ \circ}}{2}\right)^{\circ}\right)=l_{s,\mathrm{HM}}\left(\frac{K-K}{2},C\right)\]
2. \[D_{\mathrm{HM}}(K,C) =D_{\mathrm{HM}}\left(K,\left(\frac{C^{\circ}-C^{\circ}}{2} \right)^{\circ}\right)=D_{\mathrm{HM}}\left(\frac{K-K}{2},\left(\frac{C^{ \circ}-C^{\circ}}{2}\right)^{\circ}\right)\] \[=D_{\mathrm{HM}}\left(\frac{K-K}{2},C\right)=D_{\mathrm{AM}} \left(K,\left(\frac{C^{\circ}-C^{\circ}}{2}\right)^{\circ}\right)\]
Proof.: We can use the fact that \(h_{C^{\circ}}(a)=\|a\|_{C}\) for any \(a\in\mathds{R}^{n}\) and \(C\in\mathcal{C}_{0}^{n}\) to obtain
\[\|a\|_{\left(\frac{C^{\circ}-C^{\circ}}{2}\right)^{\circ}} =h_{\frac{C^{\circ}-C^{\circ}}{2}}(a)=\frac{1}{2}(h_{C^{\circ}}(a )+h_{-C^{\circ}}(a))\] \[=\frac{1}{2}(\|a\|_{C}+\|a\|_{-C})=\frac{1}{2}(\|a\|_{C}+\|-a\|_{ C}).\]
Since \(C_{\mathrm{HM}}\) is symmetric, the first part follows. The second part follows again directly from the first.
Instead of dividing by the mean \(\frac{h_{C}(s)+h_{C}(-s)}{2}\) in the definition of the \(s\)-breadth, one may also divide by the maximum of \(h_{C}(s)\) and \(h_{C}(-s)\). With this idea, we obtain our last diameter, the maximum diameter.
**Definition 2.7**.:
1. The _\(s\)-breadth_\(b_{s,\mathrm{MAX}}\) is defined as \[b_{s,\mathrm{MAX}}(K,C):=\frac{h_{K}(s)+h_{K}(-s)}{\max(h_{C}(s),h_{C}(-s))}.\]
2. The _maximum diameter_ is defined as the maximal \(s\)-breadth: \[D_{\mathrm{MAX}}(K,C):=\max_{s\in\mathrm{R}^{n}\setminus\{0\}}b_{s,\mathrm{MAX}} (K,C).\]
**Lemma 2.8**.:
1. \[b_{s,\mathrm{MAX}}(K,C) =\frac{h_{K}(s)+h_{K}(-s)}{h_{\mathrm{conv}(C\cup(-C))}(s)}\] \[=b_{s,\mathrm{MAX}}(K,\mathrm{conv}(C\cup(-C)))=b_{s,\mathrm{AM} }(K,\mathrm{conv}(C\cup(-C)))\] \[=b_{s,\mathrm{MAX}}\left(\frac{K-K}{2},\mathrm{conv}(C\cup(-C)) \right)=b_{s,\mathrm{MAX}}\left(\frac{K-K}{2},C\right)\]
2. \[D_{\mathrm{MAX}}(K,C) =D_{\mathrm{MAX}}\left(K,\mathrm{conv}(C\cup(-C))\right)=D_{ \mathrm{MAX}}\left(\frac{K-K}{2},\mathrm{conv}(C\cup(-C))\right)\] \[=D_{\mathrm{MAX}}\left(\frac{K-K}{2},C\right)=D_{\mathrm{AM}}(K, \mathrm{conv}(C\cup(-C))\]
Proof.: The first part follows directly from the fact that \(\max(h_{C}(s),h_{C}(-s))=h_{C_{\mathrm{MAX}}}(s)\) and the second part again directly from the first.
Since all definitions are equivalent for \(0\)-symmetric gauges, we can always use that \(D_{\mathrm{M}}(K,C)=D_{\mathrm{M}}(K,C_{\mathrm{M}})=D_{\mathrm{AM}}(K,C_{\mathrm{M}})\) for \(\mathrm{M}\in\{\mathrm{MIN},\mathrm{HM},\mathrm{AM},\mathrm{MAX}\}\), together with known results about the arithmetic diameter. If we consider a symmetric gauge, we omit the index and denote the diameter by \(D(K,C)\). Moreover, for all diameter definitions we say that \(x,y\in K\) is a _diametral pair_ if \(D_{\mathrm{M}}(K,C)=D_{\mathrm{M}}([x,y],C)\).
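For polygonal \(K\) and \(C\), these identities make all four diameters directly computable: \(D_{\mathrm{AM}}\) and \(D_{\mathrm{MAX}}\) from the \(s\)-breadths via support functions, and \(D_{\mathrm{MIN}}\) and \(D_{\mathrm{HM}}\) from gauge values on vertex pairs. The following sketch (a direction-grid approximation, not part of the paper) evaluates them for \(K=C\) a Minkowski-centered equilateral triangle:

```python
# Approximate D_MAX, D_AM, D_HM, D_MIN for polygonal K and gauge C.
import numpy as np

def directions(n=3600):
    a = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.stack([np.cos(a), np.sin(a)], axis=1)

def support(V, S):
    """h_V(s) = max over vertices v of <v, s>, for each row s of S."""
    return (S @ V.T).max(axis=1)

def gauge(d, S, hC):
    """||d||_C = sup_s <s, d> / h_C(s) for 0 in int(C); grid approximation."""
    return np.max((S @ d) / hC)

S = directions()
T = np.array([[0.0, 1.0], [-np.sqrt(3) / 2, -0.5], [np.sqrt(3) / 2, -0.5]])
K = C = T
hK, hKm = support(K, S), support(K, -S)   # h_K(s), h_K(-s)
hC, hCm = support(C, S), support(C, -S)   # h_C(s), h_C(-s)

D_AM = np.max(2.0 * (hK + hKm) / (hC + hCm))        # s-breadth of Def. 2.1
D_MAX = np.max((hK + hKm) / np.maximum(hC, hCm))    # s-breadth of Def. 2.7

pairs = [(gauge(x - y, S, hC), gauge(y - x, S, hC))
         for i, x in enumerate(K) for y in K[i + 1:]]
D_MIN = max(max(a, b) for a, b in pairs)            # Lemma 2.4 i)
D_HM = max(0.5 * (a + b) for a, b in pairs)         # Definition 2.5 i)

# chain D_MAX <= D_AM <= D_HM <= D_MIN; here approximately (2, 2, 3, 3)
print(D_MAX, D_AM, D_HM, D_MIN)
```

In accordance with Lemma 3.8 ii) below, for \(K=C\) each value equals \(2\delta_{\mathrm{M}}\) for the respective symmetrization.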
**Remark 2.9**.: Using the different definitons of the \(s\)-lengths and -breadths, width-definitions can be done analogously to the diameters. For \(g\in\{l,b\}\) such that \(g_{s,\mathrm{M}}\) is defined, the _width_ is defined as
\[w_{\mathrm{M}}(K,C):=\min_{s\in\mathrm{R}^{n}\setminus\{0\}}g_{s,\mathrm{M}}( K,C).\]
In the standard case \(\mathrm{M}=\mathrm{AM}\) it does not make a difference whether we minimize over the \(s\)-length or the \(s\)-breadth. By Lemmas 2.4, 2.6, and 2.8 we can symmetrize the arguments of the width as well.
## 3. Properties of the Diameters and Blaschke-Santalo Diagrams
In the following, we study properties of the diameters and how concepts such as completeness translate when using other definitions. Furthermore, we compare the diameter to other functionals such as the circumradius and inradius and introduce some theory on Blaschke-Santalo diagrams.
Let \(\mathrm{M}\in\{\mathrm{MIN},\mathrm{HM},\mathrm{AM},\mathrm{MAX}\}\) be one of the symmetrizations.
**Remark 3.1**.: The inradius, circumradius and diameter are increasing and homogeneous of degree \(+1\) in the first argument and decreasing and homogeneous of degree \(-1\) in the second argument.
**Lemma 3.2** (Linearity of \(r,R,D_{\mathrm{M}}\)).: _Let \(K_{1},K_{2}\in\mathcal{C}^{n}\) and \(\lambda\in[0,1]\)._
1. _If_ \(r_{1}C\subset^{\mathrm{opt}}K_{1}\) _and_ \(r_{2}C\subset^{\mathrm{opt}}K_{2}\)_, then_ \[r(\lambda K_{1}+(1-\lambda)K_{2},C)\geq\lambda r_{1}+(1-\lambda)r_{2}\] _and equality holds if we can choose the same outer normals of_ \(K_{1}\) _and_ \(K_{2}\) _in Proposition_ 1.1_._
2. _If_ \(K_{1}\subset^{\mathrm{opt}}R_{1}C\) _and_ \(K_{2}\subset^{\mathrm{opt}}R_{2}C\)_, then_ \[R(\lambda K_{1}+(1-\lambda)K_{2},C)\leq\lambda R_{1}+(1-\lambda)R_{2}.\] _If we can choose the same (up to dilatation) touching points_ \(p^{i}\) _in the boundary of_ \(C\) _as in Proposition_ 1.1_, equality is obtained._
* _If_ \(D_{1}=D_{\rm M}(K_{1},C)\) _and_ \(D_{2}=D_{\rm M}(K_{2},C)\)_, then_ \[D_{\rm M}(\lambda K_{1}+(1-\lambda)K_{2},C)\leq\lambda D_{1}+(1-\lambda)D_{2}.\] _If the diameters are defined by the same_ \(s\)_-breadth or_ \(s\)_-length, we have equality._
Proof.: i) Obviously, \((\lambda r_{1}+(1-\lambda)r_{2})C\subset\lambda K_{1}+(1-\lambda)K_{2}\). Let \(r_{1}p_{i}\), \(r_{2}p_{i}\), with \(p_{i}\in{\rm bd}(C)\), be the touching points and \(a_{i}\) the corresponding outer normals as in Proposition 1.1. Then, \(h_{\lambda K_{1}+(1-\lambda)K_{2}}(a_{i})=\lambda r_{1}p_{i}^{T}a_{i}+(1-\lambda)r_{2}p_{i}^{T}a_{i}=(\lambda r_{1}+(1-\lambda)r_{2})p_{i}^{T}a_{i}=h_{(\lambda r_{1}+(1-\lambda)r_{2})C}(a_{i})\). Thus, \(p_{i}\in{\rm bd}((\lambda r_{1}+(1-\lambda)r_{2})C)\cap{\rm bd}(\lambda K_{1}+(1-\lambda)K_{2})\) and \(0\) is in the convex hull of the \(a_{i}\), which shows that we have optimal containment. ii) The statement for the circumradius follows analogously. Here, the outer body is \(C\) in both cases, so we automatically have the same supporting hyperplanes. iii) We know \(D_{\rm M}(K,C)=2R(K_{\rm AM},C_{\rm M})\). Thus, the inequality follows from part \(ii)\). If the diameters are attained by the same \(s\)-length, we have the same touching points in the containments \(\frac{K_{1}-K_{1}}{2}\subset^{\rm opt}\frac{D_{\rm M}(K_{1},C)}{2}C_{\rm M}\) and \(\frac{K_{2}-K_{2}}{2}\subset^{\rm opt}\frac{D_{\rm M}(K_{2},C)}{2}C_{\rm M}\) and the equality follows from part \(ii)\). If the diameter is attained by the same \(s\)-breadth, we have \[\lambda D_{\rm M}(K_{1},C)+(1-\lambda)D_{\rm M}(K_{2},C) =\lambda b_{s,\rm M}(K_{1},C)+(1-\lambda)b_{s,\rm M}(K_{2},C)\] \[=\lambda\frac{h_{K_{1}-K_{1}}(s)}{h_{C_{\rm M}}(s)}+(1-\lambda)\frac{h_{K_{2}-K_{2}}(s)}{h_{C_{\rm M}}(s)}\] \[=\frac{h_{(\lambda K_{1}+(1-\lambda)K_{2})-(\lambda K_{1}+(1-\lambda)K_{2})}(s)}{h_{C_{\rm M}}(s)}\] \[=b_{s,\rm M}(\lambda K_{1}+(1-\lambda)K_{2},C)\] \[\leq D_{\rm M}(\lambda K_{1}+(1-\lambda)K_{2},C)\] and equality follows.
**Lemma 3.3** (Invariance under transformations).: _Let \(A:{\mathds{R}}^{n}\to{\mathds{R}}^{n}\) be a non-singular affine transformation and \(L_{A}\) its corresponding linear transformation. Then_
\[D_{\rm M}(A(K),L_{A}(C))=D_{\rm M}(K,C)\] \[R(A(K),A(C))=R(K,C)\] \[r(A(K),A(C))=r(K,C).\]
Proof.: It follows from Proposition 1.1 that the in- and circumradius are invariant under affine transformations. All symmetrizations interchange with linear transformations: \((L_{A}(C))_{\rm M}=L_{A}(C_{\rm M})\) (see [6], Lemma 4). We must confine the transformation in the second argument of the diameter to be linear, since the symmetrizations (besides the arithmetic one) are not invariant under translations. Because we can interpret the diameter as a circumradius via \(D_{\rm M}(K,C)=2R(\frac{K-K}{2},C_{\rm M})\), the invariance of the diameter follows. The position of \(K\) does not change the diameter and therefore we can apply a corresponding affine transformation to \(K\).
To analyse properties such as constant width and completeness we extend their definitions to the different diameters.
**Definition 3.4**.: Let \(K,K^{*}\in\mathcal{C}^{n}\), \(K^{*}\supset K\), \(C\in\mathcal{C}^{n}_{0}\).
* \(K\) is of _constant width_ if \(w_{\rm M}(K,C)=D_{\rm M}(K,C)\).
* \(K\) is _complete_ if \(D_{\rm M}(K^{\prime},C)>D_{\rm M}(K,C)\) for all \(K^{\prime}\in\mathcal{C}^{n}\) such that \(K^{\prime}\supsetneq K\).
* \(K^{*}\) is a _completion_ of \(K\) if it is complete and \(D_{\rm M}(K^{*},C)=D_{\rm M}(K,C)\).
**Remark 3.5**.: Since all diameters can be expressed by the arithmetic diameter \(w.\,r.\,t.\)\(C_{\rm M}\), we know the following [18]:
* \(K\) has constant width if and only if \(K_{\rm AM}=\lambda C_{\rm M}\) for some \(\lambda\in{\mathds{R}}\).
2. If \(K\) is of constant width, it is complete.
3. In the planar case, \(K\) is complete if and only if it has constant width.
**Lemma 3.6**.: _Let \(K\in\bar{\mathcal{C}}^{n}\). If \(K^{*}\) is a completion of \(K\), then \(K\subset^{\mathrm{opt}}K^{*}\)._
Proof.: By definition \(K\subset K^{*}\) and \(D_{\mathrm{M}}(K^{*},C)=D_{\mathrm{M}}(K,C)\). Now assume there exist \(c\in\mathds{R}^{n}\) and \(0\leq\rho<1\) such that \(K\subset^{\mathrm{opt}}c+\rho K^{*}\). This implies \(D_{\mathrm{M}}(K,C)\leq D_{\mathrm{M}}(c+\rho K^{*},C)=\rho D_{\mathrm{M}}(K^{*},C)<D_{\mathrm{M}}(K^{*},C)\), a contradiction.
**Remark 3.7**.: Let us observe two things:
1. Whenever the gauge is symmetric, the only (up to translation and dilatation) complete and symmetric set is the gauge itself. Thus, when considering \(D_{\mathrm{M}}\) with respect to a possibly non-symmetric gauge \(C\), the only complete and symmetric set is always \(C_{\mathrm{M}}\).
2. In the case that \(\mathrm{M}=\mathrm{AM}\), the gauge itself is always complete. This is not always the case with other diameter definitions. Therefore, in the following sections, we will characterize when the gauge is complete and what the completion looks like.
In the following the containment factors between \(C_{\mathrm{AM}}\) and \(C_{\mathrm{M}}\) will prove helpful to analyse the diameter \(D_{\mathrm{M}}\).
**Notation**.: _By \(\delta_{\mathrm{M}}:=\delta_{\mathrm{M}}(C)\) and \(\rho_{\mathrm{M}}:=\rho_{\mathrm{M}}(C)\) we denote the dilatation factors needed, such that_
\[\rho_{\mathrm{M}}C_{\mathrm{M}}\subset^{\mathrm{opt}}C_{\mathrm{AM}}\subset^ {\mathrm{opt}}\delta_{\mathrm{M}}C_{\mathrm{M}}.\]
_These factors always exist since all symmetrizations are 0-symmetric and fulldimensional. For better readability, we omit the argument \(C\), the gauge body, whenever it is clear from the context._
_Segments \(L\) optimally contained in \(C\) with \(D_{\mathrm{M}}(L,C)=2\rho_{\mathrm{M}}\) are denoted by \(L_{w}\) and, in case of \(D_{\mathrm{M}}(L,C)=2\delta_{\mathrm{M}}\), by \(L_{D}\)._
**Lemma 3.8**.:
1. _For any segment_ \(L\subset^{\mathrm{opt}}C\)_:_ \[2\rho_{\mathrm{M}}\leq D_{\mathrm{M}}(L,C)\leq 2\delta_{\mathrm{M}}\] _with equality on the right side iff_ \(L\) _is diametral and equality on the left iff_ \(L\) _is a width chord of_ \(C\)_. All values in between are attained._
2. _The diameter of_ \(C\) _with respect to itself is_ \(D_{\mathrm{M}}(C,C)=2\delta_{\mathrm{M}}\)_, and the width of_ \(C\) _with respect to itself is_ \(w_{\mathrm{M}}(C,C)=2\rho_{\mathrm{M}}\)_._
Proof.: We begin by showing part \(ii)\): By the definition of \(\delta_{M}\) as well as the diameter properties collected in Proposition 2.2 and Lemmas 2.4, 2.6, and 2.8, we have
\[D_{\mathrm{M}}(C,C)=D_{\mathrm{AM}}(C_{\mathrm{AM}},C_{\mathrm{M}})=2R(C_{ \mathrm{AM}},C_{\mathrm{M}})=2\delta_{\mathrm{M}}\]
as well as
\[w_{\mathrm{M}}(C,C)=w_{\mathrm{AM}}(C_{\mathrm{AM}},C_{\mathrm{M}})=2r(C_{ \mathrm{AM}},C_{\mathrm{M}})=2\rho_{\mathrm{M}}.\]
Now, for part \(i)\), let \(L\subset C\) be a segment. It follows that \(D_{\mathrm{M}}(L,C)\leq 2\delta_{\mathrm{M}}\). By definition, \(L\) is the convex hull of a diametral pair if and only if \(D_{\mathrm{M}}(L,C)=D_{\mathrm{M}}(C,C)=2\delta_{\mathrm{M}}\).
Segments with circumradius 1 have diameter 2 when considering the arithmetic mean. Hence, \(2=D_{\mathrm{AM}}(L,C)\leq\frac{1}{\rho_{\mathrm{M}}}D_{\mathrm{M}}(L,C)\). If \(L\) provides us with the minimal \(s\)-length or -breadth, we obtain by part \(ii)\): \(D_{\mathrm{M}}(L,C)=\min_{s\in\mathds{R}^{n}\setminus\{0\}}l_{s,\mathrm{M}}(C, C_{\mathrm{M}})=2\rho_{\mathrm{M}}\) or the analogue for the \(s\)-breadth. All values in between are attained since the \(s\)-length and \(s\)-breadth are continuous as a function of \(s\) on \(\mathds{R}^{n}\setminus\{0\}\).
**Definition 3.9**.: The set
\[K^{\mathrm{sup}}=\bigcap_{x\in K}x+D_{\mathrm{M}}(K,C)C_{\mathrm{M}}\]
is called the _supercompletion_ of \(K\).
Let us remark that Moreno and Schneider [38] call \(K^{\sup}\) the _wide spherical hull_. It is shown in [18] for arbitrary Minkowski spaces (i.e. for \(0\)-symmetric \(C\)) that a set \(K\) is complete w. r. t. a symmetric gauge \(C\) if and only if \(K^{\sup}=K\) and in [37] that \(K^{\sup}\) is the union of all completions of \(K\). All the above were previously only defined for symmetric \(C\), but it is obvious that these properties stay true in the general case.
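Numerically, \(K^{\sup}\) can be approximated by intersecting finitely many translates of \(D_{\mathrm{M}}(K,C)C_{\mathrm{M}}\); since every \(x\in K\) is a convex combination of extreme points of \(K\), the extreme points already suffice. The sketch below (using the shapely library; an illustration, not part of the paper) samples boundary points of a polygonal \(K\):

```python
# Approximate the supercompletion of a polygon K w.r.t. a 0-symmetric
# polygonal C_M: intersect x + D*C_M over points x on the boundary of K.
import numpy as np
from shapely.geometry import Polygon
from shapely.affinity import translate

def supercompletion(K: Polygon, CM: Polygon, D: float, n: int = 300) -> Polygon:
    scaled = Polygon(np.asarray(CM.exterior.coords) * D)   # D*C_M about 0
    result = None
    for t in np.linspace(0.0, K.exterior.length, n, endpoint=False):
        x = K.exterior.interpolate(t)                      # boundary point
        cell = translate(scaled, xoff=x.x, yoff=x.y)       # x + D*C_M
        result = cell if result is None else result.intersection(cell)
    return result

s3 = np.sqrt(3.0)
T = Polygon([(0, 1), (-s3 / 2, -0.5), (s3 / 2, -0.5)])
T_MAX = Polygon([(0, 1), (s3 / 2, 0.5), (s3 / 2, -0.5),
                 (0, -1), (-s3 / 2, -0.5), (-s3 / 2, 0.5)])
# example: K = T with C_M = T_MAX and D = D_MAX(T, T) = 2
print(supercompletion(T, T_MAX, 2.0).area)
```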
**Definition 3.10**.:
1. A _supporting slab_ of \(K\) is the intersection of two antipodal parallel supporting halfspaces of \(K\).
2. A boundary point of \(K\) is called smooth if the supporting hyperplane of \(K\) at this point is unique.
3. A supporting slab is _regular_ if at least one of the bounding hyperplanes contains a smooth boundary point of \(K\).
4. We say that \(s\in\mathds{R}^{n}\setminus\{0\}\) defines a regular slab if there exists a supporting slab such that the defining halfspaces have outer normals \(\pm s\).
It is easy to argue that a subdimensional convex body is never complete. On the other hand, every fulldimensional convex body is the intersection of its regular slabs, and completeness can be characterized by using these slabs [38, Theorem 1].
**Proposition 3.11**.: _Let \(K\) be fulldimensional. Then the following are equivalent:_
1. \(K\) _is complete._
2. _For every outer normal_ \(s\) _defining a regular supporting slab of_ \(K\) _we have_ \(\frac{h_{K}(s)+h_{K}(-s)}{h_{C_{\mathrm{M}}}(s)}=D_{\mathrm{M}}(K,C)\)_._
**Remark 3.12**.: As mentioned after the definition of the supercompletion, \(K^{*}=K^{\sup}\) implies uniqueness for the completion \(K^{*}\) of \(K\). Thus, defining \(K_{X}:=\bigcap_{x\in X}x+D_{\mathrm{M}}(K,C)C_{\mathrm{M}}\) for some \(X\subset K\) we obviously have \(K^{*}\subset K^{\sup}\subset K_{X}\). This means that establishing properties of such subsets \(X\) in the following that imply \(K^{*}=K_{X}\) implicitly guarantees uniqueness of the completion \(K^{*}\).
**Lemma 3.13**.: _Let \(X\) be a closed subset of \(K\). Then the following are equivalent:_
1. \(K_{X}:=\bigcap_{x\in X}x+D_{\mathrm{M}}(K,C)C_{\mathrm{M}}\) _is a completion of_ \(K\)_._
2. _For every_ \(s\in\mathds{R}^{n}\setminus\{0\}\) _that defines a regular slab of_ \(C_{\mathrm{M}}\) _there exist_ \(\tilde{s}\in\{s,-s\}\) _and_ \(p\in X\) _such that_ \(p^{T}(-\tilde{s})=h_{K_{X}}(-\tilde{s})\) _and_ \(h_{K_{X}}(\tilde{s})=h_{p+D_{\mathrm{M}}(K,C)C_{\mathrm{M}}}(\tilde{s})\)_._
Proof.: Let us abbreviate \(D:=D_{\mathrm{M}}(K,C)\) for the proof. \(ii)\Rightarrow i)\): In [38] it is shown that the diameter is the supremum of the breadthes \(b_{s}(K_{X},C_{\mathrm{M}})\) where \(s\) defines a regular slab of \(C_{\mathrm{M}}\). For any such \(s\) and \(p\in X\) as defined in \(ii)\) we have
\[b_{s}(K_{X},C_{\mathrm{M}}) =\frac{h_{K_{X}}(s)+h_{K_{X}}(-s)}{h_{C_{\mathrm{M}}}(s)}=\frac{h _{p+DC_{\mathrm{M}}}(\tilde{s})+p^{T}(-\tilde{s})}{h_{C_{\mathrm{M}}}(\tilde{ s})}\] \[=\frac{p^{T}\tilde{s}+Dh_{C_{\mathrm{M}}}(\tilde{s})+p^{T}(- \tilde{s})}{h_{C_{\mathrm{M}}}(\tilde{s})}=D.\]
Thus \(D_{\mathrm{M}}(K_{X},C)=D\). To show completeness of \(K_{X}\) using Proposition 3.11, we need that all the regular slabs of \(K_{X}\) are of diametral breadth. However, by the construction of \(K_{X}\), every \(s\) which defines a regular slab of \(K_{X}\) also defines a regular slab of \(C_{\mathrm{M}}\).
\(i)\Rightarrow ii)\): Assume \(K_{X}\) is a completion of \(K\) and there exists some \(s\in\mathds{R}^{n}\setminus\{0\}\) that defines a regular slab such that there is no \(p\) as defined in \(ii)\). By the construction of \(K_{X}\) there exist \(p^{1},p^{2}\in X\) such that \(h_{K_{X}}(s)=(p^{1})^{T}s+h_{DC_{\mathrm{M}}}(s)\) and \(h_{K_{X}}(-s)=(p^{2})^{T}(-s)+h_{DC_{\mathrm{M}}}(-s)\). By our assumption \((p^{2})^{T}s<(p^{1})^{T}s+h_{DC_{\mathrm{M}}}(s)\), otherwise we could choose \(p=p^{2}\). Then,
\[b_{s}(K_{X},C_{\mathrm{M}}) =\frac{h_{K_{X}}(s)+h_{K_{X}}(-s)}{h_{C_{\mathrm{M}}}(s)}=\frac{(p ^{1})^{T}s+h_{DC_{\mathrm{M}}}(s)+(p^{2})^{T}(-s)+h_{DC_{\mathrm{M}}}(-s)}{h_{ C_{\mathrm{M}}}(s)}\] \[>\frac{(p^{2})^{T}s+(p^{2})^{T}(-s)+h_{DC_{\mathrm{M}}}(-s)}{h_{ C_{\mathrm{M}}}(s)}=\frac{h_{DC_{\mathrm{M}}}(-s)}{h_{C_{\mathrm{M}}}(s)}=D=D_{ \mathrm{M}}(K,C).\]
which implies that \(K_{X}\) is not a completion of \(K\).
Now, we consider the special case where \(X\) is a simplex.
**Definition 3.14**.: We say that a subset \(X\subset K\in\mathcal{C}^{n}\) is a _diametric simplex_ of \(K\) if
1. \(X\) is a simplex, and
2. \(D_{\mathrm{M}}([x,y],C)=D_{\mathrm{M}}(K,C)\) for all pairs of vertices \(x,y\) of \(X\).
**Lemma 3.15**.: _Let \(X\) be a diametric triangle of \(K\in\mathcal{C}^{2}\). Then, \(K_{X}\) is the unique completion of \(K\). As a consequence, any triangle \(T\) for which \(X=T\) is diametric has a unique completion._
Proof.: We show that property \(ii)\) of Lemma 3.13 is fulfilled. Assume w. l. o. g. that \(D_{\mathrm{M}}(X,C)=D_{\mathrm{M}}(K,C)=1\) and let \(X=\mathrm{conv}\left(\left\{p^{1},p^{2},p^{3}\right\}\right)\). Then, the translations \(-p^{i}+X\) with \(i\in\{1,2,3\}\) are subsets of \(C_{\mathrm{M}}\), all with one vertex in the origin and the other two on the boundary of \(C_{\mathrm{M}}\) (cf. Figure 2). Since \(p^{i}-p^{j}\) and \(p^{j}-p^{i}\) are each other's negatives, we have three pairs of points in the boundary of \(C_{\mathrm{M}}\). For each, we choose an outer normal \(a_{k}\), \(k\in\{1,2,3\}\), ordered as given in Figure 2. Now, the boundary of \(K_{X}=K_{\mathrm{ext}(X)}\) consists of three parts which are built by parts of the boundary of \(C_{\mathrm{M}}\) (colored in blue in the left part of Figure 2). Then, if \(s\in\mathrm{pos}(\{a_{i},-a_{j}\})\), \(i\neq j\), property \(ii)\) of Lemma 3.13 holds for \(K_{X}\) with \(p=p^{k}\), \(k\neq i,j\). Hence, property \(ii)\) is fulfilled for all \(s\in\mathds{R}^{n}\setminus\{0\}\) and it follows that \(K_{X}\) is a completion. Using Remark 3.12 we obtain the uniqueness.

Figure 2. If \(K\) contains a diametric triangle, its completion is constructed similar to the Reuleaux triangle in the euclidean case since it suffices to consider the extreme points, i. e. the vertices of a diametric triangle.
From the containment chain in Proposition 1.2 we know
\[D_{\mathrm{MAX}}(K,C)\leq D_{\mathrm{AM}}(K,C)\leq D_{\mathrm{HM}}(K,C)\leq D_{\mathrm{MIN}}(K,C).\]
The containment factors between the symmetrizations of the gauge can be used to improve this chain and to formulate new inequalities.
**Lemma 3.16**.:
1. \(\rho_{\mathrm{M}}D_{\mathrm{AM}}(K,C)\leq D_{\mathrm{M}}(K,C)\leq\delta_{ \mathrm{M}}D_{\mathrm{AM}}(K,C)\)_,_
2. \(\delta_{\mathrm{M}}r(K,C)\leq\dfrac{D_{\mathrm{M}}(K,C)}{2}\)_,_
3. \(\dfrac{D_{\mathrm{M}}(K,C)}{2}\leq\delta_{\mathrm{M}}R(K,C)\)_,_
4. \(\rho_{\mathrm{M}}(s(C)r(K,C)+R(K,C))\leq(s(C)+1)\dfrac{D_{\mathrm{M}}(K,C)}{2}\)_, and_
5. \(r(K,C)+R(K,C)\leq R(C_{\mathrm{M}},C)D_{\mathrm{M}}(K,C)\)_._
Proof.:
1. Follows directly from Remark 3.1 and Proposition 1.2.
2. Since \(r(K,C)C\) is contained in a translate of \(K\), we obtain \[\delta_{\mathrm{M}}r(K,C)=\frac{1}{2}D_{\mathrm{M}}(C,C)r(K,C)=\frac{1}{2}D_{ \mathrm{M}}(r(K,C)C,C)\leq\frac{D_{\mathrm{M}}(K,C)}{2}.\]
3. Follows directly from Lemma 3.8 and Remark 3.1.
4. By [10, Theorem 1.1] we have \[s(C)r(K,C)+R(K,C)\leq\frac{s(C)+1}{2}D_{\mathrm{AM}}(K,C)\] and \[(s(C)+1)\frac{D_{\mathrm{AM}}(K,C)}{2}\leq(s(C)+1)\frac{D_{\mathrm{M}}(K,C)}{ 2\rho_{\mathrm{M}}}\] follows directly from part \(i)\).
5. For the symmetrization \(C_{\mathrm{M}}\) it holds \(r(K,C_{\mathrm{M}})+R(K,C_{\mathrm{M}})\leq D(K,C_{\mathrm{M}})\). Thus, \[D_{\mathrm{M}}(K,C)\geq r(K,C_{\mathrm{M}})+R(K,C_{\mathrm{M}})\geq\frac{1}{R( C_{\mathrm{M}},C)}\left(r(K,C)+R(K,C)\right).\]
We would like to describe the values the inradius, circumradius, and diameter of sets \(K\in\mathcal{C}^{n}\) may attain when we consider a fixed, Minkowski-centered \(C\in\mathcal{C}^{n}_{0}\). To do so, we study the following Blaschke-Santalo diagrams.
**Definition 3.17**.: Let \(f_{\mathrm{M}}\) be the following mapping.
\[f_{\mathrm{M}}:\bar{\mathcal{C}}^{n}\times\mathcal{C}^{n}_{0}\to\mathds{R}^{2},\,f_{\mathrm{M}}(K,C)=\left(\frac{r(K,C)}{R(K,C)},\frac{D_{\mathrm{M}}(K,C)}{ 2R(K,C)}\right) \tag{1}\]
The set \(f_{\mathrm{M}}(\bar{\mathcal{C}}^{n},C)\) is called the _Blaschke-Santalo diagram_ for the inradius, circumradius, and diameter (depending on the respective definitions) with regard to the gauge \(C\) - the \((r,R,D_{\mathrm{M}})\)-diagram.
As for the diameter we only write \(f\) if the gauge is symmetric. In [9]\(f_{\mathrm{AM}}(\bar{\mathcal{C}}^{2},S)\) is described and it is shown that this diagram is equal to the union of the diagrams over all possible gauges.
**Proposition 3.18**.: _For every triangle \(S\in\mathcal{C}^{2}\), the diagram \(f_{\mathrm{AM}}(\bar{\mathcal{C}}^{2},S)\) is fully described by the inequalities_
\[D_{\mathrm{AM}}(K,S) \leq 2R(K,S)\] \[4r(K,S)+2R(K,S) \leq 3D_{\mathrm{AM}}(K,S)\] \[\frac{D_{\mathrm{AM}}(K,S)}{2R(K,S)}\left(1-\frac{D_{\mathrm{AM}}(K,S)}{2R(K,S)}\right) \leq\frac{r(K,S)}{R(K,S)}.\]
_Moreover, \(f_{\mathrm{AM}}(\bar{\mathcal{C}}^{2},S)=f_{\mathrm{AM}}(\bar{\mathcal{C}}^{2 },\mathcal{C}^{2}_{0})\)._
The diagrams \(f(\bar{\mathcal{C}}^{2},\mathds{B}^{2}_{2})\)[41] (the name-giving example) and \(f_{\mathrm{AM}}(\bar{\mathcal{C}}^{2},S)\)[9] can be seen in Figure 3.
It is shown in [9] that \(f_{\mathrm{AM}}(\bar{\mathcal{C}}^{n},C)\) is star-shaped with respect to the vertex \(f_{\mathrm{AM}}(C,C)=(1,1)\). This means that these diagrams can be fully described by characterizing the boundaries of the set. In the following, we prove similar (slightly weaker, but sufficient for our purposes) results for the other diameters. One may note that all diagrams with respect to triangles that are described in the following chapters are still star-shaped w.r.t. \(f_{\mathrm{M}}(C,C)\).
**Lemma 3.19**.: _The diagram \(f_{\mathrm{M}}(\bar{\mathcal{C}}^{n},C)\) is closed and if there is a continous description of the outer boundary, it is simply connected._
Proof.: Assume there is a sequence \((K_{n})_{n\in\mathds{N}}\subset\mathcal{C}^{n}\) such that \(K_{n}\subset^{\mathrm{opt}}C\) for all \(n\in\mathds{N}\) and \(r(K_{n},C)\to r^{*}\) and \(D_{\mathrm{M}}(K_{n},C)\to D^{*}\) for \(n\to\infty\). The sequence \((K_{n})_{n\in\mathds{N}}\) is bounded as all sets are contained in \(C\). Thus, by the Blaschke-Selection-Theorem there exists a converging subsequence \(K_{n_{k}}\to K^{*}\) for \(k\to\infty\). The inradius and diameter are continuous and therefore \(r(K^{*},C)=r^{*}\) and
\(D_{\mathrm{M}}(K^{*},C)=D^{*}\). Hence, \(f_{\mathrm{M}}(\bar{\mathcal{C}}^{n},C)\) is closed.
As a consequence, we know that \(f_{\mathrm{M}}(\bar{\mathcal{C}}^{n},C)\) can only have open holes and therefore only fulldimensional holes. For \(K\in\mathcal{C}^{n}\) such that \(K\subset^{\mathrm{opt}}C\), define \(K_{t}:=(1-t)K+tC\) for \(t\in[0,1]\).
Then by Lemma 3.2,
\[r(K_{t},C)=(1-t)r(K,C)+t,\]
\[R(K_{t},C)=(1-t)R(K,C)+t=1\]
and
\[D_{\mathrm{M}}(K_{t},C)\leq(1-t)D_{\mathrm{M}}(K,C)+tD_{\mathrm{M}}(C,C).\]
In the case \(\mathrm{M}=\mathrm{AM}\), we also have equality for the diameter, but this does not necessarily hold for the other diameters. Since \(R(\cdot,C),r(\cdot,C)\) and \(D_{\mathrm{M}}(\cdot,C)\) are continuous with respect to the Hausdorff distance and \(t\in[0,1]\mapsto(1-t)K+tC\) is continuous in \(t\), the composition \(\Gamma_{K}:[0,1]\to\mathds{R}^{2}\), \(t\mapsto\left(r(K_{t},C),\frac{D_{\mathrm{M}}(K_{t},C)}{2}\right)\) is continuous as well. Thus, for every such \(K\) there is a continuous curve \(\Gamma_{K}\) in the diagram from \(f_{\mathrm{M}}(K,C)\) to \(f_{\mathrm{M}}(C,C)\).
Let \((K^{n})_{n\in\mathds{N}}\) be a sequence of bodies on the boundary converging to \(K\) on the boundary. We show that the functions \(\Gamma_{K^{n}}\) converge uniformly to \(\Gamma_{K}\). We can consider the components separately. For the inradius, we know
\[|r(K_{t},C)-r(K_{t}^{n},C)| =|(1-t)r(K,C)+t-(1-t)r(K^{n},C)-t|\] \[=(1-t)|r(K,C)-r(K^{n},C)|\] \[\leq|r(K,C)-r(K^{n},C)|.\]
Let \(\epsilon>0\). Since \(|r(K,C)-r(K^{n},C)|\to 0\) for \(n\to\infty\), there exists an \(N\) such that for all \(n\geq N\), \(|r(K_{t},C)-r(K_{t}^{n},C)|<\epsilon\) for all \(t\in[0,1]\). It is known that when convex, continuous functions \(f_{n}:[a,b]\to\mathds{R}\) converge pointwise to a convex and continuous function \(f\), the convergence is uniform [33, Lemma 21]. The functions \(g_{n}:[0,1]\to\mathds{R}\), \(t\mapsto D_{\mathrm{M}}(K_{t}^{n},C)\) are convex and continuous in \(t\) and they converge pointwise to the convex and continuous function \(g:[0,1]\to\mathds{R}\), \(t\mapsto D_{\mathrm{M}}(K_{t},C)\). Thus, this convergence is also uniform and the curves converge uniformly.
Now, assume the diagram has a hole. For \(K\) on the boundary of the diagram we say that the curve \(\Gamma_{K}\) lies _above_ the hole, if the set enclosed by \([f_{\mathrm{M}}(L_{D},C),f_{\mathrm{M}}(C,C)]\), \(\Gamma_{K}\) and the boundary between \(f_{\mathrm{M}}(K,C)\) and \(f_{\mathrm{M}}(L_{D},C)\) which does not contain the segment \([f_{\mathrm{M}}(L_{D},C),f_{\mathrm{M}}(C,C)]\) does not contain the hole. Analogously, we say that \(\Gamma_{K}\) lies _below_ the hole if the set contains the hole. Thus, \(\Gamma_{L_{D}}\) lies above the hole and \(\Gamma_{C}\) below. Then, there exists a converging sequence \((K^{n})_{n\in\mathds{N}}\) with \(K^{n}\to K\) of bodies on the boundary such that all \(\Gamma_{K^{n}}\) lie above the hole and \(\Gamma_{K}\) below, or vice versa. This contradicts the fact that the curves converge uniformly.
In the standard diameter case it was sufficient to describe the Blaschke-Santalo diagram w. r. t. a triangle to obtain \(f_{\mathrm{AM}}(\bar{\mathcal{C}}^{2},\mathcal{C}_{0}^{2})\). Thus, it seems reasonable to look at the diagrams for the three other diameters \(D_{\mathrm{MIN}}\), \(D_{\mathrm{MAX}}\) and \(D_{\mathrm{HM}}\) in terms of triangular gauges first, which we do in the remaining sections.
## 4. The diameter \(D_{\mathrm{MAX}}\)
When we use the notions "equilateral", "regular", and "isosceles" in the following, they are meant in the euclidean sense. Unless otherwise specified, we fix the _equilateral triangle_ to be \(T:=\mathrm{conv}(\{p^{1},p^{2},p^{3}\})\subset\mathds{R}^{2}\) with \(p^{1}=(0,1)^{T}\), \(p^{2}=(-\sqrt{3}/2,-1/2)^{T}\) and \(p^{3}=(\sqrt{3}/2,-1/2)^{T}\). It is Minkowski-centered with \(s(T)=2\). Moreover, in this case \(T_{\mathrm{MAX}}\) is the regular hexagon \(\mathrm{conv}(\{p^{1},p^{2},p^{3},-p^{1},-p^{2},-p^{3}\})\).
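As a quick numerical sanity check (our own illustration, not part of the original text), a few lines of Python confirm that this \(T\) satisfies \(-T\subset 2T\) with every vertex of \(-T\) touching the boundary of \(2T\), consistent with \(s(T)=2\). The helper `halfplanes` and all variable names below are ours.

```python
# Numeric illustration (ours): -T fits into 2T and touches its boundary,
# consistent with the Minkowski asymmetry s(T) = 2 quoted above.
import numpy as np

s3 = np.sqrt(3.0)
P = np.array([[0.0, 1.0], [-s3 / 2, -0.5], [s3 / 2, -0.5]])  # p^1, p^2, p^3 (CCW)

def halfplanes(V):
    """Outward facet normals a_i and offsets b_i (a_i . x <= b_i) of a CCW polygon."""
    A, b = [], []
    for i in range(len(V)):
        u, w = V[i], V[(i + 1) % len(V)]
        n = np.array([w[1] - u[1], u[0] - w[0]])  # outward normal for CCW order
        A.append(n)
        b.append(n @ u)
    return np.array(A), np.array(b)

A, b = halfplanes(P)
contained = all((A @ (-q) <= 2 * b + 1e-12).all() for q in P)  # -T subset 2T
touching = all(np.isclose(A @ (-q), 2 * b).any() for q in P)   # each vertex on bd(2T)
print(contained, touching)  # True True
```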
In the case of the maximum both factors \(\rho_{\mathrm{MAX}}\) and \(\delta_{\mathrm{MAX}}\) are known [7] and depend at most on \(s(C)\):
**Proposition 4.1**.: \[C_{\mathrm{AM}}{\subset}^{\mathrm{opt}}C_{\mathrm{MAX}}\subset^{\mathrm{opt} }\frac{2s(C)}{s(C)+1}C_{\mathrm{AM}},\]
Figure 4. Proof of Lemma 3.19: \(K_{n}\to K\) but \(\Gamma_{K}\) lies below the hole and \(\Gamma_{K_{n}}\) above.
Figure 5. The equilateral triangle \(T\) and its maximum \(T_{\mathrm{MAX}}\)
_i. e._\(\rho_{\rm MAX}=\frac{s(C)+1}{2s(C)}\) _and_\(\delta_{\rm MAX}=1\)_._
Taking \(K=C_{\rm MAX}\), we have \(R(C_{\rm MAX},C)=s(C)\), \(r(C_{\rm MAX},C)=1\) and \(D_{\rm MAX}(C_{\rm MAX},C)=2\)[7].
The inequalities from Lemma 3.16 have the form:
\[\frac{D_{\rm MAX}(K,C)}{2}\leq R(K,C), \tag{2}\]
\[r(K,C)\leq\frac{D_{\rm MAX}(K,C)}{2}, \tag{3}\]
\[s(C)r(K,C)+R(K,C)\leq s(C)D_{\rm MAX}(K,C), \tag{4}\]
and
\[0\leq r(K,C). \tag{5}\]
In the case \(K=C\), (2) and (3) become tight, while (3) and (4) become tight for \(K=C_{\rm MAX}\).
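These tightness claims can be spot-checked arithmetically with the values quoted above for a gauge with \(s(C)=2\); the following snippet is our own illustration and not part of the text.

```python
# Arithmetic spot check (ours) of the tightness of (2)-(4), using
# (r, R, D_MAX) = (1, 1, 2) for K = C and (1, 2, 2) for K = C_MAX when s(C) = 2.
s = 2.0
for name, (r, R, D) in {"K=C": (1.0, 1.0, 2.0), "K=C_MAX": (1.0, 2.0, 2.0)}.items():
    print(name,
          "(2) tight:", D / 2 == R,
          "(3) tight:", r == D / 2,
          "(4) tight:", s * r + R == s * D)
# K=C: (2) and (3) are tight; K=C_MAX: (3) and (4) are tight.
```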
Asymmetric gauges are not complete, but a completion is easy to find.
**Definition 4.2**.: Let \(A_{C}^{\rm oss}:=\operatorname{bd}(C^{\circ})\cap\operatorname{bd}(-C^{\circ})\).
We define the _outer symmetric support_:
\[C^{\rm oss}:=\bigcap_{a\in A_{C}^{\rm oss}}H_{(a,1)}^{\leq}\]
**Lemma 4.3**.:
1. \(C_{\rm MAX}\) _is always a completion of_ \(C\) _with maximal circumradius_ \(R(C_{\rm MAX},C)=s(C)\)_,_
2. \(C_{\rm MAX}\subset C^{\rm sup}\subset C^{\rm oss}\)_,_
3. \(C_{\rm MAX}=C^{\rm sup}\) _if and only if_ \(C_{\rm MAX}=C^{\rm oss}\)_, and_
4. \(C_{\rm MAX}\) _is always the unique 0-symmetric completion of_ \(C\)_._
One should recognize that we need a scaling factor of up to \(n\) to cover the completion \(C_{\rm MAX}\) by \(C\) here, while with the arithmetic diameter \(C\) is always already complete itself.
Proof.: \(C_{\rm MAX}\) is a completion of \(C\) since \(C\subset C_{\rm MAX}\) and \(D_{\rm MAX}(C,C)=2\delta_{\rm MAX}=2=D_{\rm MAX}(C_{\rm MAX},C)\), while for all \(K\supset C_{\rm MAX}\) we have \(D_{\rm MAX}(K,C)=D_{\rm MAX}(K,C_{\rm MAX})=2R(K,C_{\rm MAX})>2\).
The maximality of the circumradius can be seen as follows: Let \(K\) be any completion of \(C\). Then, from (4) and Lemma 3.6 we obtain
\[R(K,C) \leq s(C)(D_{\rm MAX}(K,C)-r(K,C))=s(C)(D_{\rm MAX}(C_{\rm MAX},C )-r(C_{\rm MAX},C))\] \[=s(C)=R(C_{\rm MAX},C).\]
Next, we show that \(C_{\rm MAX}\subset C^{\rm sup}\subset C^{\rm oss}\). The first containment follows from the fact that \(C^{\rm sup}\) is the union of all completions of \(C\). For the second containment it suffices to show that \(h_{C^{\rm sup}}(a)\leq h_{C^{\rm oss}}(a)\) for all \(a\in A_{C}^{\rm oss}\). Let \(a\in A_{C}^{\rm oss}\). Then, there exists \(p\in-C\cap H_{(a,1)}\). Since \(-p\in C\) we obtain \(C^{\rm sup}\subset-p+2C_{\rm MAX}\) and therefore
\[h_{C^{\rm sup}}(a)\leq h_{-p+2C_{\rm MAX}}(a)=-p^{T}a+2h_{C_{\rm MAX}}(a)=1=h _{C^{\rm oss}}(a).\]
The containment chain directly shows the backward direction of part \(iii)\).
To show the forward direction, assume \(C_{\rm MAX}\neq C^{\rm oss}\). Since \(C_{\rm MAX}\) is the intersection of its regular slabs, there must exist some \(a\in\mathds{R}^{n}\setminus\{0\}\) which defines a regular slab of \(C_{\rm MAX}\) but \(h_{C}(a)>h_{C}(-a)\). Assuming \(a\in\operatorname{bd}(C^{\circ})\), there exists a smooth boundary point \(x\) of \(C_{\rm MAX}\) supported by the hyperplane \(H_{(a,1)}\). Because of \(h_{C}(a)>h_{C}(-a)\), the point \(x\) must belong to \(\operatorname{bd}(C)\cap H_{(a,1)}\).
Now, assume \(x\in\operatorname{bd}(C^{\sup})\) as well. Then, there exist an outer normal \(a_{x}\in\operatorname{bd}(C^{\circ}_{\operatorname{MAX}})\) and a point \(p_{x}\in C\) such that \(x^{T}a_{x}=h_{C^{\sup}}(a_{x})\) and \((a_{x})^{T}(x-p_{x})=2\). Then, \(H_{(a_{x},h_{C^{\sup}}(a_{x}))}\) also supports \(C_{\operatorname{MAX}}\), which implies \(a_{x}=a\) since \(x\) is a smooth boundary point of \(C_{\operatorname{MAX}}\). But since \(h_{C}(-a)<h_{C}(a)\) we cannot choose \(p_{x}\in C\). Thus, \(x\) is not contained in the boundary of \(C^{\sup}\) and therefore, \(C_{\operatorname{MAX}}\neq C^{\sup}\).
Finally, any \(0\)-symmetric completion of \(C\) must contain \(C\) and \(-C\) and therefore \(C_{\operatorname{MAX}}\).
**Example 4.4**.: _Trapezoids within the following family have completions besides their maximum:_
\[Z_{\lambda}:=\operatorname{conv}\left((\sqrt{3}/2,-1/2)^{T},(-\sqrt{3}/2,-1/2 )^{T},(\lambda\sqrt{3}/2,1-\lambda/2)^{T},(-\lambda\sqrt{3}/2,1-\lambda/2)^{T}\right)\]
_with \(\lambda\in(0,1)\). \(Z_{\lambda}\) is Minkowski-centered with Minkowski asymmetry \(2-\lambda\) and \((Z_{\lambda})^{\operatorname{oss}}\neq(Z_{\lambda})_{\operatorname{MAX}}\), which because of Lemma 4.3 means that \((Z_{\lambda})_{\operatorname{MAX}}\) is not the unique completion of \(Z_{\lambda}\). In the extreme cases \(\lambda\in\{0,1\}\), \(Z_{\lambda}\) is a triangle or a rectangle and \((Z_{\lambda})_{\operatorname{MAX}}=(Z_{\lambda})^{\operatorname{sup}}=(Z_{ \lambda})^{\operatorname{oss}}\)._
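The Minkowski asymmetry \(s(Z_{\lambda})=2-\lambda\) can also be confirmed numerically: computing \(s(Z)=\min\{\rho:\,-Z\subset c+\rho Z\text{ for some }c\}\) is a small linear program. The following sketch is our own addition (not from the text) and assumes SciPy's `linprog`; the helper names are ours.

```python
# Numeric check (ours) that the trapezoid Z_lambda has Minkowski asymmetry
# 2 - lambda, via the LP  min rho  s.t.  -Z subset c + rho * Z.
import numpy as np
from scipy.optimize import linprog

def halfplanes(V):
    """Outward facet normals a_i and offsets b_i (a_i . x <= b_i) of a CCW polygon."""
    A, b = [], []
    for i in range(len(V)):
        u, w = V[i], V[(i + 1) % len(V)]
        n = np.array([w[1] - u[1], u[0] - w[0]])
        A.append(n)
        b.append(n @ u)
    return np.array(A), np.array(b)

def asymmetry(V):
    A, b = halfplanes(V)
    rows, rhs = [], []
    for v in V:                      # -v runs over the vertices of -Z
        for a, bb in zip(A, b):      # constraint: a . (-v - c) <= rho * bb
            rows.append(np.r_[-a, -bb])
            rhs.append(a @ v)
    res = linprog([0, 0, 1], A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None), (None, None), (0, None)], method="highs")
    return res.x[2]

s3 = np.sqrt(3.0)
for lam in (0.25, 0.5, 0.75):
    Z = np.array([[s3 / 2, -0.5], [lam * s3 / 2, 1 - lam / 2],
                  [-lam * s3 / 2, 1 - lam / 2], [-s3 / 2, -0.5]])  # CCW vertices
    print(lam, round(asymmetry(Z), 6), 2 - lam)  # the last two values agree
```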
The first new inequality we provide is a lower bound for the diameter-circumradius ratio, a so-called Jung-type inequality, which stays true independently of the gauge \(C\).
**Theorem 4.5**.: _Let \(K,C\in\mathcal{C}^{2}\), s. t. \(C\) is Minkowski-centered. Then_
\[D_{\operatorname{MAX}}(K,C)\geq R(K,C)\]
Proof.: If \(K\) is a single point, \(R(K,C)=D_{\operatorname{MAX}}(K,C)=0\). Thus, we can assume \(K\in\bar{\mathcal{C}}^{2}\) and \(K\subset^{\operatorname{opt}}C\), which implies \(R(K,C)=1\). Then, there exist touching points \(q^{1},\ldots,q^{k}\in\operatorname{bd}(K)\cap\operatorname{bd}(C)\) with \(k\in\{2,3\}\) and corresponding outer normals \(a^{i}\) as described in Proposition 1.1. If \(k=2\) is possible, there exists a segment \(L\subset K\) with the same circumradius as \(K\) and by Lemma 3.8\(D_{\operatorname{MAX}}(K,C)\geq D_{\operatorname{MAX}}(L,C)\geq 2\rho_{ \operatorname{MAX}}(C)=\frac{s(C)+1}{s(C)}>1\). For \(k=3\), the triangle \(\operatorname{conv}(\{q^{1},q^{2},q^{3}\})\) has the same circumradius as \(K\) and \(D_{\operatorname{MAX}}(K,C)\geq D_{\operatorname{MAX}}(\operatorname{conv}( \{q^{1},q^{2},q^{3}\}),C)\). Thus, it suffices to prove the claim for the case that \(K\) is a proper triangle \(K=\operatorname{conv}(\{q^{1},q^{2},q^{3}\})\).
Let \(S:=\bigcap_{i=1}^{3}H^{\leq}_{(a^{i},1)}\) be the intersection of the three supporting halfspaces of \(C\) s. t. \(q^{i}\in H_{(a^{i},1)}\). Denote the vertex opposing the edge defined by \(a^{i}\) by \(\tilde{p}^{i}\). Then, \(R(K,S)=R(K,C)=1\) and \(D_{\operatorname{MAX}}(K,C)\geq D_{\operatorname{MAX}}(K,S)\). By invariance under linear transformations we can assume that \(S=c+T\) where \(T\) is the Minkowski-centered equilateral triangle as described before: \(\tilde{p}^{i}=c+p^{i}\), \(i=1,2,3\). In the following indices are to be understood modulo \(3\). Let \(\alpha_{i}\in[0,1]\) be, s. t. \(q^{i}=\alpha_{i}\tilde{p}^{i+1}+(1-\alpha_{i})\tilde{p}^{i+2}\). Since \(C\) is Minkowski-centered, \(0\) lies in the interior of \(C\).
We split the proof into two parts. First, we consider the case where the origin is close to the center \(c\), i. e. \(0\in\operatorname{int}(\operatorname{conv}(\big{\{}c-\frac{1}{2}p^{i},\quad i=1,2,3\big{\}}))\). Afterwards, we treat the case where \(c\) is further away from the origin.
Let us start with the case where \(0\in\operatorname{int}(\operatorname{conv}(\big{\{}c-\frac{1}{2}p^{i},\quad i=1,2,3 \big{\}}))\), which is equivalent to \(c\in\operatorname{int}(\operatorname{conv}(\big{\{}\frac{1}{2}p^{i},\quad i=1,2,3 \big{\}}))\). Define \(\lambda_{1},\lambda_{2},\lambda_{3}>0\) with \(\sum_{i=1}^{3}\lambda_{i}=\frac{1}{2}\) such that \(c=\sum_{i=1}^{3}\lambda_{i}p^{i}\). Let \(z^{i}\in\mathds{R}^{2}\) be the direction such that
\[(z^{i})^{T}\tilde{p}^{i+1}=-1\quad\text{and}\quad(z^{i})^{T}\tilde{p}^{i+2}=1.\]
This is possible since \(0\in\operatorname{int}S\). Since \(c=\frac{1}{3}\sum_{i=1}^{3}\tilde{p}^{i}\) we have
\[(z^{i})^{T}\tilde{p}^{i}=3(z^{i})^{T}c.\]
Inserting \(c=\sum_{i=1}^{3}\lambda_{i}p^{i}\) yields
\[(z^{i})^{T}c=(z^{i})^{T}\sum_{i=1}^{3}\lambda_{i}(\tilde{p}^{i}-c)=-\frac{1}{2 }(z^{i})^{T}c+3\lambda_{i}(z^{i})^{T}c-\lambda_{i+1}+\lambda_{i+2}\]
which implies
\[3(z^{i})^{T}c=\frac{\lambda_{i+2}-\lambda_{i+1}}{\frac{1}{2}-\lambda_{i}}= \frac{\lambda_{i+2}-\lambda_{i+1}}{\lambda_{i+2}+\lambda_{i+1}}.\]
Thus,
\[1+3(z^{i})^{T}c=\frac{2\lambda_{i+2}}{\lambda_{i+2}+\lambda_{i+1}}\geq 0\quad \text{and}\quad 1-3(z^{i})^{T}c=\frac{2\lambda_{i+1}}{\lambda_{i+2}+\lambda_{i+1}}\geq 0,\]
It follows that
\[\pm(z^{i})^{T}\tilde{p}^{i}\leq 1,\]
which shows that \(z^{i}\) is an outer normal of \(\operatorname{conv}(S\cup(-S))\) with \(h_{\operatorname{conv}(S\cup(-S))}(z^{i})=1\).
We know
\[\begin{split} D_{\text{MAX}}(K,S)&\geq\max_{i=1,2,3}b_{z ^{i}}(K,\text{conv}(S\cup(-S)))\\ &\geq\max_{i=1,2,3}\frac{(z^{i})^{T}q^{i+1}-(z^{i})^{T}q^{i+2}}{h_{ \text{conv}(S\cup(-S))}(z^{i})}\\ &=\max_{i=1,2,3}(z^{i})^{T}(\alpha_{i+1}\tilde{p}^{i+2}+(1- \alpha_{i+1})\tilde{p}^{i})-(z^{i})^{T}(\alpha_{i+2}\tilde{p}^{i}+(1-\alpha_{i +2})\tilde{p}^{i+1})\\ &=\max_{i=1,2,3}\alpha_{i+1}(1-3(z^{i})^{T}c)+(1-\alpha_{i+2})(1 +3(z^{i})^{T}c)\\ &=\max_{i=1,2,3}\frac{2\alpha_{i+1}\lambda_{i+1}+2(1-\alpha_{i+2 })\lambda_{i+2}}{\lambda_{i+2}+\lambda_{i+1}}\end{split} \tag{6}\]
Next, we show that the last term is larger than or equal to \(1\). If \(b_{z^{i}}(K,\operatorname{conv}(S\cup(-S)))\geq 1\) for some \(i\in\{1,2\}\), there is nothing to show. Thus, let us assume that both of these breadths are smaller than \(1\). We show that the latter implies \(b_{z^{3}}(K,\operatorname{conv}(S\cup(-S)))\geq 1\).
\[\begin{split} b_{z^{1}}(K,\operatorname{conv}(S\cup(-S)))&<1\quad \text{implies}\quad 2\alpha_{2}\lambda_{2}+2(1-\alpha_{3})\lambda_{3}< \lambda_{2}+\lambda_{3}\quad\text{and}\\ b_{z^{2}}(K,\operatorname{conv}(S\cup(-S)))&<1\quad\text{implies} \quad 2\alpha_{3}\lambda_{3}+2(1-\alpha_{1})\lambda_{1}<\lambda_{3}+\lambda_{1}. \end{split}\]
By adding these inequalities we obtain
\[2\lambda_{3}+2\alpha_{2}\lambda_{2}+2(1-\alpha_{1})\lambda_{1}<\lambda_{2}+2 \lambda_{3}+\lambda_{1}\]
or equivalently
\[2\alpha_{1}\lambda_{1}+2(1-\alpha_{2})\lambda_{2}>\lambda_{1}+\lambda_{2},\]
which proves \(b_{z^{3}}(K,\operatorname{conv}(S\cup(-S)))\geq 1\). Thus, \(D_{\text{MAX}}(K,C)\geq D_{\text{MAX}}(K,S)\geq 1\).
Now, consider the case that \(0\notin\text{int}(\text{conv}(\{c-\frac{1}{2}p^{i},\quad i=1,2,3\}))\). Because of the symmetries of \(T\) we can assume that \(0=\sum_{i=1}^{3}\beta_{i}\tilde{p}^{i}\) with \(\sum_{i=1}^{3}\beta_{i}=1\), \(\beta_{3}\geq\frac{1}{2}\) and \(\beta_{2}\geq\beta_{1}>0\) (cf. Figure 8).
By the above conditions we have
\[h_{C}(-a^{3})\leq h_{S}(-a^{3})=(-a^{3})^{T}\tilde{p}^{3}=\frac{\beta_{1}+ \beta_{2}}{\beta_{3}}\leq 1=h_{C}(a^{3}).\]
Figure 8. Proof of Theorem 4.5. If \(0\notin\text{int}(\text{conv}(\{c-\frac{1}{2}p^{i},\quad i=1,2,3\}))\) we may assume, w. l. o. g., that the origin is within the colored area. Then, \(\max_{i=1,3}b_{a^{i}}(K,\text{conv}(C\cup(-C)))\geq 1\).
Thus, using \(\alpha_{1},\alpha_{2},\alpha_{3}\) as above, we know, if \(\alpha_{1}<1-\beta_{3}\) or \(\alpha_{2}>\beta_{3}\), then
\[h_{K}(-a^{3}) \geq\max\bigl{\{}(-a^{3})^{T}q^{1},(-a^{3})^{T}q^{2}\bigr{\}}\] \[=\max\bigl{\{}(-a^{3})^{T}(\alpha_{1}\tilde{p}^{2}+(1-\alpha_{1}) \tilde{p}^{3}),(-a^{3})^{T}(\alpha_{2}\tilde{p}^{3}+(1-\alpha_{2})\tilde{p}^{1 })\bigr{\}}\] \[=\max\biggl{\{}-\alpha_{1}+(1-\alpha_{1})\frac{1-\beta_{3}}{ \beta_{3}},\alpha_{2}\frac{1-\beta_{3}}{\beta_{3}}-(1-\alpha_{2})\biggr{\}}>0\]
and therefore,
\[D_{\mathrm{MAX}}(K,C)\geq b_{a^{3}}(K,\mathrm{conv}(C\cup(-C)))\geq\frac{h_{K }(a^{3})+h_{K}(-a^{3})}{h_{C}(a^{3})}>1.\]
Now, let \(\alpha_{1}\geq 1-\beta_{3}\) and \(\alpha_{2}\leq\beta_{3}\). Since \(\beta_{1}\leq\beta_{2}\) we have \(\beta_{1}\leq\frac{1-\beta_{3}}{2}\). Thus,
\[(-a^{1})^{T}q^{2} =\alpha_{2}((-a^{1})^{T}\tilde{p}^{3})+(1-\alpha_{2})((-a^{1})^{ T}\tilde{p}^{1})=-\alpha_{2}+(1-\alpha_{2})\left(\frac{\beta_{2}}{\beta_{1}}+ \frac{\beta_{3}}{\beta_{1}}\right)\] \[\geq-\beta_{3}+(1-\beta_{3})(1+\frac{2\beta_{3}}{1-\beta_{3}})=1.\]
Using that \(C\) is Minkowski-centered, we know \(h_{C}(-a^{1})\leq s(C)h_{C}(a^{1})=s(C)\) and we obtain
\[D_{\mathrm{MAX}}(K,C) \geq b_{a^{1}}(K,\mathrm{conv}(C\cup(-C)))\geq\frac{(a^{1})^{T}q^ {1}-(a^{1})^{T}q^{2}}{\max(h_{C}(a^{1}),h_{C}(-a^{1}))}\] \[\geq\frac{1+1}{s(C)}=\frac{2}{s(C)}\geq 1.\]
Since \(D_{\mathrm{MAX}}\) is the smallest diameter, this provides a lower bound for all four diameter definitions.
**Corollary 4.6**.: _Let \(K,C\in\mathcal{C}^{2}\), \(C\) Minkowski-centered. Then_
\[D_{\mathrm{M}}(K,C)\geq R(K,C).\]
If \(\mathrm{M}\neq\mathrm{MAX}\) it follows directly from Proposition 1.2 and the translation invariance in the arithmetic case that Corollary 4.6 stays true for non-Minkowski-centered \(C\in\mathcal{C}^{2}\). However, since \(D_{\mathrm{AM}}(K,C)=R(K,C)\) is obtained if and only if \(C\) is a triangle and \(K\) is a homothet of \(-C\) [12, 10], we will see in the next section that this bound cannot be reached if \(\mathrm{M}\in\{\mathrm{MIN},\mathrm{HM}\}\).
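To illustrate Theorem 4.5 concretely, the following randomized check, our own addition and not part of the argument, compares \(D_{\mathrm{MAX}}(K,T)\) with \(R(K,T)\) for random triangles \(K\) and the equilateral gauge \(T\): the circumradius is computed by a small linear program and the diameter via the gauge function of \(T_{\mathrm{MAX}}\). All helper names are ours.

```python
# Randomized spot check (ours) of the Jung-type bound D_MAX(K, T) >= R(K, T).
import numpy as np
from scipy.optimize import linprog

def halfplanes(V):
    A, b = [], []
    for i in range(len(V)):
        u, w = V[i], V[(i + 1) % len(V)]
        n = np.array([w[1] - u[1], u[0] - w[0]])
        A.append(n)
        b.append(n @ u)
    return np.array(A), np.array(b)

s3 = np.sqrt(3.0)
T = np.array([[0, 1], [-s3 / 2, -0.5], [s3 / 2, -0.5]])
TMAX = np.array([[0, 1], [-s3 / 2, 0.5], [-s3 / 2, -0.5],
                 [0, -1], [s3 / 2, -0.5], [s3 / 2, 0.5]])      # CCW hexagon
AT, bT = halfplanes(T)
AH, bH = halfplanes(TMAX)

def circumradius(K):          # R(K, T) = min rho s.t. K subset c + rho * T
    rows, rhs = [], []
    for q in K:
        for a, bb in zip(AT, bT):
            rows.append(np.r_[-a, -bb])
            rhs.append(-(a @ q))
    res = linprog([0, 0, 1], A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None), (None, None), (0, None)], method="highs")
    return res.x[2]

def d_max(K):                 # D_MAX(K, T): max gauge distance w.r.t. T_MAX
    return max(((AH @ (u - v)) / bH).max() for u in K for v in K)

rng = np.random.default_rng(0)
for _ in range(200):
    K = rng.uniform(-1, 1, size=(3, 2))
    assert d_max(K) >= circumradius(K) - 1e-9
print("D_MAX(K, T) >= R(K, T) held on all 200 random triangles")
```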
If we omit the restriction of \(C\) being Minkowski-centered, Theorem 4.5 is not necessarily true. However, for gauges that still contain the origin, we obtain the following Jung-type inequality.
**Theorem 4.7**.: _Let \(K,C\in\mathcal{C}^{2}\), s. t. \(0\in C\) and \(\dim(C)=2\). Then,_
\[D_{\mathrm{MAX}}(K,C)\geq\frac{2}{3}R(K,C).\]
_Moreover, equality can be attained for some \(K\) if and only if \(C\) is a triangle with one vertex at the origin._
Proof.: We use the same notation as in the previous proof. As in Theorem 4.5, we can assume that \(K\) is a triangle and \(K\subset^{\mathrm{opt}}C\). It suffices to show \(D(K,\mathrm{conv}(S\cup(-S)))\geq\frac{2}{3}\) for \(S\) as given in Theorem 4.5 since \(D_{\mathrm{MAX}}(K,C)=D(K,\mathrm{conv}(C\cup(-C)))\geq D(K,\mathrm{conv}(S \cup(-S)))\).
If there exists \(\alpha_{i}\notin\left[\frac{1}{3},\frac{2}{3}\right]\), there is an \(a^{j}\) with \(\frac{(a^{j})^{T}q^{j}-(a^{j})^{T}q^{i}}{h_{C}(a^{j})+h_{C}(-a^{j})}>\frac{2}{3}\) (cf. Figure 9) and therefore,
\[D(K,\mathrm{conv}(S\cup(-S))) \geq b_{a^{j}}(K,\mathrm{conv}(S\cup(-S)))\geq\frac{(a^{j})^{T}q^{j}-(a^{j})^{T}q^{i}}{\max(h_{C}(a^{j}),h_{C}(-a^{j}))}\] \[\geq\frac{(a^{j})^{T}q^{j}-(a^{j})^{T}q^{i}}{h_{C}(a^{j})+h_{C}(-a^{j})}>\frac{2}{3}.\]
Now, consider the case where \(\alpha_{i}\in\left[\frac{1}{3},\frac{2}{3}\right]\) for all \(i\in\{1,2,3\}\). There exists \(i\in\{1,2,3\}\) such that \(z^{i}\) defined as in the previous proof defines a supporting hyperplane of \(\operatorname{conv}\left(S\cup(-S)\right)\) with \(h_{\operatorname{conv}(S\cup(-S))}(z^{i})=1\), w. l. o. g. \(i=3\). Then, we know \(1=h_{\operatorname{conv}(S\cup(-S))}(z^{3})\geq(z^{3})^{T}(\pm p^{3})=\pm 3(z^{3})^{T}c\) and using (6) we obtain
\[D(K,\operatorname{conv}(S\cup(-S))) \geq b_{z^{3}}(K,\operatorname{conv}(S\cup(-S)))\geq\alpha_{1}(1-3 (z^{3})^{T}c)+(1-\alpha_{2})(1+3(z^{3})^{T}c)\] \[\geq\frac{1}{3}(1-3(z^{3})^{T}c)+\frac{1}{3}(1+3(z^{3})^{T}c)= \frac{2}{3} \tag{7}\]
To reach equality in the last inequality chain we need \(\alpha_{1}=(1-\alpha_{2})=\frac{1}{3}\). In this case we need \(0\in\left\{\tilde{p}^{3}\right\}\cup[\tilde{p}^{1},\tilde{p}^{2}]\) for \(b_{a^{3}}\) not to be larger than \(\frac{2}{3}\). However, if \(0\in[\tilde{p}^{1},\tilde{p}^{2}]\) we have that \(z^{3}\) cannot define a supporting hyperplane as described above. Thus, to reach equality, \(0\) must be a vertex of \(S\). For \(D_{\operatorname{MAX}}(K,C)=\frac{2}{3}\) to be true, we need
\[b_{z^{3}}(K,\operatorname{conv}(C\cup(-C)))=b_{z^{3}}(K,\operatorname{conv}(S \cup(-S)))=\frac{2}{3},\]
which is only possible if \(\tilde{p}^{1}\in C\) or \(\tilde{p}^{2}\in C\). If only one of these vertices is contained in \(C\), \(\frac{3}{2}(q^{2}-q^{1})\notin\operatorname{conv}(C\cup(-C))\) which implies \(D_{\operatorname{MAX}}(K,C)>\frac{2}{3}\). Thus, \(\tilde{p}^{1},\tilde{p}^{2}\in C\), which means \(C=S\).
All in all, we see that equality can be obtained for \(C=\operatorname{conv}(\left\{0,\tilde{p}^{1},\tilde{p}^{2}\right\})\) and \(\operatorname{conv}(\left\{q^{1},q^{2},q^{3}\right\})\) with \(\alpha_{1}=(1-\alpha_{2})=\frac{1}{3}\) and \(\alpha_{3}\in\left[\frac{1}{3},\frac{2}{3}\right]\). In that case
\[D_{\operatorname{MAX}}(K,C)=\max\{b_{a^{3}}(K,\operatorname{conv}(C\cup(-C)) ),b_{z^{3}}(K,\operatorname{conv}(C\cup(-C)))\}=\frac{2}{3}\]
(see Figure 10), which is achieved, e.g. if we choose \(K=\operatorname{conv}\left(\left\{\frac{1}{3}\tilde{p}^{1},\frac{1}{3}\tilde{p}^{2},\frac{1}{3}\tilde{p}^{1}+\frac{2}{3}\tilde{p}^{2}\right\}\right)\).
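This equality case is easy to confirm numerically. The snippet below is our own addition; it assumes the concrete coordinates \(\tilde{p}^{1}=(-\sqrt{3}/2,3/2)^{T}\) and \(\tilde{p}^{2}=(-\sqrt{3},0)^{T}\) obtained from \(S=-p^{3}+T\), in which case \(\operatorname{conv}(C\cup(-C))\) is the parallelogram \(\operatorname{conv}(\{\pm\tilde{p}^{1},\pm\tilde{p}^{2}\})\).

```python
# Numeric check (ours) of the equality case: all pairwise gauge distances of
# the vertices of K w.r.t. conv(C u (-C)) equal 2/3, while R(K, C) = 1.
import numpy as np

s3 = np.sqrt(3.0)
pt1, pt2 = np.array([-s3 / 2, 1.5]), np.array([-s3, 0.0])
K = [pt1 / 3, pt2 / 3, pt1 / 3 + 2 * pt2 / 3]

# Gauge norm of the parallelogram conv({+-pt1, +-pt2}): for v = s*pt1 + t*(-pt2)
# the norm is |s| + |t| (cross-polytope norm in the basis pt1, -pt2).
M = np.column_stack([pt1, -pt2])
norm = lambda v: np.abs(np.linalg.solve(M, v)).sum()

dists = [norm(u - v) for i, u in enumerate(K) for v in K[i + 1:]]
print(np.allclose(dists, 2 / 3))  # True: D_MAX(K, C) = 2/3 with R(K, C) = 1
```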
For fixed \(C\), we refer to any \(\tilde{K}\) that fulfills \(\frac{D_{\operatorname{M}}(\tilde{K},C)}{R(\tilde{K},C)}=\min_{K\in\mathcal{C}^ {n}}\frac{D_{\operatorname{M}}(K,C)}{R(K,C)}\) as _Jung-extremal_. If \(C=T\), we have equality in Theorem 4.5 for the following family of triangles.
**Example 4.8**.: _Let \(T\) be the equilateral triangle as defined above and_
\[T_{\alpha}:=\operatorname{conv}(\left\{\alpha p^{1}+(1-\alpha)p^{2},\alpha p^{2}+ (1-\alpha)p^{3},\alpha p^{3}+(1-\alpha)p^{1}\right\})\]
_for \(\alpha\in\left[\frac{1}{3},\frac{2}{3}\right]\) (cf. Figure 11). Then,_
\[D_{\operatorname{MAX}}(T_{\alpha},T)=R(T_{\alpha},T)\quad\text{and}\quad r(T_ {\alpha},T)=(1-3\alpha+3\alpha^{2})R(T_{\alpha},T)\]
_The triangles \(T_{\alpha}\) are all equilateral. Special cases are \(\alpha=\frac{1}{2}\), where \(T_{\frac{1}{2}}=-\frac{1}{2}T\), and \(\alpha\in\left\{\frac{1}{3},\frac{2}{3}\right\}\), which yield rotations of dilatated copies of \(T\) by \(\pm\frac{\pi}{6}\)._
Proof.: By construction \(T_{\alpha}\subset^{\operatorname{opt}}T\). The diameter of \(T_{\alpha}\) is attained between two of its vertices, and since \(T_{\operatorname{MAX}}\) is a regular hexagon which has rotational symmetry of order six, it does not matter which of the edges of \(T_{\alpha}\) we consider. Let \(q^{i}=\alpha p^{i+1}+(1-\alpha)p^{i+2}\). By using \(p^{3}=-p^{2}-p^{1}\) we obtain
\[q^{2}-q^{1} =\alpha p^{3}+(1-\alpha)p^{1}-\alpha p^{2}-(1-\alpha)p^{3}\] \[=(2-3\alpha)p^{1}+(3\alpha-1)(-p^{2})\] \[\subset[p^{1},-p^{2}]\subset\operatorname{bd}\left(T_{ \operatorname{MAX}}\right)\]
Hence, \(D_{\operatorname{MAX}}(T_{\alpha},T)=\|q^{2}-q^{1}\|_{T_{\operatorname{MAX}}}=1\).
Figure 11. All the equilateral triangles \(T_{\alpha}\) are Jung-extremal (w. r. t. \(T\)).
The triangle \(\operatorname{conv}(\left\{w^{1},w^{2},w^{3}\right\})\) with \(w^{i}:=\alpha q^{i+2}+(1-\alpha)q^{i+1}\) is optimally contained in \(T_{\alpha}\) by Proposition 1.1. Furthermore,
\[w^{i} =\alpha q^{i+2}+(1-\alpha)q^{i+1}\] \[=\alpha\left(\alpha p^{i}+(1-\alpha)p^{i+1}\right)+(1-\alpha) \left(\alpha p^{i+2}+(1-\alpha)p^{i}\right)\] \[=(1-2\alpha+2\alpha^{2})p^{i}+\alpha(1-\alpha)p^{i+1}+\alpha(1- \alpha)p^{i+2}\]
Using this we compute
\[w^{i+1}-w^{i} =(1-2\alpha+2\alpha^{2})p^{i+1}+\alpha(1-\alpha)p^{i}-(1-2\alpha+ 2\alpha^{2})p^{i}-\alpha(1-\alpha)p^{i+1}\] \[=(1-3\alpha+3\alpha^{2})(p^{i+1}-p^{i}).\]
Thus, \(\operatorname{conv}(\left\{w^{1},w^{2},w^{3}\right\})\) is a translate of \((1-3\alpha+3\alpha^{2})T\) and the inradius is \(r(T_{\alpha},T)=1-3\alpha+3\alpha^{2}\).
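Both formulas of Example 4.8, including the coefficient \(1-3\alpha+3\alpha^{2}\), can be re-derived numerically. The following check is our own addition and reuses the vertex coordinates of \(T\) and \(T_{\mathrm{MAX}}\) fixed above.

```python
# Numeric re-check (ours) of Example 4.8: D_MAX(T_alpha, T) = 1 and
# w^{i+1} - w^i = (1 - 3a + 3a^2) * (p^{i+1} - p^i).
import numpy as np

s3 = np.sqrt(3.0)
p = np.array([[0, 1], [-s3 / 2, -0.5], [s3 / 2, -0.5]])
H = np.array([[0, 1], [-s3 / 2, 0.5], [-s3 / 2, -0.5],
              [0, -1], [s3 / 2, -0.5], [s3 / 2, 0.5]])          # T_MAX, CCW
A, b = [], []
for i in range(6):
    u, w = H[i], H[(i + 1) % 6]
    n = np.array([w[1] - u[1], u[0] - w[0]])
    A.append(n)
    b.append(n @ u)
A, b = np.array(A), np.array(b)
gauge = lambda v: ((A @ v) / b).max()       # gauge norm of the hexagon T_MAX

for a_ in (1 / 3, 0.45, 0.5, 0.6, 2 / 3):
    q = np.array([a_ * p[(i + 1) % 3] + (1 - a_) * p[(i + 2) % 3] for i in range(3)])
    w_ = np.array([a_ * q[(i + 2) % 3] + (1 - a_) * q[(i + 1) % 3] for i in range(3)])
    D = max(gauge(q[i] - q[j]) for i in range(3) for j in range(3) if i != j)
    coef = (w_[1] - w_[0]) / (p[1] - p[0])  # componentwise ratio
    print(np.isclose(D, 1.0), np.allclose(coef, 1 - 3 * a_ + 3 * a_ ** 2))
```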
\(T_{\frac{2}{3}}\) is especially interesting: it is complete since its arithmetic mean is a dilatation of \(T_{\operatorname{MAX}}\). Since the symmetrization \(T_{\operatorname{MAX}}\) of \(T\) is also Jung-extremal, there are at least two complete Jung-extremal bodies, \(T_{\operatorname{MAX}}\) and \(T_{\frac{2}{3}}\), with different inradius-circumradius ratios.
Now, we compute the values of the functionals for a second family of triangles. We will see that these lie on the boundary of the diagram w. r. t. triangles.
**Lemma 4.9**.: _Let \(T\) be the equilateral triangle as defined above and \(S_{\lambda}=\operatorname{conv}(\left\{q^{1},q^{2},q^{3}\right\})\) with \(q^{1}=\frac{1}{2}(p^{2}+p^{3})\), \(q^{2}=\lambda p^{1}+(1-\lambda)p^{3}\) and \(q^{3}=\lambda p^{1}+(1-\lambda)p^{2}\) for some \(\lambda\in[\frac{1}{2},1]\). Then, \(R(S_{\lambda},T)=1,D_{\operatorname{MAX}}(S_{\lambda},T)=\lambda+\frac{1}{2}\) and \(r(S_{\lambda},T)=\lambda(1-\lambda)\)._
Proof.: By construction, \(R(S_{\lambda},T)=1\). Obviously, \(\left\|q^{2}-q^{3}\right\|_{T_{\operatorname{MAX}}}\leq 1\) and \(\left\|q^{2}-q^{1}\right\|_{T_{\operatorname{MAX}}}=\left\|q^{3}-q^{1}\right\| _{T_{\operatorname{MAX}}}\). Using \(p^{3}=-(p^{1}+p^{2})\) we can compute:
\[q^{2}-q^{1} =\lambda p^{1}+(1-\lambda)p^{3}-\frac{1}{2}(p^{2}+p^{3})=\lambda p ^{1}+\frac{1}{2}(-p^{2})+(\frac{1}{2}-\lambda)(-p^{1}-p^{2})\] \[=(2\lambda-\frac{1}{2})p^{1}+(1-\lambda)(-p^{2})\in(\lambda+\frac {1}{2})\operatorname{conv}(\left\{p^{1},-p^{2}\right\})\] \[\subset(\lambda+\frac{1}{2})\operatorname{bd}(T_{\operatorname{ MAX}}).\]
Since \(\lambda+\frac{1}{2}\geq 1\) for \(\lambda\in[\frac{1}{2},1]\), it follows \(D_{\operatorname{MAX}}(S_{\lambda},T)=\lambda+\frac{1}{2}\).
Now we show the formula for the inradius (cf. Figure 12). Let \(\operatorname{conv}(\left\{w^{1},w^{2},w^{3}\right\})\) be the inner triangle. By axial symmetry of \(T\) and \(S_{\lambda}\) we know that \(w^{1}=\frac{1}{2}(q^{2}+q^{3})\). Denote the euclidean edge length of the inner triangle by \(a\). Since \(T\) has edge length \(\sqrt{3}\), we obtain \(r(S_{\lambda},T)=\frac{a}{\sqrt{3}}\). The segments \([q^{3},q^{2}]\) and \([p^{2},p^{3}]\) are parallel and the corresponding edges of the inner and outer triangle are parallel as well. Thus, the triangles \(\operatorname{conv}(\left\{w^{1},w^{3},q^{2}\right\})\) and \(\operatorname{conv}(\left\{p^{3},q^{2},q^{1}\right\})\) are similar, implying \(\frac{a}{\left\|q^{2}-p^{3}\right\|_{2}}=\frac{q_{1}^{2}}{p_{1}^{3}}\). We obtain
\[a=\frac{q_{1}^{2}}{p_{1}^{3}}\left\|q^{2}-p^{3}\right\|_{2}=(1-\lambda)\left\| q^{2}-p^{3}\right\|_{2}=(1-\lambda)\lambda\left\|p^{1}-p^{3}\right\|_{2}=\sqrt{3} \lambda(1-\lambda),\]
and therefore \(r(S_{\lambda},T)=\frac{a}{\sqrt{3}}=\lambda(1-\lambda)\).
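As an additional cross-check of Lemma 4.9 (our own addition, not part of the text), the inradius can be computed as a linear program maximizing \(\rho\) subject to \(c+\rho T\subset S_{\lambda}\), and the diameter via the gauge function of \(T_{\mathrm{MAX}}\); all helper names below are ours.

```python
# Numeric cross-check (ours) of Lemma 4.9: D_MAX(S_lambda, T) = lambda + 1/2
# and r(S_lambda, T) = lambda * (1 - lambda).
import numpy as np
from scipy.optimize import linprog

def halfplanes(V):
    A, b = [], []
    for i in range(len(V)):
        u, w = V[i], V[(i + 1) % len(V)]
        n = np.array([w[1] - u[1], u[0] - w[0]])
        A.append(n)
        b.append(n @ u)
    return np.array(A), np.array(b)

s3 = np.sqrt(3.0)
p = np.array([[0, 1], [-s3 / 2, -0.5], [s3 / 2, -0.5]])
AH, bH = halfplanes(np.array([[0, 1], [-s3 / 2, 0.5], [-s3 / 2, -0.5],
                              [0, -1], [s3 / 2, -0.5], [s3 / 2, 0.5]]))
gauge = lambda v: ((AH @ v) / bH).max()
h_T = lambda a: (p @ a).max()                  # support function of T

def inradius(K):                               # r(K, T): max rho, c + rho*T in K
    A, b = halfplanes(K)
    rows = np.array([np.r_[a, h_T(a)] for a in A])   # a.c + rho*h_T(a) <= b
    res = linprog([0, 0, -1], A_ub=rows, b_ub=b,
                  bounds=[(None, None), (None, None), (0, None)], method="highs")
    return res.x[2]

for lam in (0.5, 0.6, 0.75, 0.9):
    q = np.array([0.5 * (p[1] + p[2]),
                  lam * p[0] + (1 - lam) * p[2],
                  lam * p[0] + (1 - lam) * p[1]])    # CCW: q^1, q^2, q^3
    D = max(gauge(q[i] - q[j]) for i in range(3) for j in range(3))
    print(lam, round(D, 6), round(inradius(q), 6))   # lam + 0.5 and lam*(1-lam)
```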
\(S_{\frac{1}{2}}=-\frac{1}{2}T\) is Jung-extremal and \(S_{1}\) is a segment \(L_{w}\).
**Theorem 4.10**.: _Let \(K\in\bar{\mathcal{C}}^{n}\) and \(T\) an equilateral Minkowski-centered triangle. Then_
\[\left(\frac{D_{\operatorname{MAX}}(K,T)}{R(K,T)}-\frac{1}{2}\right)\left(\frac{ 3}{2}-\frac{D_{\operatorname{MAX}}(K,T)}{R(K,T)}\right)\leq\frac{r(K,T)}{R(K,T)}\]
_with equality for the triangles \(S_{\lambda}\), \(\lambda\in[\frac{1}{2},1]\), as described in Lemma 4.9._
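The equality along the family \(S_{\lambda}\) can also be seen by pure arithmetic: plugging \(D_{\mathrm{MAX}}/R=\lambda+\frac{1}{2}\) and \(r/R=\lambda(1-\lambda)\) from Lemma 4.9 into the inequality gives an identity, as the following short symbolic check (our own addition) confirms.

```python
# Symbolic check (ours): (D - 1/2) * (3/2 - D) = lambda * (1 - lambda)
# for D = lambda + 1/2, i.e. equality in Theorem 4.10 along S_lambda.
import sympy as sp

lam = sp.symbols("lambda")
D = lam + sp.Rational(1, 2)
print(sp.simplify((D - sp.Rational(1, 2)) * (sp.Rational(3, 2) - D) - lam * (1 - lam)))  # 0
```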
To prepare the proof of Theorem 4.10 we need the following lemma.
**Lemma 4.11**.: _Let \(K=\operatorname{conv}\big{(}\{q^{1},q^{2},q^{3}\}\big{)}\) be a triangle that is optimally contained in \(T\), s. t. \(q^{i}\) belongs to the edge of \(T\) opposing \(p^{i}\). We say that \((q^{1},q^{3})\) is steep if \(\big{\|}q^{3}-p^{1}\big{\|}_{2}\leq\big{\|}q^{1}-p^{3}\big{\|}_{2}\). Let \(\tilde{K}:=\operatorname{conv}(\{q^{1},q^{3},\tilde{q}^{2}\})\subset^{ \operatorname{opt}}T\) be a triangle such that \(\tilde{q}^{2}\) is on the same edge of \(T\) as \(q^{2}\) and \(\big{\|}q^{2}-p^{1}\big{\|}_{2}\leq\big{\|}\tilde{q}^{2}-p^{1}\big{\|}_{2}\). If \((q^{1},q^{3})\) is steep, the inradius of \(\tilde{K}\) w. r. t. \(T\) is greater or equal than the one of \(K\)._
We call this property "steep" since the segment \([q^{1},q^{3}]\) is steeper than \([p^{3},p^{1}]\). By symmetry of \(T\) we can generalize this result to all choices \(i,j\in\{1,2,3\}\): \((q^{i},q^{j})\), \(i,j\in\{1,2,3\}\), \(i\neq j\) is steep if \(\big{\|}q^{j}-p^{i}\big{\|}_{2}\leq\big{\|}q^{i}-p^{j}\big{\|}_{2}\). At least one of the ordered pairs \((q^{i},q^{j})\) or \((q^{j},q^{i})\) is always steep.
Proof of Lemma 4.11.: Let \(H_{1}\) and \(H_{2}\) be the two lines parallel to \([p^{1},p^{3}]\) supporting the inner triangle \(\operatorname{conv}(\{w^{1},w^{2},w^{3}\})\) of \(K\) (that necessarily touches all three edges of \(K\)), s.t. \(H_{1}\) contains the edge \([w^{1},w^{3}]\) and \(H_{2}\) the opposing vertex \(w^{2}\) (cf. Figure 13). Since \((q^{1},q^{3})\) is steep the part of \(H_{2}\) below \(w^{2}\) intersects \(\tilde{K}\). By the intercept theorem with the two parallel lines \(H_{1}\) and \(H_{2}\) and the points \(q^{2}\) or \(\tilde{q}^{2}\) the segment of \(H_{1}\) contained in \(\tilde{K}\) is greater or equal than the one contained in \(K\). Thus, we can move the inner triangle of \(K\) within the slab between \(H_{1}\) and \(H_{2}\) until it touches \([q^{1},\tilde{q}^{2}]\). The resulting translations of \(w^{1},w^{2},w^{3}\) are all contained in \(\tilde{K}\), the translation of \(w^{2}\) due to the steepness of \((q^{1},q^{3})\). Thus, \(r(K,T)\leq r(\tilde{K},T)\).
Proof of Theorem 4.10.: The equality case follows directly from Lemma 4.9. So we only need to prove the correctness of the inequality.
Let \(K\subset^{\operatorname{opt}}T\). As shown in the proof of Theorem 4.5, we either have \(D_{\operatorname{MAX}}(K,T)\geq\frac{3}{2}\) or three touching points of \(K\) to the boundary of \(T\), each situated on a different edge of \(T\).
In the first case, the left side of the inequality in Theorem 4.10 is non-positive while the right side is always non-negative. Hence, in this case the inequality is fulfilled.
In the other case we consider the triangle \(S:=\operatorname{conv}(\{q^{1},q^{2},q^{3}\})\), where \(q^{i}\in K\) belongs to the edge of \(T\) opposing \(p^{i}\), \(i=1,2,3\). Assume w. l. o. g. that the diameter of \(S\) is attained between \(q^{1}\) and one of the other points and that \(\big{\|}q^{1}-p^{3}\big{\|}_{2}\leq\big{\|}q^{1}-p^{2}\big{\|}_{2}\). Our goal is to show that there exists a triangle \(S_{\lambda}\subset^{\operatorname{opt}}T\), \(\lambda\in[\frac{1}{2},1]\), as defined in Lemma 4.9 with at most the same diameter and inradius as \(S\). Using the fact that \(\big{(}D_{\operatorname{MAX}}-\frac{1}{2}\big{)}\big{(}\frac{3}{2}-D_{ \operatorname{MAX}}\big{)}\) is decreasing in \(D_{\operatorname{MAX}}\) if \(D_{\operatorname{MAX}}\geq 1\), we may
Figure 12. Calculating the inradius of \(S_{\lambda}\).
then conclude
\[\left(\frac{D_{\mathrm{MAX}}(K,T)}{R(K,T)}-\frac{1}{2}\right)\left( \frac{3}{2}-\frac{D_{\mathrm{MAX}}(K,T)}{R(K,T)}\right) \leq\left(\frac{D_{\mathrm{MAX}}(S,T)}{R(S,T)}-\frac{1}{2}\right) \left(\frac{3}{2}-\frac{D_{\mathrm{MAX}}(S,T)}{R(S,T)}\right)\] \[\leq\left(\frac{D_{\mathrm{MAX}}(S_{\lambda},T)}{R(S_{\lambda},T )}-\frac{1}{2}\right)\left(\frac{3}{2}-\frac{D_{\mathrm{MAX}}(S_{\lambda},T)}{ R(S_{\lambda},T)}\right)\] \[=\frac{r(S_{\lambda},T)}{R(S_{\lambda},T)}\leq\frac{r(S,T)}{R(S,T )}\leq\frac{r(K,T)}{R(K,T)}.\]
We distinguish between the two cases if the diameter is attained by \([q^{1},q^{2}]\) or \([q^{1},q^{3}]\) and show that we may always assume that both segments are diametral.
1. This case is depicted in Figure 14. If \(D_{\mathrm{MAX}}(S,T)=D_{\mathrm{MAX}}([q^{1},q^{2}],T)\) our assumption implies that \(\left\|q^{2}-p^{1}\right\|_{2}\leq\left\|q^{2}-p^{3}\right\|_{2}\). Let us assume otherwise. Then we would have \([q^{1},q^{2}]\subset\mathrm{conv}(\left\{p^{3},\frac{1}{2}(p^{2}+p^{3}),\frac{1}{2}(p^{1}+p^{3})\right\})=\frac{1}{2}(p^{3}+T)\) with \([q^{1},q^{2}]\) not being an edge of \(\frac{1}{2}(T+p^{3})\), which implies \(D_{\mathrm{MAX}}([q^{1},q^{2}],T)=2R([q^{1},q^{2}],T_{\mathrm{MAX}})<2R(\frac{1}{2}T,T_{\mathrm{MAX}})=1=R(K,T)\), contradicting Theorem 4.5. Hence, \(\left\|q^{2}-p^{1}\right\|_{2}\leq\sqrt{3}/2\leq\left\|q^{1}-p^{2}\right\|_{2}\), which implies the steepness of \((q^{1},q^{2})\). Since \([q^{1},q^{2}]\) is diametral, \(q^{3}\) lies inside \(q^{1}+D_{\mathrm{MAX}}(S,T)T_{\mathrm{MAX}}\). However, \(\left\|q^{1}-p^{3}\right\|_{2}\leq\left\|q^{1}-p^{2}\right\|_{2}\) now implies the existence of an intersection point between \([p^{1},p^{2}]\) and the boundary of \(q^{1}+D_{\mathrm{MAX}}(S,T)T_{\mathrm{MAX}}\) which is not further from \(p^{1}\) than \(q^{3}\). Choosing this point as our new \(q^{3}\) neither increases the diameter nor the inradius (the latter because of Lemma 4.11). Doing so, \([q^{1},q^{3}]\) becomes diametral, too.
2. If \(D_{\mathrm{MAX}}(S,T)=D_{\mathrm{MAX}}([q^{1},q^{3}],T)\), we need to consider three subcases: 1. If \(\left\|q^{3}-p^{2}\right\|_{2}\leq\left\|q^{3}-p^{1}\right\|_{2}\) this corresponds to Case 1 with \(q^{3}\) in the role of \(q^{1}\) and \(q^{2}\) being the vertex that is moved. 2. If \(\left\|q^{3}-p^{2}\right\|_{2}\geq\left\|q^{3}-p^{1}\right\|_{2}\) and \((q^{1},q^{3})\) is steep one can move \(q^{2}\) in the direction of \(p^{1}\) such that \([q^{1},q^{2}]\) becomes diametral, too. 3. If \(\left\|q^{3}-p^{2}\right\|_{2}\geq\left\|q^{3}-p^{1}\right\|_{2}\) and \((q^{3},q^{1})\) is steep this corresponds to Case 2b) with roles of \(q^{1}\) and \(q^{3}\) interchanged, which means that we may move \(q^{2}\) towards \(p^{3}\).
Altogether we see that assuming \(\left\|q^{2}-q^{1}\right\|_{T_{\mathrm{MAX}}}=\left\|q^{3}-q^{1}\right\|_{T_{\mathrm{MAX}}}\) is possible and doing so the points \(q^{2}\) and \(q^{3}\) do not only lie on the boundary of \(q^{1}+D_{\mathrm{MAX}}(S,T)T_{\mathrm{MAX}}\), they essentially lie on the (translated and dilatated) edges \([p^{1},-p^{3}]\) or \([p^{1},-p^{2}]\) of \(T_{\mathrm{MAX}}\). For \(q^{2}\) this follows from the fact that it has to lie closer to \(p^{1}\) than to \(p^{3}\). As described in the proof of Case 1, the boundary of \(q^{1}+D_{\mathrm{MAX}}(S,T)T_{\mathrm{MAX}}\) intersects \([p^{1},p^{2}]\) once or twice, but it is not possible that it only intersects with the segment \(q^{1}+D_{\mathrm{MAX}}(S,T)[p^{2},-p^{3}]\) as this would contradict our assumption
Figure 13. Proof of Lemma 4.11. Since \((q^{1},q^{3})\) is steep, the inner triangle can be translated along the orange hyperplanes.
\(D_{\rm MAX}(S,T)=D_{\rm MAX}([q^{1},q^{2}],T)\). If we have two intersection points, we can replace \(q^{3}\) if necessary by the upper one without increasing the inradius since \((q^{1},q^{2})\) is steep. Thus, we can assume \(q^{3}\in q^{1}+D_{\rm MAX}(S,T)[-p^{3},p^{1}]\).
For the next part of the proof we now assume that \(q^{3}\in q^{1}+D_{\rm MAX}(S,T)[p^{1},-p^{3}]\) and \(q^{2}\in q^{1}+D_{\rm MAX}(S,T)[p^{1},-p^{2}]\) is true. This is not the case if \(q^{1}_{1}>q^{2}_{1}\) (see Figure 15). But then, we know \(q^{3}_{2}<q^{2}_{2}\) and \((q^{2},q^{3})\) is steep. Thus, replacing \(q^{1}\) by \(\tilde{q}^{1}\) such that \(\tilde{q}^{1}_{1}=q^{2}_{1}\) does not increase the diameter or the inradius and we still have \(\|q^{3}-q^{1}\|_{T_{\rm MAX}}=\|q^{2}-q^{1}\|_{T_{\rm MAX}}\). This shows that we can assume \(q^{1}_{1}\leq q^{2}_{1}\) and that \(q^{2}\in q^{1}+D_{\rm MAX}(S,T)[-p^{2},p^{1}]\).
Now, we consider the triangle \(S_{\lambda}={\rm conv}(\{\tilde{q}^{1},\tilde{q}^{2},\tilde{q}^{3}\})\) with \(\lambda\) as in Lemma 4.9 such that it has the same diameter. Due to symmetry reasons and our assumptions about the positions of \(q^{2}\) and \(q^{3}\), \(\left\|q^{1}-\tilde{q}^{1}\right\|_{2}=\left\|q^{2}-\tilde{q}^{2}\right\|_{2} =\left\|q^{3}-\tilde{q}^{3}\right\|_{2}=:\kappa\) (see Figure 16).
By using the intercept theorem and the law of sines we will show that, possibly after a suitable translation, all vertices of the inner triangle of \(S_{\lambda}\) are contained in \(S\).
Figure 14. Proof of Case 1 of Theorem 4.10. The diameter is attained between \(q^{1}\) and \(q^{2}\). We can replace \(q^{3}\) such that it is attained between \(q^{1}\) and \(q^{3}\) as well since \((q^{1},q^{2})\) is steep.
Figure 15. Proof of Theorem 4.10. We can assume that \(q^{3}\) lies on \(q^{1}+D_{\rm MAX}(S,T)[p^{1},-p^{3}]\) and \(q^{2}\) lies on \(q^{1}+D_{\rm MAX}(S,T)[p^{1},-p^{2}]\). Otherwise we can consider the triangle \(\tilde{S}={\rm conv}(\{\tilde{q}^{1},q^{2},q^{3}\})\) which has smaller or equal inradius and diameter.
Hence, the inradius of \(S_{\lambda}\) is at most the one of \(S\). We denote the vertices of the inner triangle of \(S_{\lambda}\) by \(w^{i}\) (see Figure 18) and the euclidean distance in the horizontal direction of \(w^{i}\) to the segment \([q^{j},q^{k}]\), \(\{i,j,k\}=\{1,2,3\}\), by \(l_{i}\). If \(\kappa\neq 0\), every side of \(S\) intersects the corresponding side of \(S_{\lambda}\) exactly once. Let us denote these intersection points by \(v^{i}\) (cf. Figure 17). We will show that if we shift the inner triangle of \(S_{\lambda}\) by \(l_{3}\) to the left it is completely contained in \(S\). To do so we compute all the values \(l_{i}\), \(i=1,2,3\) and prove that we have \(l_{3}\leq l_{j}\), \(j=1,2\).
Computation of \(l_{1}\) (cf. Figure 17): We use the intercept theorem for \([\tilde{q}^{2},\tilde{q}^{3}]\) and the two lines parallel to this segment through \(q^{2}\) and \(q^{3}\), respectively. Since \(T\) is an equilateral triangle, \(\operatorname{conv}(\{p^{1},\tilde{q}^{3},\tilde{q}^{2}\})\) is also equilateral with an edge length of \((1-\lambda)\sqrt{3}\). Furthermore,
\[\|\tilde{q}^{3}-v^{1}\|_{2}=\frac{1}{2}\|r-q^{2}\|_{2}=\frac{1}{2}\|p^{1}-q^{2 }\|_{2}=\frac{1}{2}(\|p^{1}-\tilde{q}^{2}\|_{2}-\kappa).\]
It follows that \(w^{1}\) is always contained in \(S\) and
\[l_{1}=\frac{1}{2}\|p^{1}-\tilde{q}^{2}\|_{2}-\|\tilde{q}^{3}-v^{1}\|_{2}=\frac {1}{2}\kappa.\]
Computation of \(l_{2}\) (cf. Figure 18): Let \(\alpha_{2}=\angle v^{2}q^{3}\tilde{q}^{3}\), \(\beta_{2}=\angle v^{2}q^{1}p^{2}\) and \(\gamma_{2}=\angle\tilde{q}^{1}v^{2}q^{1}\). We compute \(\frac{\|\tilde{q}^{3}-v^{2}\|_{2}}{\|\tilde{q}^{1}-v^{2}\|_{2}}\) using the law of sines, first for the triangles \(\operatorname{conv}(\{v^{2},q^{1},\tilde{q}^{1}\})\) and \(\operatorname{conv}(\{v^{2},q^{3},\tilde{q}^{3}\})\), and then for \(\operatorname{conv}(\{p^{2},q^{1},q^{3}\})\).
Figure 16. Proof of Theorem 4.10. Transformation of \(S\) into \(S_{\lambda}\) with the same diameter. The distances \(\|q^{i}-\tilde{q}^{i}\|_{2}\) are equal.
Figure 17. Proof of Theorem 4.10. Computation of \(l_{1}\). The triangles defined by \(p^{1}\) and the intersection points of the parallel lines are all three equilateral.
\[\frac{\|\tilde{q}^{3}-v^{2}\|_{2}}{\|\tilde{q}^{1}-v^{2}\|_{2}} =\frac{\sin\left(\gamma_{2}\right)}{\kappa\sin\left(\beta_{2} \right)}\cdot\frac{\kappa\sin\left(\alpha_{2}\right)}{\sin\left(\gamma_{2} \right)}=\frac{\sin\left(\alpha_{2}\right)}{\sin\left(\beta_{2}\right)}\] \[=\frac{\sin\left(\pi-\alpha_{2}\right)}{\sin\left(\beta_{2} \right)}=\frac{\frac{\sqrt{3}}{2}+\kappa}{\sqrt{3}\lambda-\kappa}\]
Furthermore, we know from Lemma 4.9 that \(r(S_{\lambda},T)=\lambda(1-\lambda)\). Together with the intercept theorem we obtain
\[\frac{\|\tilde{q}^{3}-w^{2}\|_{2}}{\|\tilde{q}^{1}-w^{2}\|_{2}}=\frac{w_{2}^{ 1}-w_{2}^{2}}{(\tilde{q}_{2}^{3}-\tilde{q}_{2}^{1})-(w_{2}^{1}-w_{2}^{2})}= \frac{1-\lambda}{\lambda}.\]
Since \(\frac{\frac{\sqrt{3}}{2}+\kappa}{\sqrt{3}\lambda-\kappa}\geq\frac{1-\lambda}{\lambda}\) for \(\lambda\in[\frac{1}{2},1]\), \(v^{2}\) is closer to \(\tilde{q}^{1}\) than \(w^{2}\) and therefore also \(w^{2}\in S.\)
The distance of \(w^{2}\) to \([q^{1},q^{3}]\) in the direction of \((-1,0)^{T}\) is by the intercept theorem
\[l_{2}=\kappa\cdot\frac{\|v^{2}-w^{2}\|_{2}}{\|\tilde{q}^{1}-v^{2}\|_{2}}=\kappa \cdot\frac{\|\tilde{q}^{3}-v^{2}\|_{2}-\|\tilde{q}^{3}-w^{2}\|_{2}}{\|\tilde{ q}^{1}-v^{2}\|_{2}}\]
Computation of \(l_{3}\) (cf. Figure 18): If \(w^{3}\) is also contained in \(S\), we have that the complete inner triangle of \(S_{\lambda}\) is contained in \(S\) and we are done.
Otherwise, \(\|\tilde{q}^{1}-v^{3}\|_{2}\leq\|\tilde{q}^{1}-w^{3}\|_{2}\) and we need to show that we can translate the inner triangle to be in \(S\). To do so, we compute the distance \(l_{3}\) of \(w^{3}\) to \([q^{1},q^{2}]\) in the direction of \((-1,0)^{T}\), which can be done completely analogously to \(l_{2}\): Let \(\alpha_{3}=\angle q^{1}q^{2}p^{3}\), \(\beta_{3}=\angle v^{3}q^{1}p^{2}\) and \(\gamma_{3}=\angle\tilde{q}^{1}v^{3}q^{1}\). Then,
\[\frac{\|\tilde{q}^{2}-v^{3}\|_{2}}{\|\tilde{q}^{1}-v^{3}\|_{2}}=\frac{\sin \left(\gamma_{3}\right)}{\kappa\sin\left(\beta_{3}\right)}\cdot\frac{\kappa \sin\left(\alpha_{3}\right)}{\sin\left(\gamma_{3}\right)}=\frac{\sin\left( \alpha_{3}\right)}{\sin\left(\pi-\beta_{3}\right)}=\frac{\frac{\sqrt{3}}{2}- \kappa}{\sqrt{3}\lambda+\kappa}.\]
and \(\frac{\|\tilde{q}^{2}-w^{3}\|_{2}}{\|\tilde{q}^{1}-w^{3}\|_{2}}=\frac{1-\lambda}{\lambda}\). Thus,
\[\frac{\|\tilde{q}^{2}-v^{3}\|_{2}}{\|\tilde{q}^{1}-v^{3}\|_{2}}\leq\frac{\| \tilde{q}^{3}-v^{2}\|_{2}}{\|\tilde{q}^{1}-v^{2}\|_{2}}.\]
Together with \(\|\tilde{q}^{2}-\tilde{q}^{1}\|_{2}=\|\tilde{q}^{3}-\tilde{q}^{1}\|_{2}\) we obtain \(\|\tilde{q}^{1}-v^{2}\|_{2}\leq\|\tilde{q}^{1}-v^{3}\|_{2}\) and \(\|\tilde{q}^{2}-v^{3}\|_{2}\leq\|\tilde{q}^{3}-v^{2}\|_{2}\). Hence,
\[l_{3} =\kappa\cdot\frac{\|v^{3}-w^{3}\|_{2}}{\|\tilde{q}^{1}-v^{3}\|_{2 }}=\kappa\cdot\left(\frac{\|\tilde{q}^{2}-v^{3}\|_{2}-\|\tilde{q}^{2}-w^{3}\|_ {2}}{\|\tilde{q}^{1}-v^{3}\|_{2}}\right)\] \[\leq\kappa\cdot\left(\frac{\|\tilde{q}^{3}-v^{2}\|_{2}-\|\tilde{ q}^{3}-w^{2}\|_{2}}{\|\tilde{q}^{1}-v^{2}\|_{2}}\right)\] \[=l_{2}.\]
Moreover, since \(\kappa\geq 0\) and \(\lambda\in[\frac{1}{2},1]\) it follows
\[l_{3} =\kappa\cdot\left(\frac{\|\tilde{q}^{2}-v^{3}\|_{2}-\|\tilde{q}^{ 2}-w^{3}\|_{2}}{\|\tilde{q}^{1}-v^{3}\|_{2}}\right)\] \[\leq\kappa\cdot\left(\frac{\|\tilde{q}^{2}-v^{3}\|_{2}}{\|\tilde{ q}^{1}-v^{3}\|_{2}}-\frac{\|\tilde{q}^{2}-w^{3}\|_{2}}{\|\tilde{q}^{1}-w^{3}\|_{2}}\right)\] \[=\kappa\cdot\left(\frac{\frac{\sqrt{3}}{2}-\kappa}{\sqrt{3} \lambda+\kappa}-\frac{1-\lambda}{\lambda}\right)\] \[\leq\kappa\cdot\left(\frac{1}{2\lambda}-\frac{1-\lambda}{\lambda }\right)=\kappa\cdot\left(1-\frac{1}{2\lambda}\right)\] \[\leq\frac{1}{2}\cdot\kappa=l_{1}.\]
Hence, if we translate the inner triangle of \(S_{\lambda}\) by \((l_{3},0)^{T}\), it is contained in \(S\), which proves \(r(S_{\lambda},T)\leq r(S,T)\).
The collected inequalities are now sufficient to provide a full description of the Blaschke-Santalo diagram \(f_{\mathrm{MAX}}(\bar{\mathcal{C}}^{2},T)\) (cf. Figure 19).
**Theorem 4.12**.: _For every Minkowski-centered triangle \(S\) the diagram \(f_{\mathrm{MAX}}(\bar{C}^{2},S)\) is fully described by the inequalities_
\[\frac{D_{\mathrm{MAX}}(K,S)}{2} \leq R(K,S)\] \[r(K,S) \leq\frac{D_{\mathrm{MAX}}(K,S)}{2}\] \[0 \leq r(K,S)\] \[R(K,S) \leq D_{\mathrm{MAX}}(K,S)\] \[\left(\frac{D_{\mathrm{MAX}}(K,S)}{R(K,S)}-\frac{1}{2}\right) \left(\frac{3}{2}-\frac{D_{\mathrm{MAX}}(K,S)}{R(K,S)}\right)\leq\frac{r(K,S) }{R(K,S)}.\]
Proof.: Since all Minkowski-centered triangles can be linearly transformed into the equilateral triangle \(T\) it suffices to show the claim for \(T\). We give a continuous description of the boundaries described by the inequalities (2), (3), and (5), as well as those given by Theorem 4.5 and Theorem 4.10. First, (2) is attained with equality for \((1-\lambda)L_{D}+\lambda T\), \(\lambda\in[0,1]\) where \(L_{D}\) and \(T\) are the extreme cases. Second, (3) is attained with equality for \((1-\lambda)T+\lambda T_{\mathrm{MAX}}\), \(\lambda\in[0,1]\) by Lemma 3.2 where \(T\) and \(T_{\mathrm{MAX}}\) are the extreme cases. The boundary induced by (5) is filled by segments from \(L_{w}\) to \(L_{D}\). Next, since \(T_{\mathrm{MAX}}\) is the completion of \(-T\), the boundary from Theorem 4.5 is filled by the sets \(-T_{+}\) with \(-T\subset-T_{+}\subset T_{\mathrm{MAX}}\). Finally, the inequality in Theorem 4.10 is fulfilled with equality by the triangles \(S_{\lambda}\) as introduced in Lemma 4.9. Since we have presented a continuous description of the boundary we can apply Lemma 3.19 and conclude that the diagram is simply connected.
While the inequalities (2), (3), (5) and the one given by Theorem 4.5 are valid for all choices of planar, Minkowski-centered gauges, the inequality from Theorem 4.10 is only proven for triangles. Using the
Figure 18. Proof of Theorem 4.10. Computation of \(l_{2}\) and \(l_{3}\).
result from Proposition 3.18 and \(D_{\rm AM}(K,C)\leq\frac{4}{3}D_{\rm MAX}(K,C)\) which follows from \(\frac{s(C)+1}{2s(C)}C_{\rm MAX}\subset^{\rm opt}C_{\rm AM}\)[7], we are able to give another general inequality
\[\frac{2}{3}\frac{D_{\rm MAX}(K,C)}{R(K,C)}\left(1-\frac{2}{3}\frac{D_{\rm MAX} (K,C)}{R(K,C)}\right)\leq\frac{r(K,C)}{R(K,C)}, \tag{8}\]
which enables us to give a bound for the union of the diagrams \(f_{\rm MAX}(\bar{\mathcal{C}}^{2},C)\) with \(C\) Minkowski-centered (depicted in purple within Figure 19).
**Conjecture 4.13**.: _The diagram of any triangle is dominating, i.e. for every Minkowski-centered \(C\) and every Minkowski-centered triangle \(S\) we have \(f_{\rm MAX}(\bar{\mathcal{C}}^{2},C)\subset f_{\rm MAX}(\bar{\mathcal{C}}^{2},S)\)._
## 5. The diameter \(D_{\rm HM}\)
In the case of the harmonic mean, the factors \(\rho_{\rm HM}\) and \(\delta_{\rm HM}\) are not determined by the Minkowski asymmetry \(s(C)\) alone. However, in [7] the following bounds are proven:
**Proposition 5.1**.: \[1\leq\rho_{\rm HM}\leq\frac{(s(C)+1)^{2}}{4s(C)}\leq\delta_{\rm HM}\leq\frac{ s(C)+1}{2}.\]
Taking \(K=C_{\rm HM}\), we know \(R(C_{\rm HM},C)=\frac{2s(C)}{s(C)+1}\), \(r(C_{\rm HM},C)=\frac{2}{s(C)+1}\) and \(D_{\rm HM}(C_{\rm HM},C)=2\)[7]. Furthermore, the inequalities from Lemma 3.16 have the form:
\[\frac{D_{\rm HM}(K,C)}{2}\leq\delta_{\rm HM}R(K,C), \tag{9}\]
\[\delta_{\rm HM}r(K,C)\leq\frac{D_{\rm HM}(K,C)}{2}, \tag{10}\]
\[s(C)r(K,C)+R(K,C)\leq\frac{s(C)+1}{2\rho_{\rm HM}}D_{\rm HM}(K,C)\leq\frac{s( C)+1}{2}D_{\rm HM}(K,C), \tag{11}\]
Figure 19. The diagram \(f_{\rm MAX}(\bar{\mathcal{C}}^{2},S)\) w. r. t. a Minkowski-centered triangle \(S\) (black) and an upper bound for the union over all Minkowski-centered gauges bounded by (8) (additional purple region).
\[r(K,C)+R(K,C)\leq\frac{2s(C)}{s(C)+1}D_{\rm HM}(K,C) \tag{12}\]
and
\[0\leq r(K,C) \tag{13}\]
Thus, if \(\delta_{\rm HM}=\frac{s(C)+1}{2}\) then \(C_{\rm HM}\) fulfills (10) with equality and if \(\rho_{\rm HM}=\frac{(s(C)+1)^{2}}{4s(C)}\) then \(C_{\rm HM}\) fulfills the first inequality in (11) with equality, which in this case can be rewritten as
\[s(C)r(K,C)+R(K,C)\leq\frac{2s(C)}{s(C)+1}D_{\rm HM}(K,C). \tag{14}\]
Unlike with \(D_{\rm MAX}\), (a dilatate of) the symmetrization \(C_{\rm HM}\) is not always a completion of the gauge.
**Lemma 5.2**.: _The following are equivalent:_
1. \(\delta_{\rm HM}=\frac{s(C)+1}{2}\)_,_
2. \(\frac{s(C)+1}{2}C_{\rm HM}\) _is a completion of_ \(C\) _w. r. t._ \(C\)_, and_
3. \(D(C,C_{\rm HM})=2R(C,C_{\rm HM})\)_._
Proof.: We know from [7] that \(C\subset^{\rm opt}\frac{s(C)+1}{2}C_{\rm HM}\) and therefore \(\frac{s(C)+1}{2}C_{\rm HM}\) is a complete set containing \(C\) with \(R(C,C_{\rm HM})=\frac{s(C)+1}{2}\).
\(i)\Rightarrow ii)\): If \(\delta_{\rm HM}=\frac{s(C)+1}{2}\), then \(D_{\rm HM}\left(\frac{s(C)+1}{2}C_{\rm HM},C\right)=s(C)+1=2\delta_{\rm HM}=D_{\rm HM}(C,C)\), implying that \(\frac{s(C)+1}{2}C_{\rm HM}\) is a completion of \(C\).
\(ii)\Rightarrow iii)\): If \(\frac{s(C)+1}{2}C_{\rm HM}\) is a completion of \(C\), then \(D(C,C_{\rm HM})=D_{\rm HM}(C,C)=s(C)+1=2R(C,C_{\rm HM})\).
\(iii)\Rightarrow i)\): Assuming \(iii)\), it follows that \(2\delta_{\rm HM}=2R(C_{\rm AM},C_{\rm HM})=D(C,C_{\rm HM})=2R(C,C_{\rm HM})=s(C)+1\).
If \(\frac{s(C)+1}{2}C_{\rm HM}\) is a completion of the gauge \(C\), then it is also a completion of \(-C\) and since \(R(-C,C)=s(C)=R(\frac{s(C)+1}{2}C_{\rm HM},C)\) for \(-C\) it is even a Scott-completion.
**Example 5.3**.: _The Reuleaux triangle \({\rm RT}\) is the completion of the equilateral triangle \(T\) in the euclidean case:_
\[{\rm RT}:=\bigcap_{i=1}^{3}p^{i}+\sqrt{3}{\rm B}_{2}^{2},\]
_where the \(p^{i}\) are the vertices of \(T\). For the Reuleaux triangle one obtains (omitting the detailed calculations) \(s({\rm RT})=\frac{1}{\sqrt{3}-1}\approx 1.366\), \(\delta_{\rm HM}=\frac{\sqrt{3}}{\sqrt{11}-\sqrt{3}}\approx 1.093<\frac{s({\rm RT})+1}{2}\) and \(\rho_{\rm HM}=\frac{(s({\rm RT})+1)^{2}}{4s({\rm RT})}=\frac{3(\sqrt{3}+1)}{8}\approx 1.025\). Thus, by Lemma 5.2 this is a case where the (dilatated) harmonic mean \({\rm RT}_{\rm HM}\) is not a completion of the gauge \({\rm RT}\)._
_Since \(T\subset{\rm RT}\) is diametric we obtain from Lemma 3.15 that the unique completion of \({\rm RT}\) is_
\[{\rm RT}^{*}:=\bigcap_{i=1}^{3}p^{i}+2\delta_{\rm HM}\,{\rm RT}_{\rm HM}\quad\mbox{(cf. Figure 20).}\]
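The numerical constants quoted in Example 5.3 are easy to re-evaluate; the following short check is our own addition and also confirms that they respect the bounds of Proposition 5.1.

```python
# Re-evaluation (ours) of the Reuleaux-triangle constants from Example 5.3.
import math

s = 1 / (math.sqrt(3) - 1)
delta = math.sqrt(3) / (math.sqrt(11) - math.sqrt(3))
rho = 3 * (math.sqrt(3) + 1) / 8
print(round(s, 3), round(delta, 3), round(rho, 3))          # 1.366 1.093 1.025
print(math.isclose(rho, (s + 1) ** 2 / (4 * s)),            # rho_HM hits its upper bound
      (s + 1) ** 2 / (4 * s) <= delta <= (s + 1) / 2)       # Proposition 5.1 chain
```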
For the equilateral triangle, we have \(\delta_{\rm HM}=\frac{s(T)+1}{2}=\frac{3}{2}\), \(\rho_{\rm HM}=\frac{(s(T)+1)^{2}}{4s(T)}=\frac{9}{8}\) and \(T_{\rm HM}=\frac{2}{3}T_{\rm MAX}\)[6]. Thus, \(D_{\rm HM}(K,T)=\frac{3}{2}D_{\rm MAX}(K,T)\) for all \(K\in\mathcal{C}^{2}\) and we can transfer all inequalities concerning triangles from the previous section.
**Corollary 5.4**.: _For every Minkowski-centered triangle \(S\) the diagram \(f_{\rm HM}(\bar{\mathcal{C}}^{2},S)\) is fully described by the inequalities_
\[D_{\rm HM}(K,S) \leq 3R(K,S)\] \[3r(K,S) \leq D_{\rm HM}(K,S)\] \[0 \leq r(K,S)\] \[R(K,S) \leq\frac{2}{3}D_{\rm HM}(K,S)\] \[\frac{4}{9}\left(\frac{D_{\rm HM}(K,S)}{R(K,S)}-\frac{3}{4}\right)\left(\frac{9}{4}-\frac{D_{\rm HM}(K,S)}{R(K,S)}\right) \leq\frac{r(K,S)}{R(K,S)}\]
Figure 21. The Blaschke-Santalo diagram \(f_{\rm HM}(\bar{\mathcal{C}}^{2},{\rm RT})\) with conjectured inequalities (purple). Keep in mind that the (dilatated) harmonic symmetrization of the Reuleaux triangle is not its completion.
Figure 20. Reuleaux Triangle RT (dashed), its completion \({\rm RT}^{*}\) (left) and its harmonic symmetrization \({\rm RT}_{\rm HM}\) (right).
_(cf. Figure 22)._
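The last inequality is exactly the transfer of the final inequality of Theorem 4.12 under \(D_{\mathrm{MAX}}(K,S)=\frac{2}{3}D_{\mathrm{HM}}(K,S)\); a short symbolic verification of this rewriting (our own addition) follows.

```python
# Symbolic check (ours): substituting D_MAX = (2/3) * D_HM into
# (D_MAX/R - 1/2) * (3/2 - D_MAX/R) gives 4/9 * (D_HM/R - 3/4) * (9/4 - D_HM/R).
import sympy as sp

d = sp.symbols("d")  # d stands for D_HM(K, S) / R(K, S)
lhs = (sp.Rational(2, 3) * d - sp.Rational(1, 2)) * (sp.Rational(3, 2) - sp.Rational(2, 3) * d)
rhs = sp.Rational(4, 9) * (d - sp.Rational(3, 4)) * (sp.Rational(9, 4) - d)
print(sp.simplify(lhs - rhs))  # 0
```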
**Remark 5.5**.: The bound \(D_{\rm HM}(K,C)\geq R(K,C)\) derived from Corollary 4.6 for the planar case cannot be reached in case of the harmonic diameter. If \(D_{\rm HM}(K,C)=2\), we obtain from the containment chain
\[\frac{1}{2}\left(1+\frac{1}{s(K)}\right)K\subset\frac{K-K}{2}\subset C_{\rm HM} \subset\frac{2s(C)}{s(C)+1}C,\]
that
\[R(K,C)\leq\frac{4s(K)s(C)}{(s(K)+1)(s(C)+1)}\leq\frac{4n^{2}}{(n+1)^{2}}. \tag{15}\]
Thus,
\[D_{\rm HM}(K,C)\geq\frac{(n+1)^{2}}{2n^{2}}R(K,C),\]
which for \(n=2\) gives \(D_{\rm HM}(K,C)\geq\frac{9}{8}R(K,C)>R(K,C)\). Furthermore, equality in (15) can only be reached if \(K\) and \(C\) are simplices, but we have shown that for Minkowski-centered triangles \(D_{\rm HM}(K,S)\geq\frac{3}{2}R(K,S)\). Finally, one may recognize that if we consider \(0\)-symmetric planar gauges, (15) also yields \(D_{\rm HM}(K,C)\geq\frac{3}{2}R(K,C)\).
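For \(n=2\) the constants above work out as follows (our own arithmetic spot check, not part of the text):

```python
# Arithmetic spot check (ours) of Remark 5.5 for n = 2.
n, sK, sC = 2, 2, 2
print((n + 1) ** 2 / (2 * n ** 2))              # 1.125 = 9/8, the Jung-type constant
print(4 * sK * sC / ((sK + 1) * (sC + 1)))      # 16/9, the bound on R(K, C) in (15)
print(2 / (16 / 9))                             # 1.125 again: D_HM = 2 forces R <= 16/9
```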
The following system of inequalities provides an upper bound for the union of the diagrams \(f_{\rm HM}(\mathcal{C}^{2},C)\) over all Minkowski-centered gauges \(C\in\mathcal{C}_{0}^{2}\) (cf. Figure 22).
\[0 \leq r(K,C)\] \[r(K,C) \leq R(K,C)\] \[D_{\rm HM}(K,C) \leq 3R(K,C)\] \[2r(K,C) \leq D_{\rm HM}(K,C)\] \[9R(K,C) \leq 8D_{\rm HM}(K,C)\] \[\frac{D_{\rm HM}(K,C)}{2R(K,C)}\left(1-\frac{D_{\rm HM}(K,C)}{2R (K,C)}\right) \leq\frac{r(K,C)}{R(K,C)}.\]
Moreover, the following parts of the boundary described by the above inequalities are reached:
1. \(r(K,C)=0\) for segments \(K=L_{D}\) for gauges \(C\) with \(s(C)\in[1,2]\)
2. \(r(K,C)=R(K,C)\) for \(K=C\) for gauges \(C\) with \(s(C)\in[1,2]\)
3. \(D_{\rm HM}(K,C)=3R(K,C)\) with \(C\) being a triangle as in \(f_{\rm HM}(\mathcal{C}^{2},S)\)
The first two inequalities are trivial. The third and fourth follow from Lemma 3.16 and the fifth from Remark 5.5. The last inequality follows from Proposition 3.18 and Remark 3.1. The equality cases follow from Lemma 3.2, Lemma 3.8 and Corollary 5.4.
**Remark 5.6**.: Since \(\delta_{\rm HM}\) is not the same for every gauge as it is in the arithmetic case, the diagram for the equilateral triangle cannot be dominating. For every \(C\), \(f_{\rm HM}(C,C)=(1,\delta_{\rm HM})\) and this is always the only combination where inradius and circumradius coincide. Thus, we cannot find a single gauge \(C\) which defines the union of the diagrams.
## 6. The diameter \(D_{\rm MIN}\)
In the case of the minimum, \(\delta_{\rm MIN}\) depends solely on the asymmetry of \(C\), but \(\rho_{\rm MIN}\) can only be bounded in terms of the asymmetry [7].
**Proposition 6.1**.: \[1\leq \rho_{\rm MIN}\leq\frac{s(C)+1}{2}=\delta_{\rm MIN}\]
Taking \(K=C_{\rm MIN}\), we know \(R(C_{\rm MIN},C)=1\), \(r(C_{\rm MIN},C)=\frac{1}{s(C)}\) and \(D_{\rm MIN}(C_{\rm MIN},C)=2\)[7]. The inequalities from Lemma 3.16 have the form:
\[D_{\rm MIN}(K,C)\leq(s(C)+1)R(K,C), \tag{16}\]
\[(s(C)+1)r(K,C)\leq D_{\rm MIN}(K,C), \tag{17}\]
\[s(C)r(K,C)+R(K,C)\leq\rho_{\rm MIN}(s(C)r(K,C)+R(K,C))\leq(s(C)+1)\frac{D_{\rm MIN}(K,C)}{2}, \tag{18}\]
\[r(K,C)+R(K,C)\leq D_{\rm MIN}(K,C), \tag{19}\]
\[0\leq r(K,C). \tag{20}\]
Contrary to the other diameters the (dilatated) symmetrization only yields a completion of the gauge \(C\) in the trivial case.
**Lemma 6.2**.: \(\rho C_{\rm MIN}\) _is a completion of \(C\) if and only if \(\rho=s(C)=1\), which means \(C\) is \(0\)-symmetric._
Proof.: We have \(C\subset^{\rm opt}s(C)C_{\rm MIN}\), which shows that \(\rho\) must equal \(s(C)\). However,
\[D_{\rm MIN}(s(C)C_{\rm MIN},C)=2s(C)\geq s(C)+1=2\delta_{\rm MIN}=D_{\rm MIN}(C,C),\]
with equality if and only if \(s(C)=1\).
Next, we prove a Jung-bound. Bohnenblust [2] shows for symmetric gauges:
Figure 22. The diagram \(f_{\rm HM}(C^{2},S)\) w. r. t. a Minkowski-centered triangle \(S\) (left) and an upper bound for the union of the diagrams over all Minkowski-centered gauges \(C\in\mathcal{C}_{0}^{2}\) (right).
**Proposition 6.3**.: \[\frac{n+1}{2n}\leq\frac{D(K,C)}{2R(K,C)}. \tag{21}\]
For the minimum diameter we obtain the same bound.
**Corollary 6.4**.: \[\frac{n+1}{2n}\leq\frac{D_{\mathrm{MIN}}(K,C)}{2R(K,C)} \tag{22}\]
_Moreover, equality can be obtained only if \(K\) is a simplex. For every \(s_{C}\in[1,n]\), there exists \(C\in\mathcal{C}_{0}^{n}\), Minkowski-centered, with \(s(C)=s_{C}\) and a simplex \(K\) such that the inequality is tight._
Proof.: Using Proposition 6.3 for the gauge \(C_{\mathrm{MIN}}\) and Remark 3.1, it follows
\[\frac{n+1}{2n}\leq\frac{D(K,C_{\mathrm{MIN}})}{2R(K,C_{\mathrm{MIN}})}\leq \frac{D(K,C_{\mathrm{MIN}})}{2R(K,C)}=\frac{D_{\mathrm{MIN}}(K,C)}{2R(K,C)}. \tag{23}\]
By [12, Theorem 4.1] equality in Bohnenblust's inequality can only be attained if \(K\) is a simplex. Let \(S\) be a Minkowski-centered simplex. Then, \(C=S\cap s_{C}(-S)\) with \(s_{C}\in[1,n]\) has Minkowski asymmetry \(s(C)=s_{C}\), and for \(K=-S\), we have \(R(K,C)=n\) and \(D_{\mathrm{MIN}}(K,C)=D(K,C_{\mathrm{MIN}})=D(S,S\cap(-S))=2R(\frac{S-S}{2},S\cap(-S))=n+1\).
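For \(n=2\) and \(S=T\) this tightness example can be verified numerically. The sketch below is our own addition: it computes \(R(-T,C)\) by a linear program and \(D_{\mathrm{MIN}}(-T,C)=D(-T,C_{\mathrm{MIN}})\) via the gauge function of \(C_{\mathrm{MIN}}=T\cap(-T)\); all helper names are ours.

```python
# Numeric check (ours), n = 2: for C = T intersected with s(-T) and K = -T we
# expect R(K, C) = 2 and D_MIN(K, C) = 3, so D_MIN / (2R) = 3/4 = (n+1)/(2n).
import numpy as np
from scipy.optimize import linprog

def halfplanes(V):
    A, b = [], []
    for i in range(len(V)):
        u, w = V[i], V[(i + 1) % len(V)]
        n = np.array([w[1] - u[1], u[0] - w[0]])
        A.append(n)
        b.append(n @ u)
    return np.array(A), np.array(b)

s3 = np.sqrt(3.0)
p = np.array([[0, 1], [-s3 / 2, -0.5], [s3 / 2, -0.5]])
AT, bT = halfplanes(p)

def circumradius(K, A, b):       # R(K, C) = min rho s.t. K subset c + rho * C
    rows, rhs = [], []
    for q in K:
        for a, bb in zip(A, b):
            rows.append(np.r_[-a, -bb])
            rhs.append(-(a @ q))
    res = linprog([0, 0, 1], A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None), (None, None), (0, None)], method="highs")
    return res.x[2]

Amin, bmin = np.vstack([AT, -AT]), np.r_[bT, bT]        # facets of T cap (-T)
for s in (1.0, 1.5, 2.0):
    A, b = np.vstack([AT, -AT]), np.r_[bT, s * bT]      # facets of T cap s(-T)
    K = -p
    D = max(((Amin @ (u - v)) / bmin).max() for u in K for v in K)
    print(s, round(circumradius(K, A, b), 6), round(D, 6))  # 2.0 and 3.0
```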
Since for Minkowski-centered triangles \(S_{\mathrm{MIN}}=\frac{2}{3}S_{\mathrm{AM}}\), we know \(D_{\mathrm{MIN}}(K,S)=\frac{3}{2}D_{\mathrm{AM}}(K,S)\) and the inequalities from Proposition 3.18 can be transferred to describe \(f_{\mathrm{MIN}}(\mathcal{C}^{2},S)\).
**Corollary 6.5**.: _For every Minkowski-centered triangle \(S\in\mathcal{C}^{2}\), the diagram \(f_{\mathrm{MIN}}(\mathcal{C}^{2},S)\) is fully described by the inequalities_
\[D_{\mathrm{MIN}}(K,S) \leq 3R(K,S)\] \[2r(K,S)+R(K,S) \leq D_{\mathrm{MIN}}(K,S)\] \[\frac{D_{\mathrm{MIN}}(K,S)}{3R(K,S)}\left(1-\frac{D_{\mathrm{MIN}}(K,S)}{3R(K,S)}\right) \leq\frac{r(K,S)}{R(K,S)}.\]
As in the case of the harmonic diameter, since \(\delta_{\mathrm{MIN}}\) is not the same for every gauge, the diagram for the equilateral triangle cannot be dominating. There does not exist a single gauge which defines the union of the diagrams. The following system of inequalities provides an upper bound for the union of the diagrams \(f_{\mathrm{MIN}}(\mathcal{C}^{2},C)\) over all Minkowski-centered gauges \(C\in\mathcal{C}_{0}^{2}\) (cf. Figure 23).
\[0 \leq r(K,C)\] \[r(K,C) \leq R(K,C)\] \[D_{\mathrm{MIN}}(K,C) \leq 3R(K,C)\] \[r(K,C)+R(K,C) \leq D_{\mathrm{MIN}}(K,C)\] \[3R(K,C) \leq 2D_{\mathrm{MIN}}(K,C)\] \[\frac{D_{\mathrm{MIN}}(K,C)}{2R(K,C)}\left(1-\frac{D_{\mathrm{ MIN}}(K,C)}{2R(K,C)}\right) \leq\frac{r(K,C)}{R(K,C)}.\]
Moreover, the following parts of the boundary described by the above inequalities are reached:
1. \(r(K,C)=0\) for segments \(K=L_{D}\) for gauges \(C\) with \(s(C)\in[1,2]\)
2. \(r(K,C)=R(K,C)\) for \(K=C\) for gauges \(C\) with \(s(C)\in[1,2]\)
3. \(D_{\mathrm{MIN}}(K,C)=3R(K,C)\) with \(C\) being a triangle as in \(f_{\mathrm{MIN}}(\mathcal{C}^{2},S)\)
4. \(3R(K,C)=2D_{\mathrm{MIN}}(K,C)\) for \(K=-S\) and \(C=S\cap s(-S)\) with \(s\in[1,2]\)
5. \(r(K,C)+R(K,C)=D_{\mathrm{MIN}}(K,C)\) for \(K=\lambda(-S)+(1-\lambda)S_{\mathrm{MIN}}\) with \(\lambda\in[0,1]\) and \(C=S_{\mathrm{MIN}}\)
The first two inequalities are trivial. The third and the fourth follow from Lemma 3.16 and the fifth from Corollary 6.4. The last inequality follows from Proposition 3.18, Remark 3.1 and the fact that \(R(K,C)\leq D_{\mathrm{MIN}}(K,C)\). The equality cases follow from Lemma 3.2, Lemma 3.8, Corollary 6.5, and the fact that \(R(-S,S_{\mathrm{MIN}})=2\).
|
2309.04828 | FAIR: Flow Type-Aware Pre-Training of Compiler Intermediate
Representations | While the majority of existing pre-trained models from code learn source code
features such as code tokens and abstract syntax trees, there are some other
works that focus on learning from compiler intermediate representations (IRs).
Existing IR-based models typically utilize IR features such as instructions,
control and data flow graphs (CDFGs), call graphs, etc. However, these methods
confuse variable nodes and instruction nodes in a CDFG and fail to distinguish
different types of flows, and the neural networks they use fail to capture
long-distance dependencies and have over-smoothing and over-squashing problems.
To address these weaknesses, we propose FAIR, a Flow type-Aware pre-trained
model for IR that involves employing (1) a novel input representation of IR
programs; (2) Graph Transformer to address over-smoothing, over-squashing and
long-dependencies problems; and (3) five pre-training tasks that we
specifically propose to enable FAIR to learn the semantics of IR tokens, flow
type information, and the overall representation of IR. Experimental results
show that FAIR can achieve state-of-the-art results on four code-related
downstream tasks. | Changan Niu, Chuanyi Li, Vincent Ng, David Lo, Bin Luo | 2023-09-09T15:51:49Z | http://arxiv.org/abs/2309.04828v1 | # FAIR: Flow Type-Aware Pre-Training
###### Abstract.
While the majority of existing pre-trained models from code learn source code features such as code tokens and abstract syntax trees, there are some other works that focus on learning from compiler intermediate representations (IRs). Existing IR-based models typically utilize IR features such as instructions, control and data flow graphs (CDFGs), call graphs, etc. However, these methods confuse variable nodes and instruction nodes in a CDFG and fail to distinguish different types of flows, and the neural networks they use fail to capture long-distance dependencies and have over-smoothing and over-squashing problems. To address these weaknesses, we propose FAIR, a Flow type-Aware pre-trained model for IR that involves employing (1) a novel input representation of IR programs; (2) Graph Transformer to address over-smoothing, over-squashing and long-dependencies problems; and (3) five pre-training tasks that we specifically propose to enable FAIR to learn the semantics of IR tokens, flow type information, and the overall representation of IR. Experimental results show that FAIR can achieve state-of-the-art results on four code-related downstream tasks.
Changan Niu, Chuanyi Li, Vincent Ng, David Lo, and Bin Luo. 2024. FAIR: Flow Type-Aware Pre-Training of Compiler Intermediate Representations. In _2024 IEEE/ACM 46th International Conference on Software Engineering (ICSE '24), April 14-20, 2024, Lisbon, Portugal_. ACM, New York, NY, USA, 12 pages. [https://doi.org/10.1145/3597503.3608136](https://doi.org/10.1145/3597503.3608136)
Such embeddings are not task-agnostic and therefore cannot embed the contextual information of a target downstream task.
Nevertheless, there are several weaknesses in existing work (except for those based on GNNs) on IR-based models w.r.t. the IR features used by these models. Recall that in a CDFG, there are two types of nodes, one for variables/values (operands) and one for instructions. Some existing approaches fail to distinguish between these two types of nodes, embedding them in the same representation space with the same embedding method, while other approaches simply eliminate one type of node, which might greatly reduce performance (Chen et al., 2017; Chen et al., 2018). In addition, existing work treats all flows as equivalent (Zhu et al., 2018) or does not completely distinguish between all flow types (Chen et al., 2017; Chen et al., 2018; Chen et al., 2018). However, the flows in a CFG and a DFG should not be treated as identical. For example, in a CFG, a node may have multiple jump relationships controlled by conditions such as a Boolean expression, while in a DFG, the dependencies between data can be additive, divisive, etc. In fact, the flow-type information does exist in the original CDFG. For example, the flow type of a CFG can be retrieved from the last instruction of the basic block node, and the flow type of a DFG is in the opcode of the instruction node. Existing approaches embed the nodes first and then learn the flow information. As a result, the flow-type information stored in the nodes is diluted by the other text in the nodes when performing node embedding, and more importantly, this flow-type information cannot be correctly associated with the corresponding flows in a CDFG.
Another weakness associated with existing work on IR-based models lies in the model architecture. Specifically, while existing work typically chooses message-passing-based GNNs to encode graphical features such as CDFGs, a CDFG is usually very large, often with more than a thousand nodes and thousands of flows. Such a large and densely connected graph would cause long-range dependencies (Zhu et al., 2018; Chen et al., 2018) problems for GNNs. Besides, the training process of GNNs naturally has over-smoothing and over-squashing problems, where the former refers to a situation where the representations of nodes become too similar to each other as a result of repeated graph convolutions (Chen et al., 2017; Chen et al., 2018; Chen et al., 2018), and the latter refers to a situation where the activation function used in the GNN model compresses the node representations too much, causing the model to lose important information (Chen et al., 2017; Chen et al., 2018).
All things considered, there is no existing work that seeks to address the size and heterogeneity (i.e., different node/flow types) problem of CDFGs, as well as the problems caused by GNNs. In light of these observations, we propose FAIR, a Flow type-**A**ware code pre-trained model based on **IR**. FAIR distinguishes itself from existing IR-based pre-trained models in its _input representation, model architecture_, and _pre-training_ tasks, as described below:
_Input Representation._ FAIR (1) decomposes a CDFG into a CFG and a DFG in order to reduce graph size; (2) assigns an explicit Flow Type to each flow in both the CFG and the DFG to distinguish different flow types; (3) adds the flows according to the call graph in order to connect multiple CFGs or DFGs of one single IR program; and (4) adds flows to link the nodes from the CFG and those from the DFG that have reference relationships. This process yields a novel graph-based input representation of an IR program.
_Model Architecture._ FAIR (1) uses a Transformer Encoder (Zhu et al., 2018) and a normal word embedding layer to embed the nodes of CFG and DFG, respectively; (2) employs Graph Transformer (Hu et al., 2017) to learn the representation of the entire IR program by taking the nodes' embedding as input and injecting graph priors into the attention computation via graph bias terms; and (3) associates each flow type with a unique bias term in order to learn from flow types.
_Pre-Training._ FAIR employs five pre-training tasks: (1) Masked Language Modeling (MLM) (Hu et al., 2017), which enables the model to predict the original nodes in the CFG and the DFG that are masked in the input; (2) CFG Flow Type Prediction (CFT), (3) DFG Flow Type Prediction (DFT), and (4) BB-Var Flow Prediction (BVP), all of which randomly mask some flows in the graph and then let the model predict whether these flows exist, and/or the flow type; and (5) a pre-training task based on contrastive learning, where we design four novel strategies to construct positive examples.
We compare FAIR with strong baselines based on both IR and source code on four downstream tasks, namely code-to-code retrieval, algorithm classification, heterogeneous device mapping, and optimal thread coarsening factor. Empirical results show that FAIR achieves state-of-the-art performance on all tasks and generalizes very well to unseen programming languages.1
Footnote 1: Artifacts are available at [https://github.com/NongatCA/FAIR](https://github.com/NongatCA/FAIR).
Overall, we make the following contributions. First, we propose FAIR, a flow type-aware pre-trained model of IR, which is programming language- and platform-independent. FAIR is novel in its design of an _input representation_ of IR programs as well as _pre-training tasks_ that aim to predict concrete types of flows and novel strategies to generate more positive examples for contrastive learning. Second, when pre-training FAIR on several large open-source repositories, we achieve state-of-the-art performance on four downstream tasks.
## 2. Related Work
### Source Code-based Pre-Trained Models
Inspired by the successes of pre-trained models in natural language processing (NLP), e.g., BERT (He et al., 2016), RoBERTa (Wang et al., 2017), BART (Kalal et al., 2017) and T5 (Zhu et al., 2018), a number of pre-trained models of source code have been proposed (Chen et al., 2017; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). While some of the pre-training tasks used in these models are directly copied from NLP such as MLM and replaced token detection (Wang et al., 2017), other pre-training tasks are designed to encode code content. In particular, code token-aware and natural language-aware pre-training tasks are widely adopted. For instance, identifier MLM only masks identifiers in the code tokens and trains the model to predict them (Zhu et al., 2018; Chen et al., 2018; Chen et al., 2018), and cross-modal generation (Chen et al., 2018; Chen et al., 2018) aims to generate natural language/code given code/natural language. Structure-aware pre-training tasks have also been proposed to enable a model to learn the structural information in, for instance, ASTs and DFGs. Examples include edge prediction and node alignment tasks, which help a model learn features within a DFG and between a DFG and code (Chen et al., 2018).
Contrastive learning is frequently used to improve the overall representation capability of a model. Existing contrastive learning strategies differ primarily in the methods used to generate positive examples. These methods include swapping the order of input parts, inputting different modalities of the same example separately (Zhu et al., 2018), and using different dropout masks (Chen et al., 2018).
Despite the successes of source code-based pre-trained models, we believe it is important to investigate IR-based models for at least two reasons. First, IR is programming language-independent, so IR-based models only need to capture the unique features of the IR language, such as grammar, vocabulary, and syntax. Second, IR-based models can be trained more efficiently since they do not require processing and aligning data from multiple languages.
### IR-based Models
Recent work on IR-based pre-trained models can be broadly divided into three categories:
**Using existing pre-trained models for node embedding.** Ncc (Ncc, 2017) combines a Control Flow Graph (CFG) and a Data Flow Graph (DFG) in order to build a Contextual Flow Graph (XFG). With an XFG, they train inst2vec, a skip-gram-based pre-trained embedding lookup table for each IR instruction, by defining the context of size \(N\) as nodes within distance \(N\) in the XFG. Then, they use an LSTM to verify the performance of the trained inst2vec on downstream tasks. IR2VEC (Ncc, 2017) uses a trained embedding lookup table of seed embeddings. To obtain the lookup table, the authors (1) extract opcode, data type, and arguments from each instruction, (2) use the extracted information to convert an instruction into several triples, and (3) apply the TransE learning model (Dosovitskiy et al., 2017) to the resulting triples to learn the seed embeddings of each instruction. Based on the seed embeddings, they add the information of a CFG to obtain the representation vectors of an IR program. Rather than utilizing a lookup table, Gui et al. (Gui et al., 2018) use a BERT model pre-trained on IR data to embed a given IR program.
**Using GNNs to encode graph features.** CodeCMR (Zhu et al., 2019) feeds the source code of a high-level language and the CFG of a low-level language into the DPCNN (Zhu et al., 2019) and the GNN, respectively. GNN-CDFG (Garshan et al., 2019) (1) adds call graph and store-load dependencies into the CDFG of IR, (2) simplifies the nodes in the CDFG by eliminating the variable/value nodes and replacing each instruction node with its opcode, and (3) encodes the resulting graph using a message-passing paradigm-based GNN (Zhu et al., 2019). GNN-CDFG outperforms state-of-the-art approaches that use sequential models based on token sequences. ProGraML (Zhu et al., 2019) (1) adds a call graph to a CDFG and utilizes the Message Passing Neural Network (MPNN) framework (Zhu et al., 2019) to encode the whole graph, and (2) uses opcode and data type to represent an instruction. Both of these works try to address the heterogeneity of CDFG by discarding some critical information, such as operands and return values. However, the heterogeneous nature of CDFG is not considered (Zhu et al., 2019) or well handled (Garshan et al., 2019; Zhu et al., 2019). Different from them, in FAIR, we decompose CDFG into CFG and DFG, and in addition to adopting a call graph, we define explicit types for flows, as well as simplify the DFG and connect the CFG and the DFG with a novel type of flow.
**Developing pre-trained models of IR.** With the emergence of pre-training, some recent approaches utilize pre-training. OSCAR (Zhu et al., 2019), a pre-trained model of IR, leverages abstract environment information (AEI) along with the IR token sequence as model input. In contrast, IRGen jointly learns source code and the corresponding IR code generated using different compilation optimization options in order to better represent programs (Zhu et al., 2019). As pre-training tasks, MLM is used by OSCAR, whereas contrastive learning is used by both OSCAR and IRGen, even though the way contrastive learning is being used is different in the two models. Specifically, to construct more positive examples for contrastive learning, OSCAR generates correct IRs for each source code with different compilation optimization options, whereas IRGen uses contrastive learning by extending CodeCMR with a new objective based on triplet loss that increases the similarity between a source code and its corresponding IR and at the same time reduces the similarity between the source code and the irrelevant IRs. While we also employ contrastive learning in the design of FAIR, we (1) propose four novel strategies to construct positive examples by mutating the input of the given IRs, and (2) design the other two novel pre-training tasks that had never been used by existing pre-training models of IR, i.e., predicting the flow type of CFG and DFG.
## 3. Fair
In this section, we present FAIR, including its _Input_ (Section 3.1), _Architecture_ (Section 3.2), and _Pre-training Tasks_ (Section 3.3).
### Input Representation
FAIR's input is constructed from a given IR program2. Figure 1(a) is an example of an IR function. Like most high-level programming languages, IR functions consist of a function signature and a function body, which contains one or more **Basic Blocks**, each starting with its label and a colon (e.g., "entry:"). Each basic block consists of a sequence of **Instructions**, and the instructions in a basic block are executed sequentially, without any branches.
Footnote 2: Without loss of generality, we use LLVM IR ([https://llvm.org/docs/LangRef.html](https://llvm.org/docs/LangRef.html))
Concretely, we propose a representation of an IR program that will be used as FAIR's input based on a Control Flow Graph (CFG) and a Data Flow Graph (DFG). This representation is composed of a _CFG with Flow Type_, a _Simplified DFG with Flow Type_, and _BasicBlock-Variable Flows_.
The reason for encoding CFG and DFG separately instead of using CDFG is that CFG and DFG describe the behavior of the IR program from different perspectives, and they are completely independent of each other. Although CDFG is a graph formed by merging CFG and DFG, the information expressed by CFG and DFG is still independent and orthogonal in CDFG. Therefore, using one single neural network to encode two different types of information at the same time may lead to worse results. In addition, as a cost of merging, the CDFG becomes very large and contains more than one type of node, which makes it even more difficult for the neural network to encode.
#### 3.1.1. CFG with Flow Type
A CFG specifies the order in which a function executes its instructions. It determines not only the sequence in which different parts of a function are executed but also how the function reacts to different conditions or inputs. The left side of Figure 1(b) shows the original CFG of the IR function "@main" in Figure 1(a) (without the "br.T" and "br.F" labels). Each node of a CFG is a basic block, and the edges are originally all identical: they show possible jumps without the corresponding conditions. For example, the basic block "entry" in this CFG has two jumps, <entry, if.end4> and <entry, for.body>3. Using this CFG, it is impossible to determine which jump to choose. Therefore, we need to explicitly add this
information to the CFG as the type of flow, which is important for understanding the jumps between basic blocks, and this is the reason why we consider CFGs to be heterogeneous graphs.
_Adding Flow Types._ Such jump condition information can be retrieved from the last instruction in the basic block, i.e., the terminator instruction. Referring to Figure 1(a), the terminator instruction of the basic block "entry" is a "br" instruction that is used to perform conditional or unconditional transfers between different basic blocks. This terminator instruction performs a conditional transfer using the previously computed Boolean variable "%cmp" as the condition. If the condition is true, then it will jump to the basic block "if.end4"; otherwise it will jump to "for.body". Based on this, we add the corresponding types, "br.T" and "br.F", to the two flows of the basic block "entry" in the CFG, as shown in the orange words in Figure 1(b). It is worth mentioning that in addition to the "br" instruction, there are many other terminator instructions, such as "ret" (return to the caller), "switch", "invoke", etc.
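To make this step concrete, the following minimal Python sketch derives typed CFG flows from a block's terminator instruction; the function name and its restriction to "br" are our own illustration, not FAIR's actual implementation:

```python
import re

def cfg_flow_types(block_label, instructions):
    """Derive typed CFG flows from a basic block's terminator instruction.
    Returns (src, dst, flow_type) triples. Only `br` is handled here;
    `switch`, `invoke`, etc. would need extra branches."""
    terminator = instructions[-1].strip()
    flows = []
    if terminator.startswith("br i1"):
        # Conditional branch: the first label is the true target, the second the false one.
        true_dst, false_dst = re.findall(r"label %([\w.]+)", terminator)
        flows.append((block_label, true_dst, "br.T"))
        flows.append((block_label, false_dst, "br.F"))
    elif terminator.startswith("br"):
        # Unconditional branch: a single untyped-condition jump.
        (dst,) = re.findall(r"label %([\w.]+)", terminator)
        flows.append((block_label, dst, "br"))
    # `ret` adds no intra-function flow; call.func/call.return flows
    # come from the call graph in a separate step.
    return flows

print(cfg_flow_types("entry", ["%cmp = icmp slt i32 %m, %x",
                               "br i1 %cmp, label %if.end4, label %for.body"]))
# -> [('entry', 'if.end4', 'br.T'), ('entry', 'for.body', 'br.F')]
```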
_Adding Call Graphs._ IR programs usually contain multiple functions, while the current CFG can only represent jump information inside a function. Therefore, we add call graph information between functions to associate them. The call graph shows which functions call which other functions and how they are connected. In Figure 1(a), in function "@main", one of the instructions of the basic block "for.inc" calls the function "@f", which executes the return instruction in its "if.end" basic block. In this case, we add two flows, <for.inc, @f-entry, call.func> and <if.end, for.inc, call.return>, indicating a function call and a return to the caller, respectively.
#### 3.1.2. Simplified DFG with Flow Type
A DFG demonstrates the dependencies between instructions and values in a function. Figure 1(c) shows how we add flow types to the DFG of the first instruction of the basic block "entry" of function "@f" in Figure 1(a). On the left side, the original DFG consists of two types of nodes: variable/value nodes (e.g., "%m") and instruction nodes. To unify the two types of nodes, we replace the entire instruction node with its return value, i.e., "%cmp". By doing so, we can unify the nodes of the DFG into the same type, i.e., variables, which makes them easier to encode. Since this loses information such as the opcode, e.g., "icmp slt", we add the opcode information as the flow types.
_Adding Flow Types._ We assign the key information, i.e., the opcode, which is discarded during the simplification of a DFG, to the type of flow. Concretely, we represent the opcode as three parts separated by dots: (1) _opcodes_, such as "icmp" (integer comparison), "add", "sub", etc.; (2) _options_, which are only available for certain opcodes, e.g., "icmp" has options such as "eq" (equal), "ne" (not equal), "slt" (signed less than), "uge" (unsigned greater or equal), etc., while "add", for example, has no options; and (3) _operand positions_, which are only available for non-commutative opcodes/options, such as "icmp.slt", "icmp.uge", or "sub", but not for "icmp.eq", "icmp.ne", or "add". In this way, we complete the addition of DFG flow types that contain key information such as opcodes, options, and operand positions, namely <%m, %cmp, icmp.slt.1> and <%x, %cmp, icmp.slt.2> on the right of Figure 1(c).
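A minimal sketch of how such labels could be composed; the table of non-commutative operations is illustrative and deliberately partial:

```python
NON_COMMUTATIVE = {"sub", "sdiv", "udiv", "icmp.slt", "icmp.sgt",
                   "icmp.uge", "icmp.ule"}  # partial, illustrative table

def dfg_flow_type(opcode, option=None, operand_pos=None):
    """Compose a DFG flow-type label: opcode[.option][.operand position].
    The position is attached only for non-commutative operations."""
    label = opcode if option is None else f"{opcode}.{option}"
    if label in NON_COMMUTATIVE and operand_pos is not None:
        label = f"{label}.{operand_pos}"
    return label

# Typed flows for the instruction computing %cmp from %m and %x:
flows = [("%m", "%cmp", dfg_flow_type("icmp", "slt", 1)),   # icmp.slt.1
         ("%x", "%cmp", dfg_flow_type("icmp", "slt", 2)),   # icmp.slt.2
         ("%a", "%b", dfg_flow_type("add"))]                # add (commutative)
```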
_Adding Call Graphs._ Just like a CFG, a DFG only represents the flow of data within one function. So, we add call graphs between different DFGs to join them together. Figure 1(d) shows how to add the call graph. As can be seen in the example in Figure 1(a), an instruction in the caller function "@main" calls the function "@f" with arguments "%i" and "%j", while the corresponding parameters of the callee are "%x" and "%m". Then the return instruction of the callee returns the variable "%sum", which is assigned to "%call" in "@main". Therefore, we first add flows with type "call.arg" between the corresponding caller's arguments and callee's parameters, i.e., <%i, %x, call.arg> and <%j, %m, call.arg>. Then we add a flow from the callee's return variable to the caller's return value, with type "call.return", that is, <%sum, %call, call.return>. Note that the DFG flow type "call.return" here is not the same as the CFG flow type "call.return".
#### 3.1.3. BB-Var Flows.
Since we encode the CFG and the DFG separately, the relationship between them is lost, most notably the membership of variables in basic blocks. This makes the construction of the BB-Var Flow (for connecting Basic Blocks to Variables) quite simple: if a variable node \(m\) in the DFG belongs to one of the basic blocks \(i\) in the CFG, then an untyped flow <\(i\), \(m\)> will be added between the CFG and the DFG.
So far we have accomplished the processing and construction of the CFG, the DFG, and the BB-Var flows of an IR program. From now on we will consider the CFG and the DFG after the BB-Var flow connection as a whole graph \(G=(V,F)\), where \(V\) is the set of CFG and DFG nodes, and \(F=\{F_{\text{CFG}},F_{\text{DFG}},F_{\text{BV}}\}\) is the set of all flows, where \(F_{\text{CFG}},F_{\text{DFG}},F_{\text{BV}}\) are the sets of CFG flows, DFG flows, and BB-Var flows, respectively.
Figure 1. The procedure of building the input representation of the FAIR model.
### Model Architecture
As shown in Figure 2, FAIR is a two-level model, where the first level, which includes Basic Block Embedding and Variable Embedding, is employed to encode the nodes in the CFG and the DFG to derive the embedding representation of the nodes, while the second level, the Encoder, is used to learn the overall IR representation from both the node embedding and the flow information within and between the CFG and the DFG.
#### 3.2.1. Node Embedding
The nodes of the CFG and the DFG are basic blocks and variables respectively, and given their distinct characteristics, we adopt different methods to embed these two kinds of nodes.
_Basic Block Embedding_: As mentioned before, in basic blocks, instructions are executed sequentially, so we can naturally use a sequential model such as an LSTM (Hochreiter and Schmidhuber, 1997) or a Transformer Encoder (Wang et al., 2017) to encode a basic block. We choose a Transformer Encoder to embed the basic block in this paper.
We show in the bottom left of Figure 2 how we obtain the embedding vector of each basic block. Given the CFG that we construct in Section 3.1, we first extract its nodes, namely the basic blocks, and present each basic block as a sequence of text tokens. Then, we add a special symbol "[CLS]" at the beginning of the sequence to identify the position of the output embedding vector, and feed this token sequence to the word embedding layer, the positional encoding layer, as well as several Transformer Encoder layers (in the figure, they are represented as a "Basic Block Embedding"). Finally, we extract the hidden vector of "[CLS]" in the input at the last layer as the embedding vector of the whole basic block. Note that all basic blocks share the same Transformer Encoder when embedding.
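A compact PyTorch sketch of this module, using the hyperparameters reported in Section 4.1; the class and argument names are ours:

```python
import torch
import torch.nn as nn

class BasicBlockEmbedding(nn.Module):
    """Embed one basic block (a token id sequence with [CLS] prepended)
    into a single vector via a shared Transformer Encoder, taking the
    hidden state at the [CLS] position. A sketch, not FAIR's exact code."""

    def __init__(self, vocab_size=30000, d=768, n_layers=6, n_heads=12, max_len=256):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d)
        self.pos = nn.Embedding(max_len, d)
        layer = nn.TransformerEncoderLayer(d, n_heads, 4 * d, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):                 # (batch, seq_len); [CLS] at index 0
        pos = torch.arange(token_ids.size(1), device=token_ids.device)
        h = self.enc(self.tok(token_ids) + self.pos(pos))
        return h[:, 0]                            # (batch, d): the [CLS] vector
```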
In this manner, for a CFG, we can derive the embedding vectors of each node. These vectors are sorted in the order in which the basic blocks appear in the IR program, and the resulting sequence of vectors will be fed into the Encoder in the second level.
_Variable Embedding_: The embedding of DFG nodes is relatively simple since these nodes are all variables. Specifically, we embed them using a regular Word Embedding layer. As shown in the bottom right of Figure 2, we (1) extract the variables from the processed DFG, (2) convert them into one-hot vectors, and (3) use a learnable linear layer to obtain the word embedding vectors of the variables.
#### 3.2.2. Encoder
We use Graph Transformer as the second level encoder to obviate the problems of long dependencies, over-smoothing, and over-squashing that are present in the message passing-based GNNs widely adopted in existing approaches (Han et al., 2017; Wang et al., 2017; Wang et al., 2018). The two inputs of this encoder are the output of the first-level encoder and the flow information. Note that they are utilized in different ways.
_Formulate Node Embedding_: Given a sequence of \(m\) vectors of basic blocks \(E^{b}=[E^{b}_{1},\dots,E^{b}_{m}]\in\mathbb{R}^{m\times d}\) and a sequence of \(n\) vectors of variables \(E^{o}=[E^{o}_{1},\dots,E^{o}_{n}]\in\mathbb{R}^{n\times d}\), where \(d\) denotes the hidden dimension of our model, we first build the input of the Encoder. Specifically, we concatenate these two sequences, using the embedding vector of "[SEP]" \((E_{\text{[SEP]}}\in\mathbb{R}^{d})\) to separate them, and insert the embedding vectors of "[CLS]" \((E_{\text{[CLS]}}\in\mathbb{R}^{d})\) and \(E_{\text{[SEP]}}\) at the beginning and the end of the sequence, respectively. Consequently, we form the input to the Encoder \(I_{\text{Enc}}\in\mathbb{R}^{l\times d}\), where \(l=m+n+3\),
\[I_{\text{Enc}}=[E_{\text{[CLS]}},E^{b}_{1},\dots,E^{b}_{m},E_{\text{[SEP]}},E^ {o}_{1},\dots,E^{o}_{n},E_{\text{[SEP]}}] \tag{1}\]
In order for the Encoder to better distinguish the nodes of the CFG and those of the DFG, we add another vector sequence \(I_{\text{type}}\in\mathbb{R}^{l\times d}\) on top of \(I_{\text{Enc}}\) before we input this vector sequence to the Encoder. This is achieved by a mechanism similar to Segment Embedding in BERT (Kipf and Welling, 2017). To be specific, we construct a sequence containing only \(0\)s and \(1\)s, where a \(0\) is used to indicate a CFG node (i.e., the positions of \(E^{b}\) in \(I_{\text{Enc}}\), including the special symbols), while a \(1\) is used to indicate a DFG node (i.e., the positions where \(E^{o}\) is located). Then we pass the numbers in this sequence through another embedding layer and get the vector sequence that represents the node type, i.e., \(I_{\text{type}}\).
Finally, we add \(I_{\text{Enc}}\) and \(I_{\text{type}}\) and input the result \(I=I_{\text{Enc}}+I_{\text{type}}\in\mathbb{R}^{l\times d}\) into the Encoder.
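A sketch of this input assembly (Eq. 1 plus the node-type embedding); assigning type 1 to the final "[SEP]" is our assumption, since the text only pins down the positions of \(E^{b}\) and \(E^{o}\):

```python
import torch

def build_encoder_input(E_b, E_v, E_cls, E_sep, type_emb):
    """Assemble I = I_Enc + I_type (Eq. 1), a sketch.

    E_b: (m, d) basic-block vectors; E_v: (n, d) variable vectors;
    E_cls, E_sep: (d,) special-symbol embeddings;
    type_emb: (2, d) node-type embedding table (0 = CFG side, 1 = DFG side).
    """
    m, n = E_b.size(0), E_v.size(0)
    I_enc = torch.cat([E_cls[None], E_b, E_sep[None], E_v, E_sep[None]], dim=0)
    # [CLS] + m blocks + [SEP] take type 0; n variables + final [SEP] take type 1.
    type_ids = torch.tensor([0] * (m + 2) + [1] * (n + 1))
    return I_enc + type_emb[type_ids]             # (m + n + 3, d)
```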
_Integrating Flows_: We make the model learn flow information by injecting graph priors into the attention computation via graph bias terms. In other words, since our input is composed of the nodes of the graph, when we compute the self-attention matrix in each layer of the Transformer, the flow information between the nodes is injected into the attention matrix through an adjacency matrix. This makes our model different from the vanilla Transformer Encoder (Wang et al., 2017) in the self-attention module of each encoder layer. For simpler illustration without loss of generality, we assume in this section that there is only one self-attention head.
Concretely, let \(H=[h_{1},\dots,h_{l}]\in\mathbb{R}^{l\times d}\) be the input of the self-attention module, where \(h_{i}\in\mathbb{R}^{d}\) is the hidden vectors of position
Figure 2. The overall architecture of FAIR model.
\(i\). The attention scores of input matrix \(H\) are computed as:
\[Q=HW_{Q},\ K=HW_{K},\ V=HW_{V}, \tag{2}\]
\[\text{Attention}(H)=\text{softmax}(A)V, \tag{3}\]
where \(W_{Q},W_{K},W_{V}\in\mathbb{R}^{d\times d}\) are projection matrices and \(A\in\mathbb{R}^{l\times l}\) is the matrix of attention scores between every two input nodes. Let \(A_{ij}\) be the \((i,j)\) element of \(A\); we have:
\[A_{ij}=\frac{(h_{i}W_{Q})(h_{j}W_{K})^{T}}{\sqrt{d}}+b, \tag{4}\]
\[b=\begin{cases}0,&\langle i,j\rangle\notin F\\ b^{\text{CFG}}_{\phi(t)},&\langle i,j,t\rangle\in F_{\text{CFG}}\\ b^{\text{DFG}}_{\phi(t)},&\langle i,j,t\rangle\in F_{\text{DFG}}\\ b^{\text{BV}},&\langle i,j,t\rangle\in F_{\text{BV}}\end{cases} \tag{5}\]
where \(b^{\text{CFG}}_{\phi(t)},b^{\text{DFG}}_{\phi(t)}\in\mathbb{R}\) are learnable parameters indexed by \(\phi(t)\). Taking a CFG as an example, we let there be a total of \(p\) CFG flow types. Then we have a vector \(B^{\text{CFG}}=[b^{\text{CFG}}_{1},\ldots,b^{\text{CFG}}_{p}]\in\mathbb{R}^{p}\), and \(\phi(t)\) is the index of CFG flow type \(t\) in \(B^{\text{CFG}}\). The scalar \(b^{\text{BV}}\in\mathbb{R}\) is also learnable. All three types of parameters are shared across all layers. It can be seen that we achieve the injection of flow information by adding bias terms to the attention scores. Specifically, when calculating the attention score between nodes \(i\) and \(j\), if there is no flow between them, we do not add a bias term; if there is a CFG or DFG flow of type \(t\) between them, then we add the bias term corresponding to that type \(t\) to the attention score, noting that there is a corresponding learnable bias term for each flow type of the CFG and the DFG. Finally, if there is an untyped BB-Var flow between \(i\) and \(j\), we add another learnable bias term to the attention score.
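The following sketch shows the biased single-head attention of Eqs. 2-5; pre-gathering the per-type scalars into a dense bias matrix is our implementation choice:

```python
import math
import torch

def flow_biased_attention(H, W_q, W_k, W_v, bias):
    """Single-head self-attention with graph bias terms (Eqs. 2-5); a sketch.
    `bias` is an (l, l) matrix holding 0 where no flow exists and the
    learnable scalar of the matching flow type elsewhere, pre-gathered as:
        bias[i, j] = B_CFG[phi(t)]  if <i, j, t> is a CFG flow
        bias[i, j] = B_DFG[phi(t)]  if <i, j, t> is a DFG flow
        bias[i, j] = b_BV           if <i, j>    is a BB-Var flow
    """
    d = H.size(-1)
    Q, K, V = H @ W_q, H @ W_k, H @ W_v
    A = Q @ K.transpose(-2, -1) / math.sqrt(d) + bias   # Eq. 4
    return torch.softmax(A, dim=-1) @ V                 # Eq. 3
```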
With respect to the other aspects, e.g., the feed-forward module and layer normalization, FAIR is identical to the vanilla Transformer Encoder (Yang et al., 2019), so we will not go over them here. Next, we present the pre-training tasks used to train FAIR.
### Pre-Training Tasks
Pre-training has been shown to massively improve the performance of models on downstream tasks (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). With respect to Graph Transformer, pre-training is able to help a model to learn generalizable and transferable representations of graphs and exploit additional knowledge to guide a model to capture structural and semantic information (Krizhevsky et al., 2017; Krizhevsky et al., 2017). Therefore, we propose five pre-training tasks that enable the model to learn the semantic information in the basic block, the flow information of the graph, and the overall representation capability for IR.
#### 3.3.1. Masked Language Modeling
Masked Language Modeling (MLM) is widely adopted in the field of NLP and SE (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). It can help a model to acquire a good contextual and semantic understanding of the basic block (Krizhevsky et al., 2017; Krizhevsky et al., 2017). As a result, we first adopt MLM to train our Basic Block Embedding module to generate better representations for basic blocks. The task is to predict the original tokens that are masked in the input. We follow the original MLM setup, which samples 15% of the tokens from the input sequence, then replaces 80% of them with a [MASK] token, replaces 10% with a random token, and leaves the remaining 10% unchanged.
Let \(x=[x_{1},\ldots,x_{n}]\) be a sequence of tokens of a basic block of length \(n\) and \(M\) be a set of indices of masked tokens. Then the MLM objective is to minimize the following loss:
\[\mathcal{L}_{\text{MLM}}=-\frac{1}{|M|}\sum_{i\in M}\log P(x_{i}|x_{\neg i}), \tag{6}\]
where \(x_{\neg i}\) denotes the sentence \(x\) with the \(i\)-th token masked and \(P(x_{i}|x_{\neg i})\) denotes the probability of predicting the original token \(x_{i}\) given the masked sentence \(x_{\neg i}\).
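A sketch of this 80/10/10 corruption scheme; the -100 ignore label follows the common cross-entropy convention and is our assumption:

```python
import torch

def mlm_mask(token_ids, mask_id, vocab_size, p=0.15):
    """BERT-style masking: sample p of the tokens; replace 80% of them
    with [MASK], 10% with a random token, keep 10% unchanged.
    Returns (corrupted ids, labels with -100 at unselected positions)."""
    labels = token_ids.clone()
    selected = torch.rand_like(token_ids, dtype=torch.float) < p
    labels[~selected] = -100                      # ignored by cross-entropy
    r = torch.rand_like(token_ids, dtype=torch.float)
    corrupted = token_ids.clone()
    corrupted[selected & (r < 0.8)] = mask_id     # 80%: [MASK]
    rand_pos = selected & (r >= 0.8) & (r < 0.9)  # 10%: random token
    corrupted[rand_pos] = torch.randint_like(token_ids, vocab_size)[rand_pos]
    return corrupted, labels                      # remaining 10%: unchanged
```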
#### 3.3.2. CFG/DFG Flow Type Prediction
To learn the added flow information in a CFG and a DFG, we design two pre-training tasks, one for each of these two graphs, namely CFG Flow Type Prediction (CFT) and DFG Flow Type Prediction (DFT). We adopt these two pre-training tasks so that the model learns the structure-aware information of the input IR and can thus grasp the control flow information in the CFG and the data flow information in the DFG. The objectives of these two tasks are the same, so we only illustrate CFT here for the sake of simplicity.
Given a CFG with \(n\) nodes, we randomly sample 15% from a total of \(n^{2}\) ordered pairs of nodes, i.e., \(n_{m}=15\%\times n^{2}\). Then, we mask the flows if they exist between these pairs, and subsequently we make the model predict whether these flows exist as well as their types. The absence of a flow can be seen as a special type of flow; consequently, this task becomes a \((k+1)\)-way classification task, where \(k\) is the number of CFG flow types. Note that each type of flow is sampled in a balanced manner.
Formally, let \(F^{\text{CFG}}_{m}=\{\langle u_{i},v_{i},\phi_{i}\rangle|i\in[1,n_{m}],\phi_{i }\in[0,k]\}\) be the set of sampled node pairs, where \(\phi_{i}\) indicates the index of the flow type (with 0 representing no flow and [1,\(k\)] representing the flow types). Therefore, the masked CFG becomes \(G^{\text{CFG}}_{m}=\langle V^{\text{CFG}},F^{\text{CFG}}\setminus F^{ \text{CFG}}_{m}\rangle\), where \(F^{\text{CFG}}\setminus F^{\text{CFG}}_{m}\) represents the set difference between the original set of flows \(F^{\text{CFG}}\) and the masked flows \(F^{\text{CFG}}_{m}\).
Assuming \(\langle u_{i},v_{i},\phi_{i}\rangle\) is the \(i\)-th element in \(F^{\text{CFG}}_{m}\), the model predicts the type of flow by inputting the nodes' hidden vectors in the last layer into an extra linear layer. Let \(h_{u_{i}},h_{v_{i}}\in\mathbb{R}^{d}\) be the hidden vectors of nodes \(u_{i}\) and \(v_{i}\). The index of the predicted flow type \(\hat{\phi}_{i}\in[0,k]\) is:
\[\hat{\phi}_{i}=\arg\max(\text{softmax}(W[h_{u_{i}},h_{v_{i}}]+b)), \tag{7}\]
where \(W\in\mathbb{R}^{2d\times(k+1)}\) and \(b\in\mathbb{R}^{k+1}\) are learnable parameters, and \([h_{u_{i}};h_{v_{i}}]\) is the concatenation of \(h_{u_{i}}\) and \(h_{v_{i}}\). We can then represent the predicted index set of masked flows as \(\hat{F}^{\text{CFG}}_{m}=\{\hat{\phi}_{i}\,|\,i\in[1,|F^{\text{CFG}}_{m}|]\}\). The objective of CFT is to minimize the following loss:
\[\mathcal{L}_{\text{CFT}}=-\frac{1}{|F^{\text{CFG}}_{m}|}\sum_{\langle u_{i},v_{i},\phi_{i}\rangle\in F^{\text{CFG}}_{m}}\log P(\phi_{i}|G^{\text{CFG}}_{m}), \tag{8}\]
where \(P(\phi_{i}|G^{\text{CFG}}_{m})\) is the probability of predicting the original flow type \(\phi_{i}\) given the masked CFG \(G^{\text{CFG}}_{m}\).
In the same way, the DFT objective is to minimize the following loss:
\[\mathcal{L}_{\text{DFT}}=-\frac{1}{|F^{\text{DFG}}_{m}|}\sum_{\langle u_{i},v_{i},\phi_{i}\rangle\in F^{\text{DFG}}_{m}}\log P(\phi_{i}|G^{\text{DFG}}_{m}), \tag{9}\]
where \(F^{\text{DFG}}_{m}\) is the set of sampled node pairs, \(G^{\text{DFG}}_{m}\) denotes the DFG after masking the flow type in \(F^{\text{DFG}}_{m}\).
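A sketch of the \((k+1)\)-way prediction head shared by CFT and DFT (Eq. 7), with cross-entropy over sampled pairs giving Eqs. 8-9; the class name is ours:

```python
import torch
import torch.nn as nn

class FlowTypeHead(nn.Module):
    """(k+1)-way classification head for CFT/DFT (Eq. 7): class 0 means
    "no flow", classes 1..k are the k flow types. A sketch."""

    def __init__(self, d, k):
        super().__init__()
        self.proj = nn.Linear(2 * d, k + 1)       # W: (2d, k+1), b: (k+1,)

    def forward(self, h_u, h_v):                  # each: (num_pairs, d)
        return self.proj(torch.cat([h_u, h_v], dim=-1))

# Eqs. 8-9 are then ordinary cross-entropy over the sampled pairs:
# loss = nn.functional.cross_entropy(head(h[u_idx], h[v_idx]), phi_targets)
```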
#### 3.3.3. BB-Var Flow Prediction
BB-Var Flow Prediction (BVP) is similar to CFT and DFT, except that BVP is a binary classification task that only predicts whether the flow exists or not. Let \(G^{\text{BV}}=(V^{\text{BV}},F^{\text{BV}})\) denote the graph where \(V^{\text{BV}}\) includes \(n\) basic blocks and \(m\) variable nodes, and \(F^{\text{BV}}\) is the set of BB-Var flows. We use the same probability (i.e., \(15\%\)) to mask the flows in \(G^{\text{BV}}\), which results in the masked graph \(G^{\text{BV}}_{m}=(V^{\text{BV}},F^{\text{BV}}\setminus F^{\text{BV}}_{m})\), where \(F^{\text{BV}}_{m}\) is the set of masked flows. The loss of BVP is calculated as:
\[\mathcal{L}_{\text{BVP}}=-\frac{1}{|F^{\text{BV}}_{m}|}\sum_{(u,v)\in F^{\text{BV}}_{m}}[y\log p_{(u,v)}+(1-y)\log(1-p_{(u,v)})], \tag{10}\]
where \(h_{u},h_{v}\in\mathbb{R}^{d}\) are the hidden vectors of the nodes \(u\) and \(v\) in the last layer, \(y\) is \(1\) if \((u,v)\in F^{\text{BV}}\) and \(0\) otherwise, and \(p_{(u,v)}\), computed from \(h_{u}\) and \(h_{v}\), is the probability of predicting that there is a BB-Var flow between nodes \(u\) and \(v\).
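A sketch of one plausible form of the BVP head and loss; the exact parameterization of \(p_{(u,v)}\) is not spelled out above, so the linear-plus-sigmoid form is our assumption:

```python
import torch
import torch.nn as nn

def bvp_loss(h_u, h_v, y, W, b):
    """BB-Var flow prediction loss (Eq. 10); p_(u,v) is assumed to come
    from a linear layer on [h_u; h_v] followed by a sigmoid, trained with
    binary cross-entropy. W: (2d,), b: scalar, y: {0, 1} per pair."""
    logits = torch.cat([h_u, h_v], dim=-1) @ W + b   # (num_pairs,)
    return nn.functional.binary_cross_entropy_with_logits(logits, y.float())
```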
#### 3.3.4. Contrastive Learning
We employ contrastive learning as our last pre-training task. Contrastive learning aims to learn representations of an input example (a.k.a. the anchor example) by contrasting its positive and negative pairs5, which allows models to improve their capabilities on multiple dimensions, such as scalability (Chen et al., 2018), generalization ability (Chen et al., 2018), global and hierarchical local features learning (Chen et al., 2018) and performance on downstream tasks (Chen et al., 2018; Chen et al., 2018).
Footnote 5: Positive examples are examples that are similar to the anchor example, while negative examples are examples that are different from the anchor examples. The goal is to make the positive examples closer to the anchor example and the negative examples farther away in the representation space.
The key to contrastive learning is to construct positive and negative examples. Recall that the input of our model can be seen as a single graph (Section 3.1.3) \(G\). Given \(G\), we leverage the following methods to construct positive examples.
* **Function Permutation**: randomly change the order of the functions when input contains multiple functions (which is the majority of cases).
* **Function down-sampling**: remove one or more functions randomly when the input contains more than one function.
* **Flow Mutation**: randomly change some of the flows, i.e. adding, removing flows, and changing the type of the flows.
* **Node Adding/Removing**: add some random standalone nodes (no flow), or remove some nodes (and its flows).
Since most of our inputs contain multiple functions, it is natural to construct positive examples by treating each function as a subgraph, such as function down-sampling, rather than the random down-sampling used by other methods.
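As an illustration of one of the four strategies, the sketch below implements flow mutation on the flow list of \(G\); the mutation rates are arbitrary, not FAIR's:

```python
import random

def flow_mutation(nodes, flows, flow_types, p=0.1):
    """Construct a positive example by randomly dropping, retyping, and
    adding flows of G = (nodes, flows). A sketch; p is illustrative."""
    mutated = []
    for (u, v, t) in flows:
        r = random.random()
        if r < p / 2:
            continue                              # remove this flow
        if r < p:
            t = random.choice(flow_types)         # change its type
        mutated.append((u, v, t))
    for _ in range(int(p * len(flows))):          # add some random flows
        mutated.append((random.choice(nodes), random.choice(nodes),
                        random.choice(flow_types)))
    return mutated
```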
Unlike OSCAR and IRGen, which also use contrastive learning based on IR, FAIR constructs positive examples using the aforementioned methods instead of using different optimization options to generate different IRs from the same source code. We believe that the method of constructing positive examples using different optimization options would lead to exposure bias of the model on a downstream task, i.e., the model is only trained on IRs generated by different optimization options during pre-training, while on downstream tasks, IRs are generated with the same optimization options. This could result in a model that only learns how to identify IRs generated from the same source through different optimization options, rather than different IRs generated from different sources through the same optimization options.
Some might also argue that when we construct positive examples by using the methods described above, the underlying IR of constructed positive examples could be incorrect and semantically invalid. While this may be true, we think that it does not impact the effectiveness of the contrastive learning we use. Our goal with contrastive learning is to enable the model to distinguish similar and dissimilar examples, and semantically similar IRs will then result in similar graph representations. As the input to our model is the graph, we, therefore, are able to make some changes to the graph of the anchoring example to construct pseudo-graphs with similar graph structures, without requiring a large-scale dataset with real-world similar IRs.
As negative examples, we utilize other examples in the training mini-batch. Then, we feed the input and the positive/negative examples into the model and obtain their representations. Let \(v\in\mathbb{R}^{d}\) denote the representation vector of \(G\), and \(S_{\text{pos}}=\{v_{1}^{\text{pos}},\dots,v_{n}^{\text{pos}}\}\), \(S_{\text{neg}}=\{v_{1}^{\text{neg}},\dots,v_{m}^{\text{neg}}\}\) be the sets of representations of positive and negative examples, respectively. The loss of contrastive learning is computed as follows,
\[\mathcal{L}_{\text{CL}}=\max(0,D_{\text{pos}}-D_{\text{neg}}+\text{margin}), \tag{12}\]
where \(D_{\text{pos}},D_{\text{neg}}\in\mathbb{R}\) are the averages of the Euclidean distance (Chen et al., 2018) between \(v\) and each element in \(S_{\text{pos}}\) and \(S_{\text{neg}}\), respectively.
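A direct sketch of Eq. 12; the margin value is illustrative:

```python
import torch

def contrastive_loss(v, pos, neg, margin=1.0):
    """Margin loss of Eq. 12: mean Euclidean distance to positives minus
    mean distance to negatives, clamped at zero.
    v: (d,) anchor; pos: (n, d) positives; neg: (m, d) negatives."""
    d_pos = torch.cdist(v[None], pos).mean()      # D_pos
    d_neg = torch.cdist(v[None], neg).mean()      # D_neg
    return torch.clamp(d_pos - d_neg + margin, min=0.0)
```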
#### 3.3.5. Overall Objective
The overall pre-training objective is to minimize the sum of all the above losses, that is,
\[\mathcal{L}=\mathcal{L}_{\text{MLM}}+\mathcal{L}_{\text{CFT}}+\mathcal{L}_{ \text{DFT}}+\mathcal{L}_{\text{BVP}}+\mathcal{L}_{\text{CL}} \tag{13}\]
## 4. Evaluation Setup
### Pre-Training
_Data Preparation._ We adopt the dataset provided by Peng et al. (Peng et al., 2019) as our pre-training dataset, which consists of eleven popular open-source C/C++ projects from GitHub. This dataset includes 41,322 IR programs, 855,792 functions, and 48,023,781 instructions in total. We further optimize the given IRs using LLVM of version 13.0.1 with the optimization options "-Os" and "-fast-math".
_Tokenizer._ Due to the large gap between the lexical features of IR and those of the high-level programming languages, we do not use existing tokenizers developed for high-level languages. Instead, we build a tokenizer of size 30,000 from scratch using the BPE algorithm (Zhu et al., 2019) upon the pre-training data.
_Hyperparameters._ We set the hidden dimension \(d\) to 768, the intermediate dimension of feed-forward to 3072, the number of layers of the Basic Block Embedding module and Encoder module to 6, and the number of self-attention heads to 12. We set the maximum length of each basic block to 256, the maximum number of basic blocks of each program (which is also the number of CFG nodes) to 64, and the maximum number of DFG nodes to 256. This results in a total of 138M parameters used for model pre-training, of which 30M are temporary parameters that are only used during pre-training. This gives us 108M pre-trained model parameters for the downstream tasks. We pre-train FAIR for 10 epochs by minimizing the loss \(\mathcal{L}\). We use AdamW (Kingma et al., 2014) as our optimizer. The initial
learning rate is 5e-5 and the warmup step is 2,000. The pre-training is run on 4 NVIDIA V100 32G GPUs with a total batch size of 8.
### Downstream Tasks
In this subsection, we present the fine-tuning procedure of FAIR on four downstream tasks. For each downstream task, we first provide a brief introduction and then describe the dataset and the evaluation metrics.
#### 4.2.1. Code-to-Code (C2C) Retrieval
Given a source code as the query, the code-to-code retrieval task aims to retrieve codes with the same semantics from a collection of candidates. This task can evaluate the ability of a model to distinguish between codes/IRs with different semantics.
We use two datasets for this task, namely, POJ-104 (Krizhevsky et al., 2012) and GCJ6. POJ-104 contains 52,000 C/C++ programs that implement entry-level programming assignments for 104 different problems. We use the train/valid/test splits provided by CodeXGLUE (Krizhevsky et al., 2012), where the numbers of problems/codes of each split are 64/32,000, 16/8,000, and 24/12,000. GCJ contains the source code from solutions to Google Code Jam programming challenges and includes 302,070 C/C++ programs across 331 problems. There are no available splits, so we create the train/valid/test splits, which include 265/26/40 problems and 181,103/60,230/60,737 programs.
Footnote 6: [https://github.com/jur1cck/ggj-dataset](https://github.com/jur1cck/ggj-dataset)
As for the metric, we adopt mean average precision with the recall level of 499 (i.e., MAP@R, R=499) (Krizhevsky et al., 2012). That is, we let the model retrieve the top 499 semantically similar candidates given a query.
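A sketch of the metric over a precomputed similarity matrix, following the common MAP@R convention of dividing by R, the number of ground-truth matches per query:

```python
import numpy as np

def map_at_r(similarity, labels, R=499):
    """MAP@R over an (n, n) similarity matrix; labels[i] is the problem id
    of program i. A sketch of the evaluation, not FAIR's exact script."""
    labels = np.asarray(labels)
    aps = []
    for q in range(len(labels)):
        order = np.argsort(-similarity[q])
        order = order[order != q][:R]             # top-R, query excluded
        rel = (labels[order] == labels[q]).astype(float)
        prec_at_i = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((prec_at_i * rel).sum() / R)   # average precision at R
    return float(np.mean(aps))
```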
#### 4.2.2. Algorithm Classification
Algorithm classification aims to categorize a given code. We also use the POJ-104 as the dataset, but adopt the train/valid/test split created by Ben et al. (2017). The sizes of train/valid/test splits are 27,649/9,155/9,227. We use the error rate (ER) on the test set as the evaluation metric.
#### 4.2.3. Heterogeneous Device Mapping
Heterogeneous device mapping is the task of choosing the execution device that has the best performance given an _OpenCL Kernel_, the _Input Data Size_ and _Work Group Size_ (i.e., the number of threads that work in a group with shared memory). We use the dataset provided by Grewe et al. (2017), who formulate this task as a binary classification task. This dataset consists of two subtasks, namely predicting whether the given OpenCL kernel will run faster on an Intel CPU or an AMD GPU and whether it will run faster on an Intel CPU or an NVIDIA GPU. Both of them contain 680 labeled examples derived from the 256 unique kernels by varying dynamic inputs.
In addition to accuracy (Acc), we use a metric called "Speedup", which is the average ratio of the runtime improvement of each OpenCL on the devices predicted by the model compared to the runtime of the static mapping. The static mapping chooses CPU when comparing CPU and AMD GPU, and chooses GPU when comparing CPU and NVIDIA GPU.
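A sketch of one plausible reading of this metric, as the mean per-example ratio of the static-mapping runtime to the runtime on the predicted device:

```python
import numpy as np

def average_speedup(runtimes, predicted, static):
    """Average speedup over the static mapping; our reading of the metric.
    runtimes[i][d] is example i's runtime on device d; `predicted` and
    `static` give the device chosen per example by the model / statically."""
    ratios = [runtimes[i][static[i]] / runtimes[i][predicted[i]]
              for i in range(len(runtimes))]
    return float(np.mean(ratios))
```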
We concatenate the _Input Data Size_ and _Work Group Size_ to create the input. Following the usual strategy of utilizing this dataset (Beng et al., 2017; Chen et al., 2017; Chen et al., 2017), we use 10-fold cross-validation with rotating 8/1/1 train/valid/test splits for evaluation.
#### 4.2.4. Optimal Thread Coarsening Factor
Given an OpenCL kernel, this task is to predict the best-performing thread coarsening factor, which is a value that determines how many threads to merge together.
We adopt the dataset provided by (Chen et al., 2017). It contains the runtimes on 17 benchmarks on 4 GPUs with thread coarsening factors of 1, 2, 4, 8, 16, and 32, respectively. The GPUs are Cypress (AMD Radeon HD 5900), Tahiti (AMD Tahiti 7970), Fermi (NVIDIA GTX 480), and Kepler (NVIDIA Tesla K20c). It is a 6-way classification task (i.e., predicting one of the 6 possible factors) and includes 4 subtasks, each corresponding to one GPU.
We use the Speedup metric to evaluate the performance of the model. Speedup is the ratio of runtime reduction of the GPU at the factor predicted by the model to the runtime without thread coarsening (i.e., when the factor is 1).
### Fine-Tuning
The pre-trained FAIR model will be fine-tuned on each individual downstream task. We discard the modules that are temporarily added during the pre-training phase, such as the learnable matrix \(W\) and the vector \(b\) in the classification head module (see Section 3.3.2), and only preserve all the modules present in Figure 2 when FAIR is applied to the downstream tasks. For the classification model, we will add the corresponding classification module so that the representation vector generated by FAIR can be mapped to each class. Before fine-tuning, we convert high-level source code into LLVM IR for each dataset of the downstream tasks with Clang 13.0.1. LLVM 13.0.1 is used to optimize the LLVM IR.
### Baselines
We use two groups of baselines. The first group is composed of models of high-level language source code, all of which were pre-trained on source code and have achieved state-of-the-art performance on various code-related downstream tasks. They are **CodeBERT**(Krizhevsky et al., 2012), **CodeT5**(Zhang et al., 2017) and **UniXcoder**(Zhang et al., 2017). For each downstream task, these three models are directly fine-tuned on the high-level source code in the dataset. The second group is composed of models that are designed for IR, including **ncc**, **IR2VEC**, **GNN-CDFG**, **ProGraML**, **OSCAR**, and **IRGen**. They are introduced in Section 2.2.
## 5. Results and Discussion
To evaluate FAIR, we propose three Research Questions. We run each experiment three times with different random seeds and report the mean. To check the statistical significance of the experimental results, we utilize the Approximate Randomization Test.
Footnote 7: [https://github.com/danieldk/approx-rand-test](https://github.com/danieldk/approx-rand-test)
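The linked tool implements the standard paired approximate randomization procedure; its essential logic is sketched below (a self-contained illustration, not the tool's code):

```python
import random

def approx_rand_test(scores_a, scores_b, trials=10000, seed=0):
    """Two-sided approximate randomization test on paired per-example scores
    of systems A and B; returns the estimated p-value."""
    rng = random.Random(seed)
    observed = abs(sum(scores_a) - sum(scores_b))
    hits = 0
    for _ in range(trials):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            # Randomly swap the paired scores under the null hypothesis.
            diff += (a - b) if rng.random() < 0.5 else (b - a)
        if abs(diff) >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)
```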
### Comparison with Baselines
**RQ1: How effective is FAIR compared with the state-of-the-art baselines on four downstream tasks?**
We conduct experiments to check the performance of all compared approaches on the four downstream tasks. The results of code-to-code retrieval and algorithm classification are in Table 1, and the results of heterogeneous device mapping and optimal thread coarsening factor are in Tables 2 and 3, respectively. (Note that in
these tables, (1) the best results are boldfaced, and (2) the differences between the best result and the other results are statistically significant at \(p<0.05\).) Overall, FAIR achieves either new SOTA performance or performance comparable to the current SOTA models on all four downstream tasks.
Besides showing that FAIR achieves a new SOTA for the code-to-code retrieval task, Table 1 also shows that pre-trained models of both source code (i.e., CodeBERT, CodeT5, and UniXcoder) and IR (i.e., OSCAR, IRGen, and FAIR) generally achieve higher performance than non-pre-training approaches (i.e., ncc, IR2Vec, GNN-CDFG, and ProGraML) on both datasets. Comparing the performance of each approach across datasets, we find that ncc and IR2Vec, which use lookup tables, perform better on GCJ than on POJ-104, while the others perform better on POJ-104 than on GCJ.
Examining the results in Table 2, we find that the pre-trained models (i.e., the first group of models, OSCAR, IRGen, and FAIR) tend to have better performance than their non-pre-trained counterparts. Since the dataset for this task is small (with only 680 examples per subtask), we speculate that pre-training can help a model learn more general features and more transferable representations from large-scale data and subsequently improve its performance on a downstream task that has insufficient data (13; 1; 35).
The optimal thread coarsening factor task (Table 3) involves even less data: each subtask has only 17 examples. We find that the IR-based pre-trained models continue to outperform the others. Note that even a model as small and shallow as IR2Vec performs remarkably well, possibly because small models require less data to train and are also less likely to overfit the training data (60; 40). We do not show the results of the models of the first group in Table 3 because they always predict the same label for all examples. One reason for this behavior is that the task is a 6-way classification task, which differs substantially from the pre-training tasks used to pre-train these models. Another reason is the data distribution gap: these models are all pre-trained on CodeSearchNet (25) (optionally plus C/C# from BigQuery (57)), which does not contain OpenCL kernel-related code. On top of these gaps, having too little data prevents them from effectively transferring their code representations to this task.
### Model Ablation
**RQ2: How do our input representation as well as pre-training tasks contribute to FAIR's performance?**
For the input representation, we experiment with three variants of FAIR: (1) **FAIR w/o type**: remove the type information from all flows, i.e., only indicate whether a flow exists or not, (2) **FAIR w/o flow**: remove the bias in Equation 4 when calculating attention scores, and (3) **FAIR w/ CDFG**: replace the input with a CDFG plus call graph carrying the typed flows. With respect to pre-training tasks, we experiment with the following variants: (1) **FAIR w/o MLM**: remove the MLM pre-training task, (2) **FAIR w/o xFT**: remove the CFT/DFT pre-training tasks, (3) **FAIR w/o BVP**: remove the BVP pre-training task, (4) **FAIR w/o CL**: remove the contrastive learning pre-training task, and (5) **FAIR w/o all**: remove all pre-training tasks. The results are shown in Table 4, where the worst results of each group are underlined.
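To make the "w/o flow" variant concrete: Equation 4 adds a learned flow-type bias to the attention logits, and the variant simply drops that term. A schematic sketch (shapes and names are illustrative, not FAIR's actual code):

```python
import torch

def biased_attention(Q, K, V, flow_bias):
    # Q, K, V: (n_nodes, d) projections of the node embeddings.
    # flow_bias: (n_nodes, n_nodes) learned embedding of the typed flow
    # (or absence of a flow) between every node pair.
    scores = Q @ K.T / K.shape[-1] ** 0.5 + flow_bias  # "w/o flow" drops the bias
    return torch.softmax(scores, dim=-1) @ V
```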
Several observations deserve mention. First, each part of the input and each pre-training task helps FAIR achieve better performance on downstream tasks. Second, for code-to-code retrieval and algorithm classification, changing the input to a CDFG has the greatest impact on performance, especially for GCJ. However, for heterogeneous device mapping, changing the input to a CDFG has a smaller impact on performance. As for the contribution of the pre-training tasks, removing contrastive learning has the biggest impact on the performance of the first two tasks. This is because contrastive learning enhances the model's capability to identify semantically similar and dissimilar
\begin{table}
\begin{tabular}{l r r r} \hline \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{Retrieval} & Algorithm \\ \cline{2-4} & POJ-104 & GCJ & Classification \\ \cline{2-4} & MAP@R & MAP@R & Error Rate \\ \hline CodeBERT & 82.67 & 77.16 & 4.61 \\ CodeT5 & 88.65 & 79.65 & 4.12 \\ UniXcoder & 90.52 & 82.23 & 1.91 \\ \hline ncc & 54.19 & 64.68 & 5.17 \\ IR2Vec & 76.34 & 77.90 & 3.93 \\ GNN-CDFG & 79.20 & 66.64 & 3.72 \\ ProGraML & 81.53 & 71.27 & 3.38 \\ OSCAR & 89.98 & 81.76 & 1.92 \\ IRGen & 89.22 & 83.26 & 2.01 \\ \hline FAIR & **92.04** & **85.41** & **1.75** \\ \hline \hline \end{tabular}
\end{table}
Table 1. Results on C2C retrieval and algorithm classification.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{NVIDIA} & \multicolumn{2}{c}{AMD} \\ \cline{2-5} & Acc & Speedup & Acc & Speedup \\ \hline CodeBERT & 86.76 & 1.58 & 95.59 & 2.79 \\ CodeT5 & 88.54 & 1.48 & 93.10 & 2.59 \\ UniXcoder & 89.71 & 1.50 & 94.12 & 2.76 \\ \hline ncc & 84.67 & 1.44 & 88.09 & 3.47 \\ IR2Vec & 85.32 & 1.26 & 91.32 & 3.51 \\ GNN-CDFG & 87.93 & 1.39 & 89.16 & 3.37 \\ ProGraML & 88.13 & 1.41 & 92.60 & 2.98 \\ OSCAR & 89.52 & 1.49 & 94.11 & 3.34 \\ IRGen & 89.86 & 1.57 & 94.32 & 3.60 \\ \hline FAIR & **91.61** & **1.62** & **96.52** & **3.63** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Results on heterogeneous device mapping.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline Models & Cypress & Tahiti & Fermi & Kepler \\ \hline ncc & 1.01 & 1.04 & 0.95 & 1.01 \\ IR2Vec & 1.18 & **1.21** & 1.1 & **1.08** \\ GNN-CDFG & 1.01 & 0.93 & 0.92 & 0.86 \\ ProGraML & 1.05 & 1.12 & 0.96 & 0.97 \\ OSCAR & 1.21 & 1.19 & 1.06 & 1.07 \\ IRGen & 1.22 & 1.17 & 1.11 & **1.08** \\ \hline FAIR & **1.25** & **1.21** & **1.13** & **1.08** \\ \hline \hline \end{tabular}
\end{table}
Table 3. Results on optimal thread coarsening factor.
IRs, which is what the model needs to perform well on both downstream tasks. Finally, in most cases, for the last two tasks with very limited data, model performance does not show any significant change when we remove one of the pre-training tasks, but when we remove all of them, performance deteriorates.
### Transferability
**RQ3: How well can FAIR transfer to IR compiled from unseen programming languages in the zero-shot setting?**
We evaluate FAIR on the code-to-code retrieval task using a dataset of unseen programming languages. This experiment also allows us to measure the ability of FAIR to represent IR programs from low-resource programming languages. Specifically, many niche or emerging languages lack the active community and large-scale data of popular languages that are needed to effectively train a model. Although the source code of different programming languages shares some lexical similarity, and existing work has demonstrated the ability of some source code-based models to transfer between programming languages, we believe that an IR-based approach is better suited to this because IR can completely eliminate the differences between programming languages.
We collect 10,751 Rust solutions to 59 online judge problems from the CodeNet Corpus (Puri et al., 2021). The Rust programs are compiled to LLVM IR using Cargo 1.68.2. We only choose the pre-trained models in Section 4.4 as baselines. Other settings are the same as those in Section 4.2.1.
Footnote 9: Only pre-trained models can be evaluated in the zero-shot setting.
Results are shown in Table 5. As can be seen, (1) FAIR achieves state-of-the-art performance, (2) the IR-based models (i.e., OSCAR, IRGen, and FAIR) are generally better than the source code-based models, and (3) the models with contrastive learning (i.e., UniXcoder, OSCAR, IRGen, and FAIR) have a significant advantage.
### Qualitative Error Analysis
To understand the strengths and weaknesses of FAIR, we conduct a qualitative analysis of FAIR, two existing pre-trained models of IR (i.e., IRGen and OSCAR), and a method using CDFG (i.e., ProGraML). Specifically, we conduct an error analysis on three groups of test examples taken from the POJ-104 dataset of the code-to-code retrieval task. The first group contains 50 examples randomly selected from all test examples that all four models handle correctly. The second group contains 50 examples randomly selected from all test examples for which FAIR is correct and the other three models are wrong. The third group contains 50 examples randomly selected from all test examples that none of the models handles correctly. We believe that this last group contains some of the most challenging examples.
By examining the examples in the first and second groups, we find that FAIR has strengths in handling IR programs with the following characteristics:
(1) Longer IR programs: The average number of lines of the IR programs in the first group is 243.26 (i.e., with 1083.34 tokens), while that in the second group is 256.84 (i.e., with 1336.02 tokens). This shows that FAIR performs better on longer IR programs, which can likely be attributed to the fact that we have scaled down the input size in FAIR. This also explains FAIR's bigger advantage on GCJ than on POJ-104 compared with the other models, and the significant performance degradation of the GNN-based GNN-CDFG and ProGraML on GCJ in Table 1. Recall that the IR of the code in GCJ is seven times longer than that of the code in POJ-104 (Zhu et al., 2017), but FAIR is able to scale down the size of the input IR program and is thus less affected by the increase in input size.
(2) More functions: We find that in the first group, there is only one example with five or more functions, while the second group has eight. We speculate that our use of call graphs to connect the independent functions in the CFG and the DFG enables FAIR to get a better understanding of the relationships between functions.
(3) More diverse opcodes: the average numbers of opcode types that a DFG has in each IR of the first and second groups are 12.68 and 16.32, respectively. This is because we explicitly assign the opcode information to the flow type, and then use the self-attention bias and pre-training tasks to make the model learn this information.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Retrieval} & \multirow{2}{*}{Algorithm} & \multicolumn{3}{c}{Device Mapping} & \multicolumn{3}{c}{Thread Coarsening Factor} \\ \cline{2-11} & POJ-104 & GCJ & & \multicolumn{2}{c}{AMD} & \multicolumn{2}{c}{NVIDIA} & Cypress & Tahiti & Fermi & Kepler \\ \cline{2-11} & MAP@R & MAP@R & ER & Acc & Speedup & Acc & Speedup & Speedup & Speedup & Speedup \\ \hline FAIR & **92.04** & **85.41** & **1.75** & **91.61** & **1.62** & **95.52** & 3.63 & **1.13** & **1.25** & **1.21** & **1.08** \\ \hline -w/o type & 90.32 & 83.50 & 1.79 & 91.23 & 1.59 & 95.11 & 3.61 & **1.13** & 1.24 & **1.21** & **1.08** \\ -w/o flow & 88.95 & 81.83 & 1.94 & 90.66 & 1.48 & 94.43 & 3.47 & 1.12 & 1.22 & 1.20 & **1.08** \\ -w/ CDFG & 87.13 & 79.39 & 2.48 & 91.01 & 1.56 & 95.09 & 3.58 & **1.13** & 1.24 & 1.20 & 1.07 \\ \hline -w/o MLM & 91.85 & 84.94 & 1.91 & 89.02 & 1.40 & 93.85 & 3.25 & - & 1.21 & - & - \\ -w/o xFT & 91.14 & 84.86 & 2.03 & 91.38 & 1.56 & 95.16 & 3.34 & 1.11 & 1.18 & 1.19 & **1.08** \\ -w/o BVP & 91.25 & 85.19 & 1.99 & 91.6 & 1.59 & **95.52** & **3.64** & 1.12 & 1.21 & **1.21** & **1.08** \\ -w/o CL & 88.09 & 81.61 & 2.88 & 90.96 & 1.58 & 95.15 & 3.59 & 1.11 & 1.18 & 1.18 & 1.06 \\ \hline -w/o all & 87.36 & 79.14 & 2.98 & - & - & - & - & - & - & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 4. Ablation results on downstream tasks. The best results in each column are boldfaced, and the worst results in each group are underlined. In cases where the model predicts the same label for all examples, the result is replaced with a '-'.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Models & CodeBERT & CodeT5 & UniXcoder & OSCAR & IRGen & FAIR \\ \hline MAP@R & 8.70 & 7.41 & 21.19 & 22.72 & 24.83 & 27.22 \\ \hline \hline \end{tabular}
\end{table}
Table 5. Results on zero-shot code-to-code retrieval.
A closer look at the third group of examples highlights FAIR's limitations in handling complex data types in IR, especially data type-sensitive programs with multiple conversions. This may be because we do not explicitly extract data types from the instruction nodes during DFG simplification, preventing the model from learning type-related information easily.
## 6. Threats to Validity
_Construct Validity._ We do not check for duplicates between the pre-training data and the data of the downstream tasks, but we do not consider this a concern: the pre-training data contains neither algorithm-type data nor OpenCL programs, so the impact of any data overlap on the downstream tasks should be negligible. Prior studies follow the same setup (Zhu et al., 2020; Zhang et al., 2020).
_External Validity._ We use the LLVM IR as the compiler intermediate representation since LLVM is one of the most popular compilers and supports many programming languages. We are not sure whether our model would achieve the same performance on other IRs such as the GCC IR. Previous work has chosen to use LLVM IR as well (Beng et al., 2019; Zhang et al., 2020; Zhang et al., 2020).
Besides, we evaluate the validity of FAIR on four tasks, including retrieval and classification tasks, with datasets containing IR compiled from C/C++ and OpenCL using Clang. We are not sure whether FAIR would perform differently on other tasks, or on IR generated with other compiler front-ends. In Section 5.3, we used another programming language (Rust) with a different front-end to partially verify the external validity of FAIR on this point. Moreover, we cover more downstream tasks and datasets than previous work; for example, Li et al. (2020) consider code-to-code retrieval on POJ-104 and GCJ, which is our first downstream task.
## 7. Conclusion and Future Work
We proposed FAIR, a flow type-aware IR-based pre-trained model, which (1) reduces the input size and adds more flow type information by splitting the CDFG into a CFG and a DFG, simplifying the DFG, adding flow type information and a call graph to the two graphs, and connecting the CFG and the DFG by adding flows between them; (2) uses a Transformer Encoder and word embeddings to embed the nodes of the CFG and the DFG, respectively, and to learn the flow information in the graphs; and (3) employs five pre-training tasks so that FAIR learns text semantics, flow information, and the overall representation of an IR program. By fine-tuning FAIR on four downstream tasks, we showed that FAIR achieves state-of-the-art performance on all tasks. Our ablation study and zero-shot experiment also demonstrated the advantages of the different components of FAIR and its representation capability.
In future work, we expect to use IR for representation learning at the project level since the compilation process can give more cross-file information and project-level information in the IR.
###### Acknowledgements.
This research / project is supported by the Cooperation Fund of Huawei-NJU Creative Laboratory for the Next Programming, CCF-Huawei Populus Grove Fund, NSF award 2034508, and the National Research Foundation, under its Investigatorship Grant (NRF-NRFI08-2022-0002). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore. We also thank the reviewers for their helpful comments. Chuanyi Li is the corresponding author.
|
2302.14791 | Magneto-exciton limit of quantum Hall breakdown in graphene | One of the intrinsic drift velocity limits of the quantum Hall effect is the
collective magneto-exciton (ME) instability. It has been demonstrated in
bilayer graphene (BLG) using noise measurements. We reproduce this experiment
in monolayer graphene (MLG), and show that the same mechanism carries a direct
relativistic signature on the breakdown velocity. Based on theoretical
calculations of MLG- and BLG-ME spectra, we show that Doppler-induced
instabilities manifest for an ME phase velocity determined by a universal value
of the ME conductivity, set by the Hall conductance. | A. Schmitt, M. Rosticher, T. Taniguchi, K. Watanabe, G. Fève, J-M. Berroir, G. Ménard, C. Voisin, M. O. Goerbig, B. Plaçais, E. Baudin | 2023-02-28T17:42:20Z | http://arxiv.org/abs/2302.14791v2 | # Magneto-exciton limit of quantum Hall breakdown in graphene
###### Abstract
One of the intrinsic drift velocity limits of the quantum Hall effect is the collective magneto-exciton (ME) instability. It has been demonstrated in bilayer graphene (BLG) using noise measurements [W. Yang _et al._, Phys. Rev. Lett. **121**, 136804 (2018)]. We reproduce this experiment in monolayer graphene (MLG), and show that the same mechanism carries a direct relativistic signature on the breakdown velocity. Based on theoretical calculations of MLG- and BLG-ME spectra, we show that Doppler-induced instabilities manifest for an ME phase velocity determined by a universal value of the ME conductivity, set by the Hall conductance.
Low-bias quantum Hall (QH) transport is famously described in terms of single-electron physics, as exemplified by the edge-channel conductance quantization used in metrology [1; 2; 3]. The situation differs at large bias, as electrons may couple to the collective particle-hole excitation spectrum (PHES) [4], described by a dispersion relation \(\omega(q)\): it includes, in the integer QH case, both magneto-plasmon (MP) and magneto-exciton (ME) branches [5], and, in the fractional QH case, a magneto-roton (MR) branch [6, 7]. High-bias transport also differs in the electric field and current distributions. In a transistor or a Hall bar geometry (length \(L\), width \(W\)), the non-dissipative Hall current penetrates the Landau insulating bulk, so that source and drain get connected via open ballistic orbits (drift velocity \(v_{x}=E_{y}/B\)) [8, 9]. The high-bias conductance \(G_{H}\) and Hall conductivity \(\sigma_{xy}=G_{H}=I_{x}/V_{y}=\nu G_{K}\) are still set by the conductance quantum \(G_{K}=e^{2}/h\) and the filling factor \(\nu=nh/eB\) at a carrier density \(n\). This ballistic transport is ultimately limited by the quantum Hall effect breakdown (QHEBD), a bulk effect occurring at a critical voltage \(V_{bd}\) (or field \(E_{bd}=V_{bd}/W\), or velocity \(v_{bd}=E_{bd}/B\)), which is signaled by the onset of a longitudinal voltage \(V_{x}=LE_{x}\) associated with a bulk backscattering current and its shot noise \(S_{I}\).

The most frequently considered QHEBD mechanism is inter-Landau-level tunneling (ILLT), a single-particle effect that sets in when the wavefunctions of neighboring Landau levels (LLs) overlap in the tilted potential under applied bias [10]. In the case of a massive two-dimensional electron gas (2DEG), ILLT has a critical Zener field \(E_{Z}\sim\hbar\omega_{c}/eR_{c}\), where \(\omega_{c}=eB/m^{*}\) and \(R_{c}\sim\sqrt{N}l_{B}\) are the cyclotron angular frequency and radius, \(m^{*}\) is the effective mass, \(N\) the number of occupied LLs, and \(l_{B}=\sqrt{\hbar/eB}\) the magnetic length [10]. ILLT gives rise to quite large velocities \(v_{Z}\sim\hbar/m^{*}R_{c}\) (\(\sim 2\times 10^{5}\) m/s for \(N=1\) at 10 T with \(m^{*}\simeq 0.06\,m_{0}\) for GaAs-based 2DEGs).

Hall bar experiments indicate premature breakdowns with \(v_{bd}\lesssim v_{Z}/10\) in both 2DEGs and graphene (see [11] and references therein). Several mechanisms have been considered to explain this discrepancy, such as phonon- or impurity-assisted ILLT [10, 12]. Such extrinsic mechanisms are actually needed to overcome the momentum-conservation protection of ILLT, which stems from the \(2k_{F}\) momentum mismatch between neighboring LL wave functions [13], where \(k_{F}\) is the Fermi momentum. However, larger velocities \(v_{bd}\sim v_{Z}/2\) have been reported in quantum-Hall constrictions [14, 15], thanks to a more uniform electrostatic landscape in the absence of invasive voltage probes. These experiments challenge the single-particle ILLT interpretation, and motivate alternative explanations in terms of collective excitations, such as the ME-instability scenario proposed in Ref. [11].
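As a sanity check on the orders of magnitude quoted above, the GaAs estimate follows directly from these definitions (a short numerical sketch):

```python
from math import sqrt

hbar, e, m0 = 1.0546e-34, 1.602e-19, 9.109e-31  # SI units

def v_zener(B, N, m_eff):
    """Zener-ILLT velocity v_Z = hbar / (m* R_c), with R_c = sqrt(N) l_B."""
    l_B = sqrt(hbar / (e * B))       # magnetic length
    return hbar / (m_eff * sqrt(N) * l_B)

print(v_zener(B=10, N=1, m_eff=0.06 * m0))  # ~2.4e5 m/s, i.e. ~2 x 10^5 m/s
```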
QHEBD was recently investigated in bilayer graphene (BLG) transistors using shot noise as a probe of ballistic transport breakdown [11]. Interestingly, doped BLG emulates a massive 2DEG with \(m^{*}\simeq 0.03\)\(m_{0}\). In the two-terminal transistor geometry, the breakdown was monitored by the sharp onset of the microwave shot-noise current \(I_{N}=S_{I}/2e\) above the noiseless ballistic Hall background. Breakdown noise is characterized by a large differential noise conductance \(G_{N}=\partial I_{N}/\partial V\) exceeding the DC Hall conductance \(G_{H}\). These large values signal a strongly superpoissonian backscattering shot noise which has been interpreted in Ref.[11] as a signature of a collective magneto-exciton (ME) instability, calling for a kinematic origin of breakdown. The \(\omega(q\sim k_{F})\) sector of the 2DEG-PHES, which is relevant for breakdown in 2DEGs, being essentially interaction independent (see [5] and discussion below), the ME-instability velocity \(v_{ME}^{BLG}\sim\hbar/m^{*}R_{c}\) turns out to be similar to the interaction-free Zener limit \(v_{Z}\), providing a clue to the apparent single-particle ILLT puzzle [11]. Even though the ME scenario can hardly be distinguished from ILLT according to the breakdown threshold in 2DEGs, it does explain the superpoissonian noise as a mere consequence of its collective nature. Note that the ME-instability has also been considered to interpret quantum Hall fluid flows across an ionized impurity in Ref.[16], and DC magnetoresistance resonances in monolayer graphene (MLG) in Ref.[17].
The present work extends the noise investigation to MLG, which sustains a qualitatively different PHES due to its relativistic Landau-level ladder, and a more pronounced effect of interactions on the ME branches of the PHES, as explained in Ref.[5]. This peculiarity of MLG is revisited below, and in Supplementary Information Section III, with new RPA calculations of the spectral function and magneto-optical conductivity \(\sigma_{MO}\), accounting for screening by both the hBN-encapsulation and the local back-gating. Noise measurements, performed in high-mobility hBN-encapsulated graphene transistors, reveal a magnetic-field- and doping-independent breakdown velocity \(v_{bd}^{MLG}\simeq 1.4\times 10^{5}\) m/s. Calculations of the PHES for our transistor geometry indicate that this constant breakdown velocity is actually determined by an empirical but universal impedance-matching criterion: \(\sigma_{MO}\sim 10^{-2}\,NG_{K}\), where \(NG_{K}\) is the Hall conductance. We conclude the paper with a comparison between MLG and BLG breakdown velocities at large doping, illustrating the qualitative difference between the massive and massless ME-instabilities supported by RPA theory.
The samples analyzed in this experiment have been previously used in the investigations of the Schwinger effect in Ref.[19] and/or flicker noise in Ref.[20]; they are described in
Supplementary Information (Table SI-1). The transistors are embedded in coplanar waveguides for DC and microwave noise characterization at 4 Kelvin (see measurement setup in Fig.1-a). The experiment is performed in the microwave frequency range to overcome flicker noise, which dominates up to the low-GHz range at large currents [20], and to access the QHEBD shot noise of interest. Data presented below concentrate on the hBN-encapsulated, bottom-gated graphene sample AuS2 (\(L\times W\times t_{hBN}=16\times 10.6\times 0.032\) \(\mu\)m), which is described in Fig.1 and in Ref.[19]. The graphene conductance is calculated after correcting for the (small) contact resistance effect. The low-bias magneto-conductance \(G(V_{g},B)=\partial I/\partial V\) (Fig.1-b) and the \(\partial G/\partial V_{g}(V_{g},B)\) fan-chart (Fig.1-c) show clear MLG quantization down to low fields, i.e. for \(B\gtrsim 0.5\) T, in accordance with the large mobility \(\mu\simeq 32\) m\({}^{2}\)/Vs. The specific MLG quantization, with plateaus at \(\nu=2(2N+1)\), is clearly observed; the tiny width of the plateaus in Fig.1-b signals the absence of disorder-induced localized bulk states, which warrants the absence of electrostatic disorder. The plateau gate voltages allow for the calibration of the gate capacitance at \(C_{g}=1\) mF/m\({}^{2}\) for a bottom-hBN thickness \(t_{hBN}=32\) nm with \(\epsilon_{hBN}=3.4\) [18]. The large biases entail prominent drain-gating effects, eventually leading to a pinch-off, as reported in Ref.[19], including for AuS2. This effect is compensated here by following the gating procedure described in Ref.[21] and routinely used in Refs.[11,19,20,22]; it consists of applying a bias-dependent gate voltage \(V_{g}(V)=V_{g}(0)+\beta V\), with \(\beta\sim 0.4\) adjusted to keep the resistance maximum at charge neutrality independent of bias at zero magnetic field. Fig.1-d shows typical microwave \(S_{I}(f)\) shot-noise spectra with increasing bias. Noise is expressed below in terms of the noise current \(I_{N}=S_{I}/2e\) for an easy comparison with the DC transport current.
The high-bias magneto-transport and noise characteristics of sample AuS2 are described in Fig.2. The current-voltage relation \(I(V)\), measured at \(B=0.5\) T in Fig.2-a, shows a smooth crossover between the quantum Hall regime, where \(I\simeq I_{H}=\nu G_{K}V\) (inset), and the extremely high-bias metallic-like regime, where the differential conductance recovers its zero-field value, which is set by the Zener-Klein conductivity [11,21]. The breakdown voltage \(V_{bd}\approx 0.6\) V (black line) appears as a gradual deviation from the \(I_{H}(V)\) Hall regime. By contrast, the current-noise characteristics \(I_{N}(V)\), measured in the same conditions in Fig.2-b, clearly distinguish two regimes: a quasi-noiseless quantum Hall regime for \(V\leq V_{bd}=0.6\) V, characterized by a residual contact noise conductance \(I_{N}/V\sim 0.1\) mS, and a large differential noise conductance \(G_{N}(n)=\partial I_{N}/\partial V\gtrsim 1\) mS for \(V\geq V_{bd}\). The intersection
between the two lines provides an unambiguous determination of the breakdown voltage \(V_{bd}\), which agrees with the transport determination in Fig.2-a. In both \(I(V)\) and \(I_{N}(V)\), the breakdown voltage is found to be nearly doping-independent, as opposed to the high-bias noise conductance \(G_{N}\propto n^{2}\) in Fig.2-b. Fig.2-c shows the current noise \(I_{N}(V)\) for different magnetic fields at a fixed large doping \(n=2\times 10^{12}\) cm\({}^{-2}\). It highlights the strong dependence of both \(V_{bd}\propto B\) (inset) and \(G_{N}\propto G_{H}\propto 1/B\), leading to a field-independent zero-bias extrapolate (not shown in the figure). These doping and field dependencies can be cast into the scaling displayed in Fig.2-d, where noise data, collected over a broad \([n,B]\) range, are found to collapse on the universal master line
\[\frac{I_{N}}{W}=\gamma n^{2}\left[\frac{E}{Bv_{bd}}-1\right]\qquad, \tag{1}\]
where \(E=V/W\), \(\gamma=40\times 10^{-32}\) Am\({}^{3}\), and \(v_{bd}=1.4\times 10^{5}\) m/s. While the scaling differs from that of BLG [11], the noise amplitudes are comparable, with \(I_{N}/W=40\) A/m for \(V=2\,V_{bd}\) at \(n=10^{12}\) cm\({}^{-2}\). This noise scaling, with a doping-independent \(v_{bd}\), contrasts with the doping-dependent ILLT breakdown threshold (dashed line in Fig. 2c-Inset). As quasiparticle interactions are controlled by the doping-independent fine-structure constant \(\alpha_{g}=e^{2}/4\pi\hbar\epsilon_{0}\epsilon_{r}v_{F}\), the observation of a doping-independent \(v_{bd}\) suggests a breakdown mechanism controlled by interactions. Besides, the current noise intensity \(S_{I}\propto n^{2}\) corresponds to a doping-independent velocity noise \(S_{v}\propto S_{I}/n^{2}\), suggesting a kinematic interpretation of breakdown such as that provided by the ME-instability.
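For reference, the master line of Eq. (1) and the quoted amplitude can be evaluated directly (a minimal sketch in SI units):

```python
def noise_current_per_width(E, B, n, v_bd=1.4e5, gamma=40e-32):
    """I_N / W from Eq. (1), in A/m; E in V/m, B in T, n in m^-2.
    Valid above breakdown, i.e. for E > B * v_bd."""
    return gamma * n**2 * (E / (B * v_bd) - 1.0)

# At V = 2 V_bd (E = 2 B v_bd) and n = 1e12 cm^-2 = 1e16 m^-2:
print(noise_current_per_width(E=2 * 0.5 * 1.4e5, B=0.5, n=1e16))  # 40.0 A/m
```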
To base this qualitative interpretation on a more quantitative analysis, we recalculate below the MLG-PHES of Ref.[5], adapting it for geometry and material parameters that are suitable for our experimental conditions. Figures 3-(a,b) show the PHESs calculated in the RPA approximation of Ref.[5]. It is adapted for the AuS2-sample geometry by including the screening by the local bottom-gate and the hBN-encapsulation, as explained in Supplementary Section III.A. In the context of velocity-induced instability, we have plotted the magneto-optical conductivity spectrum \(\Re[\sigma_{MO}(q,\omega)]\) (denoted \(\sigma_{MO}\) below), which is deduced from the usual spectral function \(\Im[\Pi^{RPA}(q,\omega)]\) using the relation \(\Re[\sigma_{MO}(q,\omega)]=-\frac{\omega e^{2}}{q^{2}}\Im[\Pi^{RPA}(q,\omega)]\). Note that \(\sigma_{MO}\) suffers from a non-physical divergence in the low-\(q\) PHES limit, blurring the magneto-plasmon branch and hiding the existence of a spectral gap that appears more clearly in the spectral function (see \(\Pi^{RPA}\) spectra in Supplementary Fig. SI-5). This \(N\)-dependent bandgap is equal to the MLG cyclotron gap \(\omega_{c}^{MLG}=(\sqrt{N+1}-\sqrt{N})v_{F}/l_{B}\simeq v_{F}/R_{c}\) (with \(R_{c}\) the cyclotron radius), similarly to BLG in Figs.3-(d,e), where \(\omega_{c}^{BLG}=1/m^{*}l_{B}^{2}\) is \(N\)-independent (see the spectral function in Fig. SI-7). Conductivity spectra are plotted for \(N=3\) (panel a) and \(N=12\) (panel b), displayed on a logarithmic scale to map their steep \(\omega\) and \(q\) dependencies, and normalized to the Hall conductivity \(NG_{K}\) (per spin and valley) for a direct comparison of the electronic and collective electron-hole excitation conductivities. The momentum and energy scales are expressed in MLG-relevant dimensionless units \(ql_{B}\) and \(\omega l_{B}/v_{F}\), which imply a magnetic-field independence of the ME-branch phase velocity \(v_{ME}=\omega_{ME}/q\). Remarkably, the ME optical conductivity \(\sigma_{MO}\) increases steeply with \(v_{ME}\), with \(\sigma_{MO}\sim(3\times 10^{-4}\)-\(3\times 10^{-2})\,NG_{K}\) for \(v_{ME}=(0.06\)-\(0.35)\,v_{F}\).
The effect of screening is quite substantial in MLG, as depicted in Supplementary Fig. SI-6, and much more prominent than in BLG (Fig. SI-8), especially at large \(q\). The white lines in Figs.3-(a,b) correspond to the Doppler-shifted electronic energy \(\omega=v_{bd}q\) of drifting electrons, calculated at the measured breakdown velocity \(v_{bd}=0.14v_{F}\) of Fig.2. In both the \(N=3\) (panel a) and the \(N=12\) (panel b) examples, this line separates a high ME-conductivity domain for \(\omega\gtrsim v_{bd}q\), where \(\sigma_{MO}\gtrsim 10^{-2}NG_{K}\), from a low-conductivity domain for \(\omega\lesssim v_{bd}q\). The observation of an \(N\)- and \(B\)-independent ME-instability, at a velocity \(v_{bd}\simeq v_{ME}=\) const. controlled by a fixed \(\sigma_{MO}\sim 10^{-2}NG_{K}\) constraint, is consistent with a collective wave interpretation, even if the value of the impedance threshold remains to be established theoretically. It is obviously consistent with our experimental observation in Fig.2 of an \(n\)- and \(B\)-independent breakdown velocity, a feature observed in all tested Au-gated samples (see Supplementary Table SI-1). Unlike in BLG, the \(v_{bd}=\) const. breakdown velocity of MLG, inferred from the above conductivity criterion, exceeds the ILLT \(v_{Z}\propto\sqrt{B}\), especially at low \(B\).
Let us recall that the situation is different in BLG. Figures 3-(d,e) reproduce the theoretical analysis for a similar BLG sample, such as that measured in Ref.[11]. The energy (\(\omega/\omega_{c}\)) and momentum (\(q/k_{F}\)) reduced units are adapted for a massive 2DEG-like BLG, but the reduced conductivity scale \(\sigma_{MO}/NG_{K}\) is the same. The two panels correspond to the same \(N=6\), but different magnetic fields \(B=5\) T (panel d) and \(B=1\) T (panel e). Contrary to MLG, the phase velocity \(\omega_{ME}/q\) is not magnetic-field independent in this representation, as \(\omega_{c}\propto B\) and \(k_{F}\propto 1/l_{B}\propto\sqrt{B}\) have different \(B\)-dependencies. As a consequence, positioning an identical Doppler line on the two reduced-units plots amounts to taking \(v_{bd}\propto\sqrt{B}\). Figs.3
(d,e) show that this criterion also corresponds to a consistent \(\sigma_{MO}\sim 10^{-2}NG_{K}\) criterion for the ME-instability, which is met in BLG at \((q,\omega)\)-localized ME conductivity peaks. This impedance analysis shows that for BLG, and more generally 2DEGs, the ME-instability and Zener-ILLT, which are basically different, give consistent and similar breakdown velocities, confirming the earlier statement of Ref.[11]. Finally, Fig.3-c illustrates the qualitative difference between MLG and BLG in a plot of \(v_{bd}(B)\) at a large \(n=2\times 10^{12}\) cm\({}^{-2}\) (BLG data are reproduced from Fig.2 of Ref.[11]), with \(v_{ME}^{MLG}\simeq 0.14v_{F}\) (blue line) and \(v_{ME}^{BLG}=\hbar/m^{*}l_{B}\sqrt{N}\propto\sqrt{B}\) (red line for \(N=5\)) [11]. Let us note that the magnetic-field dependencies \(v_{bd}^{MLG}=\) const. and \(v_{bd}^{BLG}\propto\sqrt{B}\) merely reflect the energy dependence of the Fermi velocity, \(v_{F}^{MLG}(\varepsilon_{L})=\) const. and \(v_{F}^{BLG}(\varepsilon_{L})\propto\sqrt{\varepsilon_{L}}\), when taken at the Landau energy \(\varepsilon_{L}=\hbar\omega_{c}\propto B\).
Relying on the good mapping of the ME-scenario onto the experiment, we exploit the RPA calculations further in Supplementary Sections III-B and III-C to model breakdown in various graphene geometries, such as graphene in vacuum or in a semi-infinite hBN embedding, keeping a systematic benchmark between the MLG and BLG cases, and assuming the existence of a universal impedance-matching condition. For MLG, we show in Figs.SI-6-(a,b) that screening by the bottom gate in AuS2 (panel a) is equivalent to that of a semi-infinite, \(\epsilon_{r}=100\) dielectric (panel b), meaning that both PHESs correspond to the fully screened conductivity. The effect of interactions, which is maximal for un-gated suspended graphene (\(\epsilon_{r}=1\) in panel c), amounts to suppressing the conductivity amplitude below the \(\sigma_{MO}\sim 10^{-2}NG_{K}\) ME-instability threshold over most of the ME-spectrum, leading to an enhanced breakdown velocity \(v_{bd}\simeq 0.5v_{F}\) (white line). Given the impedance-matching condition \(\sigma_{MO}\sim 10^{-2}NG_{K}\), we conclude that the ME-instability velocity of MLG is a constant in the range \(v_{ME}=[0.14,0.5]\,v_{F}\) that depends on screening. The same analysis is performed for BLG in Figs.SI-8, showing that the large-\(q\) PHES sector is to a large extent insensitive to screening, yielding a Zener-like breakdown velocity \(v_{ME}^{BLG}=\frac{\hbar}{m^{*}R_{c}}\).
In conclusion, we have shown that bulk quantum Hall breakdown is controlled by the magneto-exciton instability in both MLG and BLG with a threshold \(v_{drift}\geq v_{ME}\), which is reminiscent of the Cerenkov effect [23]. More precisely, the instability is defined by a universal conductivity criterion \(\sigma_{MO}\sim 10^{-2}NG_{K}\). This universal criterion explains the qualitative differences between massless MLG and massive BLG. Whereas the BLG-ME instability mimics single-particle ILLT, that of MLG is sensitive to screening by the embedding
dielectric and local gates. Screening reduces the breakdown velocity, and gated transistors correspond to the fully screened regime. Both studies promote shot-noise as a sensitive probe of quantum Hall transport, RPA as a relevant theoretical tool to tackle interactions and screening, and high-velocity transport as a sensitive probe of the large-momentum collective excitations, as suggested by Landau [25]. Understanding the combined effects of Landau quantization and interactions in the collective modes of the integer quantum Hall effect is a prerequisite before addressing the more challenging case of the fractional regime, where elusive magneto-rotons may come into play. Finally and on a broader scope, let us mention that the magneto-exciton instability is a quantum-Hall-matter light coupling effect, which belongs to a domain of current interest [26].
## Supplementary Information
Supplementary Information is available. It presents a similar analysis of the experimental data for the other devices of the series, as well as an extended discussion of the magneto-optical conductivity spectrum in monolayer and bilayer graphene, focusing on the contrasting role of interactions in these two cases.
## Acknowledgments
AS thanks Prof. C.R. Dean for hospitality and introducing him to the fabrication of high-quality graphene-hBN heterostructures. The research leading to these results has received partial funding from the European Union Horizon 2020 research and innovation program under grant agreement No.881603 "Graphene Core 3", and from the French ANR-21-CE24-0025-01 "ELuSeM".
## Conflict of interest
The authors have no conflict of interest to disclose.
## Authors contribution statement
AS, BP and EB conceived the experiment. AS conducted device fabrication and measurements, under the guidance of MR in the early developments. TT and KW provided the hBN crystals. AS, MOG and BP developed the models and theoretical interpretations. AS, GF, JMB, GM, CV, BP and EB participated in the data analysis. BP wrote the manuscript with the assistance of AS and EB, and contributions from the coauthors.
## Data availability statement
Data are available on a public Zenodo repository.
|
2309.16592 | Tensor Factorization for Leveraging Cross-Modal Knowledge in
Data-Constrained Infrared Object Detection | The primary bottleneck towards obtaining good recognition performance in IR
images is the lack of sufficient labeled training data, owing to the cost of
acquiring such data. Realizing that object detection methods for the RGB
modality are quite robust (at least for some commonplace classes, like person,
car, etc.), thanks to the giant training sets that exist, in this work we seek
to leverage cues from the RGB modality to scale object detectors to the IR
modality, while preserving model performance in the RGB modality. At the core
of our method, is a novel tensor decomposition method called TensorFact which
splits the convolution kernels of a layer of a Convolutional Neural Network
(CNN) into low-rank factor matrices, with fewer parameters than the original
CNN. We first pretrain these factor matrices on the RGB modality, for which
plenty of training data are assumed to exist and then augment only a few
trainable parameters for training on the IR modality to avoid over-fitting,
while encouraging them to capture complementary cues from those trained only on
the RGB modality. We validate our approach empirically by first assessing how
well our TensorFact decomposed network performs at the task of detecting
objects in RGB images vis-a-vis the original network and then look at how well
it adapts to IR images of the FLIR ADAS v1 dataset. For the latter, we train
models under scenarios that pose challenges stemming from data paucity. From
the experiments, we observe that: (i) TensorFact shows performance gains on RGB
images; (ii) further, this pre-trained model, when fine-tuned, outperforms a
standard state-of-the-art object detector on the FLIR ADAS v1 dataset by about
4% in terms of mAP 50 score. | Manish Sharma, Moitreya Chatterjee, Kuan-Chuan Peng, Suhas Lohit, Michael Jones | 2023-09-28T16:55:52Z | http://arxiv.org/abs/2309.16592v1 | # Tensor Factorization for Leveraging Cross-Modal Knowledge in Data-Constrained Infrared Object Detection
###### Abstract
While state-of-the-art object detection methods have reached some level of maturity for regular RGB images, there is still some distance to be covered before these methods perform comparably on Infrared (IR) images. The primary bottleneck towards accomplishing this goal is the lack of sufficient labeled training data in the IR modality, owing to the cost of acquiring such data. Realizing that object detection methods for the RGB modality are quite robust (at least for some commonplace classes, like person, car, etc.), thanks to the giant training sets that exist, in this work we seek to leverage cues from the RGB modality to scale object detectors to the IR modality, while preserving model performance in the RGB modality. At the core of our method is a novel tensor decomposition method called _TensorFact_, which splits the convolution kernels of a layer of a Convolutional Neural Network (CNN) into low-rank factor matrices, with fewer parameters than the original CNN. We first pre-train these factor matrices on the RGB modality, for which plenty of training data are assumed to exist, and then augment only a few trainable parameters for training on the IR modality to avoid over-fitting, while encouraging them to capture complementary cues from those trained only on the RGB modality. We validate our approach empirically by first assessing how well our _TensorFact_ decomposed network performs at the task of detecting objects in RGB images vis-a-vis the original network, and then looking at how well it adapts to IR images of the FLIR ADAS v1 dataset. For the latter, we train models under scenarios that pose challenges stemming from data paucity. From the experiments, we observe that: (i) _TensorFact_ shows performance gains on RGB images; (ii) further, this pre-trained model, when fine-tuned, outperforms a standard state-of-the-art object detector on the FLIR ADAS v1 dataset by about \(4\%\) in terms of mAP 50 score.
## 1 Introduction
The success of deep neural networks in core computer vision tasks, such as image denoising [42], image classification [44], object detection [8, 40], _etc._ can at least in part be attributed to the availability of large-scale labeled training data [46], which allows these models (with lots of parameters) to avoid over-fitting [11]. This has resulted in wide-ranging applicability of these methods in tasks such as pedestrian detection in vehicles [35], face detection [48], vehicle counting [57], _etc._
One key element that made such large-scale data available,
Figure 1: Qualitative comparison of object detections by a state-of-the-art object detector (denoted as baseline) [51] and our TensorFact method on IR images. The orange, cyan, green boxes denote bicycle, person, and car classes respectively, while the associated numbers denote the confidence score of the prediction. The visualizations show that our proposed approach is better at capturing more objects, especially those that are of a smaller size, with higher precision.
is the ubiquity of good quality RGB cameras, which come at throwaway prices. This coupled with the popularity of online platforms for sharing content widely, including social media sites such as YouTube or Meta, meant that sharing such images at a large-scale became commonplace.
However, from the standpoint of certain applications, such as autonomous driving, regular RGB images fall short on some important counts. For instance, while RGB images can provide clear visualization of the surroundings during the day, at night, RGB images are only useful if there is sufficient street lighting, _etc_. In scenarios where the ambient light is insufficient, passive thermal Infrared (IR) cameras come in handy for tasks such as pedestrian detection, as thermal IR sensors capture scenes at wavelengths beyond the visible spectrum and are sensitive to warm objects, such as the human body [4]. Nonetheless, one catch that remains is that IR cameras are not as cheap as their RGB counterparts and are thus not as ubiquitous. This poses a major hurdle in acquiring the profuse amounts of images needed to train deep networks that could operate on IR images at performance levels similar to their RGB counterparts. In such conditions, an overparameterized model results in overfitting, which has an impact on model generalisation and performance. Therefore, a reduction in the number of parameters may be needed for improved performance. Low-rank factorization methods are among the most popular methods towards this end and are utilized for different deep learning applications [22, 23, 41].
While the success of deep neural networks today spans several computer vision tasks, the task of object detection is of particular interest in this paper. The task entails localizing the pixels which an object occupies in an image as well as labeling the cluster of pixels with the class to which the said object belongs. Solving this task is crucial, since it permits acquiring a greater understanding of what an image contains and is often a first step towards understanding the scene [32]. Given the importance of IR images, as a modality for the task of scene understanding, designing effective object detection models that work on such data becomes critical. Nonetheless, the paucity of sufficient training data (_i.e_., datasets with lots of IR images) continues to present a challenge to this end.
In this work, we leverage the observation that while sufficient training data in the IR modality may be difficult to find, such data for the RGB modality is easily available. The key idea in our approach then, is to train an object detection model in the RGB modality and to then transfer the common cross-modal cues to the IR modality where only a few parameters can be trained to capture the complementary cues necessary for successfully detecting objects in the IR image space. Concretely, we devise a novel method called _TensorFact_, which splits the convolution kernel weights of a CNN layer into low-rank factor matrices, with fewer trainable parameters. These factor matrices can be trained to capture the common cues for detecting objects, across modalities, by leveraging the RGB data. These weights can then be augmented with only a few, new learnable parameters to capture the cues specific to the IR modality. This design allows us to train only the relatively small number of IR modality-specific weights when training with IR images, allowing us to prevent over-fitting. Note that naively applying domain adaptation methods [1] to transfer from RGB to IR modality fails because here the modality itself switches between the source (RGB) and the target (IR) which represents a big shift in the data distribution.
We conduct experiments on the FLIR ADAS v1 dataset [49] of IR images to empirically validate the efficacy of our method. To derive the common object detection cues from RGB images, we use the FLIR Aligned RGB [13] images. Our experiments show that _TensorFact_ decomposition assists with achieving better object detection performance both on RGB and IR images, even when the latter has few training samples. In particular, in the IR dataset (FLIR ADAS v1), our method outperforms a competing state-of-the-art object detection model [51] by \(4\%\) on mAP 50, underscoring the efficacy of our method. Figure 1 contrasts detections obtained by our method in comparison to a recent state-of-the-art detection baseline, YOLOv7 [51], on the FLIR ADAS v1 dataset. From the figure, we see that our approach is more capable of detecting objects of different sizes, compared to the state-of-the-art approach.
We summarize below the core contributions of our work.
* We present _TensorFact_, a novel tensor decomposition-based method that can leverage both modality-specific and cross-modal cues for effective object detection in the IR modality, where acquiring sufficient training data is a challenge.
* Our experiments reveal that our proposed method outperforms competing approaches at the task of object detection in a data sparse IR modality, with only 62 training images, by \(4\%\) on mAP 50.
* Our formulation also offers a supplementary contribution to the RGB modality, yielding a compressed neural network that improves object detection in this modality.
## 2 Related works
In this section, we discuss relevant prior works to our paper and present the distinction between these approaches and our method.
**Object detection approaches in IR images:** The journey of object detection in RGB images, using deep learning, has come a long way [36, 38, 41, 51]. The inception of a two-stage object detection process involving proposal generation and object class prediction, initiated by the work of Girshick _et al_. [16] for RGB images, laid the foundation for the
field. However, the computational intensity of the process necessitated faster successors [15, 18, 38, 47, 50]. Porting these approaches to the realm of IR image object detection has posed certain challenges. The study by Ghose _et al_. [14] and Devagupta _et al_. [7] sought to enhance infrared image features using saliency maps and multimodal Faster R-CNN, respectively. These efforts, however, encountered challenges such as slow inference speed, non-end-to-end multitask training, and a lack of general applicability across different datasets.
To overcome the limitations of two-stage detectors, the work by Redmon and Farhadi [36] introduced a one-stage detector, YOLO, which considered each image cell as a proposal for object detection and achieved end-to-end real-time detection. YOLO's evolution into YOLOv3 [37], YOLOv4 [3], and its subsequent variants, as documented by Kristo _et al_. [26], has accelerated the detection of objects both in RGB and IR images, though issues of omission of small-scale objects and low detection accuracy persist.
Innovative modifications like the SE block in SE-YOLO [27] and the attention module, CIoU loss, improved Soft-NMS, and depthwise separable convolution in YOLO-ACN [31] were proposed to improve detection accuracy, but they still grapple with challenges like large parameter sizes and applicability to embedded settings.
Other one-stage models have been explored, including ThermalDet [5] and TIRNet [6], each of which offers different solutions to the aforesaid problems but falls short when tested in real-world, non-curated datasets. Song _et al_. [45] proposed a multispectral feature fusion network based on YOLOv3, showing promise for smaller-sized images.
The YOLO series has shown considerable potential for IR object detection and several variants to it have been proposed. This includes the network of Shuigen _et al_. [43], an attention mechanism-infused YOLOv3 [14], and a YOLOv3 enhanced with a category balance loss term [30]. Further refinements in object detection have been achieved by using the SAF architecture [34] and the YOLO-FIRI model [29], which incorporate optimization parameters, introduce dilated convolutional block attention modules, and enable the detection of smaller IR targets. Zhao _et al_. [58] and Du _et al_. [10] have contributed to the field by improving the fusion method of YOLOv3 and leveraging YOLOv4 to enhance IR target features, respectively, paving a promising path for future IR object detection research. While we consider these models for designing the backbone of our proposed approach but none of them provide a way to mitigate the data paucity issue in the IR modality which we address front and center.
**Domain adaptation methods:** The community has explored domain adaptation methods to overcome the challenges associated with less training data in certain domains. Towards this end, several works have been proposed [17, 39, 53, 54, 56], which include those that progressively transition from one domain to another [21], or transition through multiple levels of granularity [59], or use semi-supervised [9, 52] or unsupervised learning [28, 55] techniques for the same. Nonetheless, these approaches tackle scenarios which represent reasonably minor shifts in the domain of the input data, say from clear RGB images to foggy RGB images [12] and so on. However, our task, deals with much larger-scale shifts in the type of input, in particular from RGB to IR modalities. The change is so stark that certain objects are visible in a given modality, only under specific scenarios. For instance, warm-bodied, dimly lit objects are visible only in the IR images but are very difficult to see in RGB images. This prevents us from trivially adapting these approaches for our task. While some more recent methods have looked into domain adaptation techniques for IR detection tasks, these are fairly limited in scope [20, 24] and focus mostly on detecting people, not other classes. Importantly, none of these approaches simulate the training data paucity scenario, for the IR modality, something we consider in this work.
## 3 Proposed approach
In this work, we propose _TensorFact_ - a novel tensor decomposition-based method designed to tackle the paucity of labeled training data in the IR modality. It effectively leverages knowledge learned from the RGB modality, where training data is abundant, and efficiently transfers this knowledge to the IR modality, overcoming the data scarcity challenge. Initially, we learn two trainable low-rank factor-matrices, the product of which yields the weights for each layer of the CNN and task them with detecting objects in the source RGB modality. This representation cuts down on the number of learnable parameters in the network and facilitates the training of a more generalizable network (due to less over-fitting) on the RGB modality. Following this, in order to facilitate object detection in the IR modality, we enhance the network's capability by a minor expansion of the number of trainable parameters. This is achieved by increasing the number of the columns/rows of the factor matrices. The factor matrices that emerge from the increased columns/rows effectively serve as a parallel trainable branch, enabling the network to leverage the complementary information gleaned from the RGB modality for object detection in the IR modality. In this way, _TensorFact_ affords us a practical solution to the challenge of limited training data in the IR modality, demonstrating how robust and transferable features can be effectively extracted and utilized across different modalities.
### Notation
In this paper, we utilize the following conventions: lowercase letters such as \(x\) denote scalar variables, vectors are symbolized by boldface lowercase letters like \(\mathbf{x}\), and matrices are depicted by boldface uppercase letters such as \(\mathbf{X}\). Tensors, on the other hand, are indicated by calligraphic
uppercase letters (for instance, \(\mathcal{X}\)). \(\mathbb{R}\) denotes the set of real numbers. To illustrate a component of a vector, matrix, or tensor, we adopt the \([\cdot]_{i}\) notation, where \(i\) represents a set of indices for that component.
### Decomposed convolution layer
The weight tensor of a convolutional layer in a CNN, denoted by \(\mathcal{K}\in\mathbb{R}^{T\times S\times D_{2}\times D_{1}}\), is a \(4\)-way tensor, where \(D_{1}\) and \(D_{2}\) represent the width and height, respectively, of the spatial window of the convolution kernels, while \(S\) and \(T\) denote the number of input channels to the layer and the number of kernels learned in the layer. The number of trainable parameters in a standard convolutional layer is then given by \(P=TSD_{2}D_{1}\).
For a decomposed convolutional layer, we commence with two trainable factors \(\mathbf{A}\in\mathbb{R}^{TS\times r}\) and \(\mathbf{B}\in\mathbb{R}^{r\times D_{2}D_{1}}\), with the inner dimension \(r\) (as shown in Figure 2) setting the rank of the reconstructed weight matrix. These combine to form the intermediate matrix \(\mathbf{M}=\mathbf{A}\mathbf{B}\), as follows:
\[[\mathbf{M}]_{p,q}=\sum_{c=1}^{r}[\mathbf{A}]_{p,c}[\mathbf{B}]_{c,q}, \tag{1}\]
where, \(p=1,\ldots,TS\) and \(q=1,\ldots,D_{2}D_{1}\). This matrix \(\mathbf{M}\), operates on the input to the layer. The convolutional filter \(\mathcal{K}\), is derived from \(\mathbf{M}\) as:
\[[\mathcal{K}]_{t,s,d_{2},d_{1}}=[\mathbf{M}]_{(t-1)S+s,(d_{2}-1)D_{1}+d_{1}}, \tag{2}\]
where \(t=1,\ldots,T\), \(s=1,\ldots,S\), \(d_{2}=1,\ldots,D_{2}\), and \(d_{1}=1,\ldots,D_{1}\). Therefore, the number of trainable parameters in the decomposed convolutional layer formulation, \(P_{fac}\), is a function of \(r\), resulting in \(P_{fac}=r(TS+D_{2}D_{1})\) trainable parameters. The value of \(r\) can be altered to adapt to the necessary CNN complexity, but typically \(r\leq\min(TS,D_{2}D_{1})\), the maximum possible rank of \(\mathbf{M}\). Since CNNs are known to be over-parameterized [11], one can choose \(r\) such that the number of learnable parameters \(P_{fac}\) is smaller than \(P\), to avoid the risk of over-fitting.
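To make the layer concrete, the following is a minimal PyTorch sketch of such a decomposed convolutional layer; the class name `FactorizedConv2d`, the random initialization scale, and the padding choice are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedConv2d(nn.Module):
    """Convolution whose (TS x D2D1) weight matrix M is stored as A @ B."""
    def __init__(self, in_ch, out_ch, ksize, alpha=0.5):
        super().__init__()
        self.shape = (out_ch, in_ch, ksize, ksize)   # (T, S, D2, D1)
        r_max = min(out_ch * in_ch, ksize * ksize)
        r = max(1, int(alpha * r_max))               # r = floor(alpha * r_max)
        self.A = nn.Parameter(0.02 * torch.randn(out_ch * in_ch, r))
        self.B = nn.Parameter(0.02 * torch.randn(r, ksize * ksize))

    def forward(self, x):
        M = self.A @ self.B                          # Eq. (1): M = A B
        K = M.view(self.shape)                       # Eq. (2): reshape M into K
        return F.conv2d(x, K, padding=self.shape[-1] // 2)
```

Note that the reshape in the forward pass reproduces the index mapping of Equation 2, since PyTorch stores tensors in row-major order.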
### Capacity augmentation
To augment the network capacity to accommodate the new modality, we increase \(r\) by \(\Delta r\) (where \(\Delta r>0\)) for both matrices \(\mathbf{A}\) and \(\mathbf{B}\), thereby producing \(\mathbf{A}^{\prime}\in\mathbb{R}^{TS\times(r+\Delta r)}\) and \(\mathbf{B}^{\prime}\in\mathbb{R}^{(r+\Delta r)\times D_{2}D_{1}}\) with \(r+\Delta r\) serving as their new inner dimension. Now, \(\mathbf{A}^{\prime}\) and \(\mathbf{B}^{\prime}\) can be interpreted as \(\mathbf{A}^{\prime}=\left[\mathbf{A}\,||\,\Delta\mathbf{A}\right]\) and \(\mathbf{B}^{\prime}=\left[\mathbf{B}^{T}\,||\,\Delta\mathbf{B}^{T}\right]^{T}\), such that \(\Delta\mathbf{A}\in\mathbb{R}^{TS\times\Delta r}\) and \(\Delta\mathbf{B}\in\mathbb{R}^{\Delta r\times D_{2}D_{1}}\), where \(||\) denotes concatenation. Subsequently, \(\mathbf{A}^{\prime}\) and \(\mathbf{B}^{\prime}\) merge to form \(\mathbf{M}^{\prime}=\mathbf{A}^{\prime}\mathbf{B}^{\prime}=\mathbf{M}+\Delta\mathbf{M}\), where \(\Delta\mathbf{M}=\Delta\mathbf{A}\Delta\mathbf{B}\), as shown in Figure 3. Similar to Equation 2, \(\Delta\mathcal{K}\in\mathbb{R}^{T\times S\times D_{2}\times D_{1}}\) can be derived from \(\Delta\mathbf{M}\). Hence, increasing \(r\) by \(\Delta r\) results in a parallel architectural branch, as depicted in Figure 4. Therefore, the increase in the number of trainable parameters in a decomposed convolutional layer after capacity augmentation is given by \(\Delta P_{fac}=\Delta r(TS+D_{2}D_{1})\). We seek to augment as few parameters as possible to ensure the detection network does not suffer from challenges related to over-fitting in the new modality. In particular, we ensure that the total number of network parameters (considering those trained using only RGB and the augmented set) of our proposed framework is less than that of the original unfactorized network.
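A sketch of the corresponding augmentation step is shown below; it builds on the `FactorizedConv2d` sketch above, and freezing the RGB factors via `requires_grad=False` is an assumption of one simple way to keep them fixed, not necessarily the paper's exact mechanism.

```python
class AugmentedFactorizedConv2d(nn.Module):
    """Adds a trainable rank-delta_r branch on top of frozen RGB factors."""
    def __init__(self, rgb_layer, delta_r):
        super().__init__()
        self.shape = rgb_layer.shape
        # Frozen factors A, B learned on the RGB modality.
        self.A = nn.Parameter(rgb_layer.A.detach().clone(), requires_grad=False)
        self.B = nn.Parameter(rgb_layer.B.detach().clone(), requires_grad=False)
        # New trainable factors: dA in R^{TS x dr}, dB in R^{dr x D2D1}.
        self.dA = nn.Parameter(0.02 * torch.randn(self.A.shape[0], delta_r))
        self.dB = nn.Parameter(0.02 * torch.randn(delta_r, self.B.shape[1]))

    def forward(self, x):
        pad = self.shape[-1] // 2
        K = (self.A @ self.B).view(self.shape)      # frozen RGB branch
        dK = (self.dA @ self.dB).view(self.shape)   # trainable IR branch
        # Linearity of convolution: conv(x, K + dK) = conv(x, K) + conv(x, dK),
        # which realizes the two-branch summation M' = M + dM.
        return F.conv2d(x, K, padding=pad) + F.conv2d(x, dK, padding=pad)
```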
### Training
For an object detector CNN with \(L\) convolutional layers, let \(\mathbf{A}_{l}\) and \(\mathbf{B}_{l}\) represent the left and right factor matrices, respectively, for the \(l^{th}\) decomposed convolutional layer, with \(r_{l}\) representing their inner-dimension and \(l=1,\ldots,L\). When training for the data-rich source RGB modality, the network weights for the decomposed convolutional layers are SVD-initialized, leading to orthogonal column and row vectors in \(\mathbf{A}_{l}\) and \(\mathbf{B}_{l}\), respectively, with \(r_{l}=\lfloor\alpha r_{l}^{max}\rfloor\). Here, \(r_{l}^{max}=\min(TS,D_{2}D_{1})_{l}\) and \(\alpha\in(0,1)\) controls the number of the trainable parameters across layers. With \(\alpha\leq 1\), the training process is straightforward and similar to a typical object detector network, leading to the learning of both generic and modality-specific features for the RGB data.
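As a sketch of this initialization step, one could SVD-initialize the factors from a pretrained dense kernel (e.g., the MS-COCO pretrained weights mentioned in the implementation details); the helper below is an illustrative assumption of how such an initialization might look.

```python
def svd_init(K, alpha):
    """Initialize (A, B) from a dense kernel K of shape (T, S, D2, D1)."""
    T, S, D2, D1 = K.shape
    M = K.reshape(T * S, D2 * D1)
    U, Sv, Vh = torch.linalg.svd(M, full_matrices=False)
    r = max(1, int(alpha * min(M.shape)))   # r_l = floor(alpha * r_l^max)
    sqrt_s = Sv[:r].sqrt()
    A = U[:, :r] * sqrt_s                   # orthogonal columns (scaled)
    B = sqrt_s[:, None] * Vh[:r]            # orthogonal rows (scaled)
    return A, B                             # A @ B is the best rank-r fit to M
```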
Figure 2: Decomposed convolutional layer.

Figure 3: Decomposed convolutional layer with capacity augmentation.

Next, to train for the data-scarce IR modality, we augment
the network capacity by increasing the value of \(\alpha\), which introduces new trainable parameters and creates a parallel path for each decomposed convolutional layer. During this training phase, we freeze the trainable parameters learned during the training of the RGB modality, thereby architecturally promoting the learning of complementary features for the IR modality branch. Akin to skip-connections in ResNets [19], which permit the learning of residual mappings, our proposed method leverages cross-modal cues and promotes the learning of features specific to the IR modality that were not learned during RGB modality training. As the factor matrices trained on the RGB data capture several cues essential for object detection, only a small percentage of augmented capacity is required for capturing the facets of object detection in the IR modality. This is an essential requirement to train the model without over-fitting in a data-scarce modality. Additionally, to explicitly capture complementary cues between the RGB and IR modalities, we maximize the \(L_{2}\) or \(L_{1}\) distances between the feature activation maps that are output from each branch (RGB and IR) of a layer, and include this distance as an additional term in the training objective for the task. This can be implemented by the following loss \(L_{c}\):
\[L_{c}=-||\mathcal{K}*\mathcal{X}-\Delta\mathcal{K}*\mathcal{X}||_{p}, \tag{3}\]
where \(p=\{1,2\}\) and \(*\) denotes convolution. Note that the dimensions of \(\mathcal{K}\) and \(\Delta\mathcal{K}\) are the same. The final loss function \(L_{f}\) of _TensorFact_ can be written as follows:
\[L_{f}=L_{d}+\omega_{c}L_{c}, \tag{4}\]
where \(L_{d}\) is the object detection loss used in YOLOv7 [51], and \(\omega_{c}\) is the weight of \(L_{c}\). We minimize this loss using the ADAM optimizer [25].
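A minimal sketch of this objective is given below; `detection_loss` is a stand-in for the YOLOv7 loss \(L_{d}\) (assumed available, not reproduced here), and the feature maps are assumed to be the outputs of the two branches of a decomposed layer, i.e., \(\mathcal{K}*\mathcal{X}\) and \(\Delta\mathcal{K}*\mathcal{X}\).

```python
def complementarity_loss(feat_rgb, feat_ir, p=1):
    # Eq. (3): maximizing the L_p distance between branch activations is
    # implemented by minimizing its negative.
    return -torch.dist(feat_rgb, feat_ir, p=p)

def total_loss(preds, targets, feat_rgb, feat_ir, omega_c=0.01):
    L_d = detection_loss(preds, targets)   # YOLOv7 objective (assumed)
    L_c = complementarity_loss(feat_rgb, feat_ir)
    return L_d + omega_c * L_c             # Eq. (4): L_f = L_d + omega_c * L_c
```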
## 4 Experiments
In this section we layout the empirical evaluation that we conducted to validate the efficacy of our proposed approach.
### Experimental setup
**Datasets:** In our object detection experiments, we make use of two datasets: FLIR Aligned [13] and FLIR ADAS v1 [49]. The FLIR Aligned dataset contains RGB images, with ground-truth comprising bounding-box coordinates around objects in the image as well as class labels. This dataset includes \(4129\) images for training and \(1013\) images for validation, and features three classes: person, bicycle, and car, with the distribution of instances provided in Table 1.
The FLIR ADAS v1 dataset is a dataset of IR images. The ground-truth for this dataset includes bounding-box coordinates around objects in the image and their class labels, from among: person, bicycle, and car. The dataset includes \(7859\) images for training and \(1360\) images for validation. However, for fair comparative studies, we randomly split the original training set in an 80:20 ratio to create new train and validation sets consisting of \(6287\) and \(1572\) images, respectively. To mimic a data-scarce environment, we use \(62\) randomly selected images (\(1\%\) of the training set). Table 2 details the distribution of the FLIR ADAS v1 IR dataset classes, as used in our experiment.
**Baseline network and evaluation metrics:** We use YOLOv7 [51], a state-of-the-art object detector with over 37M trainable parameters, as our baseline network. To determine appropriate anchor box sizes for the detector, we use the K-Means++ method [2].
In evaluating the performance of our object detection model, we employ the Mean Average Precision (mAP), a widely used and robust metric in the field. mAP considers both precision and recall, ensuring a balance between detecting as many objects as possible and minimizing false positives. This is achieved by generating Precision-Recall (PR) curves for each object class in two different settings. In the first, a prediction is counted as a true positive only if the Intersection over Union (IoU) between the predicted and ground-truth bounding boxes exceeds 0.5, while in the second setting, multiple evaluations are performed with IoU thresholds increasing from 0.5 to 0.95 in increments of 0.05.
Table 1: Distribution of class instances for training and validation sets for the FLIR Aligned RGB dataset [13].

| **Class** | **Training Instances** | **Validation Instances** |
| --- | --- | --- |
| Person | 8987 | 4107 |
| Bicycle | 2566 | 360 |
| Car | 20608 | 4124 |
Table 2: Distribution of class instances for training (\(1\%\)) and validation sets for the FLIR ADAS v1 IR dataset [49].

| **Class** | **Training Instances** | **Validation Instances** |
| --- | --- | --- |
| Person | 161 | 4611 |
| Bicycle | 24 | 842 |
| Car | 351 | 8472 |
Figure 4: The flow of data in our proposed _TensorFact_ approach in every layer. The input \(\mathcal{X}\) convolves with \(\mathcal{K}\) (top branch) and \(\Delta\mathcal{K}\) (bottom branch) and results in output \(\mathcal{Y}\) after summation.
The Average Precision (AP) is then calculated as the area under each PR curve for every class, under each of these settings. We then take the mean of these APs across the different classes to get the mAP. The IoU threshold of 0.5 is used for the mAP 50 metric, while the range of IoU thresholds from 0.5 to 0.95 (in steps of 0.05) is used for the mAP 50-95 metric.
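For reference, a minimal numpy sketch of this AP/mAP computation is given below; the monotone precision envelope is a common interpolation convention we assume here, and the exact evaluation code of the detector may interpolate differently.

```python
import numpy as np

def average_precision(recall, precision):
    """Area under the PR curve for one class at one IoU threshold.
    Assumes recall is sorted in increasing order."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([1.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]   # enforce a monotone envelope
    return np.trapz(p, r)

def mean_ap(ap_table):
    """ap_table: (n_classes, n_iou_thresholds) array of APs.
    A single IoU column (0.5) gives mAP 50; columns for IoU 0.5:0.05:0.95
    give mAP 50-95."""
    return float(ap_table.mean())
```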
**Implementation details:** We train all models for \(200\) epochs, with mini-batch size of \(40\) images, where the gradients are accumulated over \(2\) mini-batch iterations prior to parameter update. We use the ADAM optimizer [25] with a learning rate of \(10^{-5}\) when training on the RGB modality and \(10^{-3}\) when training on the IR modality. We use "reduce on plateau" as the learning rate scheduler, that reduces learning rate by a factor of \(0.1\) if the validation loss does not improve over \(10\) epochs. Rather than initializing the RGB network from scratch, we initialize it with the pre-trained weights for detecting objects in the MS-COCO dataset [33]. When explicitly encouraging complementarity between the RGB and IR branches, we set the weight \(\omega_{c}=0.01\) in Eq. 4 such that both terms have comparable range.
### Results and analysis
In Table 3, we present the evaluation results of the proposed and baseline methods for the FLIR Aligned RGB validation dataset for the task of object detection in RGB images. From the table, we observe that our proposed method demonstrates comparable, if not superior, performance to
the baseline model in terms of both mAP 50 and mAP 50-95 evaluation metrics, across varying values of \(\alpha\). Interestingly, while reduction in the value of \(\alpha\) leads to significant compression in the model's size, our proposed method successfully maintains, and in certain instances enhances, model performance. We hypothesize that this is because the decrease in the number of trainable parameters reduces the chance of over-fitting.

Figure 5: Comparison of object detection results between the state-of-the-art YOLOv7 [51] and our proposed approach. We show the ground truth (left column), baseline (middle column), and proposed method's (\(\alpha=0.1\), right column) detections as rectangular bounding boxes. We show detections on two different images from the FLIR ADAS v1 IR validation dataset, one in each row. The orange, cyan, green boxes denote bicycle, person, and car classes respectively, while the associated numbers denote the confidence score of the prediction.

Figure 6: Comparison of object detection results for the proposed method without (left column) and with (right column) \(L_{1}\) regularization. The orange, cyan, green boxes denote bicycle, person, and car classes respectively, while the associated numbers denote the confidence score of the prediction. We obtain better object detections using the \(L_{1}\) regularization, as compared to the vanilla model, as manifested by the higher confidence scores for the predicted bounding boxes.
Table 4 presents the comparison results between the baseline and the proposed _TensorFact_ method on the FLIR ADAS v1 IR validation dataset. For the proposed _TensorFact_ method, we employ two different \(\alpha\) configurations, \(0.1\) and \(0.2\), such that the ratios \(\{\Delta r_{l}:r_{l}\}_{l=1}^{L}\) are \(1:9\) and \(1:4\), respectively, for \(l=1,2,\dots,L\). We observe that both proposed model configurations outperform the baseline on the mAP 50 evaluation metric, with only a few additional trainable parameters in the IR branch. These results underscore the potential of our proposed method to efficiently learn and generalize with significantly fewer trainable parameters in a data-scarce environment like the IR modality, while leveraging cross-modal cues from the data-rich RGB modality.
Lastly, in Table 5, we present results for augmenting the training objective with an explicit complementarity criterion, for \(\alpha=0.1\) on the FLIR ADAS v1 IR validation dataset to determine the impact of regularization to promote learning of complementary features for the IR modality beyond the pre-trained RGB modality. We observe that both \(L_{1}\) and \(L_{2}\) regularization methods show slight improvements in detection performance compared to the model without explicit regularization.
**Qualitative results:** In Figure 5, we compare the object detection results using the ground truth (left column), baseline (middle column), and proposed methods (\(\alpha=0.1\), right column). The results are displayed vertically for two different images from the FLIR ADAS v1 IR validation dataset. We observe that the baseline method fails to detect small, distant objects and objects with backgrounds of similar texture as the foreground, whereas the proposed method accurately detects them. This shows that the proposed method is more robust against false negatives relative to the baseline. Next, in Figure 6, we compare the object detection results for the proposed method without (left column) and with (right column) \(L_{1}\) regularization and observe that this explicit regularization leads to more confident bounding box detections.
## 5 Conclusions
In this work, we proposed _TensorFact_ - a novel object detection approach that captures cross-modal cues so as to generalize better to modalities with scarce training data. _TensorFact_ benefits from pre-training on modalities where plenty of training data is available (such as RGB), mitigating the challenges posed by the target modality (such as IR). In our formulation, the data-rich RGB modality is first used to learn the common cross-modal cues using low-rank tensor factorization of the network weights. We then use the IR modality training data to learn only the cues complementary to the RGB modality (either explicitly or implicitly), thereby requiring fewer trainable parameters. We empirically validate the efficacy of our method on the task of object detection in IR images by pre-training our network on RGB object detection datasets, and show that _TensorFact_ yields performance boosts for object detection in both RGB and IR images, without an increase in the total number of network parameters.
|
2310.00293 | Rapid Scan White Light Two-dimensional Electronic Spectroscopy with 100
kHz Shot-to-Shot Detection | We demonstrate an approach to two-dimensional electronic spectroscopy (2DES)
that combines the benefits of shot-to-shot detection at high-repetition rates
with the simplicity of a broadband white light continuum input and conventional
optical elements to generate phase-locked pump pulse pairs. We demonstrate this
through mutual synchronization between the laser repetition rate,
acousto-optical deflector (AOD), pump delay stage and the CCD line camera,
which allows rapid scanning of pump optical delay synchronously with the laser
repetition rate while the delay stage is moved at a constant velocity. The
resulting shot-to-shot detection scheme is repetition rate scalable and only
limited by the CCD line rate and the maximum stage velocity. Using this
approach, we demonstrate measurement of an averaged 2DES absorptive spectrum in
as much as 1.2 seconds of continuous sample exposure per 2D spectrum. We
achieve a signal-to-noise ratio (SNR) of 6.8 for optical densities down to 0.05
with 11.6 seconds of averaging at 100 kHz laser repetition rate. Combining
rapid scanning of mechanical delay lines with shot-to-shot detection as
demonstrated here provides a viable alternative to acousto-optic pulse shaping
(AOPS) approaches that is repetition-rate scalable, has comparable throughput
and sensitivity, and minimizes sample exposure per 2D spectrum with promising
micro-spectroscopy applications. | Asha S. Thomas, Vivek N. Bhat, Vivek Tiwari | 2023-09-30T08:03:06Z | http://arxiv.org/abs/2310.00293v1 | # Rapid Scan White Light Two-dimensional Electronic Spectroscopy with 100 kHz Shot-to-Shot Detection
###### Abstract
We demonstrate an approach to two-dimensional electronic spectroscopy (2DES) that combines the benefits of shot-to-shot detection at high-repetition rates with the simplicity of a broadband white light continuum input and conventional optical elements to generate phase-locked pump pulse pairs. We demonstrate this through mutual synchronization between the laser repetition rate, acousto-optical deflector (AOD), pump delay stage and the CCD line camera, which allows rapid scanning of pump optical delay synchronously with the laser repetition rate while the delay stage is moved at a constant velocity. The resulting shot-to-shot detection scheme is repetition rate scalable and only limited by the CCD line rate and the maximum stage velocity. Using this approach, we demonstrate measurement of an averaged 2DES absorptive spectrum in as much as 1.2 seconds of continuous sample exposure per 2D spectrum. We achieve a signal-to-noise ratio (SNR) of 6.8 for optical densities down to 0.05 with 11.6 seconds of averaging at 100 kHz laser repetition rate. Combining rapid scanning of mechanical delay lines with shot-to-shot detection as demonstrated here provides a viable alternative to acousto-optic pulse shaping (AOPS) approaches that is repetition-rate scalable, has comparable throughput and sensitivity, and minimizes sample exposure per 2D spectrum with promising micro-spectroscopy applications.
Vivek Tiwari
[a] These authors contributed equally.
Electronic mail (author to whom correspondence should be addressed): [email protected]
## I Introduction
Electronic relaxation in the condensed phase proceeds through several overlapping vibrational-electronic manifolds on femtosecond to picosecond timescales. Such phenomena span biological proteins to emerging energy materials, and carry both fundamental and applied significance. For example, sub-100 fs cis-trans photoisomerization of retinal [1] in the mammalian visual pigment rhodopsin initiates vision, and sub-50fs carrier thermalization [2] may limit hot-carrier extraction in photovoltaic devices based on bulk perovskites.
A broadband white light continuum (WLC) light source is naturally desirable in order to probe the entire energetic manifold subsequent to a narrowband pump excitation as is typically implemented in pump-probe (PP) spectrometers [3]. However, broad overlapping electronic resonances in the condensed phase along with ultrafast relaxation timescales impose the requirements of high temporal resolution with a broadband pump spectrum, and consequently also the need to know the pump excitation frequency information in order to deconvolute the underlying photophysics into a uniquely determined rate model [4]. In this regard, two-dimensional electronic spectroscopy (2DES) goes beyond conventional PP implementations in that there is no trade-off between temporal resolution and pump excitation frequency information. The spectral information is resolved in the form of a 2D contour map that correlates the initial excitation to the final detection frequency, and evolves with the pump-probe waiting time \(T\). 2DES has revealed energetic relaxation pathways in complex spectrally congested systems such as protein networks within photosynthetic cells [4; 5], molecular aggregate-plasmon-plexiton states [6], and carbon nanotube thin films [7]. Several recent fully-collinear implementations [8; 9] of 2DES have also added sub-micron spatial resolution as an additional handle to decongest the ensemble-averaged ultrafast dynamics through micro-spectroscopy. This has led to a general interest in the development of high-repetition rate, high-throughput approaches to 2D micro-spectroscopy which also minimize sample exposure.
Broadband light sources in PP and 2DES have been typically [8] obtained through multi-stage optical parametric amplifiers (OPAs) which offer tunable [10] few-cycle optical pulses with over 300 nm bandwidth and several hundred nanojoules (nJ) pump pulse energies [11; 12]. This is often complemented with a significantly simpler WLC based light source [13; 14] to probe the relaxation dynamics
in a broadband UV-visible-NIR region. Along similar lines, pump pulses that are also generated from a WLC are highly desirable because they can provide nearly octave spanning excitation axis[7] in 2DES without the cost and complexity of OPAs. For WLC generation through the use of nonlinear crystals[15], however, this also brings in significant challenges associated with power stability[15] across the bandwidth, spectral and temporal correlations[16], and most significantly, pulse energies of only a few tens of picojoules (pJ). Zanni and co-workers have pioneered[17; 7] a YAG-WLC based approach to 2DES, with extensions to high-repetition rate through shot-to-shot acousto-optic pulse shaping[18] (AOPS) at 100 kHz. YAG-WLC based 2DES approaches have also been recently implemented in action-detected variant of 2DES[19; 20; 21]. Note that several 2DES approaches have been demonstrated[22; 23; 24] using the gas filamentation approach to continuum generation which provides \(\mu\)J pulse energies starting from mJ fundamental pulse energy. In comparison, a YAG-WLC only provides \(\sim\)1 nJ pulse energies starting with \(\sim 1\mu\)J fundamental pulse energy[15].
Shot-to-shot PP[25; 26] or 2DES[18] data collection is highly desirable because it utilizes the full laser repetition rate, leverages the shot-to-shot correlations between laser pulses to replace multi-channel referencing[3], and suppresses the 1/\(f\) laser noise encountered during a delay scan[27]. The latter point was demonstrated in the WLC-2DES study of Kearns et al.[18], which shows signal-to-noise ratio (SNR) enhancement beyond that guaranteed by the scaling of repetition rate from 1 kHz to 100 kHz due to an additional suppression of the 1/\(f\) noise component. While programmable AOPS technology[28; 29] has proven highly effective for high-repetition rate shot-to-shot 2D spectroscopy in the visible[18] and mid-IR[30], significant cost and complexity, limited time aperture-RF bandwidth product, and the RF waveform update rate[28] in the modulator pose limitations in terms of the repetition rate scalability desirable for 2DES micro-spectroscopy[31; 32; 33; 34] applications. Development of alternate repetition rate scalable approaches to shot-to-shot WLC-2DES that rely on conventional optics and provide comparable throughput and sensitivity is therefore quite essential in this regard.
We demonstrate a repetition-rate scalable WLC-2DES spectrometer which relies on conventional optical and electronic elements to achieve repetition-rate scalable shot-to-shot data collection. The pump pulse pair is generated using birefringent-wedge based common path interferometer[35]. 100 kHz shot-to-shot detection is achieved by mutual synchronization of the laser repetition rate, acousto-optic deflector (AOD), pump delay stage and the CCD line camera,
such that the pump delay axis can be raster scanned synchronously with the laser repetition rate, while the CCD records every probe laser shot. As we have recently shown[26] in the context of WLC-PP spectroscopy, combining rapid mechanical delay scan with shot-to-shot detection provides advantages of not only increased averaging by substantially minimizing single scan time but also in suppressing[27] the 1/\(f\) component of experimental noise encountered during a scan. Zanni and co-workers have shown[18] that AOPS approaches to 2DES with shot-to-shot data collection can fully leverage correlations between laser shots to suppress 1/\(f\) laser noise. Our approach demonstrates that the above advantages are also possible with rapid scanning of mechanical delay lines and conventional optical elements, with an additional vital feature of repetition-rate scalability, which is in principle only limited by the camera line rate, without sacrificing pump WLC bandwidth. Overall, we demonstrate measurement of averaged 2DES absorptive spectra in as much as 1.2 seconds of continuous sample exposure per 2D spectrum, limited only by the maximum stage velocity. We achieve an SNR of 6.8 for ODs down to 0.05 in 11.6 seconds of averaging at 100 kHz laser repetition rate, demonstrating throughput and sensitivity comparable to that reported for AOPS approaches[18]. Overall, we introduce a considerably simpler and viable alternative to WLC-2DES that is repetition rate scalable and minimizes sample exposure per 2D spectrum with promising applications in 2DES micro-spectroscopy.
## II Experimental Methods
This section describes the experimental setup, interferometer for pulse pair generation and electronics synchronization scheme for shot-to-shot detection, and compares the various data acquisition and averaging schemes.
### II.1 Experimental Setup
The schematic of the partially collinear white-light 2DES setup is shown in Fig. 1(a). The 1040 nm fundamental beam from a 100 kHz Yb:KGW amplifier (Spirit One, Spectra-Physics) is split into pump and probe lines of \(\sim\) 1 \(\mu\)J power each. The fundamental beam is focused using 7.5 cm and 5 cm focal length lenses onto 8 mm and 10 mm YAG crystals for pump and probe WLC generation, respectively. Any residual of the fundamental is filtered using 725 nm (pump) and 850 nm (probe) shortpass optical filters (OD4, Edmund Optics). A crystalline quartz acousto-optic deflector (AOD, Gooch and Housego model 97-02965-01, 8.8 mm pathlength) modulates the pump at 50 kHz synchronously with the laser repetition rate (\(f_{R}\)), ensuring every other pump pulse is blocked. Note that the placement of the AOD after the pump WLC generation introduces spatial chirp in the pump pulse. Placement of a collimation lens right after the AOD and reflective (achromatic) focusing with \(\sim\)33 \(\mu\)m average focal spot sizes (substantially larger than \(\sim\)1 \(\mu\)m) are expected to mitigate[21] the effect of angular dispersion at the focus. However, measurements with sub-micron spatial resolution will necessarily require either double-passing[36] through the AOD to exactly cancel out angular dispersion, or placement of the AOD before the pump WLC generation. The deflected pump pulses are routed to a common path interferometer[35] (CPI) for phase-locked pulse pair generation with mechanically controllable pump delay (\(\tau\)). More details of the CPI are described in Section II.2. The total optical dispersion in the pump arm caused by the BBO wedges, optical filters, focusing and collimating lenses, sample cuvette, AOD and the YAG crystal is partially pre-compensated by two pairs of group delay dispersion (GDD) oscillation compensated chirped mirrors (Layertec 148545, -40 fs\({}^{2}\) GDD per bounce) with a total of 43 pairs of bounces, where each bounce pair is specified to compensate \(\sim\)1 mm of fused silica. The probe beam is routed to the sample position after 22 pairs of bounces in a pair of chirped mirrors (148545 Layertec, -40 fs\({}^{2}\) GDD per bounce) to approximately compensate for optical dispersion in the probe WLC. A pump pulse duration of \(\sim\)33 fs is measured at the sample position by focusing into a SiC photodiode
(Fig. S1) and measuring the two-photon interferometric autocorrelation, which suggests uncompensated third-order or higher optical dispersion. The relaxed 2DES absorptive spectra reported here are not affected by these limitations of dispersion compensation with passive optical elements. The instrument response function (IRF) is assumed to be Gaussian and estimated to be \(\sim\)60 fs from a global fit of the rise time of the transient absorption signal measured with Oxazine 170 (Fig. S2).

Figure 1: (a) Schematic of the WLC-2DES setup. BS Beam Splitter; L Lens; CM Chirped Mirror; M Mirror; AOD Acousto-Optic Deflector; HWP Half-waveplate; CPI Common Path Interferometer; P Linear Polarizer; PM Parabolic Mirror; S Sample; BD Beam Dump. 100 kHz detection represents the spectrograph, the 100 kHz line camera and the timing electronics that enable shot-to-shot detection. Dimensions of the wedges in the CPI (L\(\times\)W\(\times\)H) in mm: 25x(3.6-0.5)x20, with an apex angle (\(\theta\)) of 7.07\({}^{\circ}\). (b) Linear absorption spectrum of Oxazine 170 in Methanol along with averaged pump and probe spectra. % RMSE for probe passing through methanol in a 500 \(\mu\)m cuvette is plotted along the secondary Y axis. The horizontal line is drawn at 2% RMSE. The average probe % RMSE measured through methanol in the range of 550-700 nm is 1.7%.
The delay (\(T\)) between the pump and probe arms is varied by a linear translational stage (ILS150BPP, Newport, 1 \(\mu\)m resolution). The pump and probe delay stages are controlled by a stage controller (XPS-D, Newport). The pump and probe arms with parallel polarization are focused into the sample in a 500 \(\mu\)m pathlength cuvette using a parabolic mirror (reflected focal length 101.6 mm) at a crossing angle of \(\sim\)7.5\({}^{\rm o}\). The pump is blocked after the sample. The transmitted probe is routed to a spectrograph (Horiba iHR320, 150 grooves/mm) using a combination of reflective and achromatic optics. Every dispersed probe shot is recorded by a line CCD camera (e2v AViVA, 14\(\times\)28 \(\mu\)m, 1024 pixels) attached to the spectrograph. The CCD camera is interfaced with an Xtium-CL MX4 frame grabber (512 MB onboard memory buffer). The averaged pump and probe spectra are shown in Fig. 1(b) along with the % root-mean-square (RMS) noise obtained by averaging 2k probe shots after transmission through the solvent. The pump and probe 1/\(e^{2}\) focal spot sizes at the sample location were measured to be 33 \(\mu\)m and 36 \(\mu\)m, respectively (Fig. S3(B-C)), with pulse energies 0.47 nJ and 1.53 nJ across the entire \(>\)150 nm WLC bandwidth. The sample % transmission was confirmed to be linear across this range of pulse energies. Before each experiment, the overlap of pump and probe focal spots inside the sample cuvette along the optical axis was further optimized by maximizing the pump-probe signal as the cuvette position is changed (Fig. S3(A)). This becomes crucial[37] in case of a combination of large crossing angles, high sample ODs and long sample pathlengths where the signal may be dominantly generated towards the front of the sample. For the 2DES measurements reported here, Oxazine 170 (Sigma-Aldrich) solution is prepared in methanol with OD \(\sim\)0.37 in 500 \(\mu\)m cuvette with subsequent dilutions to prepare solutions of lesser OD. The OD of the samples was measured before and after the 2DES experiments and showed no changes beyond the measurement errors.
### II.2 Common Path Interferometer (CPI)
The CPI in the pump line in Fig. 1(a) is essentially a Babinet-Soleil compensator, and its design and application in 2DES presented here is motivated [35] by the extensive work of Cerullo and co-workers. The interferometer consists of a rectangular block A of the negative uniaxial birefringent material \(\alpha\)-BBO with the fast optical axis oriented along the X direction (according to the coordinate axes defined in Fig. 1(a)).
When a 45\({}^{o}\) polarized pulse passes through this block, the X polarized component (\(V\)) travels faster than the Y polarized component (\(H\)), resulting in a delay between the two polarization components. This is followed by blocks B and C, each comprised of two pairs of \(\alpha\)-BBO wedges assembled in the form of rectangular blocks. The orientations of the optical axes in each wedge pair are such that the \(H\) component travels faster in one set of wedges (in B\({}_{2}\) and C\({}_{1}\)) and both components travel with the same velocity in the other set of wedges (B\({}_{1}\) and C\({}_{2}\)). This implies that the relative group delay (GD) between the \(V\) and \(H\) components can be precisely controlled by adjusting the relative thickness of block A (\(d_{A}\)) and the pathlength traveled in the wedges B\({}_{2}\) and C\({}_{1}\), \(d_{B2}\) and \(d_{C1}\), as GD(\(\lambda_{o}\)) = GVM(\(\lambda_{o}\))(\(d_{A}-d_{B2}-d_{C1}\)). Here GVM(\(\lambda_{o}\)) \(=(v_{g,o}^{-1}(\lambda_{o})-v_{g,e}^{-1}(\lambda_{o}))\) is the group velocity mismatch at the central wavelength \(\lambda_{o}\), where the mismatch ultimately depends on the ordinary versus extraordinary refractive indices of \(\alpha\)-BBO, denoted by the subscripts '\(o\)' and '\(e\)', respectively.
To scan the delay between the \(H\) and \(V\) components, the B pair of wedges is mounted on a motorized translational stage (MFA-CC, Newport, 0.1 \(\mu\)m resolution) which enables control over the thickness \(d_{B2}\) of wedge B\({}_{2}\) in the pump path. However, the overall thickness of the medium (ideally) remains fixed during the scan since the wedges are mounted in the form of a rectangular block. The wedges in block C are static and at minimum insertion to correct for the pulse front tilt of the pulses emerging from block B. The collinear \(H\) and \(V\) components pass through an output linear polarizer (LPVISC050-MP2, Thorlabs) at 45\({}^{o}\) polarization to result in collinear pulses with a common polarization axis, followed by rotation to vertical polarization by an achromatic half-waveplate (AHWP05M-600, Thorlabs). The spectral resolution is determined by the maximum possible delay range (\(\tau_{max}\)) scanned by the interferometer, which is \(\sim\pm\)320 fs at \(\lambda_{o}=~{}620\) nm resulting in a spectral resolution of \(\sim\)52 cm\({}^{-1}\) along the absorption axis \(\omega_{\tau}\). Note that the spectral
resolution is in practice limited by the system itself, owing to fast optical dephasing along the optical coherence time \(\tau\) at room temperature. A common path design ensures that relative timing jitters \(\delta\tau\) between the pulses during a \(\tau\) scan are naturally suppressed with interferometric stability [35] (Fig. S4).
Since the \(V\) component travels through a fixed thickness of medium with constant group velocity irrespective of the stage position, the absolute time of emergence of the \(V\) component should be ideally fixed during the \(\tau\) delay scan of block B. This has been experimentally confirmed by focusing the \(V\) component and the probe through a 25 \(\mu\)m pinhole and recording the spectral interference using a CCD spectrometer (CCS200, Thorlabs) as shown in Fig. 2(a). As the \(\tau\) delay stage is scanned, the spectral fringes and the fringe density do not change confirming that \(V\) component is unaffected during \(\tau\) scan. This in turn ensures that the pump-probe waiting time \(T\) remains constant during a \(\tau\) scan. Note that during a \(\tau\) scan, the \(H\) component experiences a
relative change in the thickness of ordinary and extraordinary glass whereas the \(V\) component does not. This minor change in the amount of GDD between the two components is ignored in our analysis but can be consequential [35] (in terms of relative pulse durations of the two pulses) for a combination of UV wavelengths, large \(\tau\) scan range, and few cycle pulses. Fig. 2(b) (top panel) shows a spectrally-resolved autocorrelation sAC(\(\omega_{t},\tau\)) where the delay axis \(\tau\) is constructed using the approximate conversion between stage position (scanned synchronously with the laser repetition rate \(f_{R}\)), insertion of block B and the resulting GD at \(\lambda_{o}=\)620 nm. Even though the stage position is synchronized to \(f_{R}\), such an estimate is only approximate because the pulse replicas do not travel identical pathlengths in block B due to a small but finite air gap between the wedges B1 and B2 and a change in the fast axis orientation between the wedges. An exact calibration for the Fourier transformed frequency axis \(\omega_{\tau}\) is obtained by comparing it to the detection frequency axis \(\omega_{t}\) as shown in Fig. 2(b) (bottom panel). Prior to every experiment, sAC(\(\omega_{t},\tau\)) between the pump pulses is recorded with the shot-to-shot rapid scan detection scheme described later in Section II.3. The resulting calibration is checked prior to every experiment and was observed to be fairly consistent between day-to-day measurements as shown by the error bar in Fig. 2(b) (bottom panel).

Figure 2: (a) Spectral interference of the pump pulse \(V\) and probe recorded at fixed \(T\) delay for \(\tau\) = -100, 0, 100 fs. Each spectrum is an average of five consecutive spectra at a fixed \(\tau\). The error bar of the measurement across the five trials is overlaid on each spectrum. (b) Top panel shows the spectrally-resolved Autocorrelation (sAC) of the pump pulses. Bottom panel shows the calibration of excitation frequency axis \(\omega_{\tau}\) from sAC of the pump pulses by comparing the \(\omega_{\tau}\) axis to the detection frequency \(\omega_{t}\). The error bar in the calibration across measurements on consecutive days is overlaid as a red transparent band on the mean calibration curve. (c) Top panel shows the spectrally integrated sAC of the pump pulses recorded at the sample position zoomed into a range of \(\pm\) 25 fs. Bottom panel compares the measured pump spectrum with the reconstructed pump spectrum obtained after Fourier transforming the trace. (d) Spectrally integrated sAC of pump pulses recorded by the shot-to-shot rapid scan detection scheme (Section II.3) for forward and backward scans. The scans are offset for clarity. Odd and even scans correspond to forward and backward scans, respectively, with a constant index shift between them (Fig. S4(C)).
Fig. 2(c) (top panel) shows the spectrally integrated sAC recorded at the sample position. The corresponding modulation depth, given by \(\frac{I_{max}-I_{min}}{I_{max}+I_{min}}\times\)100, is calculated to be \(\sim\) 93%. A perfect modulation depth is expected for an ideal interferometer with perfect spatial overlap and collinearity between pulses. A deviation from 100% is likely caused by a finite air gap and a change in refractive index between the wedges due to which a few microns of lateral shift (relative to a 2 mm spot diameter) between the ideally overlapped and collinear \(H,V\) components is expected [35]. Fig. 2(c) (bottom panel) shows the comparison between the Fourier transform of the spectrally integrated signal in the top panel (reconstructed pump spectrum) and the measured pump spectrum. The calibration in Fig. 2(b) (bottom panel) was used to obtain the frequency axis of the reconstructed pump spectrum. The good agreement between the spectra confirms the validity of the excitation frequency axis calibration and the interferometer alignment and stability.
We have implemented the rapid scanning of the pump delay axis in both, forward and backward directions for faster averaging. In this regard, repeatability of \(\tau\) points during consecutive scans is vital for efficient averaging of multiple \(\tau\) scans without compromising the time step resolution.
This is achieved by synchronizing the stage movement and CCD detection with the laser repetition rate \(f_{R}\) as described in Section II.3. Fig. 2(d) shows the spectrally integrated sAC traces of the pump pulses recorded by the CCD line camera for consecutive forward and backward scans for the maximum stage velocity of 2 mm/s used for the experiments reported here. Unlike previous implementations[35], synchronization of stage movement and CCD detection during \(\tau\) scans ensures that all forward scans and all backward scans mutually overlap perfectly, with a constant index shift (Fig. S4(C)) between forward and backward scans which can be corrected during processing without recording separate pump interferograms. Without this synchronization, when multiple \(\tau\) scans are averaged, arbitrary variations in \(\tau\) points of the order of time steps are expected[35]. Such variations can lead to phasing errors which become severe for a combination of faster scan velocities and lower repetition rates. The checks described in Fig. 2 are conducted prior to every experimental run to confirm interferometer alignment and calibration.
### II.3 Electronic Synchronization Scheme
Figure 3: (a) Schematic of the electronics synchronization for shot-to-shot data acquisition along with (b) the timing diagram. Such a detection scheme synchronizes the AOD, pump delay stage and the CCD camera to the laser repetition rate \(f_{R}\). (Un)shaded region in the 100 kHz CCD line trigger corresponds to pump (ON)OFF state, while the probe is ON. (c) Motion profile of the \(\tau\) delay stage recorded by the stage encoder for the velocity of 1.2 mm/s. The stage position versus time elapsed is plotted with the corresponding linear fit with a slope of \(1.20\pm 3\)E-6 mm/s compared to the set velocity of 1.2 mm/s. The light red trace shows the corresponding delay error in attoseconds along the secondary Y axis. The vertical lines on the extreme ends correspond to the time window where the stage maintains 99.8% of the set velocity. The inner vertical lines define the trigger window over which the data is collected. The mechanical shutter is open from the start of the stage motion to the end of the trigger window. This window is defined as the sample exposure window. Note that in the motion profile, the stage is set to move 3\(\times\) the calculated distance required by the stage to reach a constant velocity even though the encoder data suggests that the stage is already at 99.8% of set velocity earlier than that.

Figure 3 (a-b) describes the timing electronics which synchronizes the laser repetition rate (\(f_{R}\)),
pump modulation by AOD, \(\tau\) stage movement and the CCD detection to combine shot-to-shot data acquisition with rapid delay scanning for any given input repetition rate. We have recently[26] implemented this detection scheme to demonstrate WLC-PP spectroscopy combining shot-to-shot detection at 100 kHz with continuous scanning of pump-probe waiting time \(T\). Here we have extended this approach to WLC-2DES spectroscopy. The 100 kHz pulse train from the laser is converted to a 100 kHz TTL signal. The 100 kHz TTL output is then split into two parts of which one part is converted to a 50 kHz TTL signal by an \(f_{R}\)/2 TTL divider. This signal is used to drive the AOD such that every other pump shot is deflected into the setup, that is, pump modulation at \(f_{R}\)/2. The pump delay stage controller outputs a constant high signal when the stage enters a defined \(\tau\) scan window (trigger window). This signal is combined with the 100 kHz TTL pulse using an AND circuit, and the output is used to trigger the CCD camera. This results in the CCD camera reading every probe shot at repetition rate \(f_{R}\) once the stage has entered the defined scan range. Furthermore, for every probe shot, the pump is alternating between \(ON\) and \(OFF\) states at \(f_{R}\)/2. In the pump-probe geometry, when the pump is \(ON\), the \(3^{rd}\) order nonlinear signal is radiated in the same direction as the probe. The desired homodyned signal in case of shot-to-shot detection can be written[38] as \(S_{2D}(\tau,\ T,\ \lambda_{t})\) = \(S_{i+1}(\tau,\ T,\ \lambda_{t})^{ON}\) - \(S_{i}(\tau,\ T,\ \lambda_{t})^{OFF}\), where \(S_{i+1}\) and \(S_{i}\) denote consecutive transmitted probe shots recorded by the CCD with pump \(ON\) and \(OFF\), respectively. The homodyned signal is optically Fourier transformed by the spectrograph resulting in the detection wavelength axis (\(\lambda_{t}\)), which is then converted to a detection frequency axis (\(\omega_{t}\)) after a wavelength to frequency conversion during data processing. A numerical Fourier transform along the pump optical delay \(\tau\) yields the absorption frequency axis \(\omega_{\tau}\), to result in the absorptive 2D spectrum \(S_{2D}(\omega_{\tau},\ T,\ \omega_{t})\) for a given pump-probe waiting time \(T\). The signal has a transient absorption background \(S_{TA}(T,\ \omega_{t})\), which is constant along the \(\tau\) delay axis for a fixed \(\omega_{t}\) for a 2D spectrum measured at a fixed \(T\). This constant offset can either be subtracted in the \(\tau\) domain during data processing or removed by Fourier filtering in the \(\omega_{\tau}\) domain. Note that in our implementation, we did not encounter the complication of alternating dark count background that is reported[39] to lead to dark count differences of the order of 1-5 counts between alternating lines in a PP microscopy experiment. Fourier filtering along \(\tau\) implies that such a complication will not affect 2DES.
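The processing chain just described can be summarized in a short numpy sketch; the array layout (even shot indices pump \(OFF\), odd indices pump \(ON\)), the bin size \(M\), and the mean-subtraction used to remove the \(S_{TA}\) offset are illustrative assumptions consistent with the description above.

```python
import numpy as np

def build_2d_slice(shots, M, dt_bin_fs):
    """shots: (N_shots, N_pixels) consecutive probe spectra recorded at f_R."""
    on, off = shots[1::2], shots[0::2]        # alternating pump ON/OFF at f_R/2
    n = min(len(on), len(off))
    s2d = on[:n] - off[:n]                    # S_2D = S^ON_{i+1} - S^OFF_i
    n_bins = len(s2d) // M
    s2d = s2d[:n_bins * M].reshape(n_bins, M, -1).mean(axis=1)  # bin M pairs
    s2d -= s2d.mean(axis=0, keepdims=True)    # remove constant S_TA(T, lambda_t)
    spec = np.fft.rfft(s2d, axis=0)           # numerical FT along tau
    nu = np.fft.rfftfreq(n_bins, d=dt_bin_fs) # cycles/fs; x 33356 gives cm^-1
    return nu, spec
```

With \(\Delta\tau_{bin}\) = 0.132 fs, the sampling of 15 points per 620 nm optical cycle quoted later corresponds to a Nyquist limit comfortably above the pump optical frequencies.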
The rapid scan of the pump delay axis in principle leads to shot-to-shot increment of the pump delay with the theoretically expected minimum delay step given by \(\Delta\tau_{min}=\text{GVM}(\frac{2v_{scan}}{f_{R}})\tan\theta\)
where \(\theta\) is the apex angle of the \(\alpha\)-BBO wedges (Fig. 1). This corresponds to the stage movement during two consecutive pump shots and is determined by the laser repetition rate \(f_{R}\) and the stage velocity \(v_{scan}\). Note that a slight timing offset between the stage trigger onset and the laser TTL high state (Fig. 3(b)) can lead to timing errors. However, the maximum possible such error (\(\delta\tau\)) encountered in a delay scan is given by \(\delta\tau\) = GVM\((\frac{v_{scan}}{f_{R}})\tan\theta\). This error corresponds to \(\sim\)1.1 attoseconds (as) for the maximum velocity of 2 mm/s implemented here. In our implementation this error is inconsequential because, as described in the following Section II.4, multiple finely sampled time steps, \(\Delta\tau_{min}\), are binned together to form larger \(\tau\) steps of \(\Delta\tau_{bin}\), such that the timing error is only \(\sim\)0.9% of the binned delay step of 0.132 fs. Further details of shot-to-shot data acquisition and processing are described in Section II.4.
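As a quick numerical check of these expressions, the short sketch below reproduces the Table 1 values; the GVM value (\(\sim\)0.45 fs per \(\mu\)m of relative wedge insertion) is an assumption inferred from Table 1, not a quoted material constant.

```python
import math

GVM = 0.45e-3                # fs per nm of wedge insertion (assumed)
theta = math.radians(7.07)   # apex angle of the alpha-BBO wedges
f_R = 100e3                  # laser repetition rate, Hz

for v in (0.4, 0.8, 1.2, 2.0):                       # stage velocity, mm/s
    step_nm = (2 * v * 1e6 / f_R) * math.tan(theta)  # insertion per ON/OFF pair
    dtau_min = GVM * step_nm                         # fs between binned samples
    dtau_err = dtau_min / 2                          # worst-case trigger offset
    print(f"{v} mm/s: dtau_min = {1e3*dtau_min:.1f} as, "
          f"max error = {1e3*dtau_err:.1f} as")
```

For the 2 mm/s scan this yields \(\Delta\tau_{min}\approx\) 2.2 as and a worst-case offset of \(\approx\) 1.1 as, matching the values quoted above and in Table 1 to within rounding.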
### II.4 Data Acquisition and Averaging Scheme
Similar to rapid scan of \(T\) delay stage at a fixed velocity within the defined stage trigger window in a PP experiment [26], here the \(\tau\) stage moves at a constant velocity within the defined trigger window. This is ensured by allowing the stage to move \(\sim\)3\(\times\) the calculated distance \(d\) required by the stage to accelerate to, or decelerate from, a uniform velocity before and after the defined trigger window, respectively. The set final velocity, set acceleration, the resulting distance \(d\) and the time required by the stage to attain constant velocity is summarized in Table S1. The resulting motion profile of the stage as recorded by the stage encoder is shown in Fig. 3(c) for a representative stage
velocity of 1.2 mm/s. The region enclosed by the outer vertical lines corresponds to the region over which the stage moves with constant velocity. The shaded region enclosed by the inner set of dashed vertical lines represents the defined trigger window in which the probe shots are recorded, whereas the outermost dashed vertical lines represent the region in which the stage velocity reaches 99.8% of the set velocity. The sample is exposed to light from the start of the \(\tau\) stage motion to the end of the trigger window, after which a mechanical shutter blocks the light. As shown in the figure, delay errors are estimated by measuring the stage position deviations compared to that expected from a perfectly uniform motion profile, that is, a line with a constant slope corresponding to the set velocity. The resulting \(\tau\) delay errors are \(\sim\)5 as, which is only 3.8% of the binned time step \(\Delta\tau_{bin}\) and of the order of \(\sim\)0.1 \(\mu\)m of wedge insertion (corresponding to the minimum incremental resolution of the stage).

Table 1: Scan parameters for different stage velocities. The \(\tau\) scan range of 75.910 fs (-5.608 to 70.302 fs), binned time step \(\Delta\tau_{bin}\) = 0.132 fs with 15 points per 620 nm cycle and \(S\) = 10 scans are kept fixed across all experiments. The trigger window is defined in Fig. 3(c). The sample exposure window is also defined in Fig. 3(c) and includes 30 ms each for shutter opening and closing. A frame is defined as 1000 probe shots. The dead time is defined as the writing time after each \(\tau\) scan and may not be needed for faster scan velocities where the number of frames collected per scan is substantially lesser than the frame grabber onboard memory.

| Stage velocity (mm/s) | Exposure window (s) | Trigger window (s) | \(ON/OFF\) pairs per bin (\(M\)) | Theoretical \(\Delta\tau_{min}\) (as) | \(\Delta\tau_{bin}\) (fs) | Frames recorded per \(\tau\) scan | Dead time per \(\tau\) scan (s) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(v_{1}\) = 0.4 | 4.192 | 3.498 | 300 | 0.5 | 0.132 | 343 | 7.642 |
| \(v_{2}\) = 0.8 | 2.337 | 1.748 | 150 | 0.9 | 0.132 | 171 | 3.840 |
| \(v_{3}\) = 1.2 | 1.693 | 1.165 | 100 | 1.4 | 0.132 | 114 | 2.481 |
| \(v_{4}\) = 2.0 | 1.221 | 0.698 | 60 | 2.2 | 0.132 | 68 | 1.628 |
Figure 4 explains the data averaging scheme. As shown in Fig. 4(a), the synchronization of the AOD, \(\tau\) stage and the detector read-out to the laser repetition rate enables recording every probe shot with alternate pump \(ON/OFF\) state. The continuous motion of the \(\tau\) stage during this detection enables synchronous increment of the pump delay between consecutive \(ON/OFF\) pairs resulting in finely sampled signal along the \(\tau\) axis with stepsize \(\Delta\tau_{min}\). For data processing, \(M\) such pairs are averaged together to yield the \(i^{th}\) binned time point \(\tau_{b,i}\). Thus, as defined in Fig. 4(a), each binned time point \(\tau_{b,i}\) arises from \(M=s/2\)\(ON/OFF\) pairs, that is, \(M\) finely sampled \(\tau\) points per bin. For example, for a velocity of 1.2 mm/s, \(M\)= 100 points per bin. Fig. 4(b) schematically denotes this binning procedure for a zoomed-in simulated signal \(S_{2D}(\tau_{b,i},T,\lambda_{\tau})\) at a fixed \(T\) and \(\lambda_{\tau}\), where a red dot denotes a binned point \(\tau_{b,i}\) and the width of gray bar denotes the number \(M\) of finely sampled \(\tau\) points that together constitute an averaged \(\tau_{b,i}\) time point. In our experiments, the number of points per bin, \(M\) is adjusted for each velocity so as to keep the binned time step of \(\Delta\tau_{bin}\sim\)0.132 fs, resulting in directly comparable frequency axis \(\omega_{\tau}\) across all velocities. Each \(\tau\) scan is repeated '\(S\)' times at a fixed pump-probe delay \(T\), with the \(\tau\) stage moving alternately in forward and backward directions for consecutive scans. Apart from averaging \(M\) points per binned step, averaging multiple \(\tau\) scans further suppresses the effect of 1/\(f\) long term laser drifts. Our recent shot-to-shot rapid scan PP [26] measurements also suggest that increasing the number of scan averages \(S\) is more effective in suppressing the low-frequency 1/\(f\) experimental noise than an equivalent increase in the number of points \(M\) per binned point \(\tau_{b,i}\). The scan parameters for the experiments with different stage velocities in Section III are summarized in Table 1. These include
the sample exposure time, finely sampled \(\tau\) step (\(\Delta\tau_{min}\)), number of \(ON/OFF\) pairs per bin (\(M\)),
etc. The \(\tau\) scan range in the experiments, binned time step \(\Delta\tau_{bin}\) = 0.132 fs with 15 points per 620 nm cycle and \(S\) = 10 scans are kept fixed across all experiments.

Figure 4: Comparison of different averaging schemes. (a) Data averaging scheme shown for the case of \(v_{3}\) scan in Table 1. Black vertical lines represent consecutive probe shots. (Un)shaded represents probe shots recorded with pump \((ON)OFF\) state. Blue box represents the number of consecutive \(ON/OFF\) pairs (\(s\) probe shots and \(M\) = \(s\)/2 pairs) averaged together to form one binned delay point \(\tau_{b,i}\). \(S\) represents one complete \(\tau\) scan. Several such scans \(S\) are conducted for each pump-probe delay point \(T\). (b) Schematic representation of binning for the data collection in panel A. Simulated signal sampled with \(\Delta\tau_{min}\) (black) and with \(M\) = 100 averaged pairs constituting one binned point \(\tau_{b,i}\), denoted as red dot. The area under the curve for 100 \(ON/OFF\) pairs is denoted by shaded gray region around the binned data point and corresponds to 0.132 fs on the binned delay axis. (c) Experimentally measured shot-to-shot probe intensity fluctuations added to the simulated signal \(S_{2D}(\tau_{b,i})\) at \(T\) = 1 ps and \(\lambda_{t}\) = 645 nm where the signal maximizes. Three different averaging schemes are compared. \(ON/OFF\) pairs with \(m\) shots \(ON\), \(m\) shots \(OFF\) and with total \(M\) such \(ON/OFF\) pairs are averaged together to yield one binned point \(\tau_{b,i}\). \(m\) = 1 is the shot-to-shot detection case with \(M\) = 60 pairs averaged in case of scan velocity \(v_{4}\) (Table 1). The plot is a zoomed in version of the full 70 fs scan range. Note that \((m,M)\) are chosen such that the total laser shots contributing to a binned time step \(\Delta\tau_{bin}\) is fixed for all cases for a fair comparison. (d) Normalized Fourier transform of the data shown in panel C. (e) Simulated signal with experimental probe noise for the shot-to-shot (\(m\) = 1) case, for four different number of \(ON/OFF\) pairs (\(M\)) per \(\tau_{b,i}\). This is done in order to simulate the effect of \(M\), resulting from scanning with four different velocities (Table 1), on the experimental signal to noise. (f) Normalized Fourier transform of the data shown in panel E.
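A short sketch of the scan-averaging step described above is given below; the name of the index shift and the handling of backward scans are illustrative assumptions, with `build_2d_slice` referring to the processing sketch in Section II.3.

```python
def average_scans(scans, shift):
    """scans: (S, n_bins, N_pixels); even scan index = forward, odd = backward."""
    avg = np.zeros_like(scans[0], dtype=float)
    for k, scan in enumerate(scans):
        if k % 2 == 1:                                 # backward scan
            scan = np.roll(scan[::-1], shift, axis=0)  # reverse tau, fix the
                                                       # constant index shift
        avg += scan
    return avg / len(scans)
```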
As shown in Table 1, for a fixed scan range and the repetition rate \(f_{R}\), the number of probe shots recorded with pump \(ON/OFF\) states depends on the stage velocity. Defining a frame as \(s\) = 1000 consecutive probe shots, in the sequence acquisition [40] mode implemented here the onboard memory of the frame grabber can store a maximum of 490 frames. We therefore write a 2D spectrum on the computer RAM after each \(\tau\) scan, leading to a dead time after each scan during which the frames are dumped on the computer memory. However, for the faster velocities implemented here, the number of frames per \(\tau\) scan are substantially lesser than maximum possible, for instance 68 frames per \(\tau\) scan for 2 mm/s velocity. In such cases, a dead time after each \(\tau\) scan is not required because all scans can be recorded together before writing them on the computer RAM.
Note that in Table 1, \(\Delta\tau_{min}\) for even the fastest velocity corresponds to a wedge insertion \(\sim\)2.5\(\times\) lesser than the minimum incremental resolution of the stage, implying that the maximum velocity that could be used in our approach can be at least 2.5\(\times\) faster. However, we cannot use arbitrarily fast stage velocity because stage movement between pump \(ON\) and \(OFF\) states can no longer be ignored if the signals \(S_{i+1}^{ON}\) and \(S_{i}^{OFF}\) vary substantially with \(\tau\) stage movement between consecutive shots. In this regard, programmable pulse shaping approaches hold a distinct advantage because of no mechanical delay elements. Consequently, in the AOPS approach [18] of Kearns et al. a 'burst scan' approach is employed where instead of \(M\) points per bin and \(S\) scans, an equivalent of \(M\times S\) scans with no binning can be implemented with a better suppression of 1/\(f\) noise encountered over the duration [27] of the \(\tau\) scan. In terms of scan efficiency [26], or the number of pulses utilized versus wasted for a measurement, AOPS approach holds a clear advantage as well, because no mechanical delay elements are involved and therefore no pulses are wasted while the stage attains a constant velocity or the scan direction is alternated between forward and backward. Note that if scan efficiency is the main concern, Fig. 3(c) suggests that the requirement of 3\(d\) distance can be easily relaxed as well as the time required to attain a constant velocity can be minimized by increasing the stage acceleration (Table S1). However, maximizing scan efficiency has no bearing on the 1/\(f\) experimental noise encountered _during_ a scan. Despite these distinct advantages of programmable pulse shaping, the results in Section III demonstrate that comparable throughput
and SNR are attainable even with mechanical delay scans through a combination of shot-to-shot detection with the rapid scan approach, with the vital advantages of simplicity and repetition-rate scalability without sacrificing pump WLC bandwidth.
## III Results and Discussion
This section compares various averaging schemes to demonstrate the advantages of shot-to-shot detection, followed by rapid scan shot-to-shot 2DES experiments on Oxazine 170, including a demonstration of SNR for different stage velocities and sample concentrations.
### Comparison of Averaging Schemes
Fig. 4(c) compares the SNR possible with various averaging schemes in order to motivate the 2DES experiments with rapid scan shot-to-shot detection. '\(m\)' denotes the number of probe shots that are averaged together with pump \(ON\) or \(OFF\). The \(m\) = 1 case results in the shot-to-shot 2D signal \(S_{2D}\) defined earlier. In contrast, \(m\) = 60 implies that 60 probe shots are averaged with pump \(ON\) and \(OFF\), and the difference of the two then yields \(S_{2D}\). '\(M\)' is the number of such \(ON/OFF\) pairs that are averaged together to result in a 2D signal at time \(\tau_{b,i}\). The panel considers the case of the maximum scan velocity \(v_{4}\) in Table 1. Note that the total number of probe shots '\(s\)' that result in the averaged signal, \(s\) = 2\(mM\), is kept fixed for a fair comparison between the three averaging schemes. This implies that the reduction in the 1/\(f\) component of experimental noise over the duration of the \(\tau\) scan[27; 18] is equivalent between the three averaging schemes. The probe laser noise is estimated directly from experiment by passing the probe through the sample with the pump blocked, recording every probe shot, and subtracting the mean counts. The probe noise at \(\lambda_{t}\) = 645 nm, where the signal maximizes, is then grouped together as per the (\(m\),\(M\)) combination and added to the simulated signal. Fig. 4(d) shows the corresponding Fourier transform. The analysis in Figs. 4(c-d) demonstrates that the faster the modulation of the PP signal, the better the resulting SNR, because pump-induced intensity modulations in the probe at the maximum possible frequency of \(f_{R}\)/2 minimize the 1/\(f\) component[27] of probe noise for a given \(\tau\) data point. The relative standard deviation is quantified in Fig. S5(A) and corresponds to 1x, 3.63x and 5.68x for (\(m,M\)) = (1,60), (6,10), and (60,1), respectively.
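To make the comparison concrete, the following is a minimal, self-contained sketch (not the authors' analysis code) of the three \((m,M)\) averaging schemes acting on a simulated decaying oscillation with 1/\(f\)-like probe noise; the repetition rate, noise level and signal parameters are illustrative assumptions.

```python
# A minimal sketch (not the authors' analysis code) of the three (m, M)
# averaging schemes of Fig. 4(c). A decaying oscillation is sampled with
# 1/f-like probe noise; the total shots per binned point, s = 2*m*M, is fixed.
import numpy as np

rng = np.random.default_rng(0)
f_R, s, n_tau = 100e3, 120, 500          # rep rate (Hz), shots per bin, tau bins

def pink_noise(n):
    """1/f-like noise via FFT shaping of white noise."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, d=1.0 / f_R)
    f[0] = f[1]                          # avoid division by zero at DC
    return np.fft.irfft(spec / np.sqrt(f), n)

def binned_signal(m, M, probe_noise, true_signal):
    """Average M ON/OFF pairs (m shots each) into one binned tau point."""
    out = np.empty(n_tau)
    shots = probe_noise.reshape(n_tau, 2 * m * M)
    for i in range(n_tau):
        blocks = shots[i].reshape(2 * M, m).mean(axis=1)  # alternating ON/OFF blocks
        out[i] = true_signal[i] + np.mean(blocks[0::2] - blocks[1::2])
    return out

tau = np.linspace(0, 70e-15, n_tau)
signal = np.exp(-tau / 20e-15) * np.cos(2 * np.pi * 4.84e14 * tau)  # ~620 nm beat
noise = 0.05 * pink_noise(n_tau * s)

for m, M in [(1, 60), (6, 10), (60, 1)]:
    resid = binned_signal(m, M, noise, signal) - signal
    print(f"(m, M) = ({m:2d}, {M:2d}):  residual noise std = {np.std(resid):.4f}")
```

Because differencing of temporally adjacent shots cancels slow drifts, the shot-to-shot case \((m,M)=(1,60)\) should give the lowest residual noise under correlated 1/\(f\) noise, mirroring the trend above.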
Fig. 4(e-f) simulates the effect of the four different \(\tau\) scan velocities in Table 1 on the SNR for the case of \(m\) = 1, that is, shot-to-shot detection. As before, experimentally measured shot-to-shot probe noise is binned depending on the \((m,M)\) = (1,\(M\)) combination and added to the simulated signal. The number of points per bin, \(M\), is chosen to be 300, 150, 100, and 60, corresponding to the four velocities \(v_{1}\)-\(v_{4}\) (Table 1). Fig. S5(B) compares the standard deviation of the time-domain noise floor for \(M\) = 60, 100 and 150, relative to the \(M\) = 300 case, against that expected from the \(1/\sqrt{M}\) scaling of Gaussian noise with bin size. The noise floor in the case of faster scans degrades less than expected from simply scaling the bin size, suggesting advantages of shot-to-shot detection for faster scan velocities. This trend is qualitatively similar to the reported [18] suppression of 1/\(f\) noise encountered over the scan duration as the scans become faster. Overall, the SNR for the \(v_{4}\) scan (\(M\) = 60) is only \(\sim\)1.78\(\times\) lower, but with the advantage of 5\(\times\) faster acquisition, suggesting equivalent advantages in the 2DES experiments.
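A hedged, self-contained sketch of this bin-size test (illustrative noise model, not the experimental data) is given below; it bins shot-to-shot \(ON/OFF\) differences of 1/\(f\)-like noise for the four values of \(M\) and compares the measured noise floor against the white-noise \(1/\sqrt{M}\) law.

```python
# A sketch of the bin-size scaling test: shot-to-shot ON/OFF differences of
# 1/f-like noise are binned with M = 300, 150, 100, 60 pairs per point and
# compared against the white-noise 1/sqrt(M) scaling.
import numpy as np

rng = np.random.default_rng(1)

def pink_noise(n):
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]
    return np.fft.irfft(spec / np.sqrt(f), n)

Ms, base = [300, 150, 100, 60], None
for M in Ms:
    x = pink_noise(1000 * 2 * M).reshape(1000, 2 * M)   # 1000 binned points
    binned = (x[:, 0::2] - x[:, 1::2]).mean(axis=1)     # M shot-to-shot pairs
    sd = np.std(binned)
    base = sd if base is None else base                 # normalize to M = 300
    print(f"M = {M:3d}: measured {sd / base:.2f}x vs white-noise {np.sqrt(Ms[0] / M):.2f}x")
```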
Note that in addition to faster throughput, rapid delay scanning also proportionally minimizes the sample exposure time, which is crucial for mitigating sample photodamage in micro-spectroscopy applications. The sample exposure time per \(\tau\) scan is tabulated in Table 1, with one 2DES spectrum collected in \(\sim\)1.2 seconds for the maximum velocity implemented here. Note that this exposure time includes the additional time required by the stage to travel from the point of 99.8% of the set velocity to the start of the trigger window (time between the initial two vertical lines in Fig. 3C), as well as the shutter opening and closing times before and after each \(\tau\) scan. As discussed in Section II.4, future experiments without this additional time, and with at least 2.5\(\times\) faster velocity, can significantly reduce this exposure time in a feasible manner without running into issues related to stage movement during pump \(ON\) and \(OFF\) states in a rapid scan approach. Based on these simulations, Sections III.2-III.3 present shot-to-shot 2DES spectra for the rapid scan settings in Table 1 to demonstrate the throughput and sensitivity of the approach proposed here.
### Shot-to-shot Rapid Scan 2D Spectra
Figure 5 shows the absorptive 2D spectra \(S_{2D}\)(\(\tau\), \(T\), \(\omega_{t}\)) at \(T\) = 1 ps collected for the scan settings in Table 1. Experimental 2D data on Oxazine 170 in methanol are collected for a \(\tau\) scan range of -5.608 to 70.302 fs for a fixed pump-probe delay (\(T\)). The constant offset between the forward and backward scans is aligned before the \(\tau\) scans are averaged together. The averaged 2D data is
spectrally integrated along the detection frequency axis and the maximum of the interferogram is determined. The spectrally integrated signal maximizes at zero \(\tau\) delay between the pump pulses. The data is cropped at the maximum before the Fourier transform along the \(\tau\) delay axis. Any error in determining the signal maximum results in phasing issues, that is, mixed absorptive and dispersive lineshapes along \(\omega_{\tau}\) in the relaxed 2D spectrum. Minor phasing errors of the order of half the bin size can arise due to the binning procedure described in Section II.4. This is because the binned data does not necessarily sample the signal at \(\tau\) = 0. In other words, the maximum of the binned signal is at \(\tau\) = 0 \(\pm\)\(\Delta\tau_{shift}\). This shift in delay results in an extra phase factor in the Fourier transformed signal and needs to be quantified to accurately phase the 2D spectra. This is done by comparing the binned \(\tau\) scans with a finely binned scan with \(M\) = 25, and correcting for \(\Delta\tau_{shift}\) in the frequency domain. The maximum timing error in this method is given by 0.5\(\times\Delta\tau_{bin}\) for \(M\) = 25, which corresponds to a phase error of less than \(\lambda\)/369 at 620 nm. This procedure is illustrated in Fig. S6.
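The frequency-domain correction relies on the Fourier shift theorem; a minimal sketch with a hypothetical offset \(\Delta\tau_{shift}\) (the value below is illustrative, not from Table 1):

```python
# A sketch of the frequency-domain phasing step: by the Fourier shift
# theorem, a binning offset dtau_shift multiplies the omega_tau spectrum by
# exp(1j * w * dtau_shift), which is removed by the conjugate factor below.
import numpy as np

dt = 0.132e-15                           # binned tau step (s)
tau = np.arange(0.0, 70e-15, dt)
w0 = 2 * np.pi * 3e8 / 620e-9            # pump carrier at 620 nm (rad/s)
dtau_shift = 0.04e-15                    # hypothetical offset, < half a bin

sig = np.exp(-tau / 15e-15) * np.cos(w0 * (tau + dtau_shift))
spec = np.fft.rfft(sig)
w = 2 * np.pi * np.fft.rfftfreq(len(tau), d=dt)

spec_corr = spec * np.exp(-1j * w * dtau_shift)   # undo the shift-induced phase
k = np.argmin(np.abs(w - w0))
print(f"phase at w0 before: {np.angle(spec[k]):+.3f} rad, "
      f"after: {np.angle(spec_corr[k]):+.3f} rad")
```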
Fig. 5(a) shows the 2D spectra of Oxazine 170 at \(T\) = 1 ps with sample OD 0.37 in a 500
Figure 5: (a) Normalized 2D spectra of 0.37 OD Oxazine 170 in Methanol at \(T\) = 1 ps recorded with stage velocities \(v_{1}-v_{3}\) and \(v_{4}\). Contours are drawn at 5% and 10%–100% in 10% intervals for both positive and negative contours. The 2D spectrum for the fastest velocity \(v_{4}\) is broader and blue-shifted compared to the other cases because this experiment was conducted on a different day with a blue-shifted pump bandwidth (Fig. S7). (b) Spectrally integrated 2D spectrum (along \(\omega_{\tau}\)) corresponding to \(v_{4}\) overlaid on the spectrally-resolved pump-probe (SRPP) spectrum at \(T\) = 1 ps. (c) \(S(\tau)\) at \(T\) = 1 ps, \(\omega_{t}\) = 2.918 rad/fs for \(v_{1}-v_{3}\) and \(v_{4}\). The error bar, calculated over \(M\) bins per \(\tau\) point and \(S\) scans, is overlaid as a translucent band on the signal. The SNR calculated for each slice is mentioned in the inset. (d) Standard deviation versus the bin size for the different scan velocities. The standard deviation for each case is normalized relative to the \(M\) = 300 (\(v_{1}\)) case. The gray trace overlays the curve expected from the \(1/\sqrt{M}\) scaling with decreasing bin size \(M\).
\(\mu\)m pathlength cuvette for the four scan velocities. The scan velocities and related parameters are shown in Table 1. The frequency resolution along the \(\tau\) axis is system limited due to fast optical dephasing. For a scan range of \(\sim\)70 fs, the resulting frequency resolution after the maximum allowed \(N\) to \(2N\) zero-padding is \(\sim\)238 cm\({}^{-1}\). The 2D spectrum for the fastest velocity is broader and blue-shifted compared to the other cases because this experiment was conducted on a different day with a blue-shifted pump bandwidth (Fig. S7). The changes in the 2D spectra corresponding to different pump bandwidths compare well with those expected from the relaxed 2D spectrum constructed with the experimental pump and probe bandwidths, and independently measured absorption and spontaneous emission lineshapes [41] (Fig. S7). Fig. 5(b) compares the spectrally integrated phased 2D spectrum at 1 ps with the spectrally-resolved pump-probe spectrum at 1 ps (Section S2) for the fastest scan velocity. The overlap of the two spectra indicates no residual phase in the recorded signal along the detection axis, as may be expected in homodyne detection [38].
Fig. 5(c) shows the \(\tau\)-domain traces at \(\omega_{t}\) = 2.918 rad/fs (\(\lambda_{t}\) = 645 nm) for the four stage velocities with the error bar overlaid. The SNR for each case is estimated similarly to ref. [18], by the inverse of the standard deviation of the normalized signal for a range of \(\tau\) > 40 fs where the signal has completely dephased. Fig. 5(d) plots the corresponding standard deviation versus the bin size for the three velocities. The SNR is lowest for the fastest velocity \(v_{4}\), as expected for the smallest bin size. However, \(v_{4}\) reduces the number of probe shots needed to record a 2D spectrum by 5\(\times\), with the SNR deteriorating only by \(\sim\)1.9\(\times\) compared to the slowest scan (maximum points per bin case). When this measured SNR in Fig. 5d is compared against the expected \(1/\sqrt{M}\) dependence of SNR at each \(\tau\) data point, the noise floor increases with decreasing bin size as expected. However, similar to the trend in the simulations in Fig. S5, the noise floor consistently degrades less than predicted by the \(1/\sqrt{M}\) scaling with decreasing bin size \(M\). This again emphasizes the point that rapid scan with shot-to-shot detection is expected to suppress [18] the low-frequency 1/\(f\) noise encountered [27] over the duration of a scan. Note that such a suppression will be maximal for a 'burst' scan (Section II.4), which is straightforward in the AOPS approach because phase errors arising from stage movement during consecutive pump \(ON\) and \(OFF\) states (Section II.3) are entirely circumvented in programmable pulse shaping.
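For reference, the SNR metric can be written compactly; a sketch assuming the trace and its delay axis are available as arrays (the function and variable names are ours, not from the instrument software):

```python
# A sketch of the SNR metric above (following ref. [18]): the inverse
# standard deviation of the normalized tau trace in the dephased region.
import numpy as np

def snr_from_trace(tau_fs, s_tau):
    s_norm = np.asarray(s_tau) / np.max(np.abs(s_tau))
    tail = s_norm[np.asarray(tau_fs) > 40.0]   # fully dephased region
    return 1.0 / np.std(tail)
```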
### Sensitivity
Encouraged by the SNR degrading less than expected from the \(1/\sqrt{M}\) scaling of Gaussian or random noise (Fig. 5(d)), along with the simultaneous throughput improvement, we decided to test the sensitivity of the rapid scan shot-to-shot detection approach by reducing the sample concentration at a velocity of 1.2 mm/s, that is, 3\(\times\) faster than the case for which the best SNR is measured (due to a larger number of points per bin). The starting OD in the 500 \(\mu\)m cuvette is 0.37 as before, which corresponds to a number density of 5.5E16 molecules/cm\({}^{3}\). Figure 6(a) shows consistent measurements of 2D spectra for concentrations down to 0.78E16 molecules/cm\({}^{3}\), corresponding to an OD of 0.053 in the 500 \(\mu\)m cuvette. A small red-shift of 0.029 rad/fs (6.3 nm) is seen along the detection axis from lowest to highest OD. We checked that this trend is consistent with the measurement of fluorescence spectra (Fig. S8), which show a progressively increasing red-shift from lowest to highest OD, likely caused by aggregation at higher concentrations. The SNR in Fig. 6(b) degrades with decreasing concentration, as is expected. The degrading SNR implies that the negative 5% excited state absorption (ESA) signal contour at \(\omega_{t}\) = 3.52 rad/fs is at the noise floor. This level of SNR is achieved with a total of 1140E3 probe shots (Table 1), that is, \(\sim\)11.6 seconds of averaging. The total experimental time is \(\sim\)3\(\times\) longer because it includes the full sample exposure window and the dead time, which lowers the overall scan efficiency[26] in the case of rapid scanning of mechanical delays as compared to the AOPS approach. As discussed in Section II.4, the rapid scan efficiency for a given velocity can be improved by avoiding dead times and minimizing the additional distance traveled by the stage at a constant velocity. Note however that the scan efficiency has no bearing on the SNR reported here, which reflects the noise encountered _during_ a scan and therefore does not depend on the dead times encountered before or after a scan. If desired, a higher level of sensitivity is also straightforward to achieve by using slower scan velocities, that is, a larger bin size \(M\), or through longer averaging.
In comparison to the above SNR and averaging times reported for sample concentrations of 0.78E16 molecules/cm\({}^{3}\), the state-of-the-art AOPS approach to WLC-2DES has reported[18] an SNR of 4.2 in 180 secs of averaging at 100 kHz for a sample concentration of 26E16 chlorophyll a molecules/cm\({}^{3}\) (OD 0.08 in a 50 \(\mu\)m cuvette). Note that our crossing angle of 7.5\({}^{o}\) implies that the pump-probe spot overlap will not be perfect throughout the cuvette pathlength. This is further verified
from Fig. S3, where the maximum signal drops to approximately half within a 200 \(\mu\)m region as the cuvette is translated along the beam propagation direction. This suggests that the sample pathlength over which the pump-probe signal is predominantly generated is shorter than the cuvette pathlength, and therefore the effective sample OD which generates the pump-probe signal may be less than 0.05. Note also that in comparison to the conventional 2DES approaches mentioned above, a recent rapid scan fluorescence-detection 2DES approach [21] has reported measurements of coherent 2D signals which are only \(\sim\)10% of population signals for sample ODs as low as \(\sim\)1 mOD, although at a 1 MHz repetition rate and with significantly longer averaging times.
## IV Conclusions
We have introduced a repetition rate scalable approach to 2DES spectroscopy that combines the benefits of shot-to-shot detection with rapid scanning of mechanical delays to provide a viable
Figure 6: (a) Normalized \(T\) = 1 ps 2D spectra of Oxazine 170 at three different concentrations recorded at velocity \(v_{3}\). Contours are drawn at 2%, 5% and 10%–100% in 10% intervals for both positive and negative contours. (b) \(S(\tau\,,T,\,\omega_{t})\) at \(\omega_{t}\) = 2.918 rad/fs for the three concentrations. The error bar, calculated over \(M\) points per bin and \(S\) averaged scans, is overlaid as a translucent band on the signal. The SNR calculated for each slice is mentioned in the inset.
alternative to state-of-the-art AOPS approaches. Our approach relies on the simplicity of conventional optical elements to generate phase-locked pump pulse pairs and a broadband white light continuum as input. We demonstrate this through mutual synchronization between the laser repetition rate, the acousto-optical deflector (AOD), the pump delay stage and the CCD line camera, which allows rapid scanning of the pump optical delay synchronously with the laser repetition rate while the delay stage is moved at a constant velocity. The resulting shot-to-shot detection scheme is repetition rate scalable, with the throughput limited only by the CCD line rate and the maximum stage velocity, without any limitations imposed on the pump WLC bandwidth as \(f_{R}\) increases beyond 100 kHz. Using this approach, we demonstrate measurement of an averaged 2DES absorptive spectrum in as little as 1.2 seconds of continuous sample exposure per 2D spectrum. We achieve a signal-to-noise ratio (SNR) of 6.8 for optical densities down to 0.05 with 11.6 seconds of averaging at a 100 kHz laser repetition rate. We discuss limitations of mechanical delays compared to programmable pulse shaping in terms of 'burst scans', where AOPS approaches can provide maximum 1/\(f\) noise suppression and better scan efficiency. However, the approach proposed here does not run into the fundamental limitations of AOPS approaches at higher repetition rates, such as the limited time aperture-RF bandwidth product and RF update rate. Overall, combining rapid scan with shot-to-shot detection as demonstrated here provides throughput and sensitivity comparable to the AOPS approach, is repetition-rate scalable, and minimizes sample exposure per 2D spectrum. Our demonstration opens the door to promising micro-spectroscopy applications using a combination of repetition rate tunable Yb:KGW amplifiers and currently available cameras of up to 250 kHz line rates, which can be easily accommodated without any change in the experimental setup except the input TTL signal.
## Supplementary Material
See the supplementary material for pulse width and instrument response function measurements, SRPP spectra, signal vs cuvette position and spot size measurements, phasing procedure, linear absorption and emission spectra of Oxazine 170, and estimation of the relaxed 2D spectrum from absorption and emission lineshapes.
## Funding
This work is supported in part by research grants from the Indian Space Research Organization (ISTC/CSS/VT/468); Department of Biotechnology, India (BT/PR38464/BRB/10/1893/2020); Board of Research in Nuclear Sciences (58/20/31/2019-BRNS) and the Science and Engineering Research Board (CRG/2019/003691, CRG/2022/004523).
## Acknowledgments
We thank Prof. Giulio Cerullo (Politecnico di Milano) for details of the specifications of birefringent wedges, and Prof. Minjung Son (Boston University) for initial suggestions regarding CCD line rate cameras. AST acknowledges Prime Minister's Research Fellowship, MoE India. VNB acknowledges research fellowship from DST-Inspire. VT acknowledges the Infosys Young Investigator Fellowship supported by the Infosys Foundation, Bangalore.
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
VT designed the research problem. AST and VNB contributed equally to the work. All authors contributed towards writing the manuscript.
## Data Availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. |
2302.00082 | Adaptive sparseness for correntropy-based robust regression via
automatic relevance determination | Sparseness and robustness are two important properties for many machine
learning scenarios. In the present study, regarding the maximum correntropy
criterion (MCC) based robust regression algorithm, we investigate to integrate
the MCC method with the automatic relevance determination (ARD) technique in a
Bayesian framework, so that MCC-based robust regression could be implemented
with adaptive sparseness. To be specific, we use an inherent noise assumption
from the MCC to derive an explicit likelihood function, and realize the maximum
a posteriori (MAP) estimation with the ARD prior by variational Bayesian
inference. Compared to the existing robust and sparse L1-regularized MCC
regression, the proposed MCC-ARD regression can eradicate the troublesome
tuning for the regularization hyper-parameter which controls the regularization
strength. Further, MCC-ARD achieves superior prediction performance and feature
selection capability than L1-regularized MCC, as demonstrated by a noisy and
high-dimensional simulation study. | Yuanhao Li, Badong Chen, Okito Yamashita, Natsue Yoshimura, Yasuharu Koike | 2023-01-31T20:23:32Z | http://arxiv.org/abs/2302.00082v1 | # Adaptive sparseness for correntropy-based
###### Abstract
Sparseness and robustness are two important properties for many machine learning scenarios. In the present study, regarding the _maximum correntropy criterion_ (MCC) based robust regression algorithm, we investigate integrating the MCC method with the _automatic relevance determination_ (ARD) technique in a Bayesian framework, so that MCC-based robust regression can be implemented with '_adaptive sparseness_'. To be specific, we use an inherent noise assumption from the MCC to derive an explicit likelihood function, and realize the maximum a posteriori (MAP) estimation with the ARD prior by variational Bayesian inference. Compared to the existing robust and sparse \(L_{1}\)-regularized MCC regression, the proposed MCC-ARD regression eliminates the troublesome tuning of the regularization hyper-parameter which controls the regularization strength. Further, MCC-ARD achieves superior prediction performance and feature selection capability compared to \(L_{1}\)-regularized MCC, as demonstrated by a noisy and high-dimensional simulation study.
adaptive sparseness, robustness, maximum correntropy criterion, automatic relevance determination, variational Bayes
## I Introduction
Regression aims at building a prediction model for continuous variables from the input of covariate variables or some derived features; it is also closely related to system identification, adaptive filtering, and so on. Consider the following canonical linear-in-parameter (LIP) model with additive noise
\[t=\varPhi(\mathbf{x})\mathbf{w}+\epsilon \tag{1}\]
where \(t\) denotes the model output, \(\varPhi(\mathbf{x})\) is a mapping of input \(\mathbf{x}\), \(\mathbf{w}\) is the model parameter, while \(\epsilon\) denotes the noise term. If we exclude the utilization of the mapping function \(\varPhi(\cdot)\), LIP model degenerates to the linear regression model
\[t=\mathbf{x}\mathbf{w}+\epsilon \tag{2}\]
in which one can suppose that \(\mathbf{x}=(x_{1},x_{2},\cdots,x_{D})\in\mathbb{R}^{1\times D}\) is the \(D\)-dimensional covariate while \(\mathbf{w}=(w_{1},w_{2},\cdots,w_{D})^{T}\)\(\in\mathbb{R}^{D\times 1}\) is the model parameter. \(T\) denotes the transpose for a vector or matrix. The most common method for learning \(\mathbf{w}\) is to minimize the expectation of the quadratic error \(e\triangleq t-\mathbf{x}\mathbf{w}\) which refers to the least square (LS) criterion
\[\mathbf{w}=arg\min_{\mathbf{w}}\left\langle e^{2}\right\rangle=arg\min_{ \mathbf{w}}\left\langle(t-\mathbf{x}\mathbf{w})^{2}\right\rangle \tag{3}\]
where \(\left\langle\cdot\right\rangle\) denotes the mathematical expectation. However, the traditional least square method is only effective for well-posed problems. When \(D>N\), where \(N\) is the number of training samples, (3) will result in poor generalization performance. A useful solution is to select a subset of features while pruning the irrelevant ones, which is called sparse learning. In the learned model parameter \(\mathbf{w}\), many components will be zero, so that the corresponding features are pruned. The idealized sparse model is to minimize the \(L_{0}\)-regularized cost function
\[\mathbf{w}=arg\min_{\mathbf{w}}\left\langle e^{2}\right\rangle+\lambda\| \mathbf{w}\|_{0} \tag{4}\]
where \(\lambda\) is a hyper-parameter tuning the regularization strength, while \(\|\mathbf{w}\|_{0}\) is the \(L_{0}\)-norm of \(\mathbf{w}\), denoting the number of non-zero components in \(\mathbf{w}\). Since solving (4) is NP-hard, the \(L_{0}\)-norm is usually replaced with its tightest _convex_ relaxation, the \(L_{1}\)-norm [1], which leads to the LASSO algorithm [2]
\[\mathbf{w}=arg\min_{\mathbf{w}}\left\langle e^{2}\right\rangle+\lambda\| \mathbf{w}\|_{1} \tag{5}\]
which has been well studied and discussed for sparse learning [3, 4, 5, 6]. However, the hyper-parameter \(\lambda\) is usually a nuisance that requires manual tuning or time-consuming cross-validation.
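As an illustration of this tuning burden, a minimal scikit-learn sketch of the \(L_{1}\)-regularized problem (5) is given below; the data and the choice \(\lambda\) (alpha) = 0.05 are illustrative assumptions.

```python
# An illustration of the LASSO problem (5) using scikit-learn's Lasso; the
# regularization strength alpha (the lambda above) must be tuned by hand or CV.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))
w_true = np.zeros(50)
w_true[:5] = rng.standard_normal(5)          # only 5 relevant features
t = X @ w_true + 0.05 * rng.standard_normal(100)

model = Lasso(alpha=0.05).fit(X, t)
print("non-zero coefficients:", np.count_nonzero(model.coef_))
```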
An alternative way to obtain a sparse model is the automatic relevance determination (ARD) technique [7], which has been receiving growing attention since the proposal of the relevance vector machine (RVM) [8, 9, 10], a Bayesian treatment of the support vector machine (SVM). ARD assumes a hierarchical prior distribution for \(\mathbf{w}\), and infers the posterior distribution of \(\mathbf{w}\), combined with the likelihood function, in the Bayesian framework. ARD has been proved to be a tighter approximation of the \(L_{0}\)-norm than the \(L_{1}\)-norm, thus providing superior sparsifying capability, although it is _non-convex_ in its regularization form [1]. More importantly, ARD can infer all the unknown variables without the regularization hyper-parameter \(\lambda\), thus realizing '_adaptive sparseness_'.
On the other hand, the least-square criterion implicitly uses a Gaussian assumption on the noise \(\epsilon\), which need not be the truth in practice. In particular, least-square methods can suffer
serious degeneration in the presence of outliers. The _maximum correntropy criterion_ (MCC) is highly efficient for noisy data analysis [11, 12, 13, 14], and has also been used for robust sparse learning in combination with \(L_{1}\)-regularization [15, 16, 17] or other regularization terms [18, 19]. Yet, as mentioned before, these methods need careful tuning of the regularization hyper-parameters. In this work, we introduce the Bayesian ARD technique to MCC-based robust regression for '_adaptive sparseness_', which remains a gap in the literature.
The remainder of this paper is organized as follows. Section II reviews the ARD-based sparse regression algorithm with the Gaussian assumption for the noise term \(\epsilon\). In Section III, we give a brief introduction to MCC and show the assumption on the noise distribution when MCC is used as the regression objective function. In Section IV, we propose to employ MCC as the likelihood function with the ARD technique in the Bayesian framework for robust sparse regression. In Section V, we show some experimental results to demonstrate the superiority of the proposed method. In Section VI, we provide some discussions. Finally, Section VII concludes this paper.
## II ARD-Based Sparse Regression
Assuming a zero-mean Gaussian distribution for the noise term with variance \(\sigma^{2}\), we obtain the probability density function (PDF) for \(t\) as \(p(t|\mathbf{x})=\mathcal{N}(t|\mathbf{x}\mathbf{w},\sigma^{2})\), which is a Gaussian distribution over \(t\) with mean \(\mathbf{x}\mathbf{w}\) and variance \(\sigma^{2}\). With an input-target dataset \(\{\mathbf{x}_{n},t_{n}\}_{n=1}^{N}\) and assuming the independence of \(t_{n}\), we can write the likelihood function
\[p(\mathbf{t}|\mathbf{w},\sigma^{2})=(2\pi\sigma^{2})^{-N/2}\exp\{-\frac{1}{2 \sigma^{2}}\|\mathbf{t}-\mathbf{X}\mathbf{w}\|^{2}\} \tag{6}\]
in which \(\mathbf{t}=(t_{1},t_{2},\cdots,t_{N})^{T}\in\mathbb{R}^{N\times 1}\), and \(\mathbf{X}\in\mathbb{R}^{N\times D}\) denotes the collection of the \(\mathbf{x}_{n}\), each row of which represents a sample. For simplicity, the dependence upon the covariate matrix \(\mathbf{X}\) is omitted in (6) and in subsequent expressions. The maximum likelihood estimation (MLE) of (6) is equal to the least square criterion, which exhibits the following closed-form solution
\[\mathbf{w}=(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{t} \tag{7}\]
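A one-line sketch of (7) (illustrative, assuming \(\mathbf{X}^{T}\mathbf{X}\) is invertible):

```python
# The closed-form LS solution (7); it assumes X^T X is invertible and
# therefore fails in the ill-posed D > N regime discussed below.
import numpy as np

def ls_solution(X, t):
    return np.linalg.solve(X.T @ X, X.T @ t)   # w = (X^T X)^{-1} X^T t
```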
If \(D>N\), the solution (7) will be ill-posed. To select a subset of features for the regression task, one could employ the ARD technique that assigns the zero-mean and anisotropic Gaussian distribution for each model parameter with individual inverse variances \(\mathbf{a}=(a_{1},a_{2},\cdots,a_{D})\)
\[p(\mathbf{w}|\mathbf{a})=\prod_{d=1}^{D}p(w_{d}|a_{d})=\prod_{d=1}^{D}\mathcal{ N}(w_{d}|0,a_{d}^{-1}) \tag{8}\]
where \(a_{d}\) (the inverse variance) is called the relevance parameter, which controls the possible range of the corresponding \(w_{d}\). Each relevance parameter is then assigned the non-informative Jeffreys hyper-prior (which is actually an _improper_ prior1 [20])
Footnote 1: Note that this prior is in fact an _improper_ prior since it is not normalizable (the integral is infinite).
\[p(\mathbf{a})=\prod_{d=1}^{D}p(a_{d})=\prod_{d=1}^{D}a_{d}^{-1} \tag{9}\]
The prior distribution for noise variance \(\sigma^{2}\) is usually assumed to be non-informative as well
\[p(\sigma^{2})=(\sigma^{2})^{-1} \tag{10}\]
Having defined the likelihood and also the prior in (6)(8)-(10), we can write analytically the posterior distribution over \(\mathbf{w}\)
\[p(\mathbf{w}|\mathbf{t},\mathbf{a},\sigma^{2})=\frac{p(\mathbf{ t}|\mathbf{w},\sigma^{2})p(\mathbf{w}|\mathbf{a})}{p(\mathbf{t}|\mathbf{a}, \sigma^{2})}=\frac{p(\mathbf{t}|\mathbf{w},\sigma^{2})p(\mathbf{w}|\mathbf{a}) }{\int p(\mathbf{t}|\mathbf{w},\sigma^{2})p(\mathbf{w}|\mathbf{a})d\mathbf{w}}\] \[=(2\pi)^{-D/2}|\boldsymbol{\Sigma}|^{-1/2}\exp\{-\frac{1}{2}( \mathbf{w}-\boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{w}- \boldsymbol{\mu})\} \tag{11}\]
in which the covariance and mean for \(\mathbf{w}\) are computed by
\[\boldsymbol{\Sigma} =(\sigma^{-2}\mathbf{X}^{T}\mathbf{X}+\mathbf{A})^{-1} \tag{12}\] \[\boldsymbol{\mu} =\sigma^{-2}\boldsymbol{\Sigma}\mathbf{X}^{T}\mathbf{t}\]
with \(\mathbf{A}=diag(a_{1},a_{2},\cdots,a_{D})\). To obtain the whole posterior distribution
\[p(\mathbf{w},\mathbf{a},\sigma^{2}|\mathbf{t})=p(\mathbf{w}|\mathbf{t}, \mathbf{a},\sigma^{2})p(\mathbf{a},\sigma^{2}|\mathbf{t}) \tag{13}\]
one notes that the hyper-parameter posterior distribution could be denoted by \(p(\mathbf{a},\sigma^{2}|\mathbf{t})\propto p(\mathbf{t}|\mathbf{a},\sigma^{2} )p(\mathbf{a})p(\sigma^{2})\). Utilizing the non-informative hyper-priors, we only need to optimize \(\mathbf{a}\) and \(\sigma^{2}\) so that the _marginal likelihood_\(p(\mathbf{t}|\mathbf{a},\sigma^{2})\) is maximized
\[p(\mathbf{t}|\mathbf{a},\sigma^{2})= \int p(\mathbf{t}|\mathbf{w},\sigma^{2})p(\mathbf{w}|\mathbf{a})d \mathbf{w} \tag{14}\] \[= (2\pi)^{-D/2}|\sigma^{2}\mathbf{I}+\mathbf{X}\mathbf{A}^{-1} \mathbf{X}^{T}|^{-1/2}\] \[\times\exp\{-\frac{1}{2}\mathbf{t}^{T}(\sigma^{2}\mathbf{I}+ \mathbf{X}\mathbf{A}^{-1}\mathbf{X}^{T})^{-1}\mathbf{t}\}\]
To maximize (14), setting the differentiation to zero yields the following update
\[a_{d}=\frac{\gamma_{d}}{\mu_{d}^{2}} \tag{15}\]
in which \(\mu_{d}\) is the \(d\)-th component of \(\boldsymbol{\mu}\) and \(\gamma_{d}\) is defined by \(\gamma_{d}\triangleq 1-a_{d}\Sigma_{dd}\) with \(\Sigma_{dd}\) the \(d\)-th diagonal element of \(\boldsymbol{\Sigma}\). \(\sigma^{2}\) is updated by
\[\sigma^{2}=\frac{\|\mathbf{t}-\mathbf{X}\boldsymbol{\mu}\|^{2}}{N-\sum_{d=1}^{D} \gamma_{d}} \tag{16}\]
Updating (12), (15) and (16) alternately, we obtain the _maximum a posteriori_ (MAP) estimations for all the unknown variables. In particular, during the inference, those \(a_{d}\) which correspond to irrelevant features will diverge to arbitrarily large numbers, so that the probability density of the corresponding \(w_{d}\) concentrates at the origin, thus pruning the irrelevant features and realizing sparse regression.
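A compact sketch of this LS-ARD iteration is given below; the initialization, iteration count and the small constants guarding divisions are our assumptions, not prescribed by the derivation.

```python
# A sketch of the LS-ARD iteration, alternating (12), (15) and (16);
# features with a_d >= a_max are treated as pruned.
import numpy as np

def ls_ard(X, t, n_iter=200, a_max=1e6):
    N, D = X.shape
    a = np.ones(D)
    sigma2 = np.var(t)
    for _ in range(n_iter):
        keep = a < a_max                                  # surviving features
        Xk = X[:, keep]
        Sigma = np.linalg.inv(Xk.T @ Xk / sigma2 + np.diag(a[keep]))   # (12)
        mu = Sigma @ Xk.T @ t / sigma2                                  # (12)
        gamma = 1.0 - a[keep] * np.diag(Sigma)
        a[keep] = gamma / (mu ** 2 + 1e-12)                             # (15)
        sigma2 = np.sum((t - Xk @ mu) ** 2) / max(N - gamma.sum(), 1e-12)  # (16)
    w = np.zeros(D)
    w[keep] = mu
    return w, a, sigma2
```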
The above-described optimization involves maximization of the _marginal likelihood_ \(p(\mathbf{t}|\mathbf{a},\sigma^{2})\) (14), which is known as the _type-II maximum likelihood_ [20]. Moreover, the model can be optimized in other ways. For example, Expectation-Maximization (EM) could be employed by regarding the relevance parameters \(\mathbf{a}\) as hidden variables [3]. One could also use the variational Bayesian (VB) method with a surrogate function to approximate the posterior distribution of every random variable [10]. Since the conventional ARD-based sparse regression is derived under
the assumption of Gaussian noise (6), it may suffer significant performance degeneration in a realistic non-Gaussian scenario, in particular in the presence of outliers [11, 14, 21, 22].
## III Maximum Correntropy Criterion
### _Maximum Correntropy Criterion_
Correntropy was originally developed as a generalized form of correlation function for stochastic processes, which has been further extended as a similarity measure between two arbitrary variables for machine learning and signal processing [11]. For two variables \(A\) and \(B\) with joint distribution \(p_{A,B}(a,b)\), their correntropy similarity is defined by
\[\mathcal{V}(A,B)\triangleq\langle k(A,B)\rangle=\int k(a,b)dp_{A,B}(a,b) \tag{17}\]
where \(k(\cdot,\cdot)\) is a shift-invariant _Mercer_ kernel which is usually implemented with the Gaussian kernel function
\[k_{h}(a,b)\triangleq\exp(-\frac{(a-b)^{2}}{2h}) \tag{18}\]
where \(h>0\) denotes the kernel bandwidth, controlling all the robust property for correntropy. Given \(N\) samples of variables \(A\) and \(B\), the empirical estimation of correntropy is computed by
\[\hat{\mathcal{V}}(A,B) =\frac{1}{N}\sum_{n=1}^{N}k_{h}(a_{n},b_{n}) \tag{19}\] \[=\frac{1}{N}\sum_{n=1}^{N}\exp(-\frac{(a_{n}-b_{n})^{2}}{2h})\]
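A direct implementation of the empirical estimator (19):

```python
# The empirical correntropy estimator (19) with a Gaussian kernel of
# bandwidth h; h controls the locality, and hence the robustness, of the measure.
import numpy as np

def correntropy(a, b, h):
    e = np.asarray(a) - np.asarray(b)
    return np.mean(np.exp(-e ** 2 / (2.0 * h)))
```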
In a supervised machine learning task, maximizing the correntropy between the model prediction and the true target exhibits exceptional robustness with respect to non-Gaussian noises, in particular to outliers; this is referred to as the _maximum correntropy criterion_ (MCC). The robustness arises because correntropy is a _local_ measure which is mainly determined by the Gaussian kernel function \(k_{h}\) along \(A=B\). Correntropy was also proved to extract more statistical moments from the data and has a close relation with Renyi's entropy of the second order [11, 23].
### _Noise Assumption Under MCC_
We now revisit the noise assumption inherent in MCC. Utilizing MCC for the linear regression model with \(N\) samples yields
\[\mathbf{w} =arg\max_{\mathbf{w}}\frac{1}{N}\sum_{n=1}^{N}\exp(-\frac{(t_{n}- \mathbf{x}_{n}\mathbf{w})^{2}}{2h}) \tag{20}\] \[=arg\max_{\mathbf{w}}\frac{1}{N}\sum_{n=1}^{N}\exp(-\frac{e_{n}^{ 2}}{2h})\]
where \(e_{n}\triangleq t_{n}-\mathbf{x}_{n}\mathbf{w}\) denotes the \(n\)-th prediction error. If we omit the fixed number \(N\), we can find MCC will be equivalent to a multiplication form through an exponential function
\[\mathbf{w} =arg\max_{\mathbf{w}}\sum_{n=1}^{N}\exp(-\frac{e_{n}^{2}}{2h}) \tag{21}\] \[=arg\max_{\mathbf{w}}\prod_{n=1}^{N}\exp\{\exp(-\frac{e_{n}^{2}} {2h})\}\] \[=arg\max_{\mathbf{w}}\prod_{n=1}^{N}\exp\{\exp(-\frac{(t_{n}- \mathbf{x}_{n}\mathbf{w})^{2}}{2h})\}\]
which can, remarkably, be regarded as the maximization of a likelihood function if we assume independence for each \(t_{n}\) and define the following PDF for the noise distribution
\[\mathcal{C}(e|0,h)\triangleq\exp\{\exp(-\frac{e^{2}}{2h})\} \tag{22}\]
in which \(\mathcal{C}(e|0,h)\) is defined as a correntropy-aware PDF over \(e\) with zero mean and shape parameter \(h\). Utilizing such an assumption on the noise distribution, we obtain the PDF of \(t\) as \(p(t|\mathbf{x})=\mathcal{C}(t|\mathbf{x}\mathbf{w},h)\). Hence, assuming independence for \(t_{n}\), one finds that the MLE based on the defined PDF \(\mathcal{C}\) is equivalent to the original MCC (21).
It is important to investigate the properties of the defined PDF \(\mathcal{C}\). Unsurprisingly, it is not a 'well-defined' PDF, since its integral is infinite; it is thus an _improper_ distribution [20]. Even more, when \(e\) is far from the origin, the probability density defined by \(\mathcal{C}(e|0,h)\) is close to \(1\), rather than the usual \(0\), which makes it a _deviant_ PDF. Nevertheless, in the present study, we demonstrate empirically that such a _deviant_ MCC-aware noise distribution can largely improve the robustness of an ARD-based sparse regression model. We show some examples of \(\mathcal{C}(e|0,h)\) in Fig. 1 with different \(h\) values. A further discussion of this _deviant_ noise assumption is given in Section VI-A.
## IV MCC-ARD for Robust Sparse Regression
In this section, we desire to integrate the MCC-based robust regression with the ARD technique under a Bayesian inference
Fig. 1: MCC-aware noise distribution \(\mathcal{C}(e|0,h)\) with different \(h\) values.
framework, using the correntropy-aware noise assumption (22) to derive the likelihood function, which is written by
\[\begin{split} p(\mathbf{t}|\mathbf{w},h)&=\prod_{n=1}^ {N}\mathcal{C}(t_{n}|\mathbf{x}_{n}\mathbf{w},h)\\ &=\prod_{n=1}^{N}\exp\{\exp(-\frac{(t_{n}-\mathbf{x}_{n}\mathbf{ w})^{2}}{2h})\}\end{split} \tag{23}\]
However, the utilization of the MCC-aware likelihood function (23) obstructs the analytical derivation of the posterior distribution \(p(\mathbf{w}|\mathbf{t},\mathbf{a},h)\), in contrast to \(p(\mathbf{w}|\mathbf{t},\mathbf{a},\sigma^{2})\) (11) under the Gaussian noise assumption, since the likelihood function (23) is not conjugate to the Gaussian priors \(p(\mathbf{w}|\mathbf{a})\) (8). Therefore, we resort to variational Bayesian inference [20], which can approximate the posterior distribution of each variable. For simplicity, we first treat the kernel bandwidth \(h\) as a fixed parameter. Section VI-B gives a discussion about the treatment of \(h\) as a random variable.
The variational Bayesian inference defines a surrogate PDF \(Q(\mathbf{w},\mathbf{a})\) to approximate the posterior distribution \(p(\mathbf{w},\mathbf{a}|\mathbf{t},h)\), which is furthermore factorized assuming independence between \(\mathbf{w}\) and \(\mathbf{a}\), \(Q(\mathbf{w},\mathbf{a})=Q_{\mathbf{w}}(\mathbf{w})Q_{\mathbf{a}}(\mathbf{a})\), and maximizes the following free energy \(F(Q_{\mathbf{w}}(\mathbf{w})Q_{\mathbf{a}}(\mathbf{a}))\)
\[\begin{split}& F(Q_{\mathbf{w}}(\mathbf{w})Q_{\mathbf{a}}( \mathbf{a}))\triangleq\\ &\int Q_{\mathbf{w}}(\mathbf{w})Q_{\mathbf{a}}(\mathbf{a})\log \frac{p(\mathbf{w},\mathbf{a},\mathbf{t},h)}{Q_{\mathbf{w}}(\mathbf{w})Q_{ \mathbf{a}}(\mathbf{a})}d\mathbf{w}d\mathbf{a}\end{split} \tag{24}\]
which is maximized if and only if \(Q(\mathbf{w},\mathbf{a})\) is equal to the posterior distribution \(p(\mathbf{w},\mathbf{a}|\mathbf{t},h)\). The logarithmic forms of \(Q_{\mathbf{w}}(\mathbf{w})\) and \(Q_{\mathbf{a}}(\mathbf{a})\) are expressed by
\[\begin{split}\log Q_{\mathbf{w}}(\mathbf{w})&= \left\langle\log p(\mathbf{w},\mathbf{a},\mathbf{t},h)\right\rangle_{Q_{ \mathbf{a}}(\mathbf{a})}\\ \log Q_{\mathbf{a}}(\mathbf{a})&=\left\langle\log p (\mathbf{w},\mathbf{a},\mathbf{t},h)\right\rangle_{Q_{\mathbf{w}}(\mathbf{w})} \end{split} \tag{25}\]
where \(\left\langle\cdot\right\rangle_{Q}\) means the expectation with respect to PDF \(Q\). The log joint distribution \(\log p(\mathbf{w},\mathbf{a},\mathbf{t},h)\) is
\[\begin{split}&\log p(\mathbf{w},\mathbf{a},\mathbf{t},h)= \log p(\mathbf{t}|\mathbf{w},h)+\log p(\mathbf{w}|\mathbf{a})+\log p(\mathbf{ a})\\ =&\sum_{n=1}^{N}\exp(-\frac{(t_{n}-\mathbf{x}_{n} \mathbf{w})^{2}}{2h})-\frac{1}{2}\mathbf{w}^{T}\mathbf{A}\mathbf{w}-\frac{1}{2 }\log|\mathbf{A}|+const\end{split} \tag{26}\]
Gathering the relevant terms with respect to \(\mathbf{w}\) and \(\mathbf{a}\), one then obtains
\[\begin{split}\log Q_{\mathbf{w}}(\mathbf{w})&=\sum_{n =1}^{N}\exp(-\frac{(t_{n}-\mathbf{x}_{n}\mathbf{w})^{2}}{2h})-\frac{1}{2} \mathbf{w}^{T}\left\langle\mathbf{A}\right\rangle_{Q_{\mathbf{a}}(\mathbf{a})} \mathbf{w}\\ \log Q_{\mathbf{a}}(\mathbf{a})&=-\frac{1}{2}\sum_{d =1}^{D}a_{d}\left\langle w_{d}^{2}\right\rangle_{Q_{\mathbf{w}}(\mathbf{w})} -\frac{1}{2}\sum_{d=1}^{D}\log a_{d}\end{split} \tag{27}\]
However, \(Q_{\mathbf{w}}(\mathbf{w})\) cannot be expressed in an analytical form. Therefore, we further utilize the Laplace approximation to \(\log Q_{\mathbf{w}}(\mathbf{w})\) through a quadratic form:
\[\begin{split}\log Q_{\mathbf{w}}(\mathbf{w})\approx\log Q_{ \mathbf{w}}(\mathbf{w}^{*})-\frac{1}{2}(\mathbf{w}-\mathbf{w}^{*})^{T} \mathbf{H}(\mathbf{w}^{*})(\mathbf{w}-\mathbf{w}^{*})\end{split} \tag{28}\]
in which \(\mathbf{w}^{*}\) is the maximum point of \(\log Q_{\mathbf{w}}(\mathbf{w})\), and \(\mathbf{H}(\mathbf{w}^{*})\) denotes the _negative_ Hessian matrix of \(\log Q_{\mathbf{w}}(\mathbf{w})\) at \(\mathbf{w}^{*}\)
\[\begin{split}&\mathbf{H}(\mathbf{w})=-\frac{\partial^{2}\log Q_ {\mathbf{w}}(\mathbf{w})}{\partial\mathbf{w}\partial\mathbf{w}^{T}}\\ &=-\frac{1}{h}\sum_{n=1}^{N}\mathbf{x}_{n}^{T}\left\{\exp(-\frac{ e_{n}^{2}}{2h})(\frac{e_{n}^{2}}{h}-1)\right\}\mathbf{x}_{n}+\left\langle \mathbf{A}\right\rangle_{Q_{\mathbf{a}}(\mathbf{a})}\end{split} \tag{29}\]
Thus by approximating \(\log Q_{\mathbf{w}}(\mathbf{w})\) with a quadratic form (28), \(Q_{\mathbf{w}}(\mathbf{w})\) can be regarded as a Gaussian distribution \(Q_{\mathbf{w}}(\mathbf{w})=\mathcal{N}(\mathbf{w}|\mathbf{w}^{*},\mathbf{H}( \mathbf{w}^{*})^{-1})\). The expectation \(\left\langle w_{d}^{2}\right\rangle\) can be calculated by
\[\left\langle w_{d}^{2}\right\rangle_{Q_{\mathbf{w}}(\mathbf{w})}=w_{d}^{*2}+s_ {d}^{2} \tag{30}\]
where \(s_{d}^{2}\) is the \(d\)-th diagonal element in \(\mathbf{H}(\mathbf{w}^{*})^{-1}\). As a result, \(\log Q_{\mathbf{a}}(\mathbf{a})\) can be expressed by
\[\log Q_{\mathbf{a}}(\mathbf{a})=-\frac{1}{2}\sum_{d=1}^{D}\{a_{d}(w_{d}^{*2} +s_{d}^{2})+\log a_{d}\} \tag{31}\]
through which \(Q_{\mathbf{a}}(\mathbf{a})\) could be regarded to obey the following Gamma distribution
\[Q_{\mathbf{a}}(\mathbf{a})=\prod_{d=1}^{D}Q_{a_{d}}(a_{d})=\prod_{d=1}^{D} \Gamma(a_{d}|a_{d}^{*},\frac{1}{2}) \tag{32}\]
where \(\Gamma(a_{d}|a_{d}^{*},\frac{1}{2})\) denotes a Gamma distribution over \(a_{d}\) with the degree of freedom \(\frac{1}{2}\) and the expectation \(a_{d}^{*}\) that is
\[a_{d}^{*}=\frac{1}{w_{d}^{*2}+s_{d}^{2}} \tag{33}\]
which can be in turn substituted into \(\left\langle\mathbf{A}\right\rangle_{Q_{\mathbf{a}}(\mathbf{a})}\) for \(\log Q_{\mathbf{w}}(\mathbf{w})\) in (27)-(29).
By updating \(\log Q_{\mathbf{w}}(\mathbf{w})\) and \(\log Q_{\mathbf{a}}(\mathbf{a})\) alternately, the free energy \(F(Q_{\mathbf{w}}(\mathbf{w})Q_{\mathbf{a}}(\mathbf{a}))\) will be maximized, so that one obtains the MAP estimations for \(\mathbf{w}\) and \(\mathbf{a}\). To optimize \(\mathbf{w}\), one can see that \(\log Q_{\mathbf{w}}(\mathbf{w})\) is exactly the \(L_{2}\)-regularized MCC objective with the current \(\mathbf{a}\) values (27), which can be efficiently optimized by the fixed-point update with fast convergence [24]
\[\mathbf{w}=(\mathbf{X}^{T}\mathbf{\Psi}\mathbf{X}+\mathbf{A})^{-1}\mathbf{X}^{T }\mathbf{\Psi}\mathbf{t} \tag{34}\]
where \(\mathbf{\Psi}\) is an \(N\times N\) diagonal matrix with diagonal elements \(\Psi_{nn}=\exp(-e_{n}^{2}/2h)\). Having found the maximum point \(\mathbf{w}^{*}\) of \(\log Q_{\mathbf{w}}(\mathbf{w})\), one can optimize \(\mathbf{a}\) by (33), while the following update gives faster convergence [1, 9]
\[a_{d}^{*}=\frac{1-a_{d}^{*}s_{d}^{2}}{w_{d}^{*2}} \tag{35}\]
which can be regarded as a fixed-point form of (33). During training, some \(a_{d}\) will diverge to infinity, as discussed in Section II. We can employ an upper threshold and prune the corresponding features if their relevance parameter \(a_{d}\) exceeds this upper limit. The proposed MCC-ARD method for robust sparse regression is summarized in Algorithm 1.
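A simplified sketch of Algorithm 1 is given below; the iteration counts, initialization, clamping of \(a_{d}\) and other numerical safeguards are our assumptions and may need tuning in practice.

```python
# A sketch of Algorithm 1 (MCC-ARD): the fixed-point update (34) for w is
# alternated with the relevance update (35), with the variances s_d^2 taken
# from the inverse of the negative Hessian (29).
import numpy as np

def mcc_ard(X, t, h, n_iter=100, inner=20, a_max=1e6):
    N, D = X.shape
    a = np.ones(D)
    w = np.zeros(D)
    for _ in range(n_iter):
        keep = a < a_max
        Xk, wk, ak = X[:, keep], w[keep], a[keep]
        for _ in range(inner):                            # fixed point (34)
            psi = np.exp(-(t - Xk @ wk) ** 2 / (2 * h))
            wk = np.linalg.solve(Xk.T @ (psi[:, None] * Xk) + np.diag(ak),
                                 Xk.T @ (psi * t))
        e = t - Xk @ wk
        g = np.exp(-e ** 2 / (2 * h)) * (e ** 2 / h - 1.0)
        H = -(Xk.T @ (g[:, None] * Xk)) / h + np.diag(ak)  # negative Hessian (29)
        s2 = np.diag(np.linalg.inv(H))
        ak = (1.0 - ak * s2) / (wk ** 2 + 1e-12)           # (35)
        w[keep], a[keep] = wk, np.maximum(ak, 1e-12)
    w[a >= a_max] = 0.0                                    # pruned features
    return w, a
```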
## V Experiments
We assess the proposed MCC-ARD algorithm on a synthetic dataset, comparing it with the conventional ARD-based sparse regression introduced in Section II (denoted by LS-ARD) and the \(L_{1}\)-regularized MCC [15, 16, 17] (MCC-\(L_{1}\)) optimized with an EM method [3, 6]. The kernel bandwidth \(h\) for both MCC-ARD and MCC-\(L_{1}\) is selected by cross-validation, while the latter uses another cross-validation for the regularization parameter \(\lambda\). The pruning threshold \(a_{\max}\) is set to \(10^{6}\) for both LS-ARD and MCC-ARD.
We generate a noisy and high-dimensional synthetic dataset with the following method. We first generate 300 i.i.d. training samples and 300 i.i.d. testing samples, which obey the \(1000\)-dimensional standard normal distribution. To obtain the model output, we employ a sparse true solution \(\mathbf{w}^{*}\) which is a \(1000\)-dimensional vector where only the first \(30\) dimensions are non-zero components and the other \(970\) components are zero
\[\mathbf{w}^{*}=[\overbrace{w_{1}^{*},w_{2}^{*},\cdots,w_{30}^{*},\underbrace{ 0,0,0,0,0,0,0,\cdots,0}_{970~{}zero-components}}^{1000~{}dimensions}]^{T} \tag{36}\]
in which the non-zero elements were randomly generated from the univariate standard normal distribution. The model output is obtained with the linear regression model (2). To assess the robustness of each algorithm, we use the following distribution for the additive noise term \(\epsilon\)
\[\epsilon\sim(1-\psi)\mathcal{N}(\epsilon|0,0.05)+\psi\mathcal{L}(\epsilon|0,\tau) \tag{37}\]
in which \(\mathcal{L}(\epsilon|0,\tau)\) denotes the Laplace distribution over \(\epsilon\) with zero mean and the scale parameter \(\tau\) to imitate outliers, and \(\psi\) means the proportion of outliers among the additive noise. We employ a popular setting for robustness evaluation, where only the training dataset is contaminated with the above corruption, whereas the noise term is excluded for the testing data, as was advised in [25]. We consider the following values for the scale parameter \(\tau\): \(2\), \(5\), and \(10\), indicating increasing strengths for the outliers. The outlier proportion \(\psi\) is increased from 0 to 1.0 with a step 0.05. The regression performance is evaluated by two classical regression performance indicators, correlation coefficient (\(r\)) and root mean squared error (RMSE), which are computed respectively by
\[r=Cov(\mathbf{\hat{t}},\mathbf{t})/\sqrt{Var(\mathbf{\hat{t}})Var(\mathbf{t})},\qquad\text{RMSE}=\sqrt{\frac{1}{N}\|\mathbf{\hat{t}}-\mathbf{t}\|^{2}} \tag{38}\]
where \(Cov(\cdot,\cdot)\) and \(Var(\cdot)\) denote the covariance and variance, respectively, while \(\mathbf{\hat{t}}\) is the collection of the model predictions. We present the prediction performance of each algorithm under the above simulation settings, averaged over 100 Monte-Carlo repetitions, in Fig. 2. One can observe that the proposed MCC-ARD largely outperforms the conventional LS-ARD, with significantly higher \(r\) and lower RMSE, when the high-dimensional data are contaminated by the non-Gaussian noises, for each scale parameter \(\tau\). One further perceives that the proposed MCC-ARD achieves higher \(r\) than the existing MCC-\(L_{1}\) for each scale parameter \(\tau\), and lower RMSE for \(\tau=2\) and \(5\). MCC-ARD and MCC-\(L_{1}\) give
Fig. 2: Correlation coefficient (\(r\)) and root mean squared error (RMSE) with the noisy and high-dimensional dataset under different outlier proportions and scale parameters. The results are averaged across 100 Monte-Carlo repetitions where the error bars represent the standard deviations.
similar RMSE when \(\tau=10\). When \(\tau\) becomes larger than \(10\), the conclusions of the performance comparison are analogous to the case of \(\tau=10\). We would like to emphasize that the proposed MCC-ARD method has only one hyper-parameter \(h\) to be tuned carefully, whereas MCC-\(L_{1}\) needs to adjust two important hyper-parameters, namely, the kernel size \(h\) and the regularization parameter \(\lambda\).
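For completeness, the two metrics of (38) in code form (a minimal sketch):

```python
# The evaluation metrics of (38): correlation coefficient and RMSE.
import numpy as np

def metrics(t_hat, t):
    r = np.corrcoef(t_hat, t)[0, 1]
    rmse = np.sqrt(np.mean((np.asarray(t_hat) - np.asarray(t)) ** 2))
    return r, rmse
```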
On the other hand, we also consider the feature selection on the high-dimensional dataset in the presence of outliers, where we can evaluate the selection quality quantitatively because the ground-truth 'relevant'/'irrelevant' label for each dimension is known. The feature selection can be viewed as an unbalanced classification task in which we have 30 'relevant' features and 970 'irrelevant' features. In the trained regression models, the pruned dimensions are predicted as 'irrelevant' features, while the retained ones with non-zero model parameters are regarded as 'relevant'. The confusion matrix for this classification problem is illustrated in Fig. 3. We utilize a comprehensive performance indicator, the F1-score, to evaluate this unbalanced problem
\[F_{1}=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{39}\]
which is the harmonic mean of \(Precision=TP/(TP+FP)\) and \(Recall=TP/(TP+FN)\). Fig. 4 illustrates the number of selected features and F1-score of feature selection for each algorithm. One can observe that when the data is contaminated by the outliers, the number of selected features by MCC-ARD is closer to the ground truth of \(30\) relevant features, compared with the conventional LS-ARD and existing MCC-\(L_{1}\). Notably MCC-ARD reveals significantly higher F1-score in the feature selection than other two algorithms in the presence of outliers, showing exceptional feature selection capability in a noisy and high-dimensional scenario. Even more, MCC-ARD also gives higher F1-score without outlier contamination (proportion=\(0\)). Remarkably, when the outlier scale parameter equals \(5\) or \(10\), a small outlier proportion (e.g. \(0.05\)) improves largely the F1-score for feature selection for the proposed MCC-ARD, which seems rather surprising and necessitates a further investigation to interpret this effect.
## VI Discussion
### _MCC-Aware Noise Assumption_
It is indispensable to discuss whether the MCC-aware noise assumption \(\mathcal{C}(e|0,h)\) (22) is adequate for use in a robust regression model from a Bayesian perspective. Conventionally, an _improper_ distribution, i.e. a non-normalizable PDF, can only be permitted for a prior distribution (and the resultant posterior distribution) in a classical Bayesian regime [20]. To the best of our knowledge, this is the first time the likelihood function (equivalently, the noise distribution) has been utilized with such a _deviant_ distribution \(\mathcal{C}(e|0,h)\), which does not even converge to \(0\) far from the origin. To verify the validity of such a _deviant_ noise assumption, we define the following noise distribution
\[\mathcal{C}^{\prime}(e|0,h)\triangleq\exp\{\exp(-\frac{e^{2}}{2h})\}-1 \tag{40}\]
which is a simple translation of \(\mathcal{C}(e|0,h)\) towards the horizontal axis, and can be proved to be a normalizable PDF by an elementary derivation, as shown in Fig. 5. With this _proper_ noise distribution, we conduct a derivation similar to that in Section IV, and compare the experimental results on the identical synthetic dataset from Section V in Fig. 6. One can observe that, for each outlier scale parameter, the _deviant_ MCC-ARD evidently outperforms the _proper_ one. In particular, when the outlier scale parameter is \(10\), the _proper_ MCC-ARD even achieves results similar to the conventional LS-ARD, showing poor robustness compared with the _deviant_ one. Therefore, the validity of the MCC-aware _deviant_ noise distribution \(\mathcal{C}(e|0,h)\) (22) is empirically verified. The robustness of \(\mathcal{C}(e|0,h)\), in our opinion, can be interpreted heuristically as follows.
The prominent characteristic of the _deviant_ \(\mathcal{C}(e|0,h)\) is that its probability density reaches its maximum at the origin while converging to \(1\) as \(e\rightarrow\infty\). In the usual noise assumptions
Fig. 4: Number of selected features and F1-score of each regression algorithm for the high-dimensional dataset.
Fig. 3: Confusion matrix for the feature selection problem.
(e.g. Gaussian), the probability density converges to \(0\) when \(e\) is arbitrarily large, which seems to be a reasonable hypothesis. However, if a dataset is particularly prone to adverse outliers, this hypothesis becomes unreliable, because errors with large values do occur, indicating non-zero probability density even far from the origin. By comparison, our _deviant_ \(\mathcal{C}(e|0,h)\) precisely assumes non-zero density for arbitrarily large errors. Thus, we argue that the MCC-aware \(\mathcal{C}(e|0,h)\) is a more rational noise assumption when the dataset is prone to outliers, as demonstrated by the experimental results. To the best of our knowledge, this is the first time that the exceptional robustness of MCC has been interpreted from the perspective of the noise assumption. Further investigations are underway for more solid theoretical guarantees.
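A quick numerical check of these claims (a sketch using standard numerical integration; the bandwidth value is illustrative): the translated PDF \(\mathcal{C}^{\prime}\) of (40) integrates to a finite constant, while \(\mathcal{C}\) of (22) stays near \(1\) in the tails.

```python
# Numerical check: C'(e|0,h) = exp(exp(-e^2/(2h))) - 1 has a finite integral,
# while C(e|0,h) tends to 1 (not 0) as |e| grows, so its integral diverges.
import numpy as np
from scipy.integrate import quad

h = 1.0                                   # illustrative bandwidth
deviant = lambda e: np.exp(np.exp(-e ** 2 / (2 * h)))      # C(e|0,h), eq. (22)
proper = lambda e: deviant(e) - 1.0                        # C'(e|0,h), eq. (40)

Z, _ = quad(proper, -np.inf, np.inf)      # finite normalizing constant
print("integral of C':", Z)
print("tail values: C(10) =", deviant(10.0), ", C'(10) =", proper(10.0))
```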
### _Kernel Bandwidth Determination_
In this paper, we determined the kernel bandwidth \(h\) through cross-validation, which is a widely employed strategy for MCC-based algorithms [11, 12, 13, 14]. Although the kernel bandwidth \(h\) could be computed directly from kernel density estimation, e.g. by _Silverman's Rule_ [26], this was reported to give poor results in [16]. We wish to investigate how to treat this hyper-parameter as a random variable and integrate it into the Bayesian inference as well. Using the non-informative hyper-prior for \(h\) yields the log joint distribution \(\log p(\mathbf{w},\mathbf{a},\mathbf{t},h)\)
\[\begin{split}&\log p(\mathbf{w},\mathbf{a},\mathbf{t},h)\\ =&\log p(\mathbf{t}|\mathbf{w},h)+\log p(\mathbf{w} |\mathbf{a})+\log p(\mathbf{a})+\log p(h)\\ =&\sum_{n=1}^{N}\exp(-\frac{(t_{n}-\mathbf{x}_{n} \mathbf{w})^{2}}{2h})-\frac{1}{2}\mathbf{w}^{T}\mathbf{A}\mathbf{w}-\frac{1}{ 2}\log|\mathbf{A}|-\log h\end{split} \tag{41}\]
Accordingly, the variational inference becomes
\[\begin{split}&\log Q_{\mathbf{w}}(\mathbf{w})=\left\langle\log p (\mathbf{w},\mathbf{a},\mathbf{t},h)\right\rangle_{Q_{\mathbf{a}}(\mathbf{a})Q _{h}(h)}\\ =&\sum_{n=1}^{N}\left\langle\exp(-\frac{e_{n}^{2}}{2 h})\right\rangle_{Q_{h}(h)}-\frac{1}{2}\mathbf{w}^{T}\left\langle\mathbf{A} \right\rangle_{Q_{\mathbf{a}}(\mathbf{a})}\mathbf{w}\end{split} \tag{42}\]
\[\begin{split}&\log Q_{\mathbf{a}}(\mathbf{a})=\left\langle\log p (\mathbf{w},\mathbf{a},\mathbf{t},h)\right\rangle_{Q_{\mathbf{w}}(\mathbf{w}) Q_{h}(h)}\\ &=-\frac{1}{2}\sum_{d=1}^{D}a_{d}\left\langle w_{d}^{2}\right\rangle_{Q_{ \mathbf{w}}(\mathbf{w})}-\frac{1}{2}\sum_{d=1}^{D}\log a_{d}\end{split} \tag{43}\]
\[\begin{split}&\log Q_{h}(h)=\left\langle\log p(\mathbf{w}, \mathbf{a},\mathbf{t},h)\right\rangle_{Q_{\mathbf{w}}(\mathbf{w})Q_{\mathbf{a}}(\mathbf{a})}\\ =&\sum_{n=1}^{N}\left\langle\exp(-\frac{(t_{n}- \mathbf{x}_{n}\mathbf{w})^{2}}{2h})\right\rangle_{Q_{\mathbf{w}}(\mathbf{w})} -\log h\end{split} \tag{44}\]
where, however, one finds that the expectations with respect to the correntropy term in \(\log Q_{\mathbf{w}}(\mathbf{w})\) and \(\log Q_{h}(h)\) are hard to compute analytically. Thus, some other approximations are essential to treat the bandwidth \(h\) as a random variable. In our future work, we will explore this further so that MCC can be implemented with both '_adaptive robustness_' and '_adaptive sparseness_', integrated with the ARD technique in a Bayesian framework.
## VII Conclusion
In this paper, we expose the noise assumption inherent in MCC-based regression, and derive an explicit MCC-aware likelihood function. Integrated with the ARD technique, MCC-based robust regression can be implemented with '_adaptive sparseness_', where one does not need to tune the regularization hyper-parameter. Compared with the conventional LS-ARD and the existing MCC-\(L_{1}\), the proposed MCC-ARD algorithm realizes superior regression and feature selection in a noisy and high-dimensional scenario. Further investigations, including a
Fig. 5: Comparison between _deviant_\(\mathcal{C}(e|0,h)\) and _proper_\(\mathcal{C}^{\prime}(e|0,h)\).
Fig. 6: Correlation coefficient (\(r\)) and root mean squared error (RMSE) for the MCC-ARD regression algorithms which are derived by the _proper_\(\mathcal{C}^{\prime}(e|0,h)\) and the _deviant_\(\mathcal{C}(e|0,h)\), respectively.
Bayesian treatment of the kernel bandwidth \(h\) and an interpretation of the _deviant_ noise assumption \(\mathcal{C}(e|0,h)\), will be explored in our future work.
|
2309.05288 | On the Structure of the Linear Codes with a Given Automorphism | The purpose of this paper is to present the structure of the linear codes
over a finite field with q elements that have a permutation automorphism of
order m. These codes can be considered as generalized quasi-cyclic codes.
Quasi-cyclic codes and almost quasi-cyclic codes are discussed in detail,
presenting necessary and sufficient conditions for which linear codes with such
an automorphism are self-orthogonal, self-dual, or linear complementary dual. | Stefka Bouyuklieva | 2023-09-11T08:13:01Z | http://arxiv.org/abs/2309.05288v1 | # On the structure of the linear codes with a given automorphism
###### Abstract.
The purpose of this paper is to present the structure of the linear codes over a finite field with \(q\) elements that have a permutation automorphism of order \(m\). These codes can be considered as generalized quasi-cyclic codes. Quasi-cyclic codes and almost quasi-cyclic codes are discussed in detail, presenting necessary and sufficient conditions under which linear codes with such an automorphism are self-orthogonal, self-dual, or linear complementary dual.
Key words and phrases:linear codes, automorphisms, quasi-cyclic codes 2010 Mathematics Subject Classification: Primary 94B05,20B25 The research is partially supported by the Bulgarian National Science Fund under Contract No KP-06-H62/2/13.12.2022
## 1. Introduction
Linear codes invariant under a given permutation have been studied for a long time, and this is most evident for cyclic codes (we refer to [29] and [32] for more information), and also for group codes (see [4, 28]). The idea of using automorphisms in the construction of combinatorial structures is not new. In [1], the authors used automorphisms in the search for a projective plane of order \(10\). The research of W. C. Huffman is devoted to the study of linear and more precisely self-dual codes over various finite fields and even rings having an automorphism of a given order (preferably prime) [20, 21, 23, 24]. We would like to mention also the works of V. Yorgov, G. Nebe, M. Borello and W. Willems on the linear codes and their automorphisms (for example [6, 30, 35]).
Studying the structure of linear codes invariant under a given permutation is important not only from the point of view of obtaining useful information about the properties and parameters of these codes, but also for presenting efficient methods for constructing codes with given parameters and preset properties such as self-orthogonal or linear complementary dual (LCD) codes. Codes that have such an automorphism, have some symmetric structure and useful algebraic properties. As a disadvantage of these methods, we can point out the lack of comprehensiveness, i.e. we are not sure whether we have obtained all codes with the requested properties and parameters, we cannot prove the nonexistence of codes with given parameters unless we combine the research with other techniques. On the other hand, codes with a large group of automorphisms have a rich algebraic structure, very useful properties, practical applications, and therefore they are the most studied codes.
In this paper, we present a study on linear codes over a finite field with \(q\) elements (\(q\) is a prime power) having as an automorphism a permutation \(\sigma\) of a given order \(m\) (not necessarily prime), focusing in particular on the case where \(\sigma\) has \(c\) disjoint cycles of length \(m\) and \(f\geq 0\) fixed points. Our idea is to combine:
1. the works by Huffman (see for example [20, 23]) and Yorgov [35], mostly on binary, ternary and Hermitian quaternary self-dual codes,
2. our own research on binary self-dual, self-orthogonal and LCD codes having an automorphism of prime order [9, 10, 11, 12], and
3. the algebraic approach to quasi-cyclic codes of Patrick Sole and San Ling [26, 27].
We present some general conditions according to which the considered linear codes are self-dual, self-orthogonal, or LCD, respectively.
The paper is organized as follows. In the next section we present the needed definitions and general statements. Section 3 is devoted to permutations of order \(m\), relatively prime to the characteristic of the considered finite field. The term _almost quasi-cyclic code_ is introduced for generalized quasi-cyclic (GQC) codes of block lengths \((m,\ldots,m,1,\ldots,1)\). These are linear codes invariant under a permutation of order \(m\) with \(c\) disjoint cycles of length \(m\) and \(f\geq 1\) cycles of length \(1\) (fixed points) in its decomposition. In Section 4 we study codes with a permutation automorphism of order \(m\) divisible by the characteristic of the field. We present some examples in Section 5. We end the paper with a conclusion.
## 2. Preliminaries
A linear \([n,k]\) code \(C\) is a \(k\)-dimensional subspace of the vector space \(\mathbb{F}_{q}^{n}\), where \(\mathbb{F}_{q}\) is the finite field of \(q\) elements, \(q=p^{\ell}\) for a prime \(p\) and positive integer \(\ell\). Let \((u,v):\mathbb{F}_{q}^{n}\times\mathbb{F}_{q}^{n}\to\mathbb{F}_{q}\) be an inner product in \(\mathbb{F}_{q}^{n}\). If \(C\) is an \([n,k]\) linear code, then its orthogonal complement \(C^{\perp}=\{u\in\mathbb{F}_{q}^{n}:(u,v)=0\ \forall v\in C\}\) is a linear \([n,n-k]\) code called the dual code of \(C\). We consider three types of linear codes depending on the intersection with their duals:
* If \(C=C^{\perp}\), \(C\) is termed self-dual. If the length of a self-dual code is \(n\) then its dimension must be \(n/2\).
* If \(C\subseteq C^{\perp}\), the code is self-orthogonal. A self-orthogonal code is also self-dual iff its dimension is half of its length. Self-orthogonal codes with \(k>n/2\) do not exist.
* If \(C\cap C^{\perp}\) consists only of the zero vector, the code is called LCD (linear complementary dual). If \(C\) is an LCD code so is its dual code \(C^{\perp}\).
In this paper, we consider inner products of two types:
* Euclidean inner product, defined by \[u\cdot v=\sum_{i=1}^{n}u_{i}v_{i}\in\mathbb{F}_{q},\ u=(u_{1},\ldots,u_{n}),v= (v_{1},\ldots,v_{n})\in\mathbb{F}_{q}^{n}.\]
* Hermitian inner product \[(u,v)=\sum_{i=1}^{n}u_{i}\overline{v}_{i}\in\mathbb{F}_{q},\] where \(\overline{a}=a^{\sqrt{q}}\) if \(q\) is a square.
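These three relations can be checked mechanically from a generator matrix. The sketch below is illustrative and not from the paper; it uses the well-known facts that, for the Euclidean inner product over \(\mathbb{F}_{p}\), a code with full-rank generator matrix \(G\) is self-orthogonal iff \(GG^{T}=0\) and LCD iff \(GG^{T}\) is nonsingular (Massey's criterion).

```python
# Illustrative sketch: classify a code over F_p from a full-rank generator
# matrix G using the Gram matrix G G^T (self-orthogonal iff G G^T = 0,
# LCD iff G G^T is nonsingular, by Massey's criterion).
import numpy as np

def rank_mod_p(M, p):
    """Rank of an integer matrix over F_p by Gaussian elimination."""
    M = M.copy() % p
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]           # swap pivot row up
        inv = pow(int(M[rank, c]), -1, p)             # modular inverse of pivot
        M[rank] = (M[rank] * inv) % p
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] = (M[r] - M[r, c] * M[rank]) % p
        rank += 1
    return rank

def classify(G, p):
    gram = (G @ G.T) % p
    if not gram.any():
        return "self-orthogonal"
    return "LCD" if rank_mod_p(gram, p) == G.shape[0] else "neither (nontrivial hull)"

# Binary example: both rows have even weight and are pairwise orthogonal mod 2.
G = np.array([[1, 1, 0, 0], [0, 0, 1, 1]])
print(classify(G, 2))   # -> self-orthogonal
```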
The most general definition for equivalence of linear codes of length \(n\) over the finite field \(\mathbb{F}_{q}\) is based on the action of the semilinear isometries group \(\mathcal{M}_{n}^{*}(q)=\operatorname{Mon}_{n}(\mathbb{F}_{q}^{*})\rtimes \operatorname{Aut}(\mathbb{F}_{q})\leq\Gamma_{n}(\mathbb{F}_{q})\) on the vector space \(\mathbb{F}_{q}^{n}\), where \(\Gamma_{n}(\mathbb{F}_{q})\) is the set of all semilinear mappings, i.e. the general semilinear group, \(\operatorname{Mon}_{n}(\mathbb{F}_{q}^{*})\) is the group of all monomial \(n\times n\) matrices over \(\mathbb{F}_{q}\), and \(\operatorname{Aut}(\mathbb{F}_{q})\) is the automorphisms group
of the field \(\mathbb{F}_{q}\). Linear \(q\)-ary codes \(C\) and \(C^{\prime}\) of the same length \(n\) are equivalent whenever \(C^{\prime}=CT\) for some \(T\in\mathcal{M}_{n}^{*}(q)\). If \(CT=C\) for an element \(T\in\mathcal{M}_{n}^{*}(q)\) then \(T\) is called an automorphism of the code. The set of all automorphisms of \(C\) form a group denoted by \(\operatorname{Aut}(C)\).
Any element \(T\in\mathcal{M}_{n}^{*}(q)\) can be written as \(T=PD\tau\) where \(P\) is a permutation matrix (permutation part), \(D\) is a diagonal matrix (diagonal part), and \(\tau\in\operatorname{Aut}(\mathbb{F}_{q})\). Note that in the case of prime \(q\), \(\mathcal{M}_{n}^{*}(q)=\operatorname{Mon}_{n}(\mathbb{F}_{q}^{*})\), and if \(q=2\) then \(\mathcal{M}_{n}^{*}(q)\cong\operatorname{Sym}(n)\) where \(\operatorname{Sym}(n)\) is the symmetric group of degree \(n\). We consider here only the permutation automorphisms of a linear code.
Let \(C\) be a linear \(q\)-ary code with a permutation automorphism \(\sigma\in\operatorname{Sym}(n)\) of order \(m\) (not necessarily prime). If \(\sigma\) is a product of \(s\) disjoint cycles, namely
\[\sigma=\Omega_{1}\Omega_{2}\cdots\Omega_{s}, \tag{2.1}\]
where the length of \(\Omega_{i}\) is \(l_{i}\geq 1\), \(1\leq i\leq s\), then \(m=\operatorname{lcm}(l_{1},\ldots,l_{s})\).
To describe the structure of the considered codes, we need the factor rings \(\mathcal{R}_{l_{i}}=\mathbb{F}_{q}[x]/(x^{l_{i}}-1)\), where \(\mathbb{F}_{q}[x]\) is the ring of polynomials in the indeterminate \(x\) with coefficients in \(\mathbb{F}_{q}\). Define the map \(\phi:\mathbb{F}_{q}^{n}\to\mathcal{R}_{l_{1}}\times\mathcal{R}_{l_{2}}\times \cdots\times\mathcal{R}_{l_{s}}\) by
\[\phi(c) =(c_{1}(x),\ldots,c_{s}(x))\] \[=(c_{10}+c_{11}x+\cdots+c_{1,l_{1}-1}x^{l_{1}-1},\ldots,c_{s0}+c_ {s1}x+\cdots+c_{s,l_{s}-1}x^{l_{s}-1})\]
for \(c=(c_{10},c_{11},\ldots,c_{1,l_{1}-1},\ldots,c_{s0},c_{s1},\ldots,c_{s,l_{s}-1 })\in\mathbb{F}_{q}^{n}\).
Any submodule of \(\mathcal{R}^{\prime}=\mathcal{R}_{l_{1}}\times\mathcal{R}_{l_{2}}\times \cdots\times\mathcal{R}_{l_{s}}\) is called a generalized quasi-cyclic (GQC) code of block lengths \((l_{1},\ldots,l_{s})\), which is a linear code of length \(l_{1}+\cdots+l_{s}\) over \(\mathbb{F}_{q}\)[17, 33]. Decomposition into constituents for GQC codes is given by Esmaeili and Yari in [15]. We will not consider this decomposition here.
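As a concrete illustration (not from the paper), the map \(\phi\) simply cuts a vector into blocks of lengths \(l_{1},\ldots,l_{s}\) and reads each block as the coefficient list of a polynomial, lowest degree first:

```python
# A minimal sketch of the map phi (illustrative): split a vector over F_q into
# blocks of lengths l_1,...,l_s; block i is the coefficient list
# (c_{i0},...,c_{i,l_i-1}) of a polynomial in R_{l_i} = F_q[x]/(x^{l_i}-1).
def phi(c, block_lengths):
    polys, pos = [], 0
    for l in block_lengths:
        polys.append(list(c[pos:pos + l]))
        pos += l
    return polys

# A length-7 vector split for a permutation with cycle lengths (3, 3, 1):
print(phi([1, 0, 1, 0, 1, 1, 1], [3, 3, 1]))   # [[1, 0, 1], [0, 1, 1], [1]]
```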
We define the subcodes of \(C\)
\[F_{\sigma}(C):=\{v\in C\mid v\sigma=v\}\]
and
\[E_{\sigma}(C):=\{v=(v_{1},\ldots,v_{n})\in C:\sum_{i\in\Omega_{j}}v_{i}=0 \text{ in }\mathbb{F}_{q}\text{ for all }j=1,\ldots,s\}.\]
Note that \(v\in F_{\sigma}(C)\) if and only if \(v\in C\) and \(v|_{\Omega_{j}}\) is constant for \(j=1,\ldots,s\). Therefore, we define the map \(\pi:F_{\sigma}(C)\to\mathbb{F}_{q}^{s}\) by \((\pi(v))_{j}=v_{i}\) for some \(i\in\Omega_{j}\), \(j=1,2,\ldots,s\), \(v\in F_{\sigma}(C)\).
We use also the map \(\psi:C\to\mathbb{F}_{q}^{s}\) defined by
\[\psi(v)=(\sum_{i\in\Omega_{1}}v_{i},\ldots,\sum_{i\in\Omega_{s}}v_{i}),\]
where \(v_{i}\) are the coordinates of the vector \(v\in\mathbb{F}_{q}^{n}\), \(i=1,\ldots,n\). This is a homomorphism and the kernel of \(\psi\) is the subcode \(E_{\sigma}(C)\), or \(\ker\psi=E_{\sigma}(C)\).
Instead of studying linear codes over rings that are Cartesian products of different factor rings, we prefer to focus on some specific cases. Sylow's first theorem gives us a reason to narrow down the considered cases. More precisely, we use the corollary, also known as Cauchy's theorem.
**Theorem 2.1**.: (Cauchy) _Given a finite group \(G\) and a prime number \(r\) dividing the order of \(G\), then there exists an element (and thus a cyclic subgroup generated by this element) of order \(r\) in \(G\)._
This means that if a linear code has a nontrivial automorphism group, it has an automorphism of prime order. Therefore, if we need to classify all codes with given parameters, having a nontrivial automorphism group, it is sufficient to consider only the automorphisms of prime orders. However, one can get many more results by combining automorphisms and from this point of view study codes with automorphisms of composite order or codes invariant under the action of a given group (see for example [5, 6, 30], as well as the examples in Section 5).
In this work, we focus on one automorphism of a linear code without considering its interaction with other automorphisms. The following lemma gives us a partial motivation to consider only permutational automorphisms, especially when their order is prime.
**Lemma 2.2**.: _[_23_, W. C. Huffman]_ _Let \(C\) be a linear code over \(\mathbb{F}_{q}\) with an automorphism \(T=PD\tau\) of prime order \(r\) where \(r\nmid(q-1)\) and \(r\nmid|\mathrm{Gal}(\mathbb{F}_{q})|\). Then there exists a code \(C^{\prime}\) equivalent to \(C\) where \(P\in\mathrm{Aut}(C^{\prime})\)._
By \(\mathbf{1}\) and \(\mathbf{0}\) we denote the all-ones vectors and the zero vector of the corresponding length, respectively.
## 3. Permutation automorphisms of order \(m\) relatively prime to the characteristic of the field
Let \(\gcd(m,\mathrm{char}(\mathbb{F}_{q}))=1\). The following theorem gives a very important information about the structure of a linear code \(C\) having a permutation automorphism of order \(m\).
**Theorem 3.1**.: _Let \(C\leq\mathbb{F}_{q}^{n}\) be a linear code with a permutation automorphism \(\sigma\in\text{Sym}(n)\) of order \(m\) such that \(\gcd(m,q)=1\). Then \(C=F_{\sigma}(C)\oplus E_{\sigma}(C)\). Both \(F_{\sigma}(C)\) and \(E_{\sigma}(C)\) are \(\sigma\)-invariant and orthogonal to each other._
Proof.: Take an arbitrary codeword \(v\in C\) and consider \(w=\sum_{i=0}^{m-1}v\sigma^{i}\). Since \(v\sigma^{m}=v\), \(w\sigma=w\) and so \(w\in F_{\sigma}(C)\). Let \(x=v-\frac{1}{m}w\). If \(v|_{\Omega_{i}}=(v_{i1},\ldots,v_{il_{i}})\) then \(w|_{\Omega_{i}}=\frac{m}{l_{i}}(\sum_{j=1}^{l_{i}}v_{ij},\ldots,\sum_{j=1}^{l_ {i}}v_{ij})\). Hence
\[x|_{\Omega_{i}}=(v_{i1}-\frac{1}{l_{i}}\sum_{j=1}^{l_{i}}v_{ij},\ldots,v_{il_{ i}}-\frac{1}{l_{i}}\sum_{j=1}^{l_{i}}v_{ij})\implies\sum_{j=1}^{l_{i}}x_{j}= \sum_{j=1}^{l_{i}}v_{ij}-\sum_{j=1}^{l_{i}}v_{ij}=0.\]
It follows that \(x\in E_{\sigma}(C)\) and so \(v=\frac{1}{m}w+x\in F_{\sigma}(C)+E_{\sigma}(C)\). This proves that \(C=F_{\sigma}(C)+E_{\sigma}(C)\).
If \(v\in F_{\sigma}(C)\cap E_{\sigma}(C)\), then \(v|_{\Omega_{i}}=(\underbrace{\alpha,\ldots,\alpha}_{l_{i}})=\alpha\mathbf{1}\) and \(\alpha(\underbrace{1+\cdots+1}_{l_{i}})=l_{i}\alpha=0\). Hence \(\alpha=0\), \(v=\mathbf{0}\), \(F_{\sigma}(C)\cap E_{\sigma}(C)=\{\mathbf{0}\}\) and therefore \(C=F_{\sigma}(C)\oplus E_{\sigma}(C)\). Obviously, both subcodes are \(\sigma\)-invariant.
If \(v\in E_{\sigma}(C)\) and \(w\in F_{\sigma}(C)\), \(w|_{\Omega_{i}}=w_{i}\mathbf{1}\) for \(i=1,\ldots,s\), then
\[(v,w)=\sum_{i=1}^{s}(w_{i}^{\prime}\sum_{j\in\Omega_{i}}v_{j})=0,\]
where \(w_{i}^{\prime}=w_{i}\) in the case of Euclidean inner product, and \(w_{i}^{\prime}=\overline{w}_{i}\) if the inner product is Hermitian. Hence both subcodes are orthogonal to each other.
The following theorem proves important properties of the projection code \(C_{\pi}=\pi(F_{\sigma}(C))\) if \(C\) is a self-dual or LCD code with respect to the considered inner product.
**Theorem 3.2**.: _Assume \(l_{1}\equiv l_{2}\equiv\cdots\equiv l_{s}\equiv l\not\equiv 0\pmod{p}\). Then:_
1. _if_ \(C\) _is self-orthogonal so is_ \(C_{\pi}\)_;_
2. _if_ \(C\) _is self-dual so is_ \(C_{\pi}\)_;_
3. _if_ \(C\) _is LCD so is_ \(C_{\pi}\)_._
Proof.: If \(v=(v_{1},\ldots,v_{n})\), by \(v^{\prime}\) we denote the vector \(v\) if the considered inner product is Euclidean, and the vector \(\overline{v}=(\overline{v}_{1},\ldots,\overline{v}_{n})\), if the inner product is Hermitian.
Let \(v=(\underbrace{v_{1},\ldots,v_{1}}_{l_{1}},\ldots,\underbrace{v_{s},\ldots,v _{s}}_{l_{s}})\) and \(w=(\underbrace{w_{1},\ldots,w_{1}}_{l_{1}},\ldots,\underbrace{w_{s},\ldots,w _{s}}_{l_{s}})\) be codewords in \(F_{\sigma}(C)\). Then
\[(v,w)=\sum_{i=1}^{s}l_{i}v_{i}w^{\prime}_{i}=l\sum_{i=1}^{s}v_{i}w^{\prime}_{ i}=l(\pi(v),\pi(w)).\]
1. If \(C\) is a self-orthogonal code, then \((v,w)=0\) for any two codewords \(v,w\in F_{\sigma}(C)\). Hence \((\pi(v),\pi(w))=0\)\(\forall v,w\in F_{\sigma}(C)\) and therefore \(C_{\pi}\) is also self-orthogonal.
2. Let \(C\) be a self-dual code. Hence \(C\) is also self-orthogonal and therefore \(C_{\pi}\) is a self-orthogonal code, or \(C_{\pi}\subseteq C_{\pi}^{\perp}\). Take \(w=(w_{1},\ldots,w_{s})\in C_{\pi}^{\perp}\), and \(w_{F}=(\underbrace{w_{1},\ldots,w_{1}}_{l_{1}},\ldots,\underbrace{w_{s},\ldots,w_{s}}_{l_{s}})\in\mathbb{F}_{q}^{n}\). Then \[(v,w_{F})=l(\pi(v),w)=0\;\forall v\in F_{\sigma}(C).\] Furthermore, \[(u,w_{F})=\sum_{i=1}^{s}(w^{\prime}_{i}\sum_{j\in\Omega_{i}}u_{j})=0\;\forall u \in E_{\sigma}(C).\] Hence, \(w_{F}\perp C\) and so \(w_{F}\in C^{\perp}=C\). It follows that \(w_{F}\in F_{\sigma}(C)\) and \(w\in C_{\pi}\). This proves that \(C_{\pi}^{\perp}=C_{\pi}\) and so \(C_{\pi}\) is a self-dual code.
3. Consider now the case of an LCD code \(C\). Take \(w=(w_{1},\ldots,w_{s})\in C_{\pi}\cap C_{\pi}^{\perp}\), and \(w_{F}=\pi^{-1}(w)=(\underbrace{w_{1},\ldots,w_{1}}_{l_{1}},\ldots, \underbrace{w_{s},\ldots,w_{s}}_{l_{s}})\). Then \(w_{F}\in F_{\sigma}(C)\) and \[(v,w_{F})=l(\pi(v),w)=0\;\forall v\in F_{\sigma}(C).\] Hence \(w_{F}\perp F_{\sigma}(C)\) and \(w_{F}\perp E_{\sigma}(C)\), which means that \(w_{F}\in C^{\perp}\cap C\). Since \(C\) is an LCD code, \(w_{F}=\mathbf{0}\) and so \(w\) is the zero vector. It follows that \(C_{\pi}\cap C_{\pi}^{\perp}=\{\mathbf{0}\}\) and \(C_{\pi}\) is an LCD code.
Thus the theorem is proved.
Next, we focus on quasi-cyclic and almost quasi-cyclic codes.
### Quasi-cyclic codes
This is the case when \(l_{1}=l_{2}=\ldots=l_{s}=m\) and \(n=sm\). Then for the fixed subcode we have the following corollary, that follows from Theorem 3.2.
**Corollary 3.3**.: _Let \(C\) be a \(q\)-ary quasi-cyclic code of length \(sm\) and index \(s\), where \(\gcd(m,q)=1\). Then:_
1. _if_ \(C\) _is self-orthogonal so is_ \(C_{\pi}\)_;_
2. _if_ \(C\) _is self-dual so is_ \(C_{\pi}\)_;_
3. _if_ \(C\) _is LCD so is_ \(C_{\pi}\)_._
If \(m\) and \(q\) are relatively prime, so \(p\nmid m\), \(x^{m}-1\) can be written in the form [26]
\[x^{m}-1=\delta g_{0}(x)g_{1}(x)\cdots g_{r}(x)h_{1}(x)h_{1}^{*}(x)\cdots h_{t} (x)h_{t}^{*}(x), \tag{3.1}\]
where \(\delta\in\mathbb{F}_{q}^{*}\), \(g_{0}=x-1\), \(g_{1},\ldots,g_{r}\) are associated with their reciprocal polynomials, and \(h_{i}^{*}(x)\) is the reciprocal polynomial of \(h_{i}(x)\), \(i=1,\ldots,t\). Then
\[\mathcal{R}_{m}=(\bigoplus_{i=0}^{r}\mathbb{F}_{q}[x]/(g_{i}))\oplus(\bigoplus_{i=1}^{t}(\mathbb{F}_{q}[x]/(h_{i})\oplus\mathbb{F}_{q}[x]/(h_{i}^{*}))),\]
and \(\mathbb{F}_{q}[x]/(g_{i})\), \(i=0,1,\ldots,r\), \(\mathbb{F}_{q}[x]/(h_{i})\) and \(\mathbb{F}_{q}[x]/(h_{i}^{*})\), \(j=1,\ldots,t\), are fields (extensions of \(\mathbb{F}_{q}\)). In some cases it is more suitable to consider these fields as minimal ideals in \(\mathcal{R}_{m}\), generated respectively by the polynomials \(\frac{x^{m}-1}{g_{i}(x)}\), \(i=0,1,\ldots,r\), \(\frac{x^{m}-1}{h_{j}(x)}\) and \(\frac{x^{m}-1}{h_{j}^{*}(x)}\), \(j=1,\ldots,t\). Denote these ideals by \(G_{0},G_{1},\ldots,G_{r}\), \(H_{1}^{\prime}\), \(H_{1}^{\prime\prime}\), \(\ldots,H_{t}^{\prime}\), \(H_{t}^{\prime\prime}\), respectively, and so
\[\mathcal{R}_{m}=(\bigoplus_{i=0}^{r}G_{i})\oplus(\bigoplus_{j=1}^{t}(H_{j}^{\prime}\oplus H_{j}^{\prime\prime})).\]
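The factorization (3.1) and the sorting of the irreducible factors into self-reciprocal ones and reciprocal pairs can be reproduced with a computer algebra system. The following illustrative SymPy sketch does this for \(m=15\), \(q=2\), the case used in Example 5.1 below:

```python
# Illustrative sketch: factor x^m - 1 over F_q and test which irreducible
# factors are self-reciprocal (the reciprocal is x^deg(f) * f(1/x), i.e. the
# coefficient list reversed).
from sympy import Poly, factor_list, symbols

x = symbols('x')
m, q = 15, 2

def reciprocal(f):
    return Poly(list(reversed(f.all_coeffs())), x, modulus=q)

_, factors = factor_list(x**m - 1, modulus=q)
for f, _mult in factors:
    f = Poly(f, x, modulus=q)
    tag = "self-reciprocal" if f == reciprocal(f) else "paired with its reciprocal"
    print(f.as_expr(), "->", tag)
```

For \(m=15\), \(q=2\) this reports \(x+1\), \(x^{2}+x+1\) and \(x^{4}+x^{3}+x^{2}+x+1\) as self-reciprocal, while \(x^{4}+x+1\) and \(x^{4}+x^{3}+1\) form a reciprocal pair.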
In this case the map \(\phi\) is defined in the following way:
\[\phi:\mathbb{F}_{q}^{ms}\to\mathcal{R}_{m}^{s},\;\phi(c)=(c_{1}(x),\ldots,c_{s }(x))\]
**Lemma 3.4**.: _[_26_, Lemma 3.1]_ _The map \(\phi\) induces a one-to-one correspondence between quasi-cyclic codes over \(\mathbb{F}_{q}\) of index \(s\) and length \(ms\) and linear codes over \(\mathcal{R}_{m}\) of length \(s\)._
Since
\[\mathcal{R}_{m}^{s}=(\bigoplus_{i=0}^{r}G_{i}^{s})\oplus(\bigoplus_{j=1}^{t}(H_{j}^{\prime s}\oplus H_{j}^{\prime\prime s})),\]
then
\[\phi(C)=(\bigoplus_{i=0}^{r}C_{i})\oplus(\bigoplus_{j=1}^{t}(C_{j}^{\prime}\oplus C_{j}^{\prime\prime})),\]
where \(C_{i}\) is a linear code over \(G_{i}\), \(i=0,1,\ldots,r\), \(C_{j}^{\prime}\) and \(C_{j}^{\prime\prime}\) are linear codes over \(H_{j}^{\prime}\) and \(H_{j}^{\prime\prime}\), respectively, \(j=1,\ldots,t\), all of length \(s\).
Since \(G_{0}=\langle 1+x+\cdots+x^{m-1}\rangle\lhd\mathcal{R}_{m}\) and \(G_{0}\cong\mathbb{F}_{q}[x]/(x-1)\cong\mathbb{F}_{q}\), \(\phi^{-1}(C_{0})\) is actually the fixed subcode \(F_{\sigma}(C)\), and
\[\phi(E_{\sigma}(C))=(\bigoplus_{i=1}^{r}C_{i})\oplus(\bigoplus_{j=1}^{t}(C_{j}^{\prime}\oplus C_{j}^{\prime\prime})).\]
In \(\mathcal{R}_{m}^{s}\), we use the Hermitian inner product, defined in [26], namely
\[(u,v)=\sum_{i=1}^{s}u_{i}\overline{v_{i}}\;\;\text{for}\;u=(u_{1},\ldots,u_{s}),\;v=(v_{1},\ldots,v_{s}), \tag{3.2}\]
where \(\overline{v_{i}}=v_{i}(x^{-1})=v_{i}(x^{m-1}).\) Note that \(\overline{v_{i}}\in G_{j}\) if \(v_{i}\in G_{j}\), \(0\leq j\leq r\), \(\overline{v_{i}}\in H_{j}^{\prime\prime}\) if \(v_{i}\in H_{j}^{\prime}\), and \(\overline{v_{i}}\in H_{j}^{\prime}\) if \(v_{i}\in H_{j}^{\prime\prime}\), \(1\leq j\leq t\). Actually, this inner product is Euclidean over \(G_{0}\cong\mathbb{F}_{q}\).
The following theorem follows from [26, Theorem 4.2].
**Theorem 3.5**.: _The linear code \(C\) is Euclidean self-dual \(q\)-ary code if and only if \(C_{\pi}=C_{0}\) is Euclidean self-dual, \(C_{i}\) for \(i=1,\ldots,r\) are Hermitian self-dual codes, and \(C_{j}^{\prime\prime}=(C_{j}^{\prime})^{\perp}\) for \(j=1,\ldots,t\) with respect to the Euclidean inner product._
This theorem gives us the following corollaries.
**Corollary 3.6**.: _The linear code \(C\) is Euclidean self-orthogonal \(q\)-ary code if and only if \(C_{\pi}=C_{0}\) is Euclidean self-orthogonal, \(C_{i}\) for \(i=1,\ldots,r\) are Hermitian self-orthogonal codes, and \(C_{j}^{\prime\prime}\subseteq(C_{j}^{\prime})^{\perp}\) for \(j=1,\ldots,t\) with respect to the Euclidean inner product._
**Corollary 3.7**.: _The linear code \(C\) is Euclidean LCD \(q\)-ary code if and only if \(C_{\pi}=C_{0}\) is Euclidean LCD code, \(C_{i}\) for \(i=1,\ldots,r\) are Hermitian LCD codes, \(C_{j}^{\prime\prime}\cap(C_{j}^{\prime})^{\perp}=\{\mathbf{0}\}\) and \(C_{j}^{\prime}\cap(C_{j}^{\prime\prime})^{\perp}=\{\mathbf{0}\}\) for \(j=1,\ldots,t\) with respect to the Euclidean inner product._
### Almost quasi-cyclic codes
Let \(C\) be a linear code with a permutation automorphism \(\sigma\in\operatorname{Sym}(n)\) of order \(m\) (not necessarily prime) with \(c\) cycles of length \(m\) and \(f\) fixed points. We call such a code _almost quasi-cyclic_. In this case, we say that \(\sigma\) is of type \(m\)-\((c,f)\). Without loss of generality we can assume that
\[\sigma=\Omega_{1}\ldots\Omega_{c}\Omega_{c+1}\ldots\Omega_{c+f} \tag{3.3}\]
where \(\Omega_{i}=((i-1)m+1,\ldots,im)\), \(i=1,\ldots,c\), are the cycles of length \(m\), and \(\Omega_{c+i}=(cm+i)\), \(i=1,\ldots,f\), are the fixed points. Obviously, \(cm+f=n\). Almost quasi-cyclic codes are a special case of generalized quasi-cyclic codes, but we consider them separately because the decomposition given above is not very useful, since the constituents that correspond to the fixed points are codes of length \(1\). Therefore, in this subsection we focus on the subcodes \(F_{\sigma}(C)\) and \(E_{\sigma}(C)\) in more detail. Theorem 3.2 gives us the following statement.
**Corollary 3.8**.: _Let \(m\equiv 1\pmod{p}\). Then:_
1. _if_ \(C\) _is self-orthogonal so is_ \(C_{\pi}\)_;_
2. _if_ \(C\) _is self-dual so is_ \(C_{\pi}\)_;_
3. _if_ \(C\) _is LCD so is_ \(C_{\pi}\)_._
If \(v\in E_{\sigma}(C)\) then \(v|_{\Omega_{j}}=0\) for \(j=c+1,\ldots,c+f\). Denote by \(E_{\sigma}(C)^{*}\) the code obtained from \(E_{\sigma}(C)\) by deleting the last \(f\) coordinates. Since \(E_{\sigma}(C)^{*}\) is a quasi-cyclic \(q\)-ary code of length \(cm\) and index \(c\), we can use the decomposition given in the previous subsection. All codewords of \(C\) that \(\sigma\) preserves belong to the subcode \(F_{\sigma}(C)\), therefore for the code \(C_{\phi}=\phi(E_{\sigma}(C)^{*})\) we have
\[C_{\phi}=(\bigoplus_{i=1}^{r}C_{i})\oplus(\bigoplus_{j=1}^{t}(C_{j}^{\prime}\oplus C_{j}^{\prime\prime})),\]
where \(C_{i}\) is a linear code over \(G_{i}\), \(i=1,\ldots,r\), \(C_{j}^{\prime}\) and \(C_{j}^{\prime\prime}\) are linear codes over \(H_{j}^{\prime}\) and \(H_{j}^{\prime\prime}\), respectively, \(j=1,\ldots,t\), all of length \(c\). Theorem 3.1 gives us the following corollary.
**Corollary 3.9**.: _The code \(C\) having an automorphism \(\sigma\) given in (3.3) is self-orthogonal (resp. LCD) code if and only if \(F_{\sigma}(C)\) and \(E_{\sigma}(C)^{*}\) are self-orthogonal (resp. LCD)._
Proof.: Obviously, \(E_{\sigma}(C)\) is a self-orthogonal (resp. LCD) code if and only if the code \(E_{\sigma}(C)^{*}\) is self-orthogonal (resp. LCD).
If \(C\) is a self-orthogonal code, all its subcodes are also self-orthogonal. Conversely, if \(F_{\sigma}(C)\) and \(E_{\sigma}(C)^{*}\) are self-orthogonal codes then \(E_{\sigma}(C)\) is also self-orthogonal, and since \(F_{\sigma}(C)\perp E_{\sigma}(C)\), the code \(C=F_{\sigma}(C)\oplus E_{\sigma}(C)\) is self-orthogonal.
In the case of LCD codes, we will prove that if \(C=C_{1}\oplus C_{2}\) and \(C_{1}\perp C_{2}\), then \(C\) is an LCD code if and only if both \(C_{1}\) and \(C_{2}\) are LCD codes.
\(\Rightarrow\)) Let \(C\) be an LCD code. If \(w=(w_{1},\ldots,w_{n})\in C_{1}\cap C_{1}^{\perp}\) then \(w\perp C_{1}\) and \(w\perp C_{2}\). This gives us that \(w\perp C\) and so \(w\in C\cap C^{\perp}\). Hence \(w=0\) and \(C_{1}\) is an LCD code. The same holds for the code \(C_{2}\).
\(\Leftarrow\)) Let \(C_{1}\) and \(C_{2}\) be LCD codes, and \(x\in C\cap C^{\perp}\). Since \(C=C_{1}\oplus C_{2}\) then \(x=x_{1}+x_{2}\), \(x_{i}\in C_{i}\), \(i=1,2\). Take \(y_{i}\in C_{i}\), \(i=1,2\). Then we have \(x\cdot y_{i}=0\) and
\[x_{i}\cdot y_{i}=(x_{1}+x_{2})\cdot y_{i}=x\cdot y_{i}=0\ \Rightarrow x_{i} \perp C_{i}\ \Rightarrow x_{i}\in C_{i}\cap C_{i}^{\perp}\ \Rightarrow x_{i}=0,\ i=1,2.\]
This proves that \(x=0\) and so \(C\) is also an LCD code.
To complete the proof, we take \(C_{1}=E_{\sigma}(C)\) and \(C_{2}=F_{\sigma}(C)\).
Combining the corollary with Corollary 3.6 and Corollary 3.7, we prove the following.
**Corollary 3.10**.: _The linear code \(C\) is Euclidean self-orthogonal \(q\)-ary code if and only if \(C_{\pi}\) is Euclidean self-orthogonal, \(C_{i}\) for \(i=1,\ldots,r\) are Hermitian self-orthogonal codes, and \(C_{j}^{\prime\prime}\subseteq(C_{j}^{\prime})^{\perp}\) for \(j=1,\ldots,t\) with respect to the Euclidean inner product._
**Corollary 3.11**.: _The linear code \(C\) is Euclidean LCD \(q\)-ary code if and only if \(C_{\pi}\) is Euclidean LCD code, \(C_{i}\) for \(i=1,\ldots,r\) are Hermitian LCD codes, \(C_{j}^{\prime\prime}\cap(C_{j}^{\prime})^{\perp}=\{\mathbf{0}\}\) and \(C_{j}^{\prime}\cap(C_{j}^{\prime\prime})^{\perp}=\{\mathbf{0}\}\) for \(j=1,\ldots,t\) with respect to the Euclidean inner product._
Self-dual almost quasi-cyclic codes were studied by Huffman in [19]. Methods for the construction and classification of self-dual codes with an automorphism of prime order \(m\neq p\) were given in [20, 35] for binary codes, [23] for ternary codes, and [21, 22] for Hermitian quaternary codes. A more detailed list of references on linear codes with an automorphism of prime order can be found in [24]. Binary LCD codes having an automorphism of odd prime order are studied in detail in [11]. Some classes of quasi-cyclic codes with complementary duals are examined in [16].
## 4. Permutation automorphisms of order \(m\) not coprime with the characteristic of the field
Let \(C\) be a linear code over \(\mathbb{F}_{q}\), where \(q=p^{\ell}\) for a prime \(p\), \(\ell\geq 1\), with a permutation automorphism \(\sigma\in\operatorname{Sym}(n)\) of order \(m=p^{a}m^{\prime}\), \(a\geq 1\). We again consider \(\sigma\) as a product of \(s\) disjoint cycles as in (2.1).
Let us see what happens to the subcodes \(F_{\sigma}(C)\) and \(E_{\sigma}(C)\) in this situation.
**Theorem 4.1**.: _Let \(l_{i}\equiv 0\pmod{p}\) for all \(i=1,\ldots,s\). Then \(F_{\sigma}(C)\) is a self-orthogonal code and it is a subcode of \(E_{\sigma}(C)\)._
This theorem shows that we cannot use the same decomposition as in Section 3 in order to study self-orthogonal, self-dual and/or LCD codes having an automorphism \(\sigma\). We need something different here.
Next, we prove a theorem that holds for the quasi-cyclic codes of length \(ms\) and index \(s\) for all integers \(m=p^{a}m^{\prime}\). A similar theorem for binary self-dual codes of length \(2k\) and index \(k\) is proved in [9].
**Theorem 4.2**.: _Let \(C\) be a \(q\)-ary quasi-cyclic code of length \(sm\) and index \(s\). Let \(\psi:C\rightarrow\mathbb{F}_{q}^{s}\) be the map defined by_
\[\psi(c)=(\sum_{i\in\Omega_{1}}c_{i},\ldots,\sum_{i\in\Omega_{s}}c_{i}),\]
_where \(c_{i}\) are the coordinates of the vector \(c\in\mathbb{F}_{q}^{n}\), \(i=1,\ldots,n\). If \(C\) is a self-orthogonal code then \(C_{\psi}=\psi(C)\) is also self-orthogonal, and \(C_{\pi}=\pi(F_{\sigma}(C))\subset C_{\psi}^{\perp}\). If \(C\) is self-dual then \(C_{\pi}=C_{\psi}^{\perp}\)._
Proof.: Let \(c=(c_{11},\ldots,c_{1m},c_{21},\ldots,c_{2m},\ldots,c_{s1},\ldots,c_{sm})\in C\). Then
\[c\sigma=(c_{1m},c_{11},\ldots,c_{1,m-1},c_{2m},c_{21},\ldots,c_{2,m-1},\ldots,c_{sm},c_{s1},\ldots,c_{s,m-1})\in C.\]
For two codewords \(u,v\in C\) we have
\[(\psi(u),\psi(v))=\sum_{i=1}^{s}(u_{i1}+\cdots+u_{im})(v^{\prime}_{i1}+\cdots+ v^{\prime}_{im})=(u,v)+(u,v\sigma)+\cdots+(u,v\sigma^{m-1}).\]
If \(C\) is a self-orthogonal code, then \((u,v)=(u,v\sigma)=\cdots=(u,v\sigma^{m-1})=0\) for all \(u,v\in C\). Hence \((\psi(u),\psi(v))=0\) for all \(u,v\in C\).
Furthermore, \((\psi(u),\pi(v))=\sum_{i=1}^{s}(u_{i1}+\cdots+u_{im})v^{\prime}_{i}=(u,v)\). Therefore, for a self-orthogonal code \(C\), \((\psi(u),\pi(v))=0\) for \(u\in C\), \(v\in F_{\sigma}(C)\) and so \(C_{\pi}\subset C_{\psi}^{\perp}\).
Now let \(C\) be a self-dual code, \(v=(v_{1},\ldots,v_{s})\in C_{\psi}^{\perp}\) and
\[w=\pi^{-1}(v)=(\underbrace{v_{1},\ldots,v_{1}}_{m},\ldots,\underbrace{v_{s}, \ldots,v_{s}}_{m}).\]
It follows that
\[(u,w)=\sum_{i=1}^{s}(u_{i1}+\cdots+u_{im})v^{\prime}_{i}=(\psi(u),v)=0\ \forall u\in C.\]
Hence, \(w\in C^{\perp}=C\) and so \(w\in F_{\sigma}(C)\). This proves that \(v\in C_{\pi}\) and therefore \(C_{\pi}=C_{\psi}^{\perp}\).
_Remark 4.3_.: Theorem 4.2 holds for all \(m\geq 2\) and for all prime powers \(q\) (if corresponding quasi-cyclic codes exist). If \(\gcd(m,q)=1\) then the codes \(C_{\psi}\) and \(C_{\pi}\) coincide.
Quasi-cyclic codes of length \(ms\) and index \(s\) in the case \(m=p^{a}m^{\prime}\), \(a\geq 1\), \(p\nmid m^{\prime}\), are extensively studied in [27]. The factorization of the polynomial \(x^{m}-1\) over \(\mathbb{F}_{q}\) plays a key role in this case, too. Since \(\gcd(m^{\prime},q)=1\), the polynomial \(x^{m^{\prime}}-1\) can be factorized as in (3.1), namely
\[x^{m^{\prime}}-1=\delta g_{0}(x)g_{1}(x)\cdots g_{r}(x)h_{1}(x)h_{1}^{*}(x) \cdots h_{t}(x)h_{t}^{*}(x).\]
Since \(x^{m}-1=x^{p^{a}m^{\prime}}-1=(x^{m^{\prime}}-1)^{p^{a}}\), we have
\[x^{m}-1=\delta^{p^{a}}g_{0}^{p^{a}}g_{1}^{p^{a}}\cdots g_{r}^{p^{a}}h_{1}^{p^{ a}}(h_{1}^{*})^{p^{a}}\cdots h_{t}^{p^{a}}(h_{t}^{*})^{p^{a}},\]
where \(\delta\in\mathbb{F}_{q}^{*}\), \(g_{0}=x-1\), \(g_{1},\ldots,g_{r}\) are associated with their reciprocal polynomials, and \(h_{i}^{*}(x)\) is the reciprocal polynomial of \(h_{i}(x)\), \(i=1,\ldots,t\). Consequently, we may now write
\[\mathcal{R}_{m}=(\bigoplus_{i=0}^{r}\mathbb{F}_{q}[x]/(g_{i}^{p^{a}}))\oplus( \bigoplus_{i=1}^{t}(\mathbb{F}_{q}[x]/(h_{i}^{p^{a}})\oplus\mathbb{F}_{q}[x]/ ((h_{i}^{*})^{p^{a}})).\]
If we denote these factor rings by \(R_{i}\) for \(i=0,1,\ldots,r\), \(R_{j}^{\prime}\) and \(R_{j}^{\prime\prime}\) for \(j=1,\ldots,t\), respectively, then
\[\mathcal{R}_{m}^{s}=(\bigoplus_{i=0}^{r}R_{i}^{s})\oplus(\bigoplus_{j=1}^{t}(R_{j}^{\prime s}\oplus R_{j}^{\prime\prime s})).\]
In particular, \(\mathcal{R}_{m}\)-linear code \(\mathcal{A}\) of length \(s\) can be decomposed in a direct sum in the following way
\[\mathcal{A}=(\bigoplus_{i=0}^{r}\mathcal{A}_{i})\oplus(\bigoplus_{j=1}^{t}(\mathcal{A}_{j}^{\prime}\oplus\mathcal{A}_{j}^{\prime\prime})),\]
where \(\mathcal{A}_{i}\), \(\mathcal{A}_{j}^{\prime}\) and \(\mathcal{A}_{j}^{\prime\prime}\) are linear codes over the rings \(R_{i}\), \(R_{j}^{\prime}\) and \(R_{j}^{\prime\prime}\), respectively, \(i=0,1,\ldots,r\), \(j=1,\ldots,t\). The rings \(R_{i}\), \(R_{j}^{\prime}\) and \(R_{j}^{\prime\prime}\) are finite chain rings. This can be described in the following way (see [27]): If \(f\) is a monic irreducible factor of \(x^{m^{\prime}}-1\) of degree \(d\) then the factor ring \(\mathbb{F}_{q}[x]/(f^{p^{a}})\) can be identified with the finite chain ring \(\mathbb{F}_{q^{d}}+u\mathbb{F}_{q^{d}}+\cdots+u^{p^{a}-1}\mathbb{F}_{q^{d}}\), where \(u^{p^{a}}=0\). The detailed description of quasi-cyclic codes in this case, as well as some interesting examples, are presented in [27], so we will not consider this theory in more detail, but for completeness we present [27, Theorem 4.2].
**Theorem 4.4**.: _A linear code \(C\) over \(\mathcal{R}_{m}=\mathbb{F}_{q}[x]/(x^{m}-1)\) of length \(s\) is self-dual with respect to the Hermitian inner product (or equivalently, an \(s\)-quasi-cyclic code of length \(sm\) over \(\mathbb{F}_{q}\) is self-dual with respect to the Euclidean inner product) if and only if_
\[C=(\bigoplus_{i=0}^{r}C_{i})\oplus(\bigoplus_{j=1}^{t}(C_{j}^{\prime}\oplus(C_{j}^{\prime})^{\perp})),\]
_where, for \(0\leq i\leq r\), \(C_{i}\) is a self-dual code over \(R_{i}\) of length \(s\) (with respect to the Hermitian inner product) and, for \(1\leq j\leq t\), \(C_{j}^{\prime}\) is a linear code of length \(s\) over \(R_{j}^{\prime}\) and \((C_{j}^{\prime})^{\perp}\) is its dual with respect to the Euclidean inner product._
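For the smallest instance of such a chain ring (\(p=2\), \(a=1\), \(d=1\)), one gets \(\mathbb{F}_{2}+u\mathbb{F}_{2}\) with \(u^{2}=0\), which reappears in Example 5.2 below. The following minimal arithmetic sketch is illustrative and not from the paper:

```python
# Illustrative sketch of arithmetic in the chain ring R = F_2 + u*F_2 with
# u^2 = 0: an element a + b*u is stored as the pair (a, b) with a, b in {0, 1}.
from dataclasses import dataclass

@dataclass(frozen=True)
class F2u:
    a: int  # constant part
    b: int  # coefficient of u

    def __add__(self, other):
        return F2u((self.a + other.a) % 2, (self.b + other.b) % 2)

    def __mul__(self, other):
        # (a1 + b1 u)(a2 + b2 u) = a1 a2 + (a1 b2 + a2 b1) u, since u^2 = 0
        return F2u((self.a * other.a) % 2,
                   (self.a * other.b + self.b * other.a) % 2)

zero, one, u = F2u(0, 0), F2u(1, 0), F2u(0, 1)
print(u * u == zero)          # True: u is nilpotent
print((one + u) * (one + u))  # F2u(a=1, b=0): (1 + u)^2 = 1
```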
The binary self-dual codes invariant under a permutation \(\sigma\) of order \(2\) with \(c\) independent cycles of length \(2\) and \(f>0\) fixed points are studied in [10]. A construction method for such codes is proposed which is used to obtain optimal self-dual codes of different lengths. In [5], the authors prove that the natural projection of the fixed code of an involution of a self-dual binary linear code is self-dual under some (quite strong) conditions on the codes. To prove that, they introduce the family of binary semi self-dual codes.
## 5. Examples
In this section, we give three examples of codes constructed using the presented decomposition. We consider an almost quasi-cyclic binary self-dual code with an automorphism of order \(15\), a quasi-cyclic binary self-dual code with an automorphism of order \(10\), and an almost quasi-cyclic ternary LCD code with an automorphism of order \(5\).
**Example 5.1**.: In the first example we present a binary self-dual almost quasi-cyclic code \(C\) with two \(15\)-cycles and two fixed points. Now \(C_{\pi}\) must be a binary self-dual \([4,2,2]\) code and we take \(C_{\pi}=\{0000,1010,0101,1111\}\). The code \(E_{\sigma}(C)^{*}\) is a binary quasi-cyclic code of length \(30\) and index \(2\), which is a self-orthogonal code of dimension \(14\). We have
\[x^{15}-1=(x-1)(x^{2}+x+1)(x^{4}+x^{3}+x^{2}+x+1)(x^{4}+x+1)(x^{4}+x^{3}+1),\]
where \(g_{1}(x)=x^{2}+x+1\) and \(g_{2}(x)=x^{4}+x^{3}+x^{2}+x+1\) are self-reciprocal polynomials, and \(h(x)=x^{4}+x+1\) and \(h^{*}(x)=x^{4}+x^{3}+1\) are mutually reciprocal polynomials over \(\mathbb{F}_{2}\). It follows that
\[\mathcal{R}_{15}=\mathbb{F}_{2}[x]/(g_{0})\oplus\mathbb{F}_{2}[x]/(g_{1}) \oplus\mathbb{F}_{2}[x]/(g_{2})\oplus\mathbb{F}_{2}[x]/(h)\oplus\mathbb{F}_{2 }[x]/(h^{*}).\]
Instead of the factor-rings in the above formula, we use the corresponding ideals of \(\mathcal{R}_{m}\):
\[G_{1}=\langle\frac{x^{15}-1}{g_{1}(x)}\rangle\cong\mathbb{F}_{4},\;G_{2}= \langle\frac{x^{15}-1}{g_{2}(x)}\rangle\cong\mathbb{F}_{16},\]
\[H^{\prime}=\langle\frac{x^{15}-1}{h(x)}\rangle\cong\mathbb{F}_{16},\;H^{ \prime\prime}=\langle\frac{x^{15}-1}{h^{*}(x)}\rangle\cong\mathbb{F}_{16}.\]
The generating idempotents of these fields are \(e_{1}(x)=x^{14}+x^{13}+x^{11}+x^{10}+x^{8}+x^{7}+x^{5}+x^{4}+x^{2}+x\in G_{1}\), \(e_{2}(x)=x^{14}+x^{13}+x^{12}+x^{11}+x^{9}+x^{8}+x^{7}+x^{6}+x^{4}+x^{3}+x^{2}+ x\in G_{2}\), \(e^{\prime}(x)=x^{12}+x^{9}+x^{8}+x^{6}+x^{4}+x^{3}+x^{2}+x\in H^{\prime}\) and \(e^{\prime\prime}(x)=e^{\prime}(x^{-1})=x^{14}+x^{13}+x^{12}+x^{11}+x^{9}+x^{7}+ x^{6}+x^{3}\in H^{\prime\prime}\). For the code \(C_{\phi}\) we have
\[C_{\phi}=C_{1}\oplus C_{2}\oplus C^{\prime}\oplus C^{\prime\prime},\]
where \(C_{i}\) is a Hermitian self-dual code over \(G_{i}\), \(i=1,2\), \(C^{\prime}\) and \(C^{\prime\prime}\) are mutually orthogonal linear codes over \(H^{\prime}\) and \(H^{\prime\prime}\), respectively, with respect to the Euclidean inner product. We take \(C_{i}=\langle(e_{i}(x),e_{i}(x))\rangle\), \(i=1,2\), \(C^{\prime}=(H^{\prime})^{2}\), and so \(C^{\prime\prime}\) is the zero code. The constructed binary code is a doubly-even self-dual \([32,16,8]\) code with a generator matrix \(gen_{1}\), and its weight enumerator is \(1+620y^{8}+13888y^{12}+36518y^{16}+13888y^{20}+620y^{24}+y^{32}\).
(The \(16\times 32\) generator matrix \(gen_{1}\) is omitted.)

**Example 5.2**.: The second example presents a binary quasi-cyclic self-dual code of length \(40\) and index \(4\) with an automorphism of order \(10\). Since

\[x^{10}-1=(x^{5}-1)^{2}=(x-1)^{2}(x^{4}+x^{3}+x^{2}+x+1)^{2}\]

over \(\mathbb{F}_{2}\), for the ring \(\mathcal{R}_{10}\) we have
\[\mathcal{R}_{10}=\mathbb{F}_{2}[x]/((x-1)^{2})\oplus\mathbb{F}_{2}[x]/((x^{4}+x^ {3}+x^{2}+x+1)^{2}).\]
According to Theorem 4.4, \(\phi(C)=C_{0}\oplus C_{1}\), where \(C_{i}\) is a linear Hermitian self-dual code over \(G_{i}\), \(i=0,1\), \(G_{0}=<(x^{4}+x^{3}+x^{2}+x+1)^{2}>\cong\mathbb{F}_{2}[x]/((x-1)^{2})\), \(G_{1}=<(x-1)^{2}>\cong\mathbb{F}_{2}[x]/((x^{4}+x^{3}+x^{2}+x+1)^{2})\). The structure of the rings \(G_{0}\) and \(G_{1}\) is as follows:
\[G_{0}=\{0,e=1+x^{2}+x^{4}+x^{6}+x^{8},u=1+x+\cdots+x^{9},e+u\}=\mathbb{F}_{2}+ u\mathbb{F}_{2},\]
\[G_{1}=\mathbb{F}_{16}+u^{\prime}\mathbb{F}_{16},\;u^{\prime}=1+x^{4}+x^{5}+x^ {9},\;\mathbb{F}_{16}^{*}=\{\beta^{i}:\;\beta=1+x^{2},\;i=0,\ldots,14\}.\]
The identity element of \(G_{1}\) is \(e^{\prime}=x^{2}+x^{4}+x^{6}+x^{8}=1+e\). The Euclidean and Hermitian inner products in \(G_{0}\) are the same, as \(e(x^{-1})=e(x)\) and \(u(x^{-1})=u(x)\). Taking the codes
\[C_{0}=\langle\begin{pmatrix}e&0&e&u\\ 0&e&u&e\end{pmatrix}\rangle\quad\text{and}\quad C_{1}=\langle\begin{pmatrix}e^{ \prime}&e^{\prime}&u^{\prime}&0\\ 0&xu^{\prime}&e^{\prime}&e^{\prime}\\ u^{\prime}&u^{\prime}&0&0\\ 0&0&u^{\prime}&u^{\prime}\end{pmatrix}\rangle,\]
we obtain a binary quasi-cyclic self-dual [40, 20, 8] doubly-even code whose automorphism group has order \(245760\). The code \(C_{0}\) is the only Type II code over \(\mathbb{F}_{2}+u\mathbb{F}_{2}\) up to equivalence [14].
The third example presents a ternary LCD code.
**Example 5.3**.: If \(C\) is a ternary LCD code of length \(18\) having an automorphism of order \(5\) with three \(5\)-cycles and three fixed points, its subcodes \(F_{\sigma}(C)\) and \(E_{\sigma}(C)\) are also LCD codes, \(C_{\pi}\) is a ternary code of length \(6\), and \(C_{\phi}\) is an LCD code of length \(3\) with respect to the Hermitian inner product over the field \(\mathbb{F}_{3}[x]/(x^{4}+x^{3}+x^{2}+x+1)\) with \(3^{4}\) elements. This field is described in detail in [23], where Huffman has used it to construct ternary self-dual codes.
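For concreteness, the permutation of type \(5\)-\((3,3)\) acting on the \(18\) coordinates of this example can be written down directly from (3.3). The following small sketch (illustrative, \(0\)-based indexing) builds it as an index map:

```python
# Illustrative sketch: the permutation (3.3) of type m-(c, f) as a 0-based
# index map, with c cycles of length m followed by f fixed points.
def sigma_of_type(m, c, f):
    perm = list(range(c * m + f))
    for i in range(c):                       # rotate each length-m cycle
        block = list(range(i * m, (i + 1) * m))
        for j, pos in enumerate(block):
            perm[pos] = block[(j + 1) % m]
    return perm                              # the last f positions stay fixed

perm = sigma_of_type(m=5, c=3, f=3)          # the type 5-(3,3) of Example 5.3
print(perm)                                  # 18 entries; order of sigma is 5
```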
We take \(C_{\pi}=\langle\begin{pmatrix}110011\\ 001110\end{pmatrix}\rangle\) and \(C_{\phi}=\langle\begin{pmatrix}e&e&0\\ 0&\alpha&e\end{pmatrix}\rangle\), where \(e=2+x+x^{2}+x^{3}+x^{4}\), \(\alpha=x^{3}+2x^{4}\). The constructed [18, 10] code has minimum distance \(4\) and a generator matrix \(gen_{3}\).
(The generator matrices \(gen_{3}\) and \(gen_{4}\) are omitted.) The first code contains \(30\) codewords of weight \(4\), while the second code has \(40\) codewords of minimum weight. This example shows that the condition on \(m\) in Corollary 3.8 is important.
## 6. Conclusion
The decomposition of the linear codes that have non-trivial permutation automorphisms gives us a powerful construction for codes with optimal parameters and important properties. Many optimal self-dual codes with automorphisms of prime order over fields with \(2\), \(3\) or \(4\) elements are obtained using their previously known structure [20, 23, 22, 35].
As a conclusion, we would like to present what is known so far about the automorphisms of the putative binary self-dual [72, 36, 16] code. The extremal self-dual codes of length a multiple of \(24\) are of particular interest, but only two such codes are known so far - the extended Golay code \(g_{24}\) and the extended quadratic residue code \(q_{48}\) (see [31, 18]). In 1973 Sloane [34] posed a question which remains unresolved: is there a binary self-dual doubly-even [72, 36, 16] code? The automorphism group of the extended Golay code is the \(5\)-transitive Mathieu group \(M_{24}\) of order \(2^{10}\cdot 3^{3}\cdot 5\cdot 7\cdot 11\cdot 23\) (see [3]), while the automorphism group of \(q_{48}\) is only \(2\)-transitive and is isomorphic to the projective special linear group \(\mathrm{PSL}(2,47)\) of order \(2^{4}\cdot 3\cdot 23\cdot 47\)[25]. The first authors to study the automorphism group of the putative [72, 36, 16] code were Conway and Pless [13]; in particular, they focused on the possible automorphisms of odd prime order. The last published result on this group so far is given in [7] and we present it in the following theorem.
**Theorem 6.1**.: _If \(C\) is a self-dual [72, 36, 16] code, then \(\mathrm{Aut}(C)\) is trivial or isomorphic to \(C_{2}\), \(C_{3}\), \(C_{2}\times C_{2}\) or \(C_{5}\)._
So, if such a code exists, it may be very difficult to find it by algebraic techniques. Many authors have tried combinatorial methods to construct such a code, also using its connections with combinatorial designs and Hadamard matrices [2, 8], but all the resulting codes have minimum distance \(d\leq 12\). Although efforts to obtain such a code by prescribing a permutation automorphism of a given order have so far been unsuccessful, the method for constructing self-dual, self-orthogonal and LCD codes has been used many times and many new codes have been introduced through it. Thus, studying the structure of linear codes with permutation automorphisms provides a powerful method to investigate, construct, and classify codes with given properties and parameters.
|
2309.14385 | Sampling - Variational Auto Encoder - Ensemble: In the Quest of
Explainable Artificial Intelligence | Explainable Artificial Intelligence (XAI) models have recently attracted a
great deal of interest from a variety of application sectors. Despite
significant developments in this area, there are still no standardized methods
or approaches for understanding AI model outputs. A systematic and cohesive
framework is also increasingly necessary to incorporate new techniques like
discriminative and generative models to close the gap. This paper contributes
to the discourse on XAI by presenting an empirical evaluation based on a novel
framework: Sampling - Variational Auto Encoder (VAE) - Ensemble Anomaly
Detection (SVEAD). It is a hybrid architecture where VAE combined with ensemble
stacking and SHapley Additive exPlanations is used for imbalanced
classification. The finding reveals that combining ensemble stacking, VAE, and
SHAP can not only lead to better model performance but also provide an easily
explainable framework. This work has used SHAP combined with Permutation
Importance and Individual Conditional Expectations to create a powerful
interpretability of the model. The finding has an important implication in the
real world, where the need for XAI is paramount to boost confidence in AI
applications. | Sarit Maitra, Vivek Mishra, Pratima Verma, Manav Chopra, Priyanka Nath | 2023-09-25T02:46:19Z | http://arxiv.org/abs/2309.14385v1 | # Sampling - Variational Auto Encoder - Ensemble:
###### Abstract
Explainable Artificial Intelligence (XAI) models have recently attracted a great deal of interest from a variety of application sectors. Despite significant developments in this area, there are still no standardized methods or approaches for understanding AI model outputs. A systematic and cohesive framework is also increasingly necessary to incorporate new techniques like discriminative and generative models to close the gap. This paper contributes to the discourse on XAI by presenting an empirical evaluation based on a novel framework: Sampling - Variational Auto Encoder (VAE) - Ensemble Anomaly Detection (SVEAD). It is a hybrid architecture where VAE combined with ensemble stacking and SHapley Additive exPlanations is used for imbalanced classification. The finding reveals that combining ensemble stacking, VAE, and SHAP can not only lead to better model performance but also provide an easily explainable framework. This work has used SHAP combined with Permutation Importance and Individual Conditional Expectations to provide powerful interpretability of the model. The finding has an important implication in the real world, where the need for XAI is paramount to boost confidence in AI applications.
discriminative model; explainable artificial intelligence; ensemble stacking; generative model; shapley additive explanations;
## I Introduction
The increasing complexity of ML models has led to a growing interest in XAI. While today's Industry 4.0 emphasizes smart and intelligent processes powered by technology, complex models, such as EM and DL approaches, have emerged as key technologies for accomplishing the goal [23, 33]. However, these models are often difficult to understand and trust, limiting their practical use in real-world applications ([2, 13]). To address this challenge, recent advances in ML have focused on constructing and leveraging internal representations within ML models [16]. These advances aim to enhance the interpretability and explainability of complex models, making them more accessible and trustworthy for deployment in real-world settings.
This study aims to address this issue by using DML and GAI on a skewed anomaly-detection dataset. DML are concerned with classifying or predicting specific outcomes given input data, whereas GAI models are concerned with learning the data distribution itself to generate new data points. Both types of models can benefit from XAI techniques, albeit in diverse ways: for DML, XAI helps in understanding and interpreting their predictions, while for GAI, XAI can assist in understanding the data-generation process and identifying anomalies. GAI models like VAEs and DMLs can offer a degree of interpretability, but it is important to clarify the extent of that interpretability and how it differs from other models.

TABLE I: Abbreviation

| **Term** | **Abbreviation** |
| --- | --- |
| Area Under the Precision-Recall Curve | AUPRC |
| Artificial Intelligence | AI |
| Bernoulli Distribution | BD |
| Brier Score | BS |
| Cohen's Kappa Coefficient | CKC |
| Cross Validation | CV |
| Decision Tree | DT |
| Deep Learning | DL |
| Discriminative Models | DML |
| Ensemble Modelling | EM |
| Ensemble Stacking | ES |
| Ensemble Voting | EV |
| Explainable Artificial Intelligence | XAI |
| Evidence Lower Bound | ELBO |
| Gaussian Distribution | GD |
Class imbalance often introduces bias, and traditional model-interpretation methods, though easy to understand, are of limited practical use for explaining such models to businesses and for determining success criteria. To be realistic, interpretation should provide knowledge of the model's operations, predictions, discrimination rules, or potential disruptions [16]. This study explores best practices for integrating hybrid models that combine DML and GAI approaches in real-world applications, aiming to improve the interpretability and robustness of anomaly detection systems.
This study presents a novel SVEAD framework for anomaly detection, utilizing sampling, VAE for compressed data representation, and supervised algorithms for classification, offering an integrated approach that considers unique data characteristics together with advanced DL and ML techniques. Table I lists the technical abbreviations used in this article. There is currently no precise mathematical concept of interpretability or explainability, nor has either been quantified by an agreed metric; both terminologies are used interchangeably in this article.
## II Previous work
The hybridization of GAI and DML techniques has emerged as a promising avenue for enhancing the accuracy of AI systems. Numerous researchers have advocated for the advantages of integrating these two paradigms (e.g., [1]; [33]; [28]; [27]; [43]; and others). However, the adoption of these hybrid approaches within the business sector remains cautious, primarily due to challenges associated with interpretability.
While a substantial body of research has been dedicated to exploring the performance of various DML and GAI models on identical datasets (e.g., [4]; [36]; [38]; [14]; [31]; [18]), these efforts have often focused on quantitative assessments. Although some researchers have undertaken comparative evaluations of multiple ensemble techniques employing diverse algorithms on the same datasets [29], these studies have typically not delved into the intricacies of interpretability. Consequently, a critical facet of AI model assessment--explicability--has been conspicuously absent from their work. This research endeavor distinguishes itself by addressing a notable gap in the current scholarly discourse. It contributes valuable insights into the vital statistics of explicability within the context of hybrid GAI-DML models. Through a rigorous examination of interpretability challenges and elucidation of their implications, this study enriches the existing academic literature and underscores the significance of explicability in facilitating the broader adoption of advanced AI systems within the business domain.
Various authors (e.g., [9], [23]) have presented comprehensive overviews of explainable and interpretable algorithms in the context of ML. They have highlighted the importance of model-agnostic approaches to explainability, which can be used with a range of different MLs. Some authors have provided important insights into the use of hybrid generative-discriminative models for anomaly detection and their potential for improving explainability ([8], [28]). While some researchers (e.g., [28]) have argued for the lack of solid explainability in such hybrid approaches, another group of studies (e.g., [8]) demonstrated that the use of a hybrid approach can lead to improved performance compared to conventional models. This provides an argument for combining generative and discriminative models in anomaly detection systems, which requires further research into robust strategies and could increase trust in, and acceptance of, AI in business applications. Some researchers (e.g., [32]) have emphasized the importance of improving AI model explanations. They reviewed various aspects of XAI models and suggested new research directions. They also argued that interpretability should be an essential component of AI, requiring a deeper understanding of the underlying mechanisms and processes, beyond just providing predictions.
The development of XAI models, particularly for anomaly detection, is gaining interest. Traditional methods have improved performance but lack explainability. Further investigation into explainability is needed, particularly for hybrid models. A model-agnostic strategy can enhance the explainability of discriminative models.
## III Methodological Approach
Deep learning has led to the rise of reconstruction methods for anomaly detection. These methods assume that a model trained on normal data will fail to reconstruct anomalous data, signaling the presence of anomalous data. Deep autoencoders (AE) have been used to develop reconstruction approaches for anomaly detection with remarkably superior results, but an expanding body of literature suggests even better outcomes when employing the more advanced and probabilistic variational autoencoders [19].
We propose the SVEAD framework, a multi-step approach to anomaly detection that starts by compressing data using VAE to a lower-dimensional space. Then, it leverages ensemble techniques to combine the outputs of various individual anomaly detection models. This process aims to provide a simplified yet comprehensive approach to identifying anomalies in complex datasets. Fig. 1 presents the proposed SVEAD framework.
Fig 1: SVEAD Interpretable framework: Anomaly Detection (Source: Author)
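To make the ensemble-stacking stage of the framework concrete, the following is a minimal scikit-learn sketch. It is illustrative only: the synthetic data, the particular base learners, and the meta-learner are assumptions rather than the paper's exact configuration, and `X_latent` stands in for the low-dimensional representation produced by the VAE/t-SNE stage.

```python
# Illustrative sketch of the stacking stage: base learners feed out-of-fold
# predictions to a logistic-regression meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the compressed features; ~2% positives mimics class imbalance.
X_latent, y = make_classification(n_samples=2000, n_features=8,
                                  weights=[0.98], random_state=42)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=42)),
                ("dt", DecisionTreeClassifier(random_state=42))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold predictions train the meta-learner
)
stack.fit(X_latent, y)
print(stack.predict_proba(X_latent[:5]))
```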
The three core approaches (t-SNE, VAE, and EM) are discussed below to provide an argument and justification for using these methods.
### _t-distributed Stochastic Neighbor Embedding_
t-SNE is a dimensionality reduction technique that emphasizes the preservation of local data relationships. Its ability to uncover non-linear patterns and anomalies makes it a valuable tool in various fields, including anomaly detection. By projecting high-dimensional data into a lower-dimensional space, t-SNE helps to unmask hidden structures and deviations that might not be readily apparent in the original data space ([11]; [37]; [6]; [21]; [24]). It does so by employing a probabilistic approach to represent the similarity between data points and then minimizing the KL divergence between the high- and low-dimensional similarity distributions.
\[p(i,j)=\frac{p(i\mid j)+p(j\mid i)}{2N} \tag{1}\]
\[q(i,j)=\frac{\left(1+\left\lVert y_{i}-y_{j}\right\rVert^{2}\right)^{-1}}{z} \tag{2}\]
\[KL\ divergence=\sum_{i}\sum_{j}p(i,j)\,\log\frac{p(i,j)}{q(i,j)} \tag{3}\]
Equation (1) describes similarities in the high-dimensional space: \(p(i,j)=\) similarity between data points \(i\) and \(j\), \(p(i\mid j)=\) conditional probability of choosing data point \(i\) as a neighbor of data point \(j\), and \(N=\) total number of data points. The similarity between data points is thus based on symmetrized conditional probabilities.
Equation (2) describes similarities in the low-dimensional space: \(q(i,j)=\) similarity between data points \(i\) and \(j\). It is computed from the Euclidean distance \(\left\lVert y_{i}-y_{j}\right\rVert^{2}\) between data points in the lower-dimensional space, and it is normalized by \(z\).
Equation (3) represents the KL divergence between the conditional probability distributions \(p(i,j)\) and \(q(i,j)\).
Fig. 2 compares the separation between majority and minority classes obtained by applying PCA, t-SVD, and t-SNE to our dataset1. Researchers found that the t-SNE pipeline yields better visualization and is much better at preserving local structure ([19], [23]). Therefore, we experimented with t-SNE, extracting the embeddings generated by the algorithm and using them as input features for downstream anomaly detection models.
Footnote 1: The dataset was collected and analyzed as part of a research cooperation between Worldline and ULB’s Machine Learning Group ([http://mlg.ulb.ac.be](http://mlg.ulb.ac.be)) on big data mining and fraud detection.
t-SNE is non-deterministic and depends on a key parameter, the perplexity. For a large sample size, the recommended perplexity value is between 20 and 30 ([26]).
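A short illustrative sketch of this step with scikit-learn (the stand-in data and dimensions are assumptions):

```python
# Illustrative sketch: embed the data with t-SNE at the recommended perplexity;
# the resulting coordinates are reused as input features downstream.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))               # stand-in for transaction features

tsne = TSNE(n_components=2, perplexity=30, random_state=0)
X_embedded = tsne.fit_transform(X)           # shape (500, 2)
print(X_embedded.shape)
```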
### _Variational Auto Encoder_
The mean \(\mu\) and standard deviation \(\sigma\) of a distribution are represented by two vectors of size \(m\) that a VAE encoder learns to produce. From these vectors, a latent vector is sampled and transformed back to the original input vector. Several authors (e.g., [22]; [30]; [41]; [40]; [35]; [10]) have proposed the use of VAE for better interpretability and have demonstrated its effectiveness in different applications, including financial and healthcare data (e.g., [7] and [25]). This supports the argument that VAE can be a promising approach for anomaly detection. Moreover, researchers [38] have applied VAE to the same dataset and obtained promising results, which further strengthens this argument.
The encoder of the VAE maps the input data point x to a latent variable z, while the decoder maps z to the reconstructed output \(x^{\prime}\). The encoder takes an input x and computes the mean and std dev of the GD over z, presented in Equation (4):
\[\boldsymbol{q}(\boldsymbol{z}|\boldsymbol{x})=\ N(\boldsymbol{z};\ \boldsymbol{\mu}(\boldsymbol{x}),\boldsymbol{\sigma}^{2}(\boldsymbol{x}) \boldsymbol{I}) \tag{4}\]
Equation (4) shows that \(\mu(x)=\) mean and \(\sigma^{2}(x)=\) variance of the conditional probability distribution of the latent variable \(z\) given the input data \(x\). In a VAE, this distribution is typically assumed to be a GD. The decoder takes \(z\) sampled from the GD and generates a reconstructed output \(x^{\prime}\):
\[p(x^{\prime}|z)\ =\ Bernoulli\ (x^{\prime};\ f(z)) \tag{5}\]
Equation (5) shows that \(f(\boldsymbol{z})\) is a NN that takes \(z\) as input and outputs the parameters of a BD over the reconstructed output \(x^{\prime}\). The training objective is to maximize the ELBO, which is defined in Equation (6):
\[ELBO=E\left[\log p(x\mid\boldsymbol{z})\right]-KL\left[q(\boldsymbol{z}\mid\boldsymbol{x})\,\|\,p(\boldsymbol{z})\right] \tag{6}\]
\(E\left[log\ p(x\mid\boldsymbol{z})\right]=\) expected log-likelihood of the reconstructed output given the latent variable, and \(KL\left[q\left(\boldsymbol{z}\mid\boldsymbol{x}\right)\,\|\,p(\boldsymbol{z})\right]=\) KL divergence between the encoder distribution and the prior distribution over the latent variable. The prior distribution is set to a GD, \(p(\boldsymbol{z})=N(\boldsymbol{z};\ 0,I)\). The ELBO can be re-written as in Equation (7) and subsequently in Equation (8):
\[ELBO\ =\ E\left[log\ p\left(x\mid\boldsymbol{z}\right)\right]-KL\left[q\left(\boldsymbol{z}\mid\boldsymbol{x}\right)\,\|\,p(\boldsymbol{z})\right] \tag{7}\]
\[=\ E\left[log\ p\left(x\mid\boldsymbol{z}\right)\right]+E\left[log\ p(\boldsymbol{z})\right]-E\left[log\ q\left(\boldsymbol{z}\mid\boldsymbol{x}\right)\right] \tag{8}\]
The above equations consist of four terms:
* \(1^{\text{st}}\) term (\(E\left[log\ p\left(\boldsymbol{x}\mid\boldsymbol{z}\right)\right]\)) \(\rightarrow\) reconstruction error,
* \(2^{\text{nd}}\) term (\(KL[q\left(\boldsymbol{z}\mid\boldsymbol{x}\right)\,\|\,p(\boldsymbol{z})]\)) \(\rightarrow\) this is the divergence between the approximate posterior and the prior distribution, where \(q\left(\boldsymbol{z}\mid\boldsymbol{x}\right)=\) posterior and \(p(\boldsymbol{z})=\) prior distribution.
* \(3^{\text{rd}}\) term, \(E\left[log\ p(\boldsymbol{z})\right]\rightarrow\) prior distribution of the latent variable z. The interaction between the prior distribution of the latent variable z and the VAE's objective to bring the learned distribution in line with this prior is central to creating a meaningful and effective latent space. This ensures that the VAE learns structured and organized representations of data in the latent space, which in turn enables the model to perform various tasks, such as generating new data, manipulating existing data attributes, and detecting anomalies in the data.
* \(4^{\text{th}}\) term, \(-E\left[log\ q\left(\boldsymbol{z}\mid\boldsymbol{x}\right)\right]\rightarrow\) the negative expected log of the approximate posterior \(q\left(\boldsymbol{z}\mid\boldsymbol{x}\right)\). This term serves as a regularization component during training. Together with the \(3^{\text{rd}}\) term, it encourages the VAE to keep its learned distribution (approximate posterior) of the latent variable z as close as possible to the specified prior distribution. In other words, it penalizes the model if it tries to represent data in a way that significantly deviates from the initially assumed distribution.

Fig. 2: Feature reduction (code concept taken from J M Bachmann blog)
To learn a useful latent representation of the input data, the VAE optimizes this objective function during training.
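To make Equations (4)-(8) concrete, the following is a minimal PyTorch sketch of a VAE trained by minimizing the negative ELBO; the layer sizes, activations, and class name are our own illustrative assumptions, not the exact architecture evaluated later in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, n_features: int, latent_dim: int):
        super().__init__()
        self.enc = nn.Linear(n_features, 64)
        self.mu = nn.Linear(64, latent_dim)      # mean of q(z|x), Eq. (4)
        self.logvar = nn.Linear(64, latent_dim)  # log sigma^2 of q(z|x)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_features), nn.Sigmoid())

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)     # reparameterization trick
        return self.dec(z), mu, logvar

def negative_elbo(x, x_rec, mu, logvar):
    # Reconstruction term E[log p(x|z)] with a Bernoulli decoder, Eq. (5);
    # assumes inputs are scaled to [0, 1].
    rec = F.binary_cross_entropy(x_rec, x, reduction="sum")
    # Closed-form KL[q(z|x) || N(0, I)], the second term of Eq. (7).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl  # minimizing this maximizes the ELBO
```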
Reconstruction-based approaches for anomaly detection flag inputs with large reconstruction errors as anomalous. Fig. 3 displays the network architecture that detects anomalous behavior, learning five distribution parameters for feature-independent normal distributions (two mean values and three covariance values).
Monte Carlo simulations (MCS) were conducted to estimate the reconstruction probability, with 100 samples for each input. Fig. 4 shows distinct clustering across all t-SNE plots of latent distribution parameters and samples.
The distinct split between fraudulent and legitimate transactions shows that the VAE is learning useful information, with fraudulent transactions being more dispersed and having larger values on both axes. This is consistent with the theory that anomalous transactions are mostly unpredictable.
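Reusing the sketch above, the reconstruction probability could be estimated as follows; the 100-sample setting mirrors the protocol described here, while the Bernoulli log-likelihood follows Equation (5).

```python
import torch

@torch.no_grad()
def reconstruction_probability(vae, x, n_samples: int = 100):
    """Monte Carlo estimate of the reconstruction probability of x:
    the Bernoulli log-likelihood averaged over n_samples draws z ~ q(z|x)."""
    h = torch.relu(vae.enc(x))
    mu, logvar = vae.mu(h), vae.logvar(h)
    std = torch.exp(0.5 * logvar)
    log_probs = []
    for _ in range(n_samples):
        z = mu + std * torch.randn_like(std)
        p = vae.dec(z).clamp(1e-6, 1 - 1e-6)  # avoid log(0)
        ll = (x * p.log() + (1 - x) * (1 - p).log()).sum(dim=-1)
        log_probs.append(ll)
    # A low reconstruction probability flags a likely anomaly.
    return torch.stack(log_probs).mean(dim=0)
```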
### _Ensemble Model_
The ensemble model implemented here combines various models to increase overall performance and mitigate the weaknesses of individual models. Assuming the training data \((x_{i},y_{i})\) is generated by a target function \(f\), the goal is to learn a hypothesis function \(\hat{f}(x)\) that approximates \(f\) as closely as possible. \(\hat{f}(x)\) is expressed as a weighted sum of the outputs of the base classifiers, as shown in Equation (9):
\[\hat{f}(x)\ =\ w_{1}\,h_{1}(x)\ +\ w_{2}\,h_{2}(x)+\ldots+w_{n}\,h_{n}(x) \tag{9}\]
where \(h_{n}(x)\) is the output of the \(n^{th}\) base classifier, and \(w_{n}\) is a weight determining the contribution of each base classifier to the final prediction. The weights \(w_{n}\) are learned by minimizing a loss function that measures the difference between the predicted output of the ensemble and the true output. While ensemble models (EM) can significantly enhance prediction accuracy, the combination of multiple models complicates the ability to explain the ensemble's decisions. Balancing model performance and interpretability is an ongoing challenge in the field of machine learning, especially when dealing with complex models like ensembles.
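As a concrete illustration of Equation (9), a weighted combination of base-classifier outputs could be sketched as follows; the helper name and the assumption that every base classifier exposes a `predict_proba` method are ours.

```python
import numpy as np

def ensemble_predict_proba(models, weights, X):
    """Weighted sum of base-classifier outputs, as in Equation (9)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize the learned weights
    # One row of positive-class probabilities per base classifier h_n.
    probas = np.stack([m.predict_proba(X)[:, 1] for m in models])
    return w @ probas                     # f_hat(x) = sum_n w_n * h_n(x)
```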
## IV Data analysis & Model Development
The data covers fraudulent purchases made by cardholders in Europe. There were 492 fraudulent purchases (0.17%) and 284,315 safe purchases (99.83%). It has 28 principal components and is PCA-encoded to ensure confidentiality. We have experimented with different supervised algorithms as the base learners, e.g., Logistic Regression (Log Reg), SVC, KNN, and RF. Table 2 displays the accuracy scores of all the CV models. CV ensures that the model does not overfit the training data and generalizes well to new data.
The last row of Table 2 displays the ROCAUC score, which identifies the best model to distinguish between fraud and safe transactions. Table 3 shows the learning curve analysis for a Log Reg model with varied training set sizes (30%, 60%, and 90% of the entire dataset). We can conclude from the findings that as the training set grows, the average training accuracy reduces significantly while the average test accuracy increases.
Fig. 4: t-SNE scatterplots: Latent Representations
Fig. 3: Trained VAE
The high accuracy of the test dataset indicates that the model is operating well and generalizing to new data.
Fig. 5 displays the graphical representation of models' performance, showing overfitting or underfitting for selected hyperparameters.
Three different sampling techniques were employed to compare the optimal output: Random Undersampler, SMOTE Oversampler, and the combined sampler SMOTE with Tomek links. To avoid data leakage, it was crucial to apply sampling only after the cross-validation split, i.e., on the training folds alone. Fig. 6 displays the sampling pipeline employed for this work.
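A minimal sketch of such a leakage-free pipeline, using the imbalanced-learn library (whose pipeline applies samplers only at fit time, i.e., to the training folds), is shown below; the estimator and parameter choices are illustrative.

```python
from imblearn.combine import SMOTETomek
from imblearn.pipeline import Pipeline  # resamples only when fitting a fold
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

pipeline = Pipeline([
    ("sampler", SMOTETomek(random_state=42)),  # SMOTE + Tomek-link cleaning
    ("clf", LogisticRegression(max_iter=1000)),
])
# The test fold of each CV split is never resampled, avoiding leakage:
# scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
```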
### _Ensemble techniques_
We used simple ensemble techniques:
* Voting based: Hard Voting and Soft Voting.
* Stacking: Where all 4 models are combined in a hierarchical manner and finally their predictions are used as input to a meta-model (final_estimator), which produces the final prediction.
Fig. 7 displays the fitted ES architecture.
The predictions from the base estimators are combined in the final_estimator, which is a Log Reg model. It determines which base models work well in which scenarios based on training-data patterns and adjusts their weights accordingly. CV is used to avoid overfitting, and using smaller sets of base models can mitigate it further. We also ensured that, during meta-learning, the meta-estimator is trained on predictions that were not used during the training of the base models, to avoid data leakage.
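A sketch of this architecture with scikit-learn's `StackingClassifier` is given below; the base-model hyperparameters are placeholders, while the `cv` argument reproduces the out-of-fold training of the meta-model described above.

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

base_models = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("svc", SVC(probability=True)),
    ("knn", KNeighborsClassifier()),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
]
stack = StackingClassifier(
    estimators=base_models,
    final_estimator=LogisticRegression(),  # meta-model, as in Fig. 7
    cv=5,                 # meta-model sees only out-of-fold predictions
    stack_method="predict_proba",
)
# stack.fit(X_train, y_train)
```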
### _Evaluation criteria_
We used the following evaluation metrics for the ensemble models: Precision, Recall, F1-score, ROCAUC, PRAUC, Matthews Correlation Coefficient (MCC), Cohen's Kappa (CK), and Brier Score (BS).
To determine the best configuration for the provided dataset, the study investigated various VAE architectures. A sparse 2-layer overcomplete VAE with linear activation and dropout was found to deal well with the dataset's features, including noise. The VAE uses normal samples and standardized training data, creating a new training set for the ensemble model's estimator and enhancing performance by reducing data dimensionality and noise. Fig. 8 displays a scatter plot of the latent vectors obtained from encoding, with well-separated classes in the latent space; we can see distinct clusters of points for each class. This supports the claim that the proposed framework can identify distinct, separate clusters of points for each class.
Fig. 5: Learning curves (code concept taken from J M Bachmann blog)
Fig. 6: Sampling approach during CV (Source: Author)
Fig. 7: Ensemble Stacking architecture.
## V Shapley Additive Explanations
SHAP provides a way to decompose the predictions of the model into individual contributions from each feature. It is model agnostic and helps identify key features driving the output and how they interact with each other. To determine an overall SHAP value for each feature in the ensemble, the SHAP values are computed for each individual model and the second level model and aggregated using a weighted average.
\[\varphi_{i}^{S}(x)=\sum_{T\subseteq S\setminus\{i\}}\frac{|T|!\,\left(|S|-|T|-1\right)!}{|S|!}\ast\left\{\text{f}_{T\cup\{i\}}\big{(}x_{T\cup\{i\}}\big{)}-\text{f}_{T}(x_{T})\right\} \tag{10}\]
Where \(\varphi_{i}^{S}(x)=\) SHAP value of feature i for instance x, conditioned on a set of features S, \(\text{f}_{T\cup\{i\}}\big{(}x_{T\cup\{i\}}\big{)}=\) model's output when features T and i are present in the input, \(\text{f}_{T}(x_{T})=\) model's output when only features T are present in the input and all other features are set to their background values, \(|T|=\) cardinality of the set T (i.e., the number of features in T), and \(|S|=\) cardinality of the set S (i.e., the number of features in S).
The summation in the equation goes over all subsets T of S that do not include feature i. The weighting factor inside the summation ensures that the SHAP values satisfy several desired criteria, such as additivity and consistency, and it depends on the sizes of T and S.
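The sketch below outlines this weighted aggregation; using a model-agnostic `shap.Explainer` over the fraud-class probability is our assumption about how the per-model SHAP values could be obtained.

```python
import numpy as np
import shap

def ensemble_shap(models, weights, X_background, X_explain):
    """Weighted average of per-model SHAP values for an ensemble (a sketch)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    per_model = []
    for m in models:
        f = lambda X, m=m: m.predict_proba(X)[:, 1]    # fraud-class probability
        explainer = shap.Explainer(f, X_background)    # model-agnostic explainer
        per_model.append(explainer(X_explain).values)  # (n_samples, n_features)
    # Contract the model axis against the weights -> overall SHAP values.
    return np.tensordot(w, np.stack(per_model), axes=1)
```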
## VI Results & discussions
The objective of using VAE in architecture is to provide an interpretable latent space. By compressing the data into a lower-dimensional representation, VAE identifies the underlying structure of the data and, subsequently, the anomalous transactions.
Various accuracy measures were used by previous researchers working on imbalanced datasets, e.g., accuracy, recall, precision, TPR, FPR, specificity, and G-mean [12]; MCC, F2 Score, Kappa Score, Brier Score, and Precision [2]; accuracy, precision, recall, F1 score [44]; Precision and AUPRC [26]. We opted for a combination of all those displayed in Table IV. All the models were trained and assessed using a 70%-30% train-test split, and various evaluation metrics were used to evaluate the models. The ES method has the best overall performance, displaying the highest precision score of 94.87% and the highest F1 score of 84.09%, suggesting a good balance between precision and recall. Additionally, the ROCAUC score of 87.75% indicates that it is the most effective at distinguishing between the two classes. The ES is the meta-model built on top of all the base models trained on the same training dataset.
SMOTETomek + VAE + ES has the highest values for precision, recall, F1, ROCAUC, and AUPRC, indicating that it has the best overall performance among all models. It also has a high MCC and Kappa score, suggesting good agreement between predicted and actual labels. This model correctly identified 99.91% of positive cases. The precision score of 0.9878 indicates that the model is correctly identifying positive samples while minimizing the number of false positives. The MCC score of 0.898 indicates strong overall agreement between the predicted and actual labels, balancing precision and recall.
SMOTE + VAE + Log Reg has the second-highest values for precision, recall, F1, ROCAUC, and AUPRC and has the highest MCC and Kappa score among all models. These metrics indicate that it has exceptionally good overall performance, although slightly lower than the top-performing model. For the SVEAD framework, the SHAP values are calculated for each model in the ensemble and the second-level model and combined using a weighted average to generate an overall SHAP value for each feature in the ensemble.
Fig. 9 displays the feature importance plot of the variables obtained using permutation importance plots (PIP). V14, V17, and V10 are the top three features that influence model prediction and thus the separation of fraudulent and non-fraudulent transactions. For a thorough understanding of feature relevance, PIP is employed as a supplementary technique to SHAP. It shuffles the values of a single feature in the dataset at random before reevaluating the model's performance with the shuffled feature. By plotting the decrease in model performance against the features, a PIP plot is generated to rank the features based on their importance.
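A minimal sketch of this shuffling procedure with scikit-learn's `permutation_importance` is given below; the ROC AUC scoring and the number of repeats are our assumptions.

```python
import numpy as np
from sklearn.inspection import permutation_importance

def rank_features(model, X_test, y_test, feature_names):
    """Rank features by the drop in ROC AUC when each one is shuffled."""
    result = permutation_importance(model, X_test, y_test,
                                    scoring="roc_auc", n_repeats=10,
                                    random_state=42)
    order = np.argsort(result.importances_mean)[::-1]
    return [(feature_names[i], result.importances_mean[i]) for i in order]
```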
ICE plots can be used to create more localized explanations for a single individual [15]. Based on this argument, we further used the ICE plot to interpret the feature importance of the trained model.
Fig. 10 displays the ICE plots of features 'V14' and 'V17'. Each gray line represents the predicted probability of the positive class for a single observation as the features are varied. As the feature value increases, the predicted probability increases, leading to the "waterfall" shape of the plot. The shape of each ICE curve reflects how the model's predicted probabilities for the positive class change as the features vary for each individual observation. The curves take a sigmoid-like shape, with the probability increasing or decreasing sharply at certain points, indicating that changes in the feature have a significant effect on the predicted probabilities. The sigmoid curve represents nonlinearities in the relationship between the features and the target variable.

Fig. 8: Latent vector samples: Variational Auto Encoder

Fig. 9: Feature importance plot

Fig. 10: ICE plots of feature V17 & V14
The output of the model is not simply the result of a single algorithm, but a combination of the outputs of all the models. So the question of explainability arises here, which is why we have implemented the model-agnostic SHAP framework into the architecture.
## Conclusions
The SVEAD framework is a powerful tool for anomaly detection, achieving high performance while maintaining interpretability and explainability. It uses diverse base models and ensemble stacking to optimize collective performance. The pre-processing stage includes SMOTETomek and VAE to balance imbalanced classes and extract relevant features. The VAE component provides a low-dimensional representation of input data, making it easier to understand. The ensemble model is analyzed individually to identify key features for anomaly detection. However, for large datasets, the complexity, ensemble methods, and SHapley Additive exPlanations values may result in processing overhead and scalability concerns.
|
2303.17827 | A quantitative central limit theorem for Poisson horospheres in high
dimensions | Consider a stationary Poisson process of horospheres in a $d$-dimensional
hyperbolic space. In the focus of this note is the total surface area these
random horospheres induce in a sequence of balls of growing radius $R$. The
main result is a quantitative, non-standard central limit theorem for these
random variables as the radius $R$ of the balls and the space dimension $d$
tend to infinity simultaneously. | Zakhar Kabluchko, Daniel Rosen, Christoph Thäle | 2023-03-31T06:49:15Z | http://arxiv.org/abs/2303.17827v2 | # A quantitative central limit theorem for Poisson horospheres in high dimensions
###### Abstract
Consider a stationary Poisson process of horospheres in a \(d\)-dimensional hyperbolic space. In the focus of this note is the total surface area these random horospheres induce in a sequence of balls of growing radius \(R\). The main result is a quantitative, non-standard central limit theorem for these random variables as the radius \(R\) of the balls and the space dimension \(d\) tend to infinity simultaneously.
**Keywords:** central limit theorem, horospheres, hyperbolic stochastic geometry, Poisson processes
**MSC:** 52A55, 60D05
## 1 Introduction and main result
The study of random geometric systems in non-Euclidean geometries is a recent and fast growing branch of stochastic geometry. We refer to [1, 2, 3, 4, 5, 6, 7, 9, 10, 12, 15] for selected works on hyperbolic random geometric graphs, random tessellations and random polytopes.
In this note we address an interesting generalization of the Poisson hyperplane process to hyperbolic geometry. The study of Euclidean Poisson hyperplanes is by now classical [11, 14, 16, 19] and was extended in [12] to hyperbolic space, where Poisson processes of totally geodesic hypersurfaces are studied, see also [18] for mean values in the planar case. Even more recently, in [13] it was observed that this model fits into a one-parameter family of so-called _Poisson \(\lambda\)-geodesic hyperplanes_, and the fluctuations of the total hyperbolic surface area of such a process within a sequence of growing balls were examined in detail. The special case we consider here (corresponding to the choice \(\lambda=1\) in [13]) is the _Poisson horosphere process_.
Let us recall some definitions; for more details we refer the reader to [13] and the references cited therein. A _horosphere_ in a \(d\)-dimensional hyperbolic space \(\mathbb{H}^{d}\) is, intuitively speaking, a sphere of infinite radius. More formally, it is a complete totally umbilic hypersurface of constant normal curvature \(1\). For concreteness, in the Poincare ball model of hyperbolic space, horospheres are realized as Euclidean spheres tangent to the boundary, see Figure 1. We denote by \(\mathcal{H}\) the space of all horospheres in \(\mathbb{H}^{d}\). This space admits a transitive action by the group of hyperbolic isometries and an invariant measure for this action, which is unique up to a multiplicative constant and will be denoted by \(\Lambda\), see [8, 20].
Now, let \(\eta_{d}\) be a Poisson process on \(\mathcal{H}\) with intensity measure \(\Lambda\), see Figure 1 for a simulation in the case \(d=2\). For \(R>0\), we consider the total surface area
\[S_{R,d}:=\sum_{H\in\eta_{d}}\mathcal{H}^{d-1}(H\cap B_{R}^{d})\]
of \(\eta_{d}\) within a hyperbolic ball \(B_{R}^{d}\) around an arbitrary but fixed point in \(\mathbb{H}^{d}\) and hyperbolic radius \(R>0\). Here, \(\mathcal{H}^{d-1}\) stands for the \((d-1)\)-dimensional Hausdorff measure with respect to the
hyperbolic metric. In [13] it was proven that, for a fixed space dimension \(d\), the centred and normalized surface area satisfies a non-standard central limit theorem. Namely, it converges in distribution, as \(R\to\infty\) and after centring and normalization by the standard deviation, to a Gaussian random variable of variance \(\frac{1}{2}\). The main result of the present note extends this in two directions: first, we provide estimates on the rate of convergence. Second, our bounds depend explicitly on the dimension, providing central limit theorems for Poisson horospheres in increasing space dimensions. To measure the distance between two random variables \(X\) and \(Y\) we use the Wasserstein metric, which is defined by
\[d_{\mathrm{Wass}}(X,Y):=\sup\big{|}\mathbb{E}[h(X)]-\mathbb{E}[h(Y)]\big{|},\]
where the supremum is taken over all Lipschitz functions \(h:\mathbb{R}\to\mathbb{R}\) with Lipschitz constant at most one.
Our first result provides a quantitative non-standard limit theorem in a fixed spatial dimension.
**Theorem 1** (Central limit theorem for fixed \(d\)).: _Let \(N_{\frac{1}{2}}\) be a centred Gaussian random variable of variance \(\frac{1}{2}\). Fix \(d\in\mathbb{N}\) and consider the surface functional \(S_{R,d}\). Then there exists a constant \(C>0\) only depending on \(d\) such that_
\[d_{\mathrm{Wass}}\left(\frac{S_{R,d}-\mathbb{E}S_{R,d}}{\sqrt{\mathrm{Var}\,S_ {R,d}}},N_{\frac{1}{2}}\right)\leq C\,R^{-1/2}.\]
**Remark 2**.: We note that the rate of convergence \(R^{-1/2}\) is the same in all space dimensions. The same convergence rate, in all dimensions, is observed in the central limit theorem for the total surface area of Poisson hyperplanes in Euclidean space (see [14], but the result is also a special case of (1) below). Let us remark that this limiting behaviour is in sharp contrast with the cases of \(\lambda\)-geodesic hyperplanes with \(\lambda<1\) (we recall that horospheres correspond to \(\lambda=1\)). As described in [13], in those cases the fluctuations of the surface functional are non-Gaussian in every fixed dimension \(\geq 4\). The geometric distinction between the two cases is that horospheres are _intrinsically Euclidean_, while the intrinsic geometry of other \(\lambda\)-geodesic hyperplanes is hyperbolic.
Our second result concerns the surface functional in high dimensions. We consider an arbitrary
Figure 1: Simulation of a Poisson process of horospheres in the Poincaré disc model for the hyperbolic plane.
sequence \(R=R_{d}\) which satisfies \(R_{d}\to\infty\) as \(d\to\infty\). We denote in this case by
\[S_{d}:=S_{R_{d},d}=\sum_{H\in\eta_{d}}\mathcal{H}^{d-1}(H\cap B_{R_{d}}^{d})\]
the high-dimensional surface area functional. Here we prove the following quantitative non-standard limit theorem.
**Theorem 3** (Central limit theorem for \(d\to\infty\)).: _Let \(N_{\frac{1}{2}}\) be a centred Gaussian random variable of variance \(\frac{1}{2}\). Let \(R=R_{d}\) be a sequence satisfying \(R_{d}\to\infty\) as \(d\to\infty\)._
1. _Suppose that_ \(\limsup_{d\to\infty}(R_{d}-\log d)<+\infty\)_. Then there exists a constant_ \(C>0\) _such that_ \[d_{\mathrm{Wass}}\left(\frac{S_{d}-\mathbb{E}S_{d}}{\sqrt{\operatorname{Var}S_ {d}}},N_{\frac{1}{2}}\right)\leq Ce^{-R_{d}/2}.\]
2. _Suppose that_ \(\limsup_{d\to\infty}(R_{d}-\log d)=+\infty\)_. Then there exists a constant_ \(C>0\) _such that_ \[d_{\mathrm{Wass}}\left(\frac{S_{d}-\mathbb{E}S_{d}}{\sqrt{\operatorname{Var}S_ {d}}},N_{\frac{1}{2}}\right)\leq C\left(\frac{1}{\sqrt{d}\,(R-\log d)}+\frac{ 1}{d\,\sqrt{R-\log d}}\right).\]
_In particular, the surface functional satisfies a non-standard central limit theorem as soon as \(R_{d}\to\infty\)._
For example, taking the radius as \(R_{d}=\alpha\log d\) for \(\alpha>0\), Theorem 3 gives
\[d_{\mathrm{Wass}}\left(\frac{S_{d}-\mathbb{E}S_{d}}{\sqrt{\operatorname{Var}S_ {d}}},N_{\frac{1}{2}}\right)\leq C\begin{cases}d^{-\alpha/2}&:\alpha\leq 1\\ d^{-1/2}(\log d)^{-1}&:\alpha>1.\end{cases}\]
**Remark 4**.:
1. We note that the convergence rate in the first case of Theorem 3 is always worse than \(d^{-1/2}\) (and in particular, worse than in the second case). Indeed, by assumption \(e^{R_{d}}\leq M\,d\) for some \(M>0\) and hence \(e^{-R_{d}/2}\) converges to zero slower than \(d^{-1/2}\).
2. It is natural to ask whether similar bounds hold when the Wasserstein metric is replaced by the Kolmogorov metric \(d_{\mathrm{Kol}}\). For two random variables \(X\) and \(Y\) the latter is defined as \(d_{\mathrm{Kol}}(X,Y):=\sup_{s\in\mathbb{R}}|\mathbb{P}(X\leq s)-\mathbb{P}(Y \leq s)|\). For any random variable \(X\) one has the inequality \(d_{\mathrm{Kol}}(X,N_{\frac{1}{2}})\leq\left[\frac{2}{\sqrt{\pi}}d_{\mathrm{ Wass}}\big{(}X,N_{\frac{1}{2}}\big{)}\right]^{1/2}\), see e.g. [17, Proposition 1.2.(2)]. In conjunction with Theorems 1 and 3 this provides Kolmogorov bounds for the normalized random variables \(S_{R,d}\), but these are likely not to be optimal. So far, we were unable to prove bounds on the Kolmogorov distance of the same order as for the Wasserstein distance and leave this as an open problem for future research.
3. In the high-dimensional regime, that is if \(d\to\infty\) and \(R=R_{d}\), it is also natural to ask for sharp conditions on \(R\) which ensure that the centred and normalized total surface area is asymptotically Gaussian. Theorem 3 shows that \(R_{d}\to\infty\) is sufficient, but for fixed \(R\) our bounds do not yield a central limit theorem for the surface functional. We have to leave this as an open problem as well.
4. The reader might be interested in a comparison with the Euclidean case, where one considers the total surface area \(S_{R,d,e}\) of a stationary and isotropic Poisson process on the space of hyperplanes in \(\mathbb{R}^{d}\) within a centred ball of radius \(R\). In this situation it holds that \[d_{\mathrm{Wass}}\Big{(}\frac{S_{R,d,e}-\mathbb{E}S_{R,d,e}}{\sqrt{ \operatorname{Var}S_{R,d,e}}},N\Big{)}\leq C\,d^{1/4}R^{-1/2}\] (1) for some absolute constant \(C>0\) and where \(N\) denotes a standard Gaussian random variable. Since we could not locate this result in the literature, we provide an argument in Section 4. In particular, the bound shows that if \(d\to\infty\) we need that \(R\) grows faster than \(\sqrt{d}\) in order to deduce a central limit theorem.
## 2 Proof of the main results
Before proving Theorems 1 and 3, we need to recall some preliminaries. First we need an explicit description of the invariant measure \(\Lambda\) on the space \(\mathcal{H}\) of horospheres. We fix an origin \(\mathbf{o}\in\mathbb{H}^{d}\) and parametrize an element \(H\in\mathcal{H}\) by the pair \((s,u)\in\mathbb{R}\times\mathbb{S}^{d-1}\), where \(s\in\mathbb{R}\) is the signed distance from \(H\) to \(\mathbf{o}\) (with \(s>0\) if \(\mathbf{o}\) lies on the convex side of \(H\), and negative otherwise), and \(u\in\mathbb{S}^{d-1}\) is the unit vector (in the tangent space \(T_{\mathbf{o}}\mathbb{H}^{d}\)) along the geodesic passing through \(\mathbf{o}\) and intersecting \(H\) orthogonally, while pointing outside of the convex side. The invariant measure is then defined by the relation
\[\int_{\mathcal{H}}f(H)\,\Lambda(\mathrm{d}H)=\int_{\mathbb{R}}\int_{\mathbb{S} ^{d-1}}f(H(s,u))\,e^{-(d-1)s}\,\mathrm{d}u\,\mathrm{d}s, \tag{2}\]
where \(f:\mathcal{H}\to\mathbb{R}\) is a non-negative measurable function and \(H(s,u)\) stands for the unique element of \(\mathcal{H}\) parametrized by \((s,u)\) as just described. Here \(\mathrm{d}s\) and \(\mathrm{d}u\) stand for the Lebesgue measure on \(\mathbb{R}\) and the normalized spherical Lebesgue measure on \(\mathbb{S}^{d-1}\), respectively.
We will also need the following geometric computation of the volume of the intersection \(H(s)\cap B_{R}^{d}\), where \(H(s)\subset\mathbb{H}^{d}\) is a horosphere of signed distance \(s\in\mathbb{R}\) from the origin \(\mathbf{o}\). Observe that this notation is justified, by rotational symmetry around \(\mathbf{o}\). In [13, Proposition 4.1] it is proven that this intersection is empty for \(|s|\geq R\), and otherwise satisfies
\[\mathcal{H}^{d-1}(H(s)\cap B_{R}^{d})=\kappa_{d-1}\big{[}2e^{s}(\cosh R-\cosh s )\big{]}^{\frac{d-1}{2}}, \tag{3}\]
where for an integer \(\ell\geq 1\) we write \(\kappa_{\ell}\) for the volume of the \(\ell\)-dimensional Euclidean unit ball.
We also mention two elementary properties of the Wasserstein metric that will be useful for us. For any three integrable random variables \(X,Y\) and \(Z\) one has
\[d_{\mathrm{Wass}}(X+Y,Z)\leq d_{\mathrm{Wass}}(X,Z)+\mathbb{E}|Y|. \tag{4}\]
Moreover for any \(\alpha>0\) one has
\[d_{\mathrm{Wass}}(\alpha X,\alpha Y)=\alpha\,d_{\mathrm{Wass}}(X,Y). \tag{5}\]
First we reduce the normal approximation bound to the following integral estimate. Define
\[J_{R,d}:=\int_{0}^{R}\left(1-\frac{\cosh s-1}{\cosh R-1}\right)^{d-1}\, \mathrm{d}s.\]
**Proposition 5**.: _The following bound holds true for all \(d\geq 1\) and \(R>0\):_
\[d_{\mathrm{Wass}}\left(\frac{S_{R,d}-\mathbb{E}S_{R,d}}{\sqrt{\mathrm{Var}\,S _{R,d}}},N_{\frac{1}{2}}\right)\leq\sqrt{2}\left(\frac{1}{\sqrt{d-1}\,J_{R,d}} +\frac{2}{\left(d-1\right)\sqrt{J_{R,d}}}\right).\]
Proof.: In view of the representation (2) of the invariant measure \(\Lambda\) and the expression (3) for the intersection volume, we have that
\[S_{R,d}=\sum_{s\in\xi}f_{R}(s), \tag{6}\]
where \(\xi\) is an inhomogeneous Poisson process on \(\mathbb{R}\) with density \(s\mapsto e^{-(d-1)s}\), and the function \(f_{R}\) is defined by
\[f_{R}(s)=\begin{cases}\kappa_{d-1}\big{[}2e^{s}(\cosh R-\cosh s)\big{]}^{ \frac{d-1}{2}}&:|s|\leq R,\\ 0&:\text{else}.\end{cases} \tag{7}\]
We decompose the random variable \(S_{R,d}\) into a 'positive' and 'negative' part as follows:
\[S_{R,d}=S_{R,d}^{+}+S_{R,d}^{-},\]
where
\[S_{R,d}^{+}:=\sum_{\begin{subarray}{c}s\in\xi\\ s>0\end{subarray}}f_{R}(s)\qquad\text{and}\qquad S_{R,d}^{-}:=\sum_{ \begin{subarray}{c}s\in\xi\\ s<0\end{subarray}}f_{R}(s).\]
We then have
\[\frac{S_{R,d}-\mathbb{E}S_{R,d}}{\sqrt{\operatorname{Var}S_{R,d}}}=\frac{S_{R, d}^{+}-\mathbb{E}S_{R,d}^{+}}{\sqrt{\operatorname{Var}S_{R,d}}}+\frac{S_{R,d}^{ -}-\mathbb{E}S_{R,d}^{-}}{\sqrt{\operatorname{Var}S_{R,d}}}.\]
Observe that
\[\operatorname{Var}S_{R,d}^{+}=\operatorname{Var}S_{R,d}^{-}=\frac{1}{2} \,\operatorname{Var}S_{R,d}, \tag{8}\]
which follows from the evenness of the integrand in the variance representation
\[\operatorname{Var}S_{R,d}=\int_{\mathbb{R}}f_{R}^{2}(s)e^{-(d-1)s}\,\mathrm{d }s=2^{d-1}\kappa_{d-1}^{2}\int_{-R}^{R}(\cosh R-\cosh s)^{d-1}\,\mathrm{d}s,\]
where we used (2) and (3). We deduce that
\[d_{\text{Wass}}\left(\frac{S_{R,d}-\mathbb{E}S_{R,d}}{\sqrt{\operatorname{ Var}S_{R,d}}},N_{\frac{1}{2}}\right)\leq 2^{-\frac{1}{2}}\,d_{\text{Wass}}\left( \frac{S_{R,d}^{-}-\mathbb{E}S_{R,d}^{-}}{\sqrt{\operatorname{Var}S_{R,d}^{- }}},N\right)+2^{-\frac{1}{2}}\,\mathbb{E}\left|\frac{S_{R,d}^{+}-\mathbb{E}S_ {R,d}^{+}}{\sqrt{\operatorname{Var}S_{R,d}^{+}}}\right|, \tag{9}\]
where \(N\) is a standard Gaussian random variable, and where we have used (4), (5) together with the fact that \(2^{-\frac{1}{2}}N\) has the same distribution as our target random variable \(N_{\frac{1}{2}}\).
To control the first summand in (9), we apply the following normal approximation bound, which is a special case of a general bound for so-called Poisson \(U\)-statistics [16, Theorem 4.7]. In our case it states that
\[d_{\text{Wass}}\left(\frac{S_{R,d}^{-}-\mathbb{E}S_{R,d}^{-}}{\sqrt{ \operatorname{Var}S_{R,d}^{-}}},N\right)\leq 2\frac{\sqrt{\operatorname{cum} _{4}(S_{R,d}^{-})}}{\operatorname{Var}S_{R,d}^{-}},\]
where \(\operatorname{cum}_{4}(S_{R,d}^{-})\) denotes the fourth cumulant of \(S_{R,d}^{-}\). The second summand in (9) is easily bounded by (noting that \(S_{R,d}^{+}\geq 0\))
\[\frac{\mathbb{E}|S_{R,d}^{+}-\mathbb{E}S_{R,d}^{+}|}{\sqrt{ \operatorname{Var}S_{R,d}^{+}}}\leq\frac{2\,\mathbb{E}S_{R,d}^{+}}{\sqrt{ \operatorname{Var}S_{R,d}^{+}}}.\]
This gives
\[d_{\text{Wass}}\left(\frac{S_{R,d}-\mathbb{E}S_{R,d}}{\sqrt{ \operatorname{Var}S_{R,d}}},N_{\frac{1}{2}}\right)\leq 2^{1/2}\frac{\sqrt{ \operatorname{cum}_{4}(S_{R,d}^{-})}}{\operatorname{Var}S_{R,d}^{-}}+2^{1/2} \frac{\mathbb{E}S_{R,d}^{+}}{\operatorname{Var}S_{R,d}^{+}}. \tag{10}\]
If we denote further \(C_{d}:=2^{(d-1)/2}\kappa_{d-1}\) and define
\[I_{1} :=\int_{0}^{R}(\cosh R-\cosh s)^{\frac{d-1}{2}}e^{-\frac{d-1}{2}s }\,\mathrm{d}s,\] \[I_{2} :=\int_{0}^{R}(\cosh R-\cosh s)^{d-1}\,\mathrm{d}s,\] \[I_{4} :=\int_{0}^{R}(\cosh R-\cosh s)^{2(d-1)}e^{-(d-1)s}\,\mathrm{d}s,\]
then we compute, using (6) and (7), that
\[\mathbb{E}S_{R,d}^{+}=C_{d}I_{1},\qquad\operatorname{Var}(S_{R,d}^{\pm})=C_{d}^{2 }I_{2},\qquad\operatorname{cum}_{4}(S_{R,d}^{-})=C_{d}^{4}I_{4}.\]
Here the expectation and variance are computed with the help of the (multivariate) Mecke equation for Poisson processes, and the fourth cumulant using [14, Corollary 1]. Plugging this into (10) finally gives
\[d_{\operatorname{Wass}}\left(\frac{S_{R,d}-\mathbb{E}S_{R,d}}{\sqrt{ \operatorname{Var}S_{R,d}}},N_{\frac{1}{2}}\right)\leq\sqrt{2}\left[\frac{ \sqrt{I_{4}}}{I_{2}}+\frac{I_{1}}{\sqrt{I_{2}}}\right]. \tag{11}\]
Now we use the following trivial estimates for \(I_{1}\) and \(I_{4}\):
\[I_{1} \leq(\cosh R_{d}-1)^{\frac{d-1}{2}}\cdot\frac{2}{d-1},\] \[I_{4} \leq(\cosh R_{d}-1)^{2(d-1)}\cdot\frac{1}{d-1}.\]
Moreover, for \(I_{2}\) we write:
\[I_{2} =\int_{0}^{R}(\cosh R-\cosh s)^{d-1}\,\mathrm{d}s\] \[=(\cosh R-1)^{d-1}\int_{0}^{R}\left(1-\frac{\cosh s-1}{\cosh R-1} \right)^{d-1}\,\mathrm{d}s\] \[=(\cosh R-1)^{d-1}J_{R,d}.\]
Now plugging all this back into (11) leads to the desired estimate
\[d_{\operatorname{Wass}}\left(\frac{S_{R,d}-\mathbb{E}S_{R,d}}{\sqrt{ \operatorname{Var}S_{R,d}}},N_{\frac{1}{2}}\right)\leq\sqrt{2}\left(\frac{1}{ \sqrt{d-1}\,J_{R,d}}+\frac{2}{(d-1)\,\sqrt{J_{R,d}}}\right)\]
and completes the proof.
Our next task therefore is to estimate \(J_{R,d}\). This is achieved by the following result.
**Lemma 6**.:
(a) _Suppose that_ \(d\) _is fixed. Then there exists a constant_ \(C>0\) _such that_ \[J_{R,d}\geq C\,R.\]
(b) _Consider the case where_ \(d\to\infty\) _and_ \(R=R_{d}\) _is some sequence satisfying_ \(R_{d}\to\infty\) _as_ \(d\to\infty\)_. Denote in this case_ \(J_{d}:=J_{R_{d},d}\)_._
  1. _Suppose that_ \(\limsup_{d\to\infty}(R_{d}-\log d)<+\infty\)_. Then there exists a constant_ \(C>0\) _such that_ \[J_{d}\geq C\,\frac{e^{R/2}}{\sqrt{d}}.\]
  2. _Suppose that_ \(\limsup_{d\to\infty}(R_{d}-\log d)=+\infty\)_. Then there exists a constant_ \(C>0\) _such that_ \[J_{d}\geq C(R-\log d).\]
We postpone the proof of Lemma 6 until Section 3, and first use it to deduce our main results.
Proof of Theorem 1 and Theorem 3.: The theorems follow upon combining Proposition 5 with the integral estimates given by Lemma 6. First suppose that \(d\) is fixed and combine Proposition 5 with Lemma 6 (a). This gives
\[d_{\mathrm{Wass}}\left(\frac{S_{R,d}-\mathbb{E}S_{R,d}}{\sqrt{\mathrm{Var}\,S_{R,d}}},N_{\frac{1}{2}}\right)\leq\widetilde{C}\Big{(}\frac{1}{R}+\frac{1}{ \sqrt{R}}\Big{)}\leq C\,R^{-1/2}\]
for some constants \(\widetilde{C},C>0\) only depending on \(d\). This proves Theorem 1.
Next suppose that \(d\to\infty\) and that \(R=R_{d}\). In the case where \(\limsup_{d\to\infty}(R-\log d)<+\infty\), we obtain a bound of the form
\[d_{\mathrm{Wass}}\left(\frac{S_{R,d}-\mathbb{E}S_{R,d}}{\sqrt{\mathrm{Var}\,S_ {R,d}}},N_{\frac{1}{2}}\right)\leq C\left(e^{-R_{d}/2}+d^{-3/4}e^{-R_{d}/4} \right).\]
Since by assumption, \(e^{R_{d}}\leq Md\) for some \(M>0\), the first term on the right-hand side is dominant, leading to the bound appearing in the first part of the theorem. In the case where \(\limsup_{d\to\infty}(R-\log d)=+\infty\), we obtain immediately the asserted bound in the second part.
## 3 Bounding \(J_{R,d}\)
Here we bound the integral \(J_{R,d}\), which we recall is given by
\[J_{R,d}=\int_{0}^{R}\left(1-\frac{\cosh s-1}{\cosh R-1}\right)^{d-1}\,\mathrm{d}s.\]
Proof of Lemma 6.: We prove only part \((b)\), regarding the case \(d\to\infty\). The proof of part \((a)\) (the case of fixed \(d\)) is very similar to case 2 below, and is omitted. First we write, with the help of the hyperbolic identity \(\cosh x-1=2\sinh^{2}\frac{x}{2}\):
\[J_{d}=\int_{0}^{R}\left(1-\frac{\sinh^{2}(s/2)}{\sinh^{2}(R/2)}\right)^{d-1}\, \mathrm{d}s,\]
where we recall our notational convention \(J_{d}=J_{R_{d},d}\). Now we make the substitution \(s\mapsto x\), where
\[\frac{\sinh(s/2)}{\sinh(R/2)}=\frac{x}{\sqrt{d}}.\]
Then
\[\frac{1}{\sqrt{d}}\mathrm{d}x=\frac{\cosh(s/2)}{\sinh(R/2)}\,\frac{\mathrm{d} s}{2}=\sqrt{1+\frac{\sinh^{2}(R/2)}{d}x^{2}}\,\frac{\mathrm{d}s}{2\sinh(R/2)}.\]
We denote also \(\rho_{d}:=\frac{\sinh(R/2)}{\sqrt{d}}\). Note that since \(R_{d}\to\infty\), one has the asymptotic equivalence
\[\rho_{d}\sim\frac{1}{2}\exp\left(\frac{R-\log d}{2}\right),\qquad d\to\infty,\]
meaning that the ratio of the left and the right side tends to \(1\) as \(d\to\infty\). This gives
\[J_{d}=2\rho_{d}\int_{0}^{\sqrt{d}}\left(1-\frac{x^{2}}{d}\right)^{d-1}\frac{ \mathrm{d}x}{\sqrt{1+\rho_{d}^{2}x^{2}}}.\]
We now consider the two cases in the lemma separately.
1. First suppose that \(\limsup_{d\to\infty}(R_{d}-\log d)<+\infty\). Then by the above, \[L:=\limsup_{d\to\infty}\rho_{d}<+\infty.\] Fatou's lemma now gives \[\liminf_{d\to\infty}(\rho_{d}^{-1}J_{d})\geq 2\int_{0}^{\infty}e^{-x^{2}}\frac{ \mathrm{d}x}{\sqrt{1+L^{2}x^{2}}}.\] Note that the latter integral converges to a strictly positive limit (in fact, a computation with the aid of Mathematica gives \(\frac{1}{2L}\cdot e^{\frac{1}{2L^{2}}}K_{0}(\frac{1}{2L^{2}})\), where \(K_{0}\) is the modified Bessel function of the second kind). Therefore, there is a constant \(C>0\) so that \[J_{d}\geq C\exp\left(\frac{R-\log d}{2}\right)=C\,\frac{e^{R/2}}{\sqrt{d}}.\]
2. Suppose now that \(\limsup_{d\to\infty}(R_{d}-\log d)=+\infty\). We then compute with the help of the substitution \(\rho_{d}x\mapsto y\): \[J_{d} \geq 2\rho_{d}\int_{0}^{1}\left(1-\frac{x^{2}}{d}\right)^{d-1} \frac{\mathrm{d}x}{\sqrt{1+\rho_{d}^{2}x^{2}}}\] \[\geq 2\left(1-\frac{1}{d}\right)^{d-1}\int_{0}^{\rho_{d}}\frac{ \mathrm{d}y}{\sqrt{1+y^{2}}}\] \[\geq\frac{2}{e}\mathrm{arcsinh}\,\rho_{d}\] \[=\frac{1}{e}\left(R-\log d+O(1)\right),\] where \(O(1)\) stands for a sequence which is bounded in \(d\). Therefore, there is some constant \(C>0\) such that \[J_{d}\geq C(R-\log d).\]
This completes the argument.
## 4 The Euclidean case
Let \(\eta_{d}\) be a stationary and isotropic Poisson process on the space \(\mathbb{A}(d,d-1)\) of affine hyperplanes in \(\mathbb{R}^{d}\) with intensity \(1\). Its intensity measure \(\Lambda_{e}\) is then given by
\[\int_{\mathbb{A}(d,d-1)}f(H)\,\Lambda_{e}(\mathrm{d}H)=\int_{\mathbb{R}}\int_{ \mathbb{S}^{d-1}}f(H_{e}(s,u))\,\mathrm{d}u\,\mathrm{d}s,\]
where \(f:\mathbb{A}(d,d-1)\to\mathbb{R}\) is a non-negative measurable function and \(H_{e}(s,u)\) stands for the unique hyperplane in \(\mathbb{R}^{d}\) with signed distance \(s\) from \(\mathbf{o}\) and unit normal vector \(u\). By
\[S_{R,d,e}:=\sum_{H\in\eta_{d}}\mathcal{H}_{e}^{d-1}(H\cap B_{R,e}^{d})\]
we denote the total surface area induced by the hyperplanes of \(\eta_{d}\) within a centred Euclidean ball \(B_{R,e}^{d}\) of radius \(R>0\), where the Hausdorff measure \(\mathcal{H}_{e}^{d-1}\) is understood with respect to the Euclidean metric. Using [16, Theorem 4.7] we find that
\[d_{\mathrm{Wass}}\Big{(}\frac{S_{R,d,e}-\mathbb{E}S_{R,d,e}}{\sqrt{\mathrm{Var }\,S_{R,d,e}}},N\Big{)}\leq 2\frac{\sqrt{\mathrm{cum}_{4}(S_{R,d,e})}}{ \mathrm{Var}\,S_{R,d,e}} \tag{12}\]
with a standard Gaussian random variable \(N\). The variance and the fourth cumulant of \(S_{R,d,e}\) are given explicitly by
\[\operatorname{Var}S_{R,d,e} =\int_{\mathbb{A}(d,d-1)}\mathcal{H}_{e}^{d-1}(H\cap B_{R,e}^{d})^{2 }\,\Lambda_{e}(\mathrm{d}H),\] \[\operatorname{cum}_{4}(S_{R,d,e}) =\int_{\mathbb{A}(d,d-1)}\mathcal{H}_{e}^{d-1}(H\cap B_{R,e}^{d})^{ 4}\,\Lambda_{e}(\mathrm{d}H).\]
Denoting by \(\kappa_{d-1}\) the \((d-1)\)-volume of the \((d-1)\)-dimensional Euclidean unit ball, we have that
\[\operatorname{Var}S_{R,d,e} =2\kappa_{d-1}^{2}\int_{0}^{R}(R^{2}-s^{2})^{d-1}\,\mathrm{d}s\] \[=2\kappa_{d-1}^{2}R^{2d-1}\int_{0}^{1}(1-t^{2})^{d-1}\,\mathrm{d}t\] \[=\frac{\pi^{d-\frac{1}{2}}\Gamma(d)R^{2d-1}}{\Gamma(\frac{d}{2}+ \frac{1}{2})^{2}\Gamma(d+\frac{1}{2})},\]
where we applied the substitution \(s\mapsto Rt\). The same computation also leads to an explicit expression for \(\operatorname{cum}_{4}(S_{R,d,e})\):
\[\operatorname{cum}_{4}(S_{R,d,e}) =2\kappa_{d-1}^{4}\int_{0}^{R}(R^{2}-s^{2})^{2(d-1)}\,\mathrm{d}s\] \[=2\kappa_{d-1}^{4}R^{4d-3}\int_{0}^{1}(1-t^{2})^{2(d-1)}\,\mathrm{ d}t\] \[=\frac{\pi^{2d-\frac{3}{2}}\Gamma(2d-1)R^{4d-3}}{\Gamma(\frac{d}{ 2}+\frac{1}{2})^{4}\Gamma(2d-\frac{1}{2})}.\]
In conjunction with (12) this gives
\[d_{\mathrm{Wass}}\Big{(}\frac{S_{R,d,e}-\mathbb{E}S_{R,d,e}}{\sqrt{ \operatorname{Var}S_{R,d,e}}},N\Big{)}\leq\frac{2}{\pi^{1/4}}\frac{\Gamma(d+ \frac{1}{2})}{\Gamma(d)}\sqrt{\frac{\Gamma(2d-1)}{\Gamma(2d-\frac{1}{2})}}\,R ^{-1/2}.\]
Using the well-known asymptotics for quotients of gamma functions, as \(d\to\infty\) we arrive at
\[d_{\mathrm{Wass}}\Big{(}\frac{S_{R,d,e}-\mathbb{E}S_{R,d,e}}{\sqrt{ \operatorname{Var}S_{R,d,e}}},N\Big{)}\leq C\,d^{1/4}R^{-1/2}\]
for some absolute constant \(C>0\).
### Acknowledgement
We wish to thank Matthias Schulte (Hamburg) for motivating us to study the problem addressed in this paper.
DR and CT were supported by the German Research Foundation (DFG) via CRC/TRR 191 _Symplectic Structures in Geometry, Algebra and Dynamics_. ZK and CT were supported by the German Research Foundation (DFG) via the Priority Program SPP 2265 _Random Geometric Systems_. ZK was also supported by the German Research Foundation (DFG) under Germany's Excellence Strategy EXC 2044 - 390685587 _Mathematics Münster: Dynamics - Geometry - Structure_. |
2307.00106 | Distance Functions and Normalization Under Stream Scenarios | Data normalization is an essential task when modeling a classification
system. When dealing with data streams, data normalization becomes especially
challenging since we may not know in advance the properties of the features,
such as their minimum/maximum values, and these properties may change over
time. We compare the accuracies generated by eight well-known distance
functions in data streams without normalization, normalized considering the
statistics of the first batch of data received, and considering the previous
batch received. We argue that experimental protocols for streams that consider
the full stream as normalized are unrealistic and can lead to biased and poor
results. Our results indicate that using the original data stream without
applying normalization, and the Canberra distance, can be a good combination
when no information about the data stream is known beforehand. | Eduardo V. L. Barboza, Paulo R. Lisboa de Almeida, Alceu de Souza Britto Jr, Rafael M. O. Cruz | 2023-06-30T19:46:20Z | http://arxiv.org/abs/2307.00106v2 | # Distance Functions and Normalization Under Stream Scenarios
###### Abstract
Data normalization is an essential task when modeling a classification system. When dealing with data streams, data normalization becomes especially challenging since we may not know in advance the properties of the features, such as their minimum/maximum values, and these properties may change over time. We compare the accuracies generated by eight well-known distance functions in data streams without normalization, normalized considering the statistics of the first batch of data received, and considering the previous batch received. We argue that experimental protocols for streams that consider the full stream as normalized are unrealistic and can lead to biased and poor results. Our results indicate that using the original data stream without applying normalization, and the Canberra distance, can be a good combination when no information about the data stream is known beforehand.
data stream, distance function, data normalization, machine learning
## I Introduction
When dealing with data streams, we face the scenario where new instances arrive over time. The stream size and the rate at which new instances arrive are usually unknown. Under such circumstances, classifiers are often updated over time since, at the beginning of the stream, only a few samples covering a small portion of the classification space are known.
Data normalization in such cases is a challenge if we do not have guarantees about the range of values generated for each feature. In other words, how can we apply normalization techniques, such as the min-max, in a possibly infinite stream without knowing beforehand the minimum/maximum values of each feature?
It is essential to consider these points when dealing with classifiers that depend on data normalization or in the presence of concept drifts, where the range of the features (besides other properties) may change over time [1]. Some authors propose approaches to deal with streams that rely on normalizing the entire data stream or proceed to execute experimental protocols using normalized datasets - e.g., most datasets currently available to test streams at the MOA website [2] are normalized. This may be unrealistic and lead to data leakage problems [3].
In this paper, we evaluate eight distance functions under different stream scenarios to give light on the following Research Questions:
* Does the normalization policy influence the classifier's competence in data streams?
* Does the distance function matter when classifying data streams?
The answers to these questions are based on a robust experimental protocol composed of synthetic and real-world datasets. We confirm that the normalization of the entire stream can sometimes lead to worse results. The experiments have shown that when the classifier is retrained using the most recent data, using the original data stream without normalization combined with the Canberra distance function can provide more realistic and better results. Moreover, distances such as the Cosine and Standardized Euclidean can be more sensitive to feature changes over time than Manhattan and Canberra distances.
The remaining of this paper is structured into four sections. Section II introduces the distance functions and the min-max normalization strategy evaluated in this paper. Section III presents the related works. Section IV presents our experimental protocol, the test results, and a discussion about the observed results. Finally, Section V brings our conclusion and perspectives on future work.
## II Definitions
### _Min-Max normalization_
Throughout this paper, we employ the _min-max_ normalization, which is one of the most common normalization techniques, as it is simple to compute and to understand. The _min-max_ is defined as
\[x_{ij}=\frac{x_{ij}-min_{j}}{max_{j}-min_{j}} \tag{1}\]
where \(j\) is the index of the \(jth\) feature of the instance \(x_{i}\). The \(max_{j}\) and \(min_{j}\) are the maximum and minimum values of the \(jth\) feature - these values are often found by scanning the entire training set. A drawback of the min-max normalization is that it is sensitive to outliers, as it relies on minimum and maximum values.
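A minimal sketch of Equation (1), with the statistics estimated column-wise from a reference batch, could look as follows; the guard against constant features is our addition.

```python
import numpy as np

def min_max_fit(X: np.ndarray):
    """Column-wise min/max statistics, estimated from a reference batch."""
    return X.min(axis=0), X.max(axis=0)

def min_max_transform(X: np.ndarray, mins: np.ndarray, maxs: np.ndarray) -> np.ndarray:
    """Apply Equation (1); values outside the reference range fall outside [0, 1]."""
    rng = np.where(maxs - mins == 0, 1.0, maxs - mins)  # guard constant features
    return (X - mins) / rng
```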
### _Distance Functions_
We can define a distance function as a mathematical measure that quantifies how far apart two objects are [4]. Consider two instances \(x=[x_{1},x_{2},\ldots,x_{n}]\) and \(y=[y_{1},y_{2},\ldots,y_{n}]\), where \(x_{i}\) and \(y_{i}\) is one of the \(n\) features of \(x\) or \(y\), respectively. Finding a representative distance between \(x\) and \(y\) can be challenging if we consider that different features may lie in different ranges (non-normalized data), the presence of categorical data, missing points, computational cost, etc.
In this paper, we consider only distance functions that deal with numerical data and assume that no missing features are present. Table I contains a list of the distance functions considered in this paper, where \(d(x,y)\) is the distance between the instances \(x\) and \(y\).
Figure 1 shows an example of the distance functions for two points \(x\) and \(y\) in a 2-dimensional space. The Euclidean Distance is the most intuitive distance metric between two points, as it calculates a straight line between them. The Manhattan Distance, also known as taxicab geometry, considers a straight route between the points. It calculates the sum of the absolute differences between the features of \(x\) and \(y\). The Chebyshev Distance considers the maximum absolute difference between the features.
The Euclidean, Manhattan, and Chebyshev distance functions belong to the Minkowski family [4], where \(p=1\) corresponds to the Manhattan Distance and \(p=2\) to the Euclidean Distance. When \(p\) tends to infinity, it approaches the Chebyshev Distance. Choosing the right value for the parameter \(p\) in the Minkowski Distance can also influence performance [5, 6]. We use \(p=1.5\) in our experiments since it is a middle term between the Euclidean and Manhattan distances.
The Cosine distance takes into account the angle between data points instead of the distance between them. The Mahalanobis Distance uses a covariance matrix \(C\) when calculating the distance - i.e., it takes into account the relation between the features. If the matrix \(C\) in the Mahalanobis distance is the identity matrix, the features are treated as unrelated, and the Mahalanobis distance reduces to the Euclidean Distance. The Standardized Euclidean is the same as the Euclidean Distance, but it divides the difference in features by the variance \(V\) of the data. Finally, the Canberra distance divides the absolute difference of the features by the sum of their absolute values.
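For illustration, all of these distance functions are available in SciPy; the sketch below evaluates them on two toy points. The identity matrix and unit variances passed to the Mahalanobis and Standardized Euclidean distances are placeholders for statistics that would normally be estimated from the data.

```python
import numpy as np
from scipy.spatial import distance

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 0.0, 4.0])

d_euclidean = distance.euclidean(x, y)
d_manhattan = distance.cityblock(x, y)
d_chebyshev = distance.chebyshev(x, y)
d_minkowski = distance.minkowski(x, y, p=1.5)  # middle term used in this paper
d_cosine = distance.cosine(x, y)
d_canberra = distance.canberra(x, y)
d_mahalanobis = distance.mahalanobis(x, y, np.eye(3))  # inverse covariance
d_seuclidean = distance.seuclidean(x, y, np.ones(3))   # per-feature variances
```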
## III Related Work
The authors in [7] show some problems related to unrealistic scenarios when modeling streams. The authors demonstrate that commonly used benchmarks in state-of-the-art datasets contain a high serial dependence. Thus experimental protocols that rely on the Independent and Identically Distributed (i.i.d) assumption may lead to biased results.
When dealing with data streams, authors often use some technique to normalize the data stream using a fixed-size window. The window is moved when new data is available, and the data is normalized considering the statistics of the current window [8, 9].
The authors in [8] use disjoint sliding windows to estimate the global min-max values for normalization. They show that their adaptive normalization technique got better results than other normalization methods such as min-max normalization, z-score, decimal scaling, and min-max with a sliding window in the tested datasets. To save computational resources, the authors in [9] propose a technique to update the normalization only when the statistics in the current and previous windows are above a specified threshold. They compared their proposed method using the min-max scaler using different approaches for windowing in data normalization. They analyzed a base
Fig. 1: Distance Functions in a 2-dimensional Space.
policy where the data range is known for the whole dataset and compared the error between different policies. Their method got the least root mean squared error to this base policy.
In [10], authors proposed Adaptive Standardization and Rescaling Normalization (ASR-Norm), an adaptive normalization method where statistics for standardization and rescaling are learned through neural networks. It outperforms Batch Normalization, Instance Normalization, and Switchable Normalization.
Methods that utilize distance functions to deal with data streams, like in [11, 12], may be impacted by how we deal with data normalization. Authors in [13] tested some Machine Learning algorithms with five different scaling techniques and affirmed that the chosen scaling technique influences the performance and the best one changes with the dataset used.
In [14], the authors assessed the performance of different distance functions using a k-Nearest Neighbors (k-NN) for classifying stars. In their experiment, the Cosine distance function got the best accuracy with \(k=9\). Authors in [15] analyzed three different distance functions for Distance-Weighted k-NN: Heterogeneous Euclidean-Overlap metric, Heterogeneous Euclidean-VDM metric, and Heterogeneous Manhattan-Overlap metric The authors did not find a significant difference between them.
Authors in [16] tested different \(k\) values and distance functions for k-NN on classifying emotional electroencephalogram between stroke and normal people, and got the best results with the Manhattan distance. They also concluded that the distance functions have different performances depending on the situation. In [17], different normalization techniques, distance functions, and k-NN configurations were analyzed to classify fake news. The combination of Robust Scaler, Chebyshev distance, and \(k=34\) got the best result.
Many works have their datasets already normalized [18, 19]. The point is that many of these datasets may be normalized using the whole stream, e.g., the datasets available in the MOA repository [2], and this is not realistic. Authors in [10] argue that most works regarding normalization do not study the capacity of generalization in non-stationary environments. Care must be taken with normalization, as shown by [20], who argues that normalization sometimes leads to worse performance - a conclusion that we get to in this work as well. This proves that the assumption that normalization improves performance does not hold in all cases.
In this work, we chose the min-max normalization technique, applied different policies for normalizing data, and tested how k-NN behaves with different distance functions in different scenarios inside data streams. To the best of our knowledge, there is no work studying the impact of different normalization policies and distance functions under data streams.
## IV Experiments
### _Experimental Protocol_
In this study, we run some experiments to evaluate the impact of different distance functions in different scenarios. We do it by comparing the accuracies of a 3NN (k-NN with \(k=3\)) with the distance functions described in Section II-B. We chose the 3NN classifier since it is a weak learner that directly depends on the distance functions to classify the instances. The rationale is to perceive accuracy changes better when using different distance functions.
During the tests, we split the data into batches containing 1,000 samples. When a new batch is given at time \(t+1\), the true labels of the instances of the previous batch, received at \(t\), are given. The task of the classifier is to predict the instances available in the most recent (current) batch received. Figure 2 shows a scheme of a stream of batches.
We test the impact of normalization for the different distance metrics under four scenarios:
1. The _original_ stream, without any normalization.
2. The statistics of the _first batch_ are used to normalize the remaining batches using the min-max approach.
3. The statistics of the _previous_ batch received are used to normalize the _current_ one.
4. The _full_ stream is normalized using the min-max approach.
Notice that the normalization of the entire stream (item 4) is often unfeasible in the real world, as there is often no way to know a stream's minimum and maximum values. We use this approach to compare with items 1-3 and to demonstrate how the results may be biased under such an unrealistic scenario. When we used the approach for normalizing data in the previous batch, the scaler was retrained before updating the model.
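A sketch of this test-then-train loop under the _previous_-batch policy could look as follows; the `batches` iterable is an assumed data-loading interface, and the Canberra metric is used as an example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

def evaluate_stream(batches, metric="canberra"):
    """batches: iterable of (X, y) pairs of 1,000 samples each."""
    accuracies = []
    prev_X = prev_y = None
    for X, y in batches:
        if prev_X is not None:
            scaler = MinMaxScaler().fit(prev_X)        # stats of previous batch
            clf = KNeighborsClassifier(n_neighbors=3, metric=metric)
            clf.fit(scaler.transform(prev_X), prev_y)  # retrain on previous batch
            accuracies.append(clf.score(scaler.transform(X), y))
        prev_X, prev_y = X, y  # labels of a batch arrive with the next one
    return float(np.mean(accuracies))
```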
Table II contains a summary of the datasets employed in this work. The datasets are available in well-known repositories, such as the UCI [21], the MOA [2], and the OpenML [22] repositories. When a missing feature is present, it was replaced by the mean value of the whole dataset - again, this is not possible in the real world; it was done here for testing purposes. We use the SEA Concepts [23] as an artificial dataset in the tests reported in Section IV-B. This dataset contains three randomly generated real features \(f1\), \(f2\), and \(f3\in[0,10]\). We consider only the first concept, where a generated sample belongs to the positive class if \(f1+f2\leq 8\), or to the negative class otherwise (the feature \(f3\) is noise).
The reported results are an average of 30 trials.
Fig. 2: Scheme of a Stream of Batches.
### _Tests using Synthetic Data_
In this Section, we evaluate the accuracy of the classifier in an environment where the range of the features changes over time. It may occur in the real world due to, for example, variations in temperature with the change of seasons or to a faulty sensor [24]. In each run, 40,000 instances of the SEA dataset are generated.
We first made a test varying the \(f1\) feature. From the 1st to the 10,000th instance, no modification is made. From the 10,001st to the 20,000th instance, the value of \(f1\) is multiplied by 10; from the 20,001st to the 30,000th, by 100; and from the 30,001st to the 40,000th, by 1,000. We follow the same protocol in a test where the \(f3\) feature is modified over time - notice that, differently from \(f1\), the feature \(f3\) is non-informative.
The results regarding a classifier that is trained with the first batch of the stream and never updated are available in Table III. When we varied the range of feature \(f1\), we can observe that no distance metric performs well without normalization. The results improve significantly when the normalization is made, considering the _previous_ batch.
When we vary the non-informative \(f3\) feature, the normalization that considers the previous batch leads to better results. Interestingly, the normalization of the full stream led to worse results when using the Chebyshev, Canberra, and Standardized Euclidean distances. The normalization of the full stream led to better results in some scenarios, such as when using the Euclidean distance. Thus, besides being unrealistic, the normalization of the entire stream may lead to biased results for better or worse, depending on the distance function.
When we consider a classifier retrained with the previous batch, shown in Table IV, we reach similar conclusions, with the normalization that considers the previous batch being the best one. It is also possible to notice that the Canberra distance reached the best results when the original dataset (without any normalization) was used, with the Manhattan distance being the second best in such a scenario. This result corroborates [13], where the Manhattan distance worked well when no normalization was made. In all tests, the Cosine distance showed the worst results.
In Figure 3, we show the accuracy reached in each batch of data for the tested distance functions when the \(f1\) feature is changed. In all scenarios, the cosine distance led to the worst results, even before the change in the feature range. We can observe that (apart from the cosine distance) all distance functions led to similar results when the classifier is not retrained (Figures 3(a) and 3(b)).
When the model is retrained, but no normalization is made (Figure 3(c)), the Canberra distance leads to the best results, followed by the Manhattan distance. When both the classifier and the normalization are retrained using the previous batch (Figure 3(d)), once again, all distance functions except the cosine seem to behave similarly.
When the range of a non-significant feature (\(f3\)) varies, we can see different behavior between the distance functions in Figure 4. Interestingly, the Canberra and Manhattan distance functions did not show any accuracy drop even in the moments when the range is changed - for instance, in the 10th batch (except for the scenario retrained without normalization, where the Manhattan distance presents accuracy drops).
### _Tests using Real-World Datasets_
In this section, we check the behavior of the distance functions in five real-world datasets. Results are displayed in Table V. In these experiments, the classifiers are always retrained using the previous batch. First, considering the average results, we can observe that the (unrealistic) normalization of the full datasets led to better results in the Airlines and Gas Sensor datasets, and it worsened the results in the remaining datasets. Using the original dataset without normalization often led to better results than the normalization techniques. In Table VI, we show a count of the number of wins each normalization technique achieved.
Note in Table VI that, unlike in the tests of Section IV-B, the normalization using the Previous Batch did not lead to the best results in any dataset. We hypothesize that this may have happened due to two possible factors: 1) The datasets tested do not present significant changes in the ranges of the features. 2) The datasets tested may present frequent changes (drifts) over time; thus, when the data is normalized with the previous batch, the ranges have already changed.

Figure 3: Accuracies in SEA Varying Range of \(f1\).

Figure 4: Accuracies in SEA Varying Range of \(f3\).
To better understand the results above, let us analyze the box plots of the features with the highest standard deviation of some datasets, divided into ten batches. In Figure 5(a), we show the box plot for one feature of the Forest Covertype dataset. We can see a higher variation after the 4th batch, where outliers start to arise. Notice that the min-max scaler is not optimal under the presence of outliers [13], which may explain its poor performance when compared with the original data. A similar analysis can be done for the Gas Sensor and Electricity datasets in Figures 5(b) and 5(c), respectively. Even though we are analyzing only one feature for each dataset in the plots of Figure 5, these insights suggest that analyzing these statistics for the remaining features, especially those highly correlated with the target class, can be a prospect of future work.
In Table VII, we show how many times each distance metric led to the best result (we did not consider the results when taking the full normalization of the dataset). As one can observe, the Canberra distance, followed by the Manhattan distance, were the distances that led to the best results more often. In Table VIII we show the average results achieved by each distance metric in each dataset. The average considers the original dataset and the normalization that takes the first and previous batch (full is not considered here).
### _Discussion and Limitations_
The results presented in Sections IV-B and IV-C lead us to some interesting findings:
1) Besides being unrealistic, the normalization of the full dataset may lead to biased results. Surprisingly, the bias can affect the results negatively in some scenarios. This finding reinforces that experiments for data streams should avoid the full stream's normalization to create scenarios closer to the real world.
2) The normalization made using the information from the previous batch may be beneficial in scenarios where the feature ranges change severely over time (see Section IV-B). Nevertheless, when this is not the case, the original data may lead to better results. Thus, as a conservative approach, using the original data (without normalization) can be a sensible default, since it often yields good results without the overhead of data normalization.
3) The Canberra distance was the most resilient one, leading to good results in various scenarios. The Canberra distance is also simple to compute (see the sketch after this list); thus, we indicate it as a default metric to be used under stream scenarios.
4) The cosine and Standardized Euclidean distances did not show good results for most tested scenarios. Thus, except for specific cases, these metrics should be avoided under stream scenarios.
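As referenced in finding 3, a minimal NumPy sketch of the Canberra distance; the per-coordinate division by \(|u_i|+|v_i|\) normalizes each term by the feature's own magnitude, which may explain the resilience to range changes observed above:

```python
import numpy as np

def canberra(u, v):
    """Canberra distance: sum over i of |u_i - v_i| / (|u_i| + |v_i|); 0/0 terms count as 0."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    num = np.abs(u - v)
    den = np.abs(u) + np.abs(v)
    return float(np.sum(np.divide(num, den, out=np.zeros_like(num), where=den != 0)))
```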
With these findings, we can answer our research questions:
**RQ1 - Does the normalization policy influence the classifier's competence in data streams?** Answer: yes, normalization policy can lead to different results under data streams. We suggest the usage of the original data since it got the best result in three out of five tests using real-world data. It is important to update the classifier with the most recent data, as shown in the results of Section IV-B.
**RQ2 - Does the distance function matter when classifying data streams?** Answer: yes, and we suggest the usage of the Canberra distance. As shown in Table VII, the Canberra distance got the most victories among the distance functions tested in this study - 7 out of 15 tests.
It is important to mention some limitations of this work. First, we did not consider problems regarding a stream where the ranges of features change with a high frequency - for instance, scenarios where the range may change for every batch. Although we have not carried out tests for such scenarios, the results of this work make us hypothesize that normalizing the stream using the previous batch may deteriorate the results. Second, this work does not cover streams of instances (instead of batches), where the overhead of updating the normalization over time may be prohibitive - nevertheless, using the original data without normalization, as suggested in this work, may be a good start.
## V Conclusion
In this study, we analyzed the impact of eight distance functions and three normalization approaches in data streams - using the original data, the data normalized using the first batch, and the data normalized using the previous batch. Tests using one synthetic and five well-known real datasets showed us some interesting results.
First, we demonstrate that the normalization of the full dataset, besides being unrealistic, may lead to biased results. Surprisingly, the results can even be biased for the worse. We showed that different normalization policies may lead to different results under streams. Nevertheless, it is difficult to conclude the best normalization policy for a general case. We suggest that the usage of the original data without normalization can be a good conservative approach.
We also show that the Canberra distance function showed the best results in most of our tests, and thus we indicate this as a distance metric to be used when it is not possible to check in advance the properties of the stream. We also show that the Manhattan distance can lead to good results, and distances such as the cosine and Standardized Euclidean may lead to poor results in some streams.
In future works, we intend to test other scaling techniques, such as the z-score, as well as testing other classifiers and classification techniques that directly depend on the distance metrics. We also intend to test scenarios containing a stream of instances instead of batches and check the accuracy and overhead caused by the different distance metrics and normalization policies.
## Acknowledgement
The authors would like to thank Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq, Brazil, grant 405511/2022-1).
|
2301.13592 | Priors are Powerful: Improving a Transformer for Multi-camera 3D
Detection with 2D Priors | Transformer-based approaches advance the recent development of multi-camera 3D
detection both in academia and industry. In a vanilla transformer architecture,
queries are randomly initialised and optimised for the whole dataset, without
considering the differences among input frames. In this work, we propose to
leverage the predictions from an image backbone, which is often highly
optimised for 2D tasks, as priors to the transformer part of a 3D detection
network. The method works by (1). augmenting image feature maps with 2D priors,
(2). sampling query locations via ray-casting along 2D box centroids, as well
as (3). initialising query features with object-level image features.
Experimental results show that 2D priors not only help the model converge
faster, but also largely improve the baseline approach by up to 12% in terms of
average precision. | Di Feng, Francesco Ferroni | 2023-01-31T12:45:19Z | http://arxiv.org/abs/2301.13592v1 | # Priors are Powerful: Improving a Transformer for Multi-camera 3D Detection with 2D Priors
###### Abstract
Transformer-based approaches advance the recent development of multi-camera 3D detection both in academia and industry. In a vanilla transformer architecture, queries are randomly initialised and optimised for the whole dataset, without considering the differences among input frames. In this work, we propose to leverage the predictions from an image backbone, which is often highly optimised for 2D tasks, as priors to the transformer part of a 3D detection network. The method works by (1). augmenting image feature maps with 2D priors, (2). sampling query locations via ray-casting along 2D box centroids, as well as (3). initialising query features with object-level image features. Experimental results show that 2D priors not only help the model converge faster, but also largely improve the baseline approach by up to \(12\%\) in terms of average precision.
## I Introduction
Towards 360-degree 3D perception, self-driving vehicles are usually equipped with multiple monocular cameras, and reliable and accurate multi-camera 3D detection has become an important research challenge and industrial effort. The traditional approach takes advantage of convolutional neural nets (convnets) that are highly optimised for 2D tasks, by performing 2D scene understanding on images, followed by a 2D to 3D projection. Recent advancements, in contrast, propose to project 2D images into 3D space, before running 3D tasks on a bird's eye view (BEV) representation [1]. This new paradigm not only provides a generic scene representation for multi-modal perception, mapping, and prediction, but also achieves improved accuracy with the help of the _transformer_ architecture [2].
A typical pipeline for multi-camera 3D detection with transformers is shown in Fig. 1. First, multi-level feature maps from multiple camera images are extracted from a backbone network, commonly a convnet. Afterwards, a transformer decoder iteratively processes queries and interacts with image feature maps via cross-attention [3]. Finally, each updated query is fed into a detection head to categorize objects and regress their cuboid parameters (such as centroid locations, cuboid extents, and orientations). In a vanilla transformer architecture, such as in the seminal work detr3d [4], query features and their location information are randomly initialised and optimised for the whole dataset, without considering the heterogeneity of inputs from different frames. We find that such a query design (shown in Fig. 2(a) and discussed in Sec. II) suffers from slow training convergence and a strong smearing effect caused by erroneous depth estimation.
Several methods extend detr3d [4] with improved query design. For example, PETR [5] generates 3D position embedding to image features as the input of a transformer decoder. SpatialDETR [6] encodes camera intrinsics and extrinsics features to keys and queries. Graph-detr3d [7] replaces self-attention with a graph neural network for better query interaction. Finally, BEVFormer [8] discretises the 3D world with bird's eye view grids, and considers each grid as a query location.
Since convnets are often highly optimised for 2D tasks, why not reuse those 2D predictions as priors to the transformer part of 3D detection? In this work, we verify this idea in the detr3d pipeline, and incorporate 2D object detection, semantic segmentation, and depth estimation from an image backbone into the transformer decoder. We propose three simple strategies to use 2D priors: augmenting image feature maps for cross attention, sampling query locations via ray-casting along 2D box centroids, as well as initialising query features with object-level image features. Experimental results on an internal dataset show that our methods largely improve the vanilla detr3d by up to 12% in terms of average precision, and make the model converge faster during training.
In parallel to our work, MV2D [9] also proposes to leverage 2D detections as priors for the transformer part of a multi-camera 3D detector. Unlike our approach, which generates multiple reference points from a 2D box centroid and employs multiple 2D cues (2D boxes, semantic maps, and depth maps), MV2D only studies how to exploit 2D detections, and how to predict one reference point for each 2D box via a dynamic object query generator. Their experimental results demonstrate higher recall rates compared to the vanilla transformer model, especially for small and distant objects.

Fig. 1: The pipeline for multi-camera 3D detector with transformers. First, a backbone network, commonly a convolutional neural network (convnet), extracts multi-level feature maps from multi-camera inputs. Afterwards, a transformer decoder iteratively processes queries and interacts with feature maps. Finally, each query is fed into a detection head to predict object classes and cuboid parameters. In this work, we propose to leverage 2D predictions from the convnet backbone as priors to the transformer decoder for 3D detection. Those priors are incorporated into feature maps, as well as each query and reference point.
In the sequel, Sec. II reviews the transformer decoder part of the detr3d model, with a focus on the query generation process. Sec. III introduces our proposed three methods to improve the detr3d network. Sec. IV shows the experimental results, followed by a summary and discussion in Sec. V.
## II Vanilla Detr3d Revisited
The architecture of detr3d [4] has been summarised in the previous section and depicted in Fig. 1. In this section, let us take a closer look at the transformer decoder.
A decoder is built from six standard transformer blocks [3]. In each block, queries interact with each other via self-attention, and are fused with multi-camera multi-level feature maps via cross-attention. Unlike the common "global" cross-attention mechanism [3], detr3d only associates a query with image features that correspond to its query location (also called reference point). To do this, a position \(p=[x,y,z]\) in the 3D coordinate system is computed for each query. The position is projected onto image planes given camera intrinsics and extrinsics parameters. The image features from the projected pixels are weighted and averaged over all feature levels and cameras for updating a query in a "local" cross-attention manner.
Denote \(d\) as the embedding dimension for a query. Fig. 2(a) illustrates how a query \(q\in\mathbb{R}^{d}\) and its reference point \(p\in\mathbb{R}^{3}\) are built. First, a position embedding vector \(q_{\text{pos}}\in\mathbb{R}^{d}\) and a feature embedding vector \(q_{\text{feat}}\in\mathbb{R}^{d}\) are randomly initialized (following a uniform or a normal distribution). Afterwards, \(q_{\text{pos}}\) is mapped to the reference point \(p\) via a multi-layer perceptron (MLP), and added to \(q_{\text{feat}}\) to generate the final positional-aware query features \(q\). Through training the network with the standard Hungarian assignment and the set prediction loss [3], both \(q_{\text{pos}}\) and \(q_{\text{feat}}\) learn to encode the object statistics for the whole dataset, which can be considered as pre-defined "anchors" in a common object detection pipeline, such as Faster-RCNN [10].
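A minimal PyTorch sketch of this vanilla query construction (the class and layer names are illustrative, and the sigmoid normalization of reference points is an assumption about the implementation):

```python
import torch.nn as nn

class VanillaQueryGenerator(nn.Module):
    def __init__(self, num_queries=300, embed_dim=256):
        super().__init__()
        self.q_pos = nn.Embedding(num_queries, embed_dim)   # randomly initialised positional embeddings
        self.q_feat = nn.Embedding(num_queries, embed_dim)  # randomly initialised feature embeddings
        self.ref_mlp = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 3))

    def forward(self):
        p = self.ref_mlp(self.q_pos.weight).sigmoid()  # reference points p = [x, y, z], learned per dataset
        q = self.q_feat.weight + self.q_pos.weight     # positional-aware query features
        return q, p
```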
Though simple and straightforward, the network design in [4] lacks prior knowledge in the query and reference point generation process, resulting in slow training convergence and ambiguity in prediction. Fig. 3(a) illustrates a typical detr3d output on the bird's eye view (BEV) without post-processing. All queries are marked by circles, and those with high classification scores are further demonstrated with red bounding boxes. We observe a strong smearing effect in object detection, i.e. queries converge along the ray from detections on 2D images, bringing many false positives. This is due to erroneous depth estimation and ambiguous target assignment during training1.

Fig. 2: A comparison of query generation strategies between the vanilla detr3d [4] and our proposed methods with 2D priors. (a). The vanilla detr3d randomly initialises the positional embedding and the feature embedding vectors. Reference points are predicted by a small multi-layer perceptron (MLP). (b). Our proposed approach leverages 2D detection and semantic maps as 2D priors. We sample reference points via ray-casting along 2D box centroids, which generates the positional embedding vector through a MLP. Besides, we initialize the feature embedding vector with the object-level features, weighted by semantic scores.

Fig. 3: (a). Raw predictions from the vanilla detr3d model on the Bird's Eye View (BEV). Each circle represents a query prediction. Strong smearing effect can be observed along the ray. (b). Our proposed query sampling strategy. Blue dots are generated reference points, and orange dots are ground truth centroids. All ground truth centroids can be associated with nearby reference points, though with some errors.
Footnote 1: Imagine multiple queries generate reference points, which are close to each other and are projected to the same object on images. Due to the Hungarian assignment [3], only one query is labelled “positive”, punishing other positive queries with “negative” signals.
## III Three Ways of Adding Priors
We propose three methods to improve the detr3d network, by incorporating 2D priors to the transformer decoder, illustrated in Fig. 2(b). To do this, we select 2D object detection, semantic segmentation, and depth estimation predicted by our convnet backbone as priors, as they are common, well-optimized 2D tasks for autonomous driving (e.g. HydraNet from Tesla [1]). The depth estimation is represented as a single-channel depth map rescaled to \([0,1]\). The semantic segmentation is represented by a semantic map with \(C\) channels, where each channel shows pixel-wise classification scores of a category.
### _Feature Map Priors_
We simply concatenate semantic and depth maps with the multi-camera feature maps at different scales. In this way, semantic and depth priors are added to queries in a cross-attention operation.
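A sketch of this concatenation, assuming per-camera feature levels stored as a list of tensors (the names and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def augment_feature_maps(feature_levels, semantic_map, depth_map):
    """Concatenate semantic (C_sem channels) and depth (1 channel, in [0, 1]) priors to each level.

    feature_levels: list of (B * n_cams, C, H_l, W_l) tensors; priors are given at full resolution.
    """
    augmented = []
    for feats in feature_levels:
        sem = F.interpolate(semantic_map, size=feats.shape[-2:], mode='bilinear', align_corners=False)
        dep = F.interpolate(depth_map, size=feats.shape[-2:], mode='bilinear', align_corners=False)
        augmented.append(torch.cat([feats, sem, dep], dim=1))
    return augmented
```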
### _Location Priors_
We generate reference points \(p\in\mathbb{R}^{3}\) only along the rays from the centroids of 2D box predictions. For each ray, a simple uniform sampling with a 5-meter interval is performed. In this way, the search space for objects can be narrowed efficiently, which helps reduce false positives, limit the number of queries, and accelerate model convergence. When performing cuboid prediction, the detection head regresses the offset \(\Delta x,\Delta y,\Delta z\) from its reference point to obtain the cuboid centroid. It also regresses the cuboid's height, length, width, and yaw angle.
The reference points may not accurately overlap with the cuboid centroids, because the centers of 2D boxes differ from those of the projected cuboids, and the 5-meter sampling interval is coarse compared to the common discretisation thresholds in many well-known detection networks (e.g. 0.5-meter intervals in Lift-splat-shoot [11], 0.16 meters in CaDNN [12], and 0.2 meters in Pointpillar [13]). However, we find such a simple point generation strategy provides rough-and-ready estimates of the actual object locations, as illustrated in Fig. 3(b). Besides, the location errors can be compensated by the iterative query refinement in transformer blocks. We expect more accurate reference point generation when introducing a centerness head for projected cuboid centers (similar to CenterNet [14]), or sampling points only around predicted depth (similar to CramNet [15]).
Inspired by Anchor-DETR [16], we further incorporate location priors to queries, by projecting a reference point \(p\in\mathbb{R}^{3}\) to a position embedding vector \(q_{\text{pos}}\in\mathbb{R}^{d}\) via a small MLP. Interestingly, this is a reversed procedure compared to the vanilla detr3d, which maps a position embedding vector to its reference point.
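A self-contained sketch of the ray-casting sampler for a single camera (the 50 m range and the variable names are assumptions for illustration); each sampled point would subsequently be mapped to \(q_{\text{pos}}\) by the small MLP described above:

```python
import torch

def sample_reference_points(centroids_2d, intrinsics_inv, cam_to_world, max_depth=50.0, step=5.0):
    """Uniformly sample 3D points along the camera rays through 2D box centroids.

    centroids_2d: (M, 2) pixel coordinates; intrinsics_inv: (3, 3); cam_to_world: (4, 4).
    Returns (M * D, 3) candidate reference points in world coordinates.
    """
    depths = torch.arange(step, max_depth + 1e-6, step)                          # (D,) depths: 5 m, 10 m, ...
    pixels = torch.cat([centroids_2d, torch.ones(len(centroids_2d), 1)], dim=1)  # homogeneous pixels
    rays = (intrinsics_inv @ pixels.T).T                                         # (M, 3) camera-frame rays
    points_cam = rays[:, None, :] * depths[None, :, None]                        # (M, D, 3)
    points_h = torch.cat([points_cam, torch.ones(*points_cam.shape[:2], 1)], dim=-1)
    return (cam_to_world @ points_h.reshape(-1, 4).T).T[:, :3]                   # world-frame reference points
```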
### _Query Priors_
All queries generated from a ray come from the same 2D object. Therefore, we propose to incorporate the same object-level 2D priors into those queries, and further distinguish among them with positional information. We follow five steps: First, the semantic map, the depth map, and the multi-level multi-camera feature maps are cropped based on the 2D box estimates. Then, the channel of the cropped semantic map which corresponds to the predicted object class is used to weight the cropped depth map and feature maps by a pixel-wise dot product. Afterwards, a channel-wise global average-pooling operation is used to generate a 1D vector for each query prior, inspired by the squeeze-and-excitation operation of SENet [17]. Furthermore, the query prior vector, appended with an object class index, an objectness score, and the 2D bounding box parameters, is fed into a small MLP to generate query embedding features. Finally, the positional embedding features are added to the query embedding features as the final query features, so that the queries from the same ray are distinguishable.
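The five steps above, condensed into a sketch for one 2D detection (the `prior_mlp` module and the crop shapes are illustrative assumptions):

```python
import torch

def build_query_prior(feat_crop, sem_crop, depth_crop, cls_idx, score, box_2d, prior_mlp):
    """Object-level prior shared by all queries spawned from one 2D box.

    feat_crop: (C, h, w) cropped features; sem_crop: (n_classes, h, w); depth_crop: (1, h, w).
    """
    class_weight = sem_crop[cls_idx:cls_idx + 1]                         # semantic channel of predicted class
    weighted = torch.cat([feat_crop, depth_crop], dim=0) * class_weight  # pixel-wise weighting
    pooled = weighted.mean(dim=(1, 2))                                   # channel-wise global average pooling
    prior = torch.cat([pooled, torch.tensor([float(cls_idx), score]), box_2d])
    return prior_mlp(prior)  # query embedding features, later added to the positional embedding
```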
## IV Experimental Results
Based on a pre-trained convnet backbone, we re-implement the detr3d transformer decoder, and evaluate its detection performance with different 2D priors. Following the original detr3d [4], we set the initial learning rate to be \(2*10^{-4}\) with a weight decay of \(10^{-5}\). The AdamW optimiser with a cosine decay is used. Unlike [4], we do not use any data augmentation tricks, and find that training with more epochs improves the model performance. All models, unless mentioned otherwise, are trained with a tiny subset of our internal dataset, with approx. 60k training, 10k validation, and 4k testing samples. The data was recorded in various locations in the US and Europe, with different lighting conditions (daytime, nighttime, rainy, sunny etc.) and scenarios (cities, rural areas, etc.). We report the Average Precision (AP) scores at the IoU=0.1 threshold on the bird's eye view (BEV) for the VEHICLE and HUMAN classes, and only consider detections within the 50 meters range.
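For reference, the reported optimisation setup corresponds to the following PyTorch configuration (the model placeholder and the schedule length are assumptions, not values stated in the text):

```python
import torch

model = torch.nn.Linear(8, 8)  # placeholder for the detection network
num_epochs = 24                # assumed schedule length

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, weight_decay=1e-5)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)
```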
### _Main Results_
Tab. I compares the AP scores of the vanilla detr3d model and its variants with different 2D priors. The model "+ feat prior" only adds feature map priors, "+ feat, loc priors" additionally uses location priors, and "+ feat, loc, query priors" exploits all three priors. Fig. 4(a) and Fig. 4(b) show the precision recall curves for the VEHICLE and HUMAN classes, respectively. We observe that all 2D priors improve the vanilla detr3d model, with higher AP scores by up to nearly 12%. The largest performance gain comes from location
priors, verifying the effectiveness of our design choice for reference point generation.
In addition, Tab. II reports AP scores at the 4 meters threshold, which are commonly used in the Nuscenes metrics [18]. This threshold is less strict than the IoU=0.1 threshold when evaluating location errors, thus resulting in higher AP scores when evaluating on the same model. In this setting, we observe that the model with 2D priors largely improves the HUMAN detection by more than 30%.
### _Using Lidar Points as Location Priors_
We conduct a simple ablation study by replacing the location priors with Lidar point clouds. To do that, we train a model called "+ feat, lidar priors", which uses uniformly sub-sampled lidar observations as reference points. Fig. 4(c) shows that our camera-only model "+ feat, loc priors" achieves performance similar to its camera-lidar fusion counterpart when detecting the VEHICLE class, but performs much worse for the HUMAN class. The result indicates that localization errors are still the bottleneck for the camera-only detection pipeline, especially for small objects. Similar findings are also reported in [2].
### _Training with Larger Data_
We train our model with \(\times\)20 more data, and compare it with a single-camera baseline model, which runs detection on each monocular camera separately, and aggregates results from all cameras as the final multi-camera detection outputs (with non-maximum suppression). The baseline model follows a network architecture similar to FCOS3D [19], which regresses cuboid parameters directly from 2D images. The baseline and our proposed models use the same pre-trained image backbone. Tab. III shows the inference results on the same test subset as in Sec. IV-A. Our model (detr3d + feat, loc, query priors) outperforms the baseline model by 5.70% and 3.74% AP for the VEHICLE and HUMAN classes, respectively. Besides, the larger training data brings approx. 1.5% performance gain, comparing with the results from the small training data shown in Tab. I. This marginal AP improvement suggests that the 2D priors from the image backbone might compensate for the benefits of a larger dataset, saving training cost.
### _Training convergence_
We show the learning curves in Fig. 5, obtained by overfitting a small dataset with approx. 300 data frames. Compared to the vanilla detr3d model, the model with 2D priors reaches the same epoch loss with far fewer epochs, implying the benefits of 2D priors for faster training convergence.
TABLE II: A comparison of Average Precision (AP) scores at the 4-meter centroid distance threshold.

| Models | AP VEHICLE (%) | AP HUMAN (%) |
| --- | --- | --- |
| vanilla detr3d | 75.84 | 24.79 |
| + feat, loc, query priors | 86.52 (+10.68) | 58.06 (+33.27) |
TABLE I: A comparison of Average Precision (AP) scores at the IoU=0.1 threshold.

| Models | AP VEHICLE (%) | AP HUMAN (%) |
| --- | --- | --- |
| vanilla detr3d | 70.57 | 6.36 |
| + feat prior | 72.40 (+1.83) | 9.16 (+2.80) |
| + feat, loc priors | 79.93 (+9.36) | 13.52 (+7.16) |
| + feat, loc, query priors | 82.01 (+11.44) | 14.68 (+8.32) |
Fig. 4: Precision recall curves. (a). A comparison among the vanilla detr3d model and its variants with 2D priors on the VEHICLE class. (b). A comparison on the HUMAN class. (c). Ablation study by replacing reference points with lidar observations.
TABLE III: Comparing the proposed model (Ours) with a single-camera baseline at the IoU=0.1 threshold.

| Models | AP VEHICLE (%) | AP HUMAN (%) |
| --- | --- | --- |
| Single-camera baseline | 77.78 | 12.86 |
| Ours | 83.48 | 16.60 |
## V Summary
Transformer-based methods advance the recent development of multi-camera 3D detection. The vanilla transformer architecture randomly initializes queries, without considering the heterogeneity of inputs from different frames. We argue that this approach is sub-optimal in query generation. In this regard, we propose to leverage multiple predictions from an image backbone network as 2D priors to improve the transformer part of the network, including 2D detections, semantic maps, and depth maps. The method works by augmenting image feature maps with 2D priors, sampling query locations via ray-casting along 2D box centroids, as well as initialising query features with object-level image features. Experimental results show that 2D priors can be used to largely improve the detection accuracy in terms of average precision, and to accelerate the model convergence. In the future, we intend to add more 2D priors, such as scene flow and instance masks, and extend the framework into a multi-modal fusion setting (e.g. combining cameras, lidars, and radars) [20, 21, 22].
## Acknowledgement
The authors would like to thank the full detection team at Argo AI for the technical discussions and the ML infra support. Special thanks to Jan Martin and Ahsan Iqbal for making this publication possible.
|
2301.00288 | On the stability of shear flows in bounded channels, II: non-monotonic
shear flows | We give a proof of linear inviscid damping and vorticity depletion for
non-monotonic shear flows with one critical point in a bounded periodic
channel. In particular, we obtain quantitative depletion rates for the
vorticity function without any symmetry assumptions. | Alexandru D. Ionescu, Sameer Iyer, Hao Jia | 2022-12-31T20:55:45Z | http://arxiv.org/abs/2301.00288v2 | # Linear inviscid damping and vorticity depletion for non-monotonic shear flows
###### Abstract.
We give a proof of linear inviscid damping and vorticity depletion for non-monotonic shear flows with one critical point in a bounded periodic channel. In particular, we obtain quantitative depletion rates for the vorticity function without any symmetry assumptions.
The first author was supported in part by NSF grant DMS-2007008. The second author is partially supported by a UC Davis startup grant. The third author was supported in part by NSF grant DMS-1945179.
flows [2, 23, 10, 12], and on point vortices [11]. We refer also to the recent review article [13] for a more in-depth discussion of recent developments of both linear and nonlinear inviscid damping.
Many physically important shear flows are not monotonic, such as Poiseuille flow and Kolmogorov flows. For such flows on the linear inviscid level, there is an additional significant physical phenomenon called "vorticity depletion" which refers to the asymptotic vanishing of vorticity as \(t\to\infty\) near the critical point where the derivative of the shear flow is zero, first predicted in Bouchet and Morita [5], and proved rigorously in Wei-Zhang-Zhao [31]. A similar phenomenon was proved in Bedrossian-Coti Zelati-Vicol [3] for the case of vortices. See also [17] by the first and third author for a refined description of the dynamics in Gevrey spaces as a step towards proving nonlinear vortex symmetrization.
In [31] by Wei-Zhang-Zhao, sharp linear inviscid damping estimates and quantitative depletion estimates were obtained for an important class of "symmetric shear flows" in a periodic channel (see also [32] by Wei-Zhang-Zhao for a similar result for Kolmogorov flow). When no symmetry is assumed, only qualitative bounds are available. Heuristically the general case should be similar to the symmetric one, since the main vorticity depletion mechanism is completely local and asymptotically all shear flows approach symmetric ones at the (non-degenerate) critical points. However there are significant difficulties in using the approach of [31] to extend the quantitative depletion bounds of [31] to the general case, as the argument in [31] relies heavily on decomposition of functions into odd and even parts, which are specific to symmetric shear flows.
In this paper we prove linear inviscid damping estimates and quantitative vorticity depletion estimates for a class of stable non-monotonic shear flows with one non-degenerate critical point. The main new features of our results are that we do not need a symmetry condition on the background shear flow, and that our formulation of quantitative depletion for the vorticity function seems to be new even for general symmetric shear flows (see however Wei-Zhang-Zhao [32], which contains a sharp depletion rate at the critical points for Kolmogorov flow); see Theorem 1.2 below for the precise statements. We begin with the description of our main equations and theorem.
### Main equations
Consider the two dimensional Euler equation linearized around a shear flow \((b(y),0)\), in the periodic channel \((x,y,t)\in\mathbb{T}\times[0,1]\times[0,\infty)\):
\[\begin{split}&\partial_{t}\omega+b(y)\partial_{x}\omega-b^{ \prime\prime}(y)u^{y}=0,\\ &\operatorname{div}u=0\qquad\text{and}\qquad\omega=-\partial_{y}u ^{x}+\partial_{x}u^{y},\end{split} \tag{1.1}\]
with the natural non-penetration boundary condition \(u^{y}|_{y=0,1}=0\).
For the linearized flow, \(\int\limits_{\mathbb{T}\times[0,\,1]}u^{x}(x,y,t)\,dxdy\) and \(\int\limits_{\mathbb{T}\times[0,\,1]}\omega(x,y,t)\,dxdy\) are conserved quantities. In this paper, we will assume that
\[\int_{\mathbb{T}\times[0,1]}u_{0}^{x}(x,y)\,dxdy=\int_{\mathbb{T}\times[0,1]} \omega_{0}\,dxdy=0.\]
These assumptions can be dropped by adjusting \(b(y)\) with a linear shear flow \(C_{0}y+C_{1}\). Then one can see from the divergence free condition on \(u\) that there exists a stream function \(\psi(t,x,y)\) with \(\psi(t,x,0)=\psi(t,x,1)\equiv 0\), such that
\[u^{x}=-\partial_{y}\psi,\ u^{y}=\partial_{x}\psi. \tag{1.2}\]
The stream function \(\psi\) can be solved through
\[\Delta\psi=\omega,\qquad\psi|_{y=0,1}=0. \tag{1.3}\]
We summarize our equations as follows
\[\left\{\begin{array}{l}\partial_{t}\omega+b(y)\partial_{x}\omega-b^{\prime \prime}(y)\partial_{x}\psi=0,\\ \Delta\psi(t,x,y)=\omega(t,x,y),\qquad\psi(t,x,0)=\psi(t,x,1)=0,\\ (u^{x},u^{y})=(-\partial_{y}\psi,\partial_{x}\psi),\end{array}\right. \tag{1.4}\]
for \(t\geq 0,(x,y)\in\mathbb{T}\times[0,1]\).
Our goal is to understand the long time behavior of \(\omega(t)\) as \(t\to\infty\), with Sobolev regular initial vorticity \(\omega_{0}\).
### The main results
We describe more precisely the main assumptions and our main conclusion. The main conditions we shall assume on the shear flow \(b(y)\in C^{4}([0,1])\) are as follows.
**Assumption 1.1**.: _We assume that the background flow \(b(y)\in C^{4}([0,1])\) satisfies the following conditions._
1. \[S:=\{y\in[0,1]:\,b^{\prime}(y)=0\}=\{y_{*}\}\subset(0,1).\] (1.5) _In addition,_ \(b^{\prime\prime}(y_{*})\neq 0\)_._
2. _For_ \(k\in\mathbb{Z}\backslash\{0\}\)_, the linearized operator_ \(L_{k}:L^{2}(0,1)\to L^{2}(0,1)\) _defined as_ \[L_{k}g(y):=b(y)g(y)+b^{\prime\prime}(y)\int_{0}^{1}G_{k}(y,z)g(z)\,dz\] (1.6) _has no discrete eigenvalues nor generalized embedded eigenvalues. In the above_ \(G_{k}\) _is the Green's function for_ \(k^{2}-\frac{d^{2}}{dy^{2}}\) _on the interval_ \((0,1)\) _with zero Dirichlet boundary condition._
We refer to section 2 below for the definition and more discussion about generalized embedded eigenvalues.
Our main result is the following theorem.
**Theorem 1.2**.: _Assume that \(\omega(t,\cdot)\in C([0,\infty),H^{4}(\mathbb{T}\times[0,1]))\) with the associated stream function \(\psi(t,\cdot)\) is the unique solution to (1.4), with initial data \(\omega_{0}\in H^{4}(\mathbb{T}\times[0,1])\) satisfying for all \(y\in[0,1]\),_
\[\int_{\mathbb{T}}\omega_{0}(x,y)\,dx=0. \tag{1.7}\]
_Then we have the following bounds._
_(i) Inviscid damping estimates:_
\[\|\psi(t,\cdot)\|_{L^{2}(\mathbb{T}\times[0,1])}\lesssim\frac{1}{\langle t \rangle^{2}}\|\omega_{0}\|_{H^{4}(\mathbb{T}\times[0,1])}, \tag{1.8}\]
\[\|u^{x}(t,\cdot)\|_{L^{2}(\mathbb{T}\times[0,1])}\lesssim\frac{1}{\langle t \rangle}\|\omega_{0}\|_{H^{4}(\mathbb{T}\times[0,1])},\quad\|u^{y}(t,\cdot)\|_ {L^{2}(\mathbb{T}\times[0,1])}\lesssim\frac{1}{\langle t\rangle^{2}}\|\omega_ {0}\|_{H^{4}(\mathbb{T}\times[0,1])}. \tag{1.9}\]
_(ii) Vorticity depletion estimates: there exists a decomposition_
\[\omega(t,x,y):=\omega_{\rm loc}(t,x,y)+\omega_{\rm{nloc}}(t,x,y), \tag{1.10}\]
_where for \((x,y,t)\in\mathbb{T}\times[0,1]\times[0,\infty)\),_
\[|\omega_{\rm loc}(t,x,y)|\lesssim|y-y_{*}|^{7/4}\|\omega_{0}\|_{H^{4}(\mathbb{T} \times[0,1])},\quad|\omega_{\rm nloc}(t,x,y)|\lesssim\frac{1}{\langle t\rangle^ {7/8}}\|\omega_{0}\|_{H^{4}(\mathbb{T}\times[0,1])}. \tag{1.11}\]
### Remarks and main ideas of proof
We have the following remarks on Theorem 1.2. _Firstly_, in the above theorem we have not tracked the minimal regularity required for the bounds (1.8), (1.9) and (1.11) to hold, and a more careful argument can probably significantly reduce the number of derivatives needed on the initial data \(\omega_{0}\). _Secondly_, we note also that the argument here can be applied to non-monotonic shear flows with multiple non-degenerate critical points, although the presentation will be more complicated. _Thirdly_, a more sophisticated analysis may yield sharper vorticity depletion rates of the form
\[|\omega_{\rm loc}(t,x,y)|\lesssim|y-y_{*}|^{2-},\quad|\omega_{\rm nloc}(t,x,y)| \lesssim\langle t\rangle^{-1+}.\]
It is not clear to us though if one can reach the optimal rates of \(|y-y_{*}|^{2}\) and \(\langle t\rangle^{-1}\).
We briefly explain the main ideas of the proof.
By a standard spectral representation formula, see (2.7), it suffices to study the spectral density functions and the associated Rayleigh equation (2.8). There are two main cases to consider. When the spectral parameter \(\lambda\) is not close to the critical value \(b(y_{*})\), the situation is similar to monotonic shear flows and can be treated as in [14]. The main new case is when the spectral parameter \(\lambda\) is close to the critical value \(b(y_{*})\). In this case, the Rayleigh equation (2.8) is very singular, and the potential term \(\frac{b^{\prime\prime}(y)}{b(y)-\lambda+i\epsilon}\) has a quadratic singularity roughly of the form \(\frac{2}{(y-y_{*})^{2}+(\lambda-b(y_{*}))+i\epsilon}\) for \(y\) close to \(y_{*}\).
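For orientation, we record a heuristic second-order Taylor expansion at \(y_{*}\) (a sketch only, not used verbatim in the proofs below):

\[b(y)-\lambda+i\epsilon=\frac{b^{\prime\prime}(y_{*})}{2}(y-y_{*})^{2}-\big{(}\lambda-b(y_{*})\big{)}+i\epsilon+O(|y-y_{*}|^{3}),\]

so that

\[\frac{b^{\prime\prime}(y)}{b(y)-\lambda+i\epsilon}\approx\frac{2}{(y-y_{*})^{2}-2(\lambda-b(y_{*}))/b^{\prime\prime}(y_{*})+2i\epsilon/b^{\prime\prime}(y_{*})},\]

which exhibits the stated quadratic singularity (up to the sign convention for \(\lambda-b(y_{*})\)) and shows that the real part of the potential becomes positive once \(|y-y_{*}|\gg|\lambda-b(y_{*})|^{1/2}\).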
The key observation here, as in [17], is that the potential term \(\frac{b^{\prime\prime}(y)}{b(y)-\lambda+i\epsilon}\) is _critically singular_ and has real part with a _favorable_ sign for \(1\gg|y-y_{*}|\gg|\lambda-b(y_{*})|^{1/2}\), which needs to be incorporated as part of the main term. We therefore define a modified Green's function for the main term, see (3.12)-(3.13), which has strong vanishing conditions near \(y=y_{*}\), leading ultimately to vorticity depletion. After extracting the main terms in the Rayleigh equation (2.8), the rest of the terms can be treated as compact perturbations, and can be bounded using a limiting absorption principle, see Lemma 4.4, thanks to the spectral assumption 1.1.
The limiting absorption principle provides preliminary bounds on the spectral density functions \(\psi_{k,\epsilon}^{\iota}(y,\lambda)\) with \(\iota\in\{\pm\}\). To obtain the desired quantitative decay rates, we take up to two derivatives in \(\lambda\) of the spectral density functions, and again use the limiting absorption principle to estimate the resulting derivatives, after extracting the main singular terms. The procedure is more or less straightforward but the calculations are quite lengthy. We refer to [14] also for similar calculations in a simpler setting. Lastly, we note that there are important cancellations between \(\psi_{k,\epsilon}^{+}(y,\lambda)\) and \(\psi_{k,\epsilon}^{-}(y,\lambda)\) in the limit \(\epsilon\to 0+\), which is the reason why we need two versions of the limiting absorption principle, see Lemma 4.4, with different weighted spaces.
### Notations
We summarize here some notations that are specific to this paper for the reader's convenience. For positive numbers \(\alpha,\beta\), we set \(\alpha\wedge\beta:=\min\{\alpha,\beta\}\). We denote for \(d>0\), \(\Sigma_{d}:=\{b(y)\,:\,y\in[y_{*}-d,y_{*}+d]\}\), \(S_{d}:=[y_{*}-d,y_{*}+d]\). We also denote \(\Sigma:=\{b(y)\,:\,y\in[0,1]\}\) and \(I:=[0,1]\). For \(k\in\mathbb{Z}\backslash\{0\}\), we define for \(f\in H^{1}(I)\) the norm \(\|f\|_{H^{1}_{k}(I)}:=\|f\|_{L^{2}(I)}+|k|^{-1}\|f^{\prime}\|_{L^{2}(I)}\).
## 2. Spectral property and representation formula
Taking Fourier transform in \(x\) in the equation (1.4) for \(\omega\), we obtain that
\[\partial_{t}\omega_{k}+ikb(y)\omega_{k}-ikb^{\prime\prime}(y)\psi_{k}=0, \tag{2.1}\]
for \(k\in\mathbb{Z},t\geq 0,y\in[0,1]\). In the above, \(\omega_{k}\) and \(\psi_{k}\) are the \(k\)-th Fourier coefficients of \(\omega,\psi\) in \(x\) respectively. For each \(k\in\mathbb{Z}\backslash\{0\}\), recall from (1.6) that for any \(g\in L^{2}(0,1)\),
\[L_{k}g(y)=b(y)g(y)+b^{\prime\prime}(y)\int_{0}^{1}G_{k}(y,z)g(z)dz, \tag{2.2}\]
where \(G_{k}\) is the Green's function for the operator \(k^{2}-\frac{d^{2}}{dy^{2}}\) on \((0,1)\) with zero Dirichlet boundary condition. Then (2.1) can be reformulated abstractly as
\[\partial_{t}\omega_{k}+ikL_{k}\omega_{k}=0. \tag{2.3}\]
In contrast to the spectral property of the linearized operator around monotonic shear flows, the spectral property of \(L_{k}\) is less understood, especially on the generation of discrete eigenvalues and embedded eigenvalues. From general spectral theory, we know that the spectrum of \(L_{k}\) consists of the continuous spectrum
\[\Sigma:=\big{\{}b(y):\,y\in[0,1]\big{\}}, \tag{2.4}\]
together with some discrete eigenvalues with nonzero imaginary part which can only accumulate at the set of continuous spectrum \(\Sigma\). Unlike the case of monotonic shear flows where the discrete eigenvalues can accumulate only at inflection points of the background shear flow, there appears no simple characterization of the possible accumulation points for non-monotonic shear flows.
Recall that \(\lambda\in\Sigma\) is called an embedded eigenvalue if there exists a nontrivial \(g\in L^{2}(0,1)\), such that
\[L_{k}g=\lambda g. \tag{2.5}\]
For non-monotonic shear flows, this definition is too restrictive, as accumulation points of discrete eigenvalues may no longer be embedded eigenvalues. To capture the discrete eigenvalues, we recall the following definition of "generalized embedded eigenvalues", which can be found already in [31], adapted to our setting.
**Definition 2.1**.: _We call \(\lambda\in\Sigma\) a generalized embedded eigenvalue, if one of the following conditions is satisfied._
* \(\lambda\) _is an embedded eigenvalue._
* \(\lambda\neq b(y_{*})\) _and there exists a nontrivial_ \(\psi\in H^{1}_{0}(0,1):(0,1)\to\mathbb{C}\) _such that in the sense of distributions on_ \((0,1)\)_,_ \[(k^{2}-\partial_{y}^{2})\psi(y)+\mathrm{P.V.}\frac{b^{\prime\prime}(y)\psi(y)} {b(y)-\lambda}+i\pi\sum_{z\in[0,1],\,b(z)=\lambda}\frac{b^{\prime\prime}(z) \psi(z)}{|b^{\prime}(z)|}\delta(y-z)=0.\] (2.6)
We remark that our assumption that the critical point \(y_{*}\) of \(b(y)\) being non-degenerate implies that the sum in (2.6) is finite, and that the spectral assumption 1.1 is satisfied if \(b^{\prime\prime}>0\) on \([0,1]\).
**Proposition 2.2**.: _Suppose that \(k\in\mathbb{Z}\backslash\{0\}\) and \(\omega_{0}^{k}\in L^{2}([0,1])\). Then the stream function \(\psi_{k}(t,y)\) for \(k\in\mathbb{Z}\backslash\{0\},y\in[0,1],t\geq 0\) has the representation_
\[\psi_{k}(t,y)=-\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{\Sigma}e^{-ik \lambda t}\left[\psi_{k,\epsilon}^{-}(y,\lambda)-\psi_{k,\epsilon}^{+}(y, \lambda)\right]d\lambda, \tag{2.7}\]
_where \(\psi_{k,\epsilon}^{\iota}(y,\lambda)\) for \(\iota\in\{+,-\},\,y\in[0,1],\,\lambda\in\Sigma,k\in\mathbb{Z}\backslash\{0\}\), and sufficiently small \(\epsilon\in[-1/4,1/4]\backslash\{0\}\), are the solutions to_
\[-k^{2}\psi_{k,\epsilon}^{\iota}(y,\lambda)+\frac{d^{2}}{dy^{2}} \psi_{k,\epsilon}^{\iota}(y,\lambda)-\frac{b^{\prime\prime}(y)}{b(y)-\lambda+ i\iota\epsilon}\psi_{k,\epsilon}^{\iota}(y,\lambda)=\frac{-\omega_{0}^{k}(y)}{b(y)- \lambda+i\iota\epsilon}, \tag{2.8}\]
_with zero Dirichlet boundary condition._
Proof.: By standard theory of spectral projection, from (2.3), we obtain that for \(y\in[0,1]\),
\[\omega_{k}(t,y)=\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{\Sigma}e^{i \lambda t}\left\{\left[(\lambda+kL_{k}-i\epsilon)^{-1}-(\lambda+kL_{k}+i \epsilon)^{-1}\right]\omega_{0}^{k}\right\}(y)\,d\lambda. \tag{2.9}\]
We then obtain for \(y\in[0,1]\),
\[\psi_{k}(t,y)=-\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{ \Sigma}e^{-ik\lambda t}\int_{0}^{1}G_{k}(y,z) \tag{2.10}\] \[\times\left\{\left[(-\lambda+L_{k}-i\epsilon)^{-1}-(-\lambda+L_{k }+i\epsilon)^{-1}\right]\omega_{0}^{k}\right\}(z)\,dzd\lambda\] \[=-\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{\Sigma}e^{-ik \lambda t}\left[\psi_{k,\epsilon}^{-}(y,\lambda)-\psi_{k,\epsilon}^{+}(y, \lambda)\right]d\lambda.\]
In the above, for \(y\in[0,1]\) and \(\lambda\in\Sigma\),
\[\psi_{k,\epsilon}^{+}(y,\lambda):=\int_{0}^{1}G_{k}(y,z)\Big{[} (-\lambda+L_{k}+i\epsilon)^{-1}\omega_{0}^{k}\Big{]}(z)\,dz, \tag{2.11}\] \[\psi_{k,\epsilon}^{-}(y,\lambda):=\int_{0}^{1}G_{k}(y,z)\Big{[} (-\lambda+L_{k}-i\epsilon)^{-1}\omega_{0}^{k}\Big{]}(z)\,dz.\]
Therefore for \(\iota\in\{+,-\},y\in[0,1],\lambda\in\Sigma\),
\[\left(k^{2}-\frac{d^{2}}{dy^{2}}\right)\psi_{k,\epsilon}^{\iota}(y,\lambda)=(-\lambda+L_{k}+i\iota\epsilon)^{-1}\omega_{0}^{k}(y), \tag{2.12}\]
which implies
\[\omega_{0}^{k}(y)= (-\lambda+L_{k}+i\iota\epsilon)\left(k^{2}-\frac{d^{2}}{dy^{2}} \right)\psi_{k,\epsilon}^{\iota}(y,\lambda) \tag{2.13}\] \[= (b(y)-\lambda+i\iota\epsilon)\left(k^{2}-\frac{d^{2}}{dy^{2}} \right)\psi_{k,\epsilon}^{\iota}(y,\lambda)+b^{\prime\prime}(y)\psi_{k, \epsilon}^{\iota}(y,\lambda).\]
It follows from (2.13) that \(\psi_{k,\epsilon}^{+}(y,\lambda),\psi_{k,\epsilon}^{-}(y,\lambda)\) satisfy (2.8). The proposition is now proved.
**Remark 2.3**.: _The existence of \(\psi_{k,\epsilon}^{\iota}\) for sufficiently small \(\epsilon\neq 0\) follows from our spectral assumptions, which imply the solvability of (2.8) for sufficiently small \(\epsilon\neq 0\), see also (4.9)._
## 3. Bounds on the Green's function and modified Green's function
### Elementary properties of the standard Green's function
For integers \(k\in\mathbb{Z}\setminus\{0\}\), recall that the Green's function \(G_{k}(y,z)\) solves
\[-\frac{d^{2}}{dy^{2}}G_{k}(y,z)+k^{2}G_{k}(y,z)=\delta(y-z), \tag{3.1}\]
with Dirichlet boundary conditions \(G_{k}(0,z)=G_{k}(1,z)=0\), \(z\in(0,1)\). \(G_{k}\) has the explicit formula
\[G_{k}(y,z)=\frac{1}{k\sinh k}\begin{cases}\sinh(k(1-z))\sinh(ky)&\text{if $y \leq z$},\\ \sinh(kz)\sinh(k(1-y))&\text{if $y\geq z$},\end{cases} \tag{3.2}\]
and the symmetry
\[G_{k}(y,z)=G_{k}(z,y),\qquad\text{for $k\in\mathbb{Z}\setminus\{0\}$},y,z \in[0,1]. \tag{3.3}\]
We note the following bounds for \(G_{k}\)
\[\sup_{y\in[0,1],|A|\leq 10}\left[|k|^{2}\big{\|}G_{k}(y,z)(\log|z-A| )^{m}\big{\|}_{L^{1}(z\in[0,1])}+|k|\big{\|}\partial_{y,z}G_{k}(y,z)(\log|z-A| )^{m}\big{\|}_{L^{1}(z\in[0,1])}\right]\] \[\quad+\sup_{y\in[0,1],\alpha\in\{0,1\}}\left[|k|^{3/2-\alpha} \left\|\partial_{y,z}^{\alpha}G_{k}(y,z)\right\|_{L^{2}(z\in[0,1])}\right] \lesssim|\log\left\langle k\right\rangle|^{m},\qquad\text{for $m\in\{0,1,2,3\}$}. \tag{3.4}\]
Define
\[F_{k}(y,z)=\frac{1}{\sinh k}\left\{\begin{array}{ll}-k\cosh\left(k(1-z) \right)\cosh\left(ky\right),&0\leq y\leq z\leq 1;\\ -k\cosh\left(kz\right)\cosh\left(k(1-y)\right),&1\geq y>z\geq 0.\end{array}\right. \tag{3.5}\]
We note that
\[\partial_{y}\partial_{z}G_{k}(y,z)=\partial_{z}\partial_{y}G_{k}(y,z)=\delta( y-z)+F_{k}(y,z),\qquad\text{for $y,z\in[0,1]$}. \tag{3.6}\]
By direct computation, we see \(F_{k}\) satisfies the bounds
\[\sup_{y\in[0,1],|A|\leq 10}\left[\big{\|}F_{k}(y,z)(\log|z-A|)^{m} \big{\|}_{L^{1}(z\in[0,1])}+|k|^{-1}\big{\|}\partial_{y,z}F_{k}(y,z)(\log|z-A| )^{m}\big{\|}_{L^{1}(z\in[0,1])}\right]\] \[\quad+\sup_{y\in[0,1],\alpha\in\{0,1\}}\left[|k|^{-1/2-\alpha} \left\|\partial_{y,z}^{\alpha}F_{k}(y,z)\right\|_{L^{2}(z\in[0,1])}\right] \lesssim|\log\left\langle k\right\rangle|^{m},\qquad\text{for $m\in\{0,1,2,3\}$}. \tag{3.7}\]
The bounds (3.4) and (3.7) can be proved by explicit calculations and are useful in the proof of Lemma 4.1 below.
### Bounds on the modified Green's function
It follows from Assumption 1.1 that there exists a \(\delta_{0}\in(0,1/8)\) such that
\[\inf\{|y_{*}|,|y_{*}-1|\}>10\delta_{0}\quad\text{and}\quad\sup_{y\in(y_{*}-4 \delta_{0},y_{*}+4\delta_{0})}|b^{\prime\prime\prime}(y)|\delta_{0}<|b^{\prime \prime}(y_{*})|/10. \tag{3.8}\]
Define the set
\[\Sigma_{\delta_{0}}:=\{b(y):y\in[y_{*}-\delta_{0},y_{*}+\delta_{0}]\}, \tag{3.9}\]
and fix a standard smooth cutoff function \(\varphi\in C_{c}^{\infty}(-2,2)\) satisfying \(\varphi\equiv 1\) on \([-3/2,3/2]\). For simplicity of notations, we denote
\[I:=(0,1). \tag{3.10}\]
To simplify notations we define also for \(d\in(0,1/10)\),
\[S_{d}:=[y_{*}-d,y_{*}+d]. \tag{3.11}\]
For applications below, we also need to study the "modified Green's function" \(\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\) for \(y,z\in[0,1],\lambda\in\Sigma_{\delta_{0}}\) and \(\epsilon\in[-1/8,1/8]\backslash\{0\}\), which satisfies for \(y,z\in(0,1)\),
\[(k^{2}-\partial_{y}^{2})\mathcal{G}_{k}(y,z;\lambda+i\epsilon)+\frac{b^{ \prime\prime}(y)}{b(y)-\lambda+i\epsilon}\Big{[}\varphi\big{(}\frac{y-y_{*}}{ \delta_{0}}\big{)}-\varphi\big{(}\frac{y-y_{*}}{\delta(\lambda)}\big{)} \Big{]}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)=\delta(y-z), \tag{3.12}\]
with the boundary condition
\[\mathcal{G}_{k}(y,z;\lambda+i\epsilon)|_{y\in\{0,1\}}=0. \tag{3.13}\]
In the above, we have used the notation that
\[\delta(\lambda):=8\sqrt{|\lambda-b(y_{*})|/b^{\prime\prime}(y_{*})}. \tag{3.14}\]
Define the weight \(\varrho(y;\lambda+i\epsilon)\) for \(y,z\in[0,1],\lambda\in\Sigma_{\delta_{0}}\) and \(\epsilon\in[-1/8,1/8]\backslash\{0\}\) as
\[\varrho(y;\lambda+i\epsilon):= |\lambda-b(y_{*})|^{1/2}+|\epsilon|^{1/2}+|y-y_{*}|. \tag{3.15}\]
The crucial bounds we need for the modified Green's function \(\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\) is the following.
**Lemma 3.1**.: _Let \(\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\) for \(y,z\in[0,1],\lambda\in\Sigma_{\delta_{0}}\) and \(\epsilon\in[-1/8,1/8]\backslash\{0\}\) be defined as in (3.12). Then we have the identity for \(y,z\in[0,1]\),_
\[\mathcal{G}_{k}(y,z;\lambda+i\epsilon)=\mathcal{G}_{k}(z,y;\lambda+i\epsilon), \tag{3.16}\]
_and the following statements hold._
_(i) We have the bounds_
\[\sup_{y\in[0,1],\,|y-z|\leq\min\{\varrho(z;\lambda+i\epsilon),1/|k|\}}| \mathcal{G}_{k}(y,z;\lambda+i\epsilon)|\lesssim\min\{\varrho(z;\lambda+i \epsilon),1/|k|\}, \tag{3.17}\]
_(ii) For \(y_{1},y_{2}\in[0,1]\) with \(y_{2}\in[\min\{y_{1},z\},\max\{y_{1},z\}]\) and \(\varrho(y_{2};\lambda+i\epsilon)\gtrsim 1/|k|\), we have the bounds with \(\alpha\in\{0,1\}\)_
\[|\partial_{y}^{\alpha}\mathcal{G}_{k}(y_{1},z;\lambda+i\epsilon)| \tag{3.18}\] \[\lesssim\Big{[}|k|+\varrho^{-1}(y_{1};\lambda+i\epsilon)\Big{]} ^{\alpha}e^{-|k||y_{1}-y_{2}|}\bigg{[}|k|\int_{[y_{2}-1/|k|,y_{2}+1/|k|]\cap I }|G_{k}(y,z;\lambda+i\epsilon)|^{2}\,dy\bigg{]}^{1/2}.\]
_(iii) For \(y_{1},y_{2}\in[0,1]\) with \(y_{2}\in[\min\{y_{1},z\},\max\{y_{1},z\}]\) and \(\varrho(y_{2};\lambda+i\epsilon)\ll 1/|k|\), we have the bounds with \(\alpha\in\{0,1\}\)_
\[|\partial_{y}^{\alpha}\mathcal{G}_{k}(y_{1},z;\lambda+i\epsilon)|\lesssim\Big{[} |k|+\varrho^{-1}(y_{1};\lambda+i\epsilon)\Big{]}^{\alpha}\min\bigg{\{}\frac{ \varrho^{2}(y_{1};\lambda+i\epsilon)}{\varrho^{2}(y_{2};\lambda+i\epsilon)}, \,\frac{\varrho(y_{2};\lambda+i\epsilon)}{\varrho(y_{1};\lambda+i\epsilon)} \bigg{\}}M, \tag{3.19}\]
_where_
\[M:=\bigg{[}\frac{1}{\varrho(y_{2};\lambda+i\epsilon)}\int_{[y_{2}-\varrho(y_{ 2};\lambda+i\epsilon),y_{2}+\varrho(y_{2};\lambda+i\epsilon)]\cap I}|\mathcal{ G}_{k}(y,z;\lambda+i\epsilon)|^{2}\,dy\bigg{]}^{1/2}. \tag{3.20}\]
Proof.: The proof is based on energy estimates and "entanglement inequalities", as in [15]. See also the earlier work [33] where this type of inequality was used. We divide the proof into several steps.
**Step 1: the proof of (3.17).** We first establish the bounds (3.17). For simplicity of notation, we suppress the dependence on \(z,\lambda+i\epsilon\) and set for \(y\in[0,1]\),
\[h(y):=\mathcal{G}_{k}(y,z;\lambda+i\epsilon),\quad V(y):=\frac{b^{\prime \prime}(y)}{b(y)-\lambda+i\epsilon}\Big{[}\varphi\big{(}\frac{y-y_{*}}{\delta_ {0}}\big{)}-\varphi\big{(}\frac{y-y_{*}}{\delta}\big{)}\Big{]}. \tag{3.21}\]
Multiplying \(\overline{h}\) to (3.12) and integrating over \([0,1]\), we obtain that
\[\int_{0}^{1}|\partial_{y}h(y)|^{2}+|k|^{2}|h(y)|^{2}\,dy+\int_{0}^{1}\frac{b^{ \prime\prime}(y)}{b(y)-\lambda+i\epsilon}\Big{[}\varphi\big{(}\frac{y-y_{*}}{ \delta_{0}}\big{)}-\varphi\big{(}\frac{y-y_{*}}{\delta}\big{)}\Big{]}|h(y)|^{ 2}\,dy=\overline{h}(z). \tag{3.22}\]
Note that for \(y\in[0,1]\), \(\Re V(y)\geq 0\), and in addition, for \(y\in S_{\delta_{0}}\) and
\[|y-y_{*}|>C_{0}\big{(}|\lambda-b(y_{*})|^{1/2}+|\epsilon|^{1/2}\big{)}\]
with sufficiently large \(C_{0}\gg 1\),
\[1+\Re V(y)\gtrsim\frac{1}{\varrho^{2}(y;\lambda+i\epsilon)}. \tag{3.23}\]
It follows from (3.22) that
\[\begin{split}&\int_{0}^{1}|\partial_{y}h(y)|^{2}+|k|^{2}|h(y)|^{ 2}\,dy+\int_{y\in S_{\delta_{0}},\;|y-y_{*}|>C_{0}(\delta+|\epsilon|^{1/2})} \,\frac{1}{\big{[}\varrho(y;\lambda+i\epsilon)\big{]}^{2}}|h(y)|^{2}\,dy\\ &\lesssim|h(z)|.\end{split} \tag{3.24}\]
Using the Sobolev type inequality
\[\|h\|_{L^{\infty}(J)}\lesssim\|h\|_{L^{2}(J_{*})}|J|^{-1/2}+\|\partial_{y}h\| _{L^{2}(J)}|J|^{1/2}, \tag{3.25}\]
for any interval \(J,J_{*}\) with \(J_{*}\subseteq J\) and \(|J_{*}|\gtrsim|J|\), and choosing the interval \(J\subset I\) as an interval containing \(z\) with length of the size \(C_{1}\min\{1/|k|,\varrho(z;\lambda+i\epsilon)\}\), we obtain from (3.24) that
\[\begin{split}&\int_{0}^{1}|\partial_{y}h(y)|^{2}+|k|^{2}|h(y)|^{ 2}\,dy+\int_{y\in S_{\delta_{0}},\;|y-y_{*}|>C_{0}(\delta+|\epsilon|^{1/2})} \,\frac{1}{\big{[}\varrho(y;\lambda+i\epsilon)\big{]}^{2}}|h(y)|^{2}\,dy\\ &\lesssim\min\{1/|k|,\varrho(z;\lambda+i\epsilon)\}.\end{split} \tag{3.26}\]
The desired bound (3.17) follows from (3.26), (3.25), and equation (3.12).
**Step 2: the proof of (3.18).** Denote
\[M_{1}:=\bigg{[}|k|\int_{[y_{2}-1/|k|,y_{2}+1/|k|]\cap I}|\mathcal{G}_{k}(y,z; \lambda+i\epsilon)|^{2}\,dy\bigg{]}^{1/2}. \tag{3.27}\]
For the sake of concreteness, we assume that \(y_{1}>z\) (so \(y_{2}\in[z,y_{1}]\)). We shall also assume that \(y_{1}-y_{2}\gg 1/|k|\) as the other case is analogous but easier. For \(\varphi\in C_{p}^{1}([y_{2},1])\), the space of piecewise \(C^{1}\) functions, with \(\varphi(y_{2})=0\), we multiply \(\varphi^{2}\overline{h}\) to equation (3.12) and integrate over
\([y_{2},1]\) to obtain that
\[\int_{y_{2}}^{1}|\partial_{y}h(y)|^{2}\varphi^{2}(y)+2\partial_{y}h(y)\overline{h( y)}\varphi(y)\partial_{y}\varphi(y)+|k|^{2}\varphi^{2}(y)|h(y)|^{2}+V(y)|h(y)|^{2} \varphi^{2}(y)\,dy=0. \tag{3.28}\]
Taking the real part of (3.28) and using Cauchy-Schwarz inequality, we get that
\[\int_{y_{2}}^{1}\big{[}|\partial_{y}\varphi(y)|^{2}-|k|^{2}|\varphi(y)|^{2} \big{]}|h(y)|^{2}\,dy\geq 0. \tag{3.29}\]
We now choose \(\varphi\) more specifically as follows. We require that
\[\begin{split}&\varphi(y_{2})=0,\ \varphi^{\prime\prime}(y)=0\ \text{for}\ y\in[y_{2},y_{2}+1/|k|],\ \varphi(y_{2}+1/|k|)=1,\\ &\varphi^{\prime}(y)=|k|\varphi(y)\ \text{for}\ y\in[y_{2}+1/|k|,y_{ 1}-1/|k|],\ \varphi^{\prime}(y)=0\ \text{for}\ y\in[y_{1}-1/|k|,1].\end{split} \tag{3.30}\]
It follows from (3.29)-(3.30) that
\[\int_{y_{1}-1/|k|}^{1}|k|^{2}\varphi^{2}(y)|h(y)|^{2}\,dy\lesssim|k|M_{1}^{2},\quad\varphi(y)\approx e^{|k||y_{1}-y_{2}|}\ \text{for}\ y\in[y_{1}-1/|k|,y_{1}+1/|k|]\cap I. \tag{3.31}\]
The desired bounds (3.18) follow from (3.31) and equation (3.12).
**Step 3: the proof of (3.19).** For the sake of concreteness, we assume that \(y_{1}>z\) (and so \(y_{2}\in[z,y_{1}]\)). We shall also assume that \(y_{1}-y_{2}\gg\varrho(y_{2};\lambda+i\epsilon)\) and that \(y_{2}>y_{*}+\delta+|\epsilon|^{1/2}\) as the other cases are analogous.
For \(\varphi\in C_{p}^{1}([y_{2},1])\) with \(\varphi(y_{2})=0\), we multiply \(\varphi^{2}\overline{h}\) to equation (3.12) and integrate over \([y_{2},1]\) to obtain that
\[\int_{y_{2}}^{1}|\partial_{y}h(y)|^{2}\varphi^{2}(y)+2\partial_{y}h(y) \overline{h(y)}\varphi(y)\partial_{y}\varphi(y)+|k|^{2}\varphi^{2}(y)|h(y)|^{ 2}+V(y)|h(y)|^{2}\varphi^{2}(y)\,dy=0. \tag{3.32}\]
Write for \(y\in[y_{2},1]\)
\[h(y)=(y-y_{*})^{1/2}h^{*}(y). \tag{3.33}\]
Simple calculations show that
\[\begin{split}&\int_{y_{2}}^{1}(y-y_{*})|\partial_{y}h^{*}(y)|^{2} \varphi^{2}(y)+2(y-y_{*})\partial_{y}\varphi(y)\varphi(y)\partial_{y}h^{*}(y) \overline{h^{*}(y)}+\frac{1}{4(y-y_{*})}|h^{*}(y)|^{2}\varphi^{2}(y)\\ &\qquad+|k|^{2}|h(y)|^{2}\varphi^{2}(y)+(y-y_{*})V(y)\varphi^{2}( y)|h^{*}(y)|^{2}\,dy=0.\end{split} \tag{3.34}\]
Therefore
\[\int_{y_{2}}^{1}\Big{[}\frac{1}{4(y-y_{*})}+(y-y_{*})\Re V(y)\Big{]}\varphi^{2}(y)|h^{*}(y)|^{2}\,dy\leq\int_{y_{2}}^{1}(y-y_{*})(\partial_{y}\varphi)^{2}(y)|h^{*}(y)|^{2}\,dy, \tag{3.35}\]
which implies that
\[\int_{y_{2}}^{1}\frac{1}{y-y_{*}}\Big{[}\big{(}(y-y_{*})\partial_{y}\varphi \big{)}^{2}(y)-\Big{(}1/4+(y-y_{*})^{2}\Re V(y)\Big{)}\varphi^{2}(y)\Big{]}|h^{ *}(y)|^{2}\,dy\geq 0. \tag{3.36}\]
We notice the pointwise bounds for \(y\in[y_{2},1]\),
\[1/4+(y-y_{*})^{2}\Re V(y)\geq\max\Big{\{}0,9/4-C_{2}\frac{\varrho^{2}(y_{2}; \lambda+i\epsilon)}{(y-y_{*})^{2}}-C_{2}|y-y_{*}|\Big{\}}. \tag{3.37}\]
Now we choose \(\varphi\in C_{p}^{1}([y_{2},1])\) more precisely as follows. We require that
\[\begin{split}&\varphi(y_{2})=0,\ \varphi^{\prime\prime}(y)=0\ \text{for}\ y\in[y_{2},y_{2}+\varrho(y_{2};\lambda+i\epsilon)],\ \varphi(y_{2}+\varrho(y_{2};\lambda+i\epsilon))=1,\\ &(y-y_{*})\varphi^{\prime}(y)=\big{[}1/4+(y-y_{*})^{2}\Re V(y) \big{]}^{1/2}\varphi(y)\\ &\text{for}\ y\in[y_{2}+\varrho(y_{2};\lambda+i\epsilon),y_{1}- \varrho(y_{1};\lambda+i\epsilon)],\ \text{and}\ \varphi^{\prime}(y)=0\ \text{for}\ y\in[y_{1}-\varrho(y_{1};\lambda+i\epsilon),1].\end{split} \tag{3.38}\]
It follows from (3.36)-(3.38) that
\[\begin{split}&\int_{y_{1}-\varrho(y_{1};\lambda+i\epsilon)}^{y_{ 1}}\frac{1}{\varrho(y_{1};\lambda+i\epsilon)}\varphi^{2}(y)|h^{*}(y)|^{2}\,dy \lesssim M^{2}/\varrho(y_{2};\lambda+i\epsilon),\\ &\varphi(y)\approx\frac{(y_{1}-y_{*})^{3/2}}{\varrho^{3/2}(y_{2}; \lambda+i\epsilon)}\ \text{for}\ y\in[y_{1}-\varrho(y_{1};\lambda+i\epsilon),y_{1}].\end{split} \tag{3.39}\]
The desired bounds (3.19) follow from the change of variable (3.33), the bound (3.36), (3.39) and equation (3.12).
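Heuristically, the weight defined by (3.38) grows algebraically. By (3.37), away from the critical layer \(1/4+(y-y_{*})^{2}\Re V(y)\approx 9/4\), so the defining ODE is approximately \((y-y_{*})\varphi^{\prime}(y)=\tfrac{3}{2}\varphi(y)\), whose solutions are multiples of \((y-y_{*})^{3/2}\). Normalizing \(\varphi=1\) at \(y_{2}+\varrho(y_{2};\lambda+i\epsilon)\), where \(y-y_{*}\approx\varrho(y_{2};\lambda+i\epsilon)\) (this uses \(y_{2}>y_{*}+\delta+|\epsilon|^{1/2}\), as assumed in Step 3), gives

\[\varphi(y)\approx\Big(\frac{y-y_{*}}{\varrho(y_{2};\lambda+i\epsilon)}\Big)^{3/2},\]

which explains the factor \((y_{1}-y_{*})^{3/2}/\varrho^{3/2}(y_{2};\lambda+i\epsilon)\) appearing in (3.39).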
As a corollary of Lemma 3.1, we have the following additional bounds on the modified Green's function.
**Lemma 3.2**.: _Let \(\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\) for \(y,z\in[0,1],\lambda\in\Sigma_{\delta_{0}},k\in\mathbb{Z}\backslash\{0\}\) and \(\epsilon\in[-1/8,1/8]\backslash\{0\}\) be defined as in (3.12). Recall the definition (3.14) for \(\delta=\delta(\lambda)>0\). Define_
\[h:=10(\delta+|\epsilon|^{1/2}), \tag{3.40}\]
_and also for \(y,z\in[0,1]\),_
\[\mathcal{H}_{k}(y,z;\lambda+i\epsilon):=\Big{[}\partial_{z}+\varphi\big{(} \frac{y-y_{*}}{h}\big{)}\partial_{y}\Big{]}\mathcal{G}_{k}(y,z;\lambda+i \epsilon). \tag{3.41}\]
_Then the following statements hold for \(z\in S_{4\delta}\)._
_(i) We have the bounds_
\[\begin{split}&\sup_{y\in[0,1],\,|y-z|\leq\min\{\varrho(z;\lambda+i \epsilon),1/|k|\}}|\mathcal{H}_{k}(y,z;\lambda+i\epsilon)|\lesssim 1,\\ &\sup_{y\in[0,1],\,|y-z|\leq\min\{\varrho(z;\lambda+i\epsilon),1/ |k|\}}|\partial_{y}\mathcal{H}_{k}(y,z;\lambda+i\epsilon)|\lesssim 1/\min\{ \varrho(z;\lambda+i\epsilon),1/|k|\};\end{split} \tag{3.42}\]
_(ii) For \(y_{1},y_{2}\in[0,1]\) with \(y_{2}\in[\min\{y_{1},z\},\max\{y_{1},z\}]\) and \(\varrho(y_{2};\lambda+i\epsilon)\gtrsim 1/|k|\), we have the bounds with \(\alpha\in\{0,1\}\)_
\[\begin{split}&\big{[}\min\{\varrho(y_{1};\lambda+i\epsilon),1/|k| \}\big{]}^{\alpha}|\partial_{y}^{\alpha}\mathcal{H}_{k}(y_{1},z;\lambda+i \epsilon)|\\ &\lesssim\frac{e^{-|k||y_{1}-y_{2}|}}{\min\{\varrho(z;\lambda+i \epsilon),1/|k|\}}\bigg{[}|k|\int_{[y_{2}-1/|k|,y_{2}+1/|k|]\cap I}|\mathcal{G }_{k}(y,z;\lambda+i\epsilon)|^{2}\,dy\bigg{]}^{1/2}.\end{split} \tag{3.43}\]
_(iii) For \(y_{1},y_{2}\in[0,1]\) with \(y_{2}\in[\min\{y_{1},z\},\max\{y_{1},z\}]\) and \(\varrho(y_{2};\lambda+i\epsilon)\ll 1/|k|\), we have the bounds with \(\alpha\in\{0,1\}\)_
\[\begin{split}&\big{[}\min\{\varrho(y_{1};\lambda+i\epsilon),1/|k| \}\big{]}^{\alpha}|\partial_{y}^{\alpha}\mathcal{H}_{k}(y_{1},z;\lambda+i \epsilon)|\\ &\lesssim\frac{1}{\min\{\varrho(z;\lambda+i\epsilon),1/|k|\}} \min\bigg{\{}\frac{\varrho^{2}(y_{1};\lambda+i\epsilon)}{\varrho^{2}(y_{2}; \lambda+i\epsilon)},\,\frac{\varrho(y_{2};\lambda+i\epsilon)}{\varrho(y_{1}; \lambda+i\epsilon)}\bigg{\}}M,\end{split} \tag{3.44}\]
_where_
\[M:=\bigg{[}\frac{1}{\varrho(y_{2};\lambda+i\epsilon)}\int_{[y_{2}-\varrho(y_{ 2};\lambda+i\epsilon),y_{2}+\varrho(y_{2};\lambda+i\epsilon)]\cap I}| \mathcal{G}_{k}(y,z;\lambda+i\epsilon)|^{2}\,dy\bigg{]}^{1/2}. \tag{3.45}\]
Proof.: Denote with a slight abuse of notation for \(y\in[0,1]\),
\[\varphi^{\dagger}(y):=\varphi\big{(}\frac{y-y_{*}}{h}\big{)},\quad V(y):= \frac{b^{\prime\prime}(y)}{b(y)-\lambda+i\epsilon}\Big{[}\varphi\big{(}\frac{ y-y_{*}}{\delta_{0}}\big{)}-\varphi\big{(}\frac{y-y_{*}}{\delta(\lambda)} \big{)}\Big{]}. \tag{3.46}\]
Then \(\mathcal{H}_{k}(y,z;\lambda+i\epsilon)\) satisfies for \(y\in[0,1],z\in S_{4\delta}\),
\[\begin{split}&(k^{2}-\partial_{y}^{2})\mathcal{H}_{k}(y,z; \lambda+i\epsilon)+V(y)\mathcal{H}_{k}(y,z;\lambda+i\epsilon)\\ &=-\partial_{y}^{2}\varphi^{\dagger}(y)\partial_{y}\mathcal{G}_{k }(y,z;\lambda+i\epsilon)-\partial_{y}V(y)\varphi^{\dagger}(y)\mathcal{G}_{k}(y,z;\lambda+i\epsilon)-2\partial_{y}\varphi^{\dagger}(y)\partial_{y}^{2} \mathcal{G}_{k}(y,z;\lambda+i\epsilon).\end{split} \tag{3.47}\]
The desired bounds then follow from equation (3.47), Lemma 3.1 and standard elliptic regularity theory.
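The reason for working with the combination (3.41) is a cancellation across the diagonal, which we record as a heuristic: near \(y=z\) the modified Green's function behaves like the free kernel, whose \(y\)-derivative jumps by an \(O(1)\) amount at \(y=z\), while the symmetrized derivative does not. For the whole-line model kernel one has exactly

\[(\partial_{z}+\partial_{y})\,\frac{e^{-|k||y-z|}}{2|k|}=0,\]

so \(\mathcal{H}_{k}\) is effectively one order smoother than \(\partial_{y}\mathcal{G}_{k}\) near the diagonal; the cutoff \(\varphi\big{(}\frac{y-y_{*}}{h}\big{)}\) localizes this gain to a neighborhood of the critical layer, where it is needed.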
The bounds in Lemma 3.1 and Lemma 3.2 are quite sharp, since they exploit the decay coming from both \(k^{2}\) and \(\frac{b^{\prime\prime}(y)}{b(y)-\lambda+i\epsilon}\Big{[}\varphi\big{(}\frac{y-y_{*}}{\delta_{0}}\big{)}-\varphi\big{(}\frac{y-y_{*}}{\delta(\lambda)}\big{)}\Big{]}\). They are, however, somewhat cumbersome to apply directly. In applications we therefore more often use the following simpler bounds.
**Corollary 3.3**.: _Let \(\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\) for \(y,z\in[0,1],\lambda\in\Sigma_{\delta_{0}}\) and \(\epsilon\in[-1/8,1/8]\backslash\{0\}\) be defined as in (3.12). Then we have the following bounds._
_(i) For \(y,z\in[0,1]\), we have the bounds with \(\alpha\in\{0,1\}\)_
\[\begin{split}&\Big{[}|k|+\varrho^{-1}(y;\lambda+i\epsilon) \Big{]}^{-\alpha}|\partial_{y}^{\alpha}\mathcal{G}_{k}(y,z;\lambda+i\epsilon) |\\ &\lesssim\frac{1}{|k|+\varrho^{-1}(z;\lambda+i\epsilon)}\min\bigg{\{} e^{-|k||y-z|},\frac{\varrho^{2}(y;\lambda+i\epsilon)}{\varrho^{2}(z;\lambda+i \epsilon)},\,\frac{\varrho(z;\lambda+i\epsilon)}{\varrho(y;\lambda+i\epsilon) }\bigg{\}}.\end{split} \tag{3.48}\]
_(ii) For \(y\in[0,1],z\in S_{4\delta}\), we have the bounds with \(\alpha\in\{0,1,2\}\)_
\[\begin{split}&\Big{[}|k|+\varrho^{-1}(y;\lambda+i\epsilon) \Big{]}^{-\alpha}|\partial_{y}^{\alpha}\mathcal{H}_{k}(y,z;\lambda+i\epsilon) |\lesssim\min\bigg{\{}e^{-|k||y-z|},\frac{\varrho^{2}(y;\lambda+i\epsilon)}{ \varrho^{2}(z;\lambda+i\epsilon)},\,\frac{\varrho(z;\lambda+i\epsilon)}{ \varrho(y;\lambda+i\epsilon)}\bigg{\}}.\end{split} \tag{3.49}\]
Proof.: The desired bounds (3.48)-(3.49) follow directly from Lemma 3.1 and Lemma 3.2, by choosing, if necessary, another point \(y^{\prime}\) between \(y\) and \(z\) such that \(\varrho(y^{\prime};\lambda+i\epsilon)\approx 1/|k|\), and applying the bounds of Lemma 3.1 and Lemma 3.2 on the intervals \([\min\{z,y^{\prime}\},\max\{z,y^{\prime}\}]\) and \([\min\{y^{\prime},y\},\max\{y^{\prime},y\}]\) successively.
## 4. The limiting absorption principle
In this section we study the solvability of the main Rayleigh equations (2.8). It turns out that the situation is very different for the spectral range \(\lambda\in\Sigma\backslash\Sigma_{\delta_{0}/2}\) (the non-degenerate case) and \(\lambda\in\Sigma_{\delta_{0}}\) (the degenerate case). We first consider the non-degenerate case.
### The non-degenerate case
Fix \(\epsilon\in[-1/4,1/4]\backslash\{0\},\lambda\in\Sigma\backslash\Sigma_{\delta_{ 0}/2},k\in\mathbb{Z}\backslash\{0\}\). Define for each \(g\in L^{2}(0,1)\) the operator
\[T_{k,\lambda,\epsilon}g(y):=\int_{0}^{1}G_{k}(y,z)\frac{b^{\prime\prime}(z)g(z )}{b(z)-\lambda+i\epsilon}dz. \tag{4.1}\]
For applications below, we fix a smooth cutoff function \(\Phi\in C_{0}^{\infty}(y_{*}-\delta_{0}/3,y_{*}+\delta_{0}/3)\) with \(\Phi\equiv 1\) on \([y_{*}-\delta_{0}/4,y_{*}+\delta_{0}/4]\). To obtain the optimal dependence on the frequency variable \(k\), we define
\[\|g\|_{H^{1}_{k}(I)}:=\|g\|_{L^{2}(I)}+|k|^{-1}\|g^{\prime}\|_{L^{2}(I)}. \tag{4.2}\]
**Lemma 4.1**.: _For \(\epsilon\in[-1/4,1/4]\backslash\{0\},\lambda\in\Sigma\backslash\Sigma_{\delta _{0}/2},k\in\mathbb{Z}\backslash\{0\}\), the operator \(T_{k,\lambda,\epsilon}\) satisfies the bound_
\[\|T_{k,\lambda,\epsilon}g\|_{H^{1}_{k}(I)}\lesssim|k|^{-1/3}\|g\|_{H^{1}_{k}( I)},\qquad\text{for all }g\in H^{1}_{k}(I). \tag{4.3}\]
_In addition, we have the more precise regularity structure_
\[\begin{split}&\left\|\partial_{y}T_{k,\lambda,\epsilon}g(y)+\frac{b^{\prime\prime}(y)(1-\Phi(y))g(y)}{b^{\prime}(y)}\log\left(b(y)-\lambda+i\epsilon\right)\right\|_{W^{1,1}(I)}\\ &\lesssim\langle k\rangle^{4/3}\|g\|_{H^{1}_{k}(I)}.\end{split} \tag{4.4}\]
Proof.: We can decompose for \(y\in[0,1]\),
\[T_{k,\lambda,\epsilon}g(y):=T^{1}_{k,\lambda,\epsilon}g(y)+T^{2}_{k,\lambda, \epsilon}g(y), \tag{4.5}\]
where
\[T^{1}_{k,\lambda,\epsilon}g(y):=\int_{0}^{1}G_{k}(y,z)\frac{\Phi(z)b^{\prime\prime}(z)g(z)}{b(z)-\lambda+i\epsilon}dz,\quad T^{2}_{k,\lambda,\epsilon}g(y):=\int_{0}^{1}G_{k}(y,z)\frac{(1-\Phi(z))b^{\prime\prime}(z)g(z)}{b(z)-\lambda+i\epsilon}dz. \tag{4.6}\]
It follows from the definition of \(\Phi\) that \(T^{1}_{k,\lambda,\epsilon}g(y)\) satisfies the bound
\[\|T^{1}_{k,\lambda,\epsilon}g(y)\|_{H^{1}_{k}(I)}\lesssim|k|^{-1/3}\|g\|_{H^{ 1}_{k}(I)},\quad\|\partial_{y}T^{1}_{k,\lambda,\epsilon}g(y)\|_{W^{1,1}(I)} \lesssim\langle k\rangle^{4/3}\|g\|_{H^{1}_{k}(I)}. \tag{4.7}\]
To bound \(T^{2}_{k,\lambda,\epsilon}g(y)\), we follow the approach in [14]. Using integration by parts, we obtain that
\[\begin{split} T^{2}_{k,\lambda,\epsilon}g(y)&=\int_ {0}^{1}G_{k}(y,z)\frac{(1-\Phi(z))b^{\prime\prime}(z)g(z)}{b^{\prime}(z)} \partial_{z}\log(b(z)-\lambda+i\epsilon)\,dz\\ &=-\int_{0}^{1}\partial_{z}G_{k}(y,z)\frac{(1-\Phi(z))b^{\prime \prime}(z)g(z)}{b^{\prime}(z)}\log(b(z)-\lambda+i\epsilon)\,dz\\ &\quad-\int_{0}^{1}G_{k}(y,z)\partial_{z}\Big{[}\frac{(1-\Phi(z) )b^{\prime\prime}(z)g(z)}{b^{\prime}(z)}\Big{]}\log(b(z)-\lambda+i\epsilon)\,dz.\end{split} \tag{4.8}\]
The desired bounds follow from the bound (3.4) and the formulas (3.6) and (3.7).
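The integration by parts in (4.8) rests on the elementary identity

\[\frac{1}{b(z)-\lambda+i\epsilon}=\frac{1}{b^{\prime}(z)}\,\partial_{z}\log\big{(}b(z)-\lambda+i\epsilon\big{)},\]

which is legitimate on the support of \(1-\Phi\), where \(|b^{\prime}(z)|\gtrsim 1\); this is precisely why the cutoff \(\Phi\) around the critical point \(y_{*}\) is inserted before integrating by parts.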
We now prove the limiting absorption principle, using the assumption that there are no discrete or generalized embedded eigenvalues.
**Lemma 4.2**.: _There exist \(\epsilon_{0},\kappa>0\), such that the following statement holds. For all \(\lambda\in\Sigma\backslash\Sigma_{\delta_{0}/2},k\in\mathbb{Z}\backslash\{0\},0<| \epsilon|<\epsilon_{0}\) and any \(g\in H^{1}_{k}(I)\), we have the bound_
\[\|g+T_{k,\lambda,\epsilon}g\|_{H^{1}_{k}(I)}\geq\kappa\|g\|_{H^{1}_{k}(I)}. \tag{4.9}\]
Proof.: We prove (4.9) by contradiction. Assume that there exist, for \(j\geq 1\), sequences of numbers \(k_{j}\in\mathbb{Z}\backslash\{0\}\), \(\lambda_{j}\in\Sigma\backslash\Sigma_{\delta_{0}/2}\), \(\epsilon_{j}\in\mathbb{R}\backslash\{0\}\) with \(\epsilon_{j}\to 0\), and functions \(g_{j}\in H^{1}_{k_{j}}(I)\) with \(\|g_{j}\|_{H^{1}_{k_{j}}(I)}=1\), satisfying \(k_{j}\to k_{*}\in(\mathbb{Z}\backslash\{0\})\cup\{\pm\infty\}\), \(\lambda_{j}\to\lambda_{*}\in\overline{\Sigma\backslash\Sigma_{\delta_{0}/2}}\) as \(j\to\infty\), such that
\[\big{\|}g_{j}+T_{k_{j},\lambda_{j},\epsilon_{j}}g_{j}\big{\|}_{H^{1}_{k_{j}}( I)}\to 0,\qquad\text{as $j\to\infty$}. \tag{4.10}\]
The bounds (4.3) and (4.10) imply that \(|k_{j}|\lesssim 1\). Thus \(k_{*}\in\mathbb{Z}\backslash\{0\}\). Using \(\|g_{j}\|_{H^{1}_{k_{j}}(I)}=1\), the bounds (4.4) and the compact embedding \(W^{1,1}(I)\to L^{2}(I)\), we conclude that by passing to a subsequence, \(T_{k_{j},\lambda_{j},\epsilon_{j}}g_{j}\) converges in \(H^{1}(I)\). In view of (4.10) we can assume that \(g_{j}\to g\) in \(H^{1}(I)\), where \(\|g\|_{H^{1}_{k_{*}}}=1\).
Using formula (4.1), we obtain from (4.10) that for \(y\in I\),
\[g(y)+\lim_{j\to\infty}\int_{0}^{1}G_{k_{*}}(y,z)\frac{b^{\prime\prime}(z)g(z)}{b(z)-\lambda_{*}+i\epsilon_{j}}\,dz=0. \tag{4.11}\]
Applying \(k_{*}^{2}-\frac{d^{2}}{dy^{2}}\) to (4.11), we get that for \(y\in I\),
\[k_{*}^{2}g(y)-g^{\prime\prime}(y)+\lim_{j\to\infty}\frac{(b(y)-\lambda_{*})b^{\prime\prime}(y)g(y)}{(b(y)-\lambda_{*})^{2}+\epsilon_{j}^{2}}+i\pi\sum_{z\in[0,1],\,b(z)=\lambda_{*}}\frac{b^{\prime\prime}(z)g(z)}{|b^{\prime}(z)|}\delta(y-z)=0, \tag{4.12}\]
in the sense of distributions for \(y\in(0,1)\), which contradicts our spectral assumption that \(\lambda_{*}\) is not a generalized embedded eigenvalue for \(L_{k_{*}}\). The lemma is then proved.
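The distributional limit behind (4.12) is the Sokhotski-Plemelj formula, applied in the variable \(b(y)-\lambda_{*}\); for \(\epsilon_{j}\to 0+\) it reads

\[\lim_{\epsilon\to 0+}\frac{1}{b(y)-\lambda_{*}+i\epsilon}=\mathrm{p.v.}\,\frac{1}{b(y)-\lambda_{*}}-i\pi\sum_{z\in[0,1]:\,b(z)=\lambda_{*}}\frac{\delta(y-z)}{|b^{\prime}(z)|},\]

and the opposite sign of \(\epsilon_{j}\) flips the sign of the delta term. The formula applies here because \(\lambda_{*}\) stays away from the critical value \(b(y_{*})\), so \(b^{\prime}\neq 0\) at the (finitely many) roots of \(b(z)=\lambda_{*}\).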
### The degenerate case \(\lambda\in\Sigma_{\delta_{0}}\)
Recall the definition (3.14) for \(\delta=\delta(\lambda)\). For \(\lambda\in\Sigma_{\delta_{0}},k\in\mathbb{Z}\backslash\{0\},y\in I\) and \(\epsilon\in[-1/8,1/8]\backslash\{0\}\), we denote
\[d_{k}(\lambda,\epsilon):=\big{[}|\lambda-b(y_{*})|^{1/2}+|\epsilon|^{1/2} \big{]}\wedge\frac{1}{|k|},\quad\varrho_{k}(y;\lambda+i\epsilon):=\varrho(y; \lambda+i\epsilon)\wedge\frac{1}{|k|}. \tag{4.13}\]
Define the weight and the associated weighted Sobolev spaces \(X_{N,\varrho_{k}}\) and \(X_{L,\varrho_{k}}\) as
\[\begin{split}\|g\|_{X_{N,\varrho_{k}}(I)}:=&\sum_{ \alpha\in\{0,1\}}(\delta+|\epsilon|^{1/2})^{-1/2}\Big{\|}\big{[}d_{k}(\lambda, \epsilon)\big{]}^{(-7/4+\alpha)}\partial_{y}^{\alpha}g\big{\|}_{L^{2}(S_{3( \delta+|\epsilon|^{1/2})})}\\ &+\sum_{\alpha\in\{0,1\}}\|\varrho_{k}^{-7/4+\alpha}(\cdot; \lambda+i\epsilon)\partial_{y}^{\alpha}g\|_{L^{\infty}(I\backslash S_{3( \delta+|\epsilon|^{1/2})})},\end{split} \tag{4.14}\]
and
\[\begin{split}\|g\|_{X_{L,\varrho_{k}}(I)}:=&\sum_{\alpha\in\{0,1\}}(\delta+|\epsilon|^{1/2})^{-1/2}\big{\|}d_{k}^{\alpha}(\lambda,\epsilon)\partial_{y}^{\alpha}g\big{\|}_{L^{2}(S_{3(\delta+|\epsilon|^{1/2})})}\\ &+\sum_{\alpha\in\{0,1\}}\big{\|}d_{k}(\lambda,\epsilon)^{-1}\varrho_{k}^{\alpha+1}(\cdot;\lambda+i\epsilon)\partial_{y}^{\alpha}g\big{\|}_{L^{\infty}(I\backslash S_{3(\delta+|\epsilon|^{1/2})})}.\end{split} \tag{4.15}\]
Fix \(\epsilon\in[-1/4,1/4]\backslash\{0\},\lambda\in\Sigma_{\delta_{0}},k\in\mathbb{Z} \backslash\{0\}\). Recall the definition (3.14) for \(\delta=\delta(\lambda)>0\). Define for each \(g\in L^{2}(0,1)\) the operator
\[T_{k}^{*}(\lambda+i\epsilon)g(y):=\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\bigg{[}1-\varphi\big{(}\frac{z-y_{*}}{\delta_{0}}\big{)}+\varphi\big{(}\frac{z-y_{*}}{\delta}\big{)}\bigg{]}\frac{b^{\prime\prime}(z)g(z)}{b(z)-\lambda+i\epsilon}dz. \tag{4.16}\]
Then we have the following bounds for \(T_{k}^{*}(\lambda+i\epsilon)\).
**Lemma 4.3**.: _For \(\epsilon\in[-1/4,1/4]\backslash\{0\},\lambda\in\Sigma_{\delta_{0}},k\in \mathbb{Z}\backslash\{0\},\) the operator \(T_{k}^{*}(\lambda+i\epsilon)\) satisfies the bound for \(X\in\{X_{N,\varrho_{k}}(I),X_{L,\varrho_{k}}(I)\}\)_
\[\|T_{k}^{*}(\lambda+i\epsilon)g\|_{X}\lesssim(1+|k|(|\lambda-b(y_{*})|^{1/2}+| \epsilon|^{1/2}))^{-1/4}\|g\|_{X},\quad\text{for all }g\in H^{1}_{k}(I). \tag{4.17}\]
Proof.: We provide the detailed proof only for the case \(X=X_{N,\varrho_{k}}(I)\), as the other case is analogous. Since \(k,\lambda,\epsilon\) are fixed, for simplicity of notation we write \(T_{k}^{*}(\lambda+i\epsilon)\) simply as \(T^{*}\), and decompose for \(y\in I\),
\[T^{*}g(y):=T_{1}^{*}g(y)+T_{2}^{*}g(y), \tag{4.18}\]
where
\[\begin{split}& T_{1}^{*}g(y):=\int_{0}^{1}\mathcal{G}_{k}(y,z; \lambda+i\epsilon)\bigg{[}1-\varphi\big{(}\frac{z-y_{*}}{\delta_{0}}\big{)} \bigg{]}\frac{b^{\prime\prime}(z)g(z)}{b(z)-\lambda+i\epsilon}dz,\\ & T_{2}^{*}g(y):=\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i \epsilon)\varphi\big{(}\frac{z-y_{*}}{\delta}\big{)}\frac{b^{\prime\prime}(z)g (z)}{b(z)-\lambda+i\epsilon}dz.\end{split} \tag{4.19}\]
It follows from the bounds on modified Green's function \(\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\), see Lemma 3.1, that
\[\big{\|}T_{1}^{*}g\big{\|}_{X_{N,\varrho_{k}}(I)}\lesssim|k|^{-1/2}\big{\|}g \big{\|}_{X_{N,\varrho_{k}}(I)}. \tag{4.20}\]
To prove (4.17), it suffices to prove
\[\|T_{2}^{*}g\|_{X_{N,\varrho_{k}}(I)}\lesssim\big{(}1+|k|(\delta+|\epsilon|^{1 /2})\big{)}^{-1/4}\|g\|_{X_{N,\varrho_{k}}(I)}. \tag{4.21}\]
We assume momentarily that \(|\epsilon|\lesssim|\lambda-b(y_{*})|\) and explain how to remove this assumption at the end of the proof. We decompose further for \(y\in I\),
\[\begin{split} T_{2}^{*}g(y)&=\int_{0}^{1}\mathcal{G} _{k}(y,z;\lambda+i\epsilon)\varphi\big{(}\frac{z-y_{*}}{\delta^{\prime}}\big{)} \varphi\big{(}\frac{z-y_{*}}{\delta}\big{)}\frac{b^{\prime\prime}(z)g(z)}{b(z )-\lambda+i\epsilon}dz\\ &+\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\Big{[}1- \varphi\big{(}\frac{z-y_{*}}{\delta^{\prime}}\big{)}\Big{]}\varphi\big{(} \frac{z-y_{*}}{\delta}\big{)}\frac{b^{\prime\prime}(z)g(z)}{b(z)-\lambda+i \epsilon}dz\\ &:=T_{2,R}^{*}g(y)+T_{2,S}^{*}g(y),\end{split} \tag{4.22}\]
where we have chosen \(\delta^{\prime}=\delta/C_{3}\) with a large constant \(C_{3}\) so that \(|b(y)-\lambda|\approx|\lambda-b(y_{*})|\) for \(|y-y_{*}|<\delta^{\prime}\).
It suffices to prove for \(\diamond\in\{R,S\}\)
\[\|T_{2,\diamond}^{*}g\|_{X_{N,\varrho_{k}}(I)}\lesssim\big{(}1+|k|(|\lambda-b (y_{*})|^{1/2}+|\epsilon|^{1/2})\big{)}^{-1/4}\|g\|_{X_{N,\varrho_{k}}(I)}. \tag{4.23}\]
**Step 1.** We first prove (4.23) with \(\diamond=R\).
_Case I: \(1/|k|>|\lambda-b(y_{*})|^{1/2}+|\epsilon|^{1/2}\)._ In this case for \(|z-y_{*}|\lesssim\delta\) and \(|y-y_{*}|\lesssim 1\) we have the bound
\[\big{|}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\big{|}\lesssim\frac{\delta^{2}+| \epsilon|}{|y-y_{*}|+\delta+|\epsilon|^{1/2}},\quad\big{|}\partial_{y}\mathcal{ G}_{k}(y,z;\lambda+i\epsilon)\big{|}\lesssim\frac{\delta^{2}+|\epsilon|}{(|y-y_{*}|+ \delta+|\epsilon|^{1/2})^{2}}. \tag{4.24}\]
It follows from the bound (4.24) that
\[\|T_{2,R}^{*}g\|_{X_{N,\varrho_{k}}(I)}\lesssim\big{(}1+|k|(|\lambda-b(y_{*})|^{1/2}+|\epsilon|^{1/2})\big{)}^{-1/4}\|g\|_{X_{N,\varrho_{k}}(I)}. \tag{4.25}\]
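To see this, note (a sketch): by the choice of \(\delta^{\prime}\) in (4.22), on the support of \(\varphi\big{(}\frac{z-y_{*}}{\delta^{\prime}}\big{)}\) one has \(|b(z)-\lambda+i\epsilon|\approx|\lambda-b(y_{*})|+|\epsilon|\approx\delta^{2}+|\epsilon|\), which exactly cancels the numerator in (4.24):

\[\Big{|}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\,\frac{b^{\prime\prime}(z)}{b(z)-\lambda+i\epsilon}\Big{|}\lesssim\frac{1}{|y-y_{*}|+\delta+|\epsilon|^{1/2}},\qquad|z-y_{*}|\lesssim\delta^{\prime}.\]

Integrating in \(z\) over \(|z-y_{*}|\lesssim\delta^{\prime}\) then produces a factor \(\approx\delta/(|y-y_{*}|+\delta+|\epsilon|^{1/2})\), which one checks is compatible with the weights defining \(X_{N,\varrho_{k}}(I)\).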
_Case II: \(1/|k|\ll|\lambda-b(y_{*})|^{1/2}+|\epsilon|^{1/2}\)._ In this case, we have for \(|z-y_{*}|\lesssim\delta\) and \(|y-y_{*}|\lesssim 1\) that
\[\big{|}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\big{|}+|k|^{-1}\big{|}\partial_{ y}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\big{|}\lesssim|k|^{-1}e^{-|k||y-z|}. \tag{4.26}\]
The desired bound
\[\|T_{2,R}^{*}g\|_{X_{N,\varrho_{k}}(I)}\lesssim\big{(}1+|k|(|\lambda-b(y_{*})|^ {1/2}+|\epsilon|^{1/2})\big{)}^{-1/4}\|g\|_{X_{N,\varrho_{k}}(I)} \tag{4.27}\]
follows from (4.26).
**Step 2.** We now turn to the proof of (4.23) with \(\diamond=S\) and still consider two cases.
_Case I: \(1/|k|>|\lambda-b(y_{*})|^{1/2}+|\epsilon|^{1/2}\)._ Denoting for \(z\in I\),

\[\varphi^{*}\big{(}\frac{z-y_{*}}{\delta}\big{)}:=\Big{[}1-\varphi\big{(}\frac{z-y_{*}}{\delta^{\prime}}\big{)}\Big{]}\varphi\big{(}\frac{z-y_{*}}{\delta}\big{)}, \tag{4.28}\]
we can rewrite
\[\begin{split} T_{2,S}^{*}g(y)&=\int_{0}^{1} \mathcal{G}_{k}(y,z;\lambda+i\epsilon)\varphi^{*}\big{(}\frac{z-y_{*}}{ \delta}\big{)}\frac{b^{\prime\prime}(z)g(z)}{b^{\prime}(z)}\partial_{z}\log \frac{b(z)-\lambda+i\epsilon}{\delta^{2}}\\ &=-\int_{0}^{1}\partial_{z}\bigg{[}\mathcal{G}_{k}(y,z;\lambda+i \epsilon)\varphi^{*}\big{(}\frac{z-y_{*}}{\delta}\big{)}\frac{b^{\prime\prime }(z)g(z)}{b^{\prime}(z)}\bigg{]}\log\frac{b(z)-\lambda+i\epsilon}{\delta^{2}} dz.\end{split} \tag{4.29}\]
As a consequence of (4.29), we also have
\[\begin{split}\partial_{y}\Big{[}T_{2,S}^{*}g(y)\Big{]}&=\partial_{y}\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\varphi^{*}\big{(}\frac{z-y_{*}}{\delta}\big{)}\frac{b^{\prime\prime}(z)g(z)}{b^{\prime}(z)}\partial_{z}\log\frac{b(z)-\lambda+i\epsilon}{\delta^{2}}dz\\ &=-\int_{0}^{1}\bigg{[}\partial_{y}(\partial_{z}+\partial_{y})\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\varphi^{*}\big{(}\frac{z-y_{*}}{\delta}\big{)}\frac{b^{\prime\prime}(z)g(z)}{b^{\prime}(z)}\bigg{]}\log\frac{b(z)-\lambda+i\epsilon}{\delta^{2}}dz\\ &\quad+\int_{0}^{1}\bigg{[}\partial_{y}^{2}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\varphi^{*}\big{(}\frac{z-y_{*}}{\delta}\big{)}\frac{b^{\prime\prime}(z)g(z)}{b^{\prime}(z)}\bigg{]}\log\frac{b(z)-\lambda+i\epsilon}{\delta^{2}}dz\\ &\quad-\int_{0}^{1}\partial_{y}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\partial_{z}\bigg{[}\varphi^{*}\big{(}\frac{z-y_{*}}{\delta}\big{)}\frac{b^{\prime\prime}(z)g(z)}{b^{\prime}(z)}\bigg{]}\log\frac{b(z)-\lambda+i\epsilon}{\delta^{2}}dz.\end{split} \tag{4.30}\]
Note that on the support of \(\varphi^{*}(\frac{z-y_{*}}{\delta})\), we have
\[|b^{\prime}(z)|\approx\delta,\quad\varrho(z;\lambda+i\epsilon)\approx\delta. \tag{4.31}\]
The desired bound (4.23) for \(\diamond=S\) follows from (4.29)-(4.30) and Lemmas 3.1 and 3.2, and we have, in addition,

\[\begin{split}&(\delta+|\epsilon|^{1/2})^{-1/2}\bigg{\|}\partial_{y}\bigg{\{}\partial_{y}T_{2,S}^{*}g(y)+\varphi^{*}\big{(}\frac{y-y_{*}}{\delta}\big{)}\frac{b^{\prime\prime}(y)g(y)}{b^{\prime}(y)}\log\frac{b(y)-\lambda+i\epsilon}{\delta^{2}}\bigg{\}}\bigg{\|}_{L^{2}(S_{3(\delta+|\epsilon|^{1/2})})}\\ &\lesssim\delta^{-1/4}\Big{[}1+|k|(|\lambda-b(y_{*})|^{1/2}+|\epsilon|^{1/2})\Big{]}^{-1/4}\|g\|_{X_{N,\varrho_{k}}(I)}.\end{split} \tag{4.32}\]
_Case II: \(1/|k|\ll|\lambda-b(y_{*})|^{1/2}+|\epsilon|^{1/2}\)._ This case is analogous to _Case I_, using Lemma 3.1 and Lemma 3.2.
Finally, we remove the assumption that \(|\epsilon|^{1/2}\lesssim\delta\). Suppose \(|\epsilon|^{1/2}\gg\delta\); then the factor \(\frac{1}{b(z)-\lambda+i\epsilon}\) is not truly singular, and the desired bounds (4.21) follow directly from the bounds on the modified Green's function \(\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\) from Lemma 3.1 and Lemma 3.2. Indeed, we have the stronger bound
\[\|T_{2}^{*}g\|_{X_{N,\varrho_{k}}(I)}\lesssim\frac{\delta}{\sqrt{|\epsilon|}} \|g\|_{X_{N,\varrho_{k}}(I)}, \tag{4.33}\]
which will be useful below.
The following limiting absorption principle plays an essential role in establishing the vorticity depletion phenomenon.
**Lemma 4.4**.: _There exist positive numbers \(\epsilon_{0},\kappa\) such that the following statement holds._
_For \(\epsilon\in[-\epsilon_{0},\epsilon_{0}]\backslash\{0\}\), \(\lambda\in\Sigma_{\delta_{0}}\), \(k\in\mathbb{Z}\backslash\{0\}\), and \(X\in\{X_{N,\varrho_{k}}(I),X_{L,\varrho_{k}}(I)\}\),_
\[\|(I+T_{k}^{*}(\lambda+i\epsilon))g\|_{X}\geq\kappa\|g\|_{X},\quad\text{for all $g\in H^{1}_{k}(I)$}. \tag{4.34}\]
Proof.: We only consider the case \(X=X_{N,\varrho_{k}}(I)\), as the other case is analogous. We prove (4.34) by contradiction. Assume that (4.34) fails for every \(\epsilon_{0}>0\). Then there exist, for \(\ell\in\mathbb{Z}\cap[1,\infty)\),
\[\lambda_{\ell}\to\lambda_{*}\in\Sigma_{\delta_{0}},\ \epsilon_{\ell}\neq 0 \ \text{with}\ \epsilon_{\ell}\to 0,\ k_{\ell}\to k_{*}\in(\mathbb{Z}\backslash\{0\})\cup\{\pm\infty\}, \tag{4.35}\]
and functions \(g_{\ell}\) satisfying
\[\|g_{\ell}\|_{X_{N,\varrho_{k_{\ell}}}(I)}=1 \tag{4.36}\]
such that
\[\big{\|}(I+T_{k_{\ell}}^{*}(\lambda_{\ell}+i\epsilon_{\ell}))g_{\ell}\big{\|} _{X_{N,\varrho_{k_{\ell}}}(I)}\to 0. \tag{4.37}\]
We can assume that \(\lambda_{*}=b(y_{*})\), as otherwise the proof follows from the argument in the non-degenerate case. We consider several cases.
_Case I: \(\limsup_{\ell\to\infty}\|g_{\ell}\|_{H^{1}(I\backslash S_{\delta_{0}})}>0\)._ By the bound (4.20), we can assume that \(k_{*}\in\mathbb{Z}\backslash\{0\}\). By the bounds (4.36) and (4.37), we can assume (passing to a subsequence if necessary) that
\[g_{\ell}\to g,\ \text{in}\ H^{1}_{\text{loc}}(I\backslash\{y_{*}\})\ \text{as}\ \ell\to\infty,\quad g(0)=g(1)=0. \tag{4.38}\]
Then it follows from (4.36) and (4.37) that \(g\) satisfies
\[|g(y)|\lesssim|y-y_{*}|^{7/4}, \tag{4.39}\]
and for \(y\in(0,1)\),
\[(k_{*}^{2}-\partial_{y}^{2})g(y)+\frac{b^{\prime\prime}(y)}{b(y)-b(y_{*})}g(y) =0, \tag{4.40}\]
which imply that \(b(y_{*})\) is an embedded eigenvalue for \(L_{k_{*}}\), contradicting the spectral assumption.
_Case II: \(\limsup_{\ell\to\infty}\|g_{\ell}\|_{H^{1}(I\backslash S_{\delta_{0}})}=0\)._ By the bound (4.17) we can assume that \(|k_{\ell}|(\delta_{\ell}+|\epsilon_{\ell}|^{1/2})\lesssim 1\). In this case, using (4.37), we obtain that (passing to a subsequence if necessary)
\[\begin{split}&\big{\|}(|\lambda_{\ell}-b(y_{*})|+|\epsilon_{\ell}|)^{-9/8}g_{\ell}\big{\|}_{L^{2}([y_{*}-\delta_{\ell}-|\epsilon_{\ell}|^{1/2},y_{*}+\delta_{\ell}+|\epsilon_{\ell}|^{1/2}])}\\ &+\big{\|}(|\lambda_{\ell}-b(y_{*})|+|\epsilon_{\ell}|)^{-5/8}\partial_{y}g_{\ell}\big{\|}_{L^{2}([y_{*}-\delta_{\ell}-|\epsilon_{\ell}|^{1/2},y_{*}+\delta_{\ell}+|\epsilon_{\ell}|^{1/2}])}\geq\sigma>0,\end{split} \tag{4.41}\]
where we recall from (3.14) that
\[\delta_{\ell}\approx|\lambda_{\ell}-b(y_{*})|^{1/2}. \tag{4.42}\]
We divide into several subcases.
_Subcase II.1: \(|\epsilon_{\ell}|^{1/2}\approx\delta_{\ell}\) for a subsequence._
Define the change of variables for \(\ell\geq 1,y\in I,\)
\[y-y_{*}=\delta_{\ell}Y,\quad g_{\ell}(y):=(|\lambda_{\ell}-b(y_{*})|+|\epsilon_ {\ell}|)^{-7/8}H_{\ell}(Y). \tag{4.43}\]
It follows from (4.32) that we can extract a nontrivial limit \(H\in H^{1}(\mathbb{R})\) of \(H_{\ell}\) satisfying for \(Y\in\mathbb{R},\)
\[(\beta^{2}-\partial_{Y}^{2})H(Y)+\frac{b^{\prime\prime}(y_{*})}{b^{\prime \prime}(y_{*})Y^{2}/2+\gamma+i\alpha}H(Y)=0, \tag{4.44}\]
where \(\beta\in\mathbb{R},\alpha,\gamma\in\mathbb{R}\backslash\{0\}.\) This is impossible since the shear flow \((b^{\prime\prime}(y_{*})Y^{2}/2,0),Y\in\mathbb{R}\) is spectrally stable.
_Subcase II.2: \(|\epsilon_{\ell}|^{1/2}=o(\delta_{\ell})\) for a subsequence._ Passing to a subsequence and using rescaling as in (4.43) we can extract a nontrivial limit \(H\in H^{1}(\mathbb{R}),\) such that
\[(\beta^{2}-\partial_{Y}^{2})H(Y)+\lim_{\epsilon\to 0}\frac{b^{\prime\prime}(y_{*})} {b^{\prime\prime}(y_{*})Y^{2}/2+\gamma+i\epsilon}H(Y)=0. \tag{4.45}\]
This is again impossible since the shear flow \((b^{\prime\prime}(y_{*})Y^{2}/2,0),Y\in\mathbb{R}\) is spectrally stable.
_Subcase II.3: \(\delta_{\ell}=o(|\epsilon_{\ell}|^{1/2})\) for a subsequence._ This case is not possible thanks to the bound (4.33). The lemma is now proved.
## 5. Bounds on \(\psi_{k,\epsilon}^{\iota}\): the non-degenerate case
In this section we obtain bounds on \(\psi_{k,\epsilon}^{\iota}(y,\lambda)\) in the non-degenerate case, i.e. when \(\lambda\in\Sigma\backslash\Sigma_{\delta_{0}/2}\). Since the arguments are analogous to those in [14], we will be brief in the proofs, and provide only comments on the main ideas involved.
We begin with the following preliminary bounds.
**Lemma 5.1**.: _For \(\lambda\in\Sigma\backslash\Sigma_{\delta_{0}/2},k\in\mathbb{Z}\backslash\{0 \},\iota\in\{\pm\}\) and \(0<\epsilon<\epsilon_{0}\), we have the bounds_
\[\|\psi_{k,\epsilon}^{\iota}(\cdot,\lambda)\|_{H^{1}_{k}(I)}\lesssim|k|^{-1/2} \|\omega_{0k}\|_{H^{1}_{k}(I)}. \tag{5.1}\]
Proof.: The desired bounds (5.1) follow directly from the Rayleigh equation (2.8) and Lemma 4.2, once we use the Green's function \(G_{k}\) to invert \(k^{2}-\partial_{y}^{2}\) and formulate (2.8) as an integral equation.
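Concretely (a sketch, with the Rayleigh equation (2.8) written in the normalization suggested by (5.2)): inverting \(k^{2}-\partial_{y}^{2}\) with the Green's function \(G_{k}\) turns (2.8) into

\[\big{(}I+T_{k,\lambda,\iota\epsilon}\big{)}\psi^{\iota}_{k,\epsilon}(y,\lambda)=\int_{0}^{1}G_{k}(y,z)\,\frac{\omega_{0}^{k}(z)}{b(z)-\lambda+i\iota\epsilon}\,dz,\]

and Lemma 4.2 bounds \(\|\psi^{\iota}_{k,\epsilon}(\cdot,\lambda)\|_{H^{1}_{k}(I)}\) by \(\kappa^{-1}\) times the \(H^{1}_{k}\)-norm of the right-hand side, which is \(\lesssim|k|^{-1/2}\|\omega_{0k}\|_{H^{1}_{k}(I)}\) by the Green's function bounds, arguing as in [14].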
To obtain control on \(\partial_{\lambda}\psi_{k,\epsilon}^{\iota}(\cdot,\lambda)\) for \(\lambda\in\Sigma\backslash\Sigma_{\delta_{0}/2},\) we take derivative in (2.8), and obtain that
\[(k^{2}-\partial_{y}^{2})\partial_{\lambda}\psi_{k,\epsilon}^{\iota}(y,\lambda)+\frac{b^{\prime\prime}(y)\partial_{\lambda}\psi_{k,\epsilon}^{\iota}(y,\lambda)}{b(y)-\lambda+i\iota\epsilon}=\frac{\omega_{0}^{k}(y)}{(b(y)-\lambda+i\iota\epsilon)^{2}}-\frac{b^{\prime\prime}(y)\psi_{k,\epsilon}^{\iota}(y,\lambda)}{(b(y)-\lambda+i\iota\epsilon)^{2}}, \tag{5.2}\]
for \(y\in I\) with zero boundary value at \(y\in\{0,1\}\). Reformulating (5.2) as an integral equation, we obtain that
\[\begin{split}&\partial_{\lambda}\psi^{\iota}_{k,\epsilon}(y, \lambda)+\int_{0}^{1}G_{k}(y,z)\frac{b^{\prime\prime}(z)\partial_{\lambda}\psi^ {\iota}_{k,\epsilon}(z,\lambda)}{b(z)-\lambda+i\iota\epsilon}\,dz\\ &=\int_{0}^{1}G_{k}(y,z)\frac{\omega_{0}^{k}(z)}{(b(z)-\lambda+ i\iota\epsilon)^{2}}\,dz-\int_{0}^{1}G_{k}(y,z)\frac{b^{\prime\prime}(z)\psi^{ \iota}_{k,\epsilon}(z,\lambda)}{(b(z)-\lambda+i\iota\epsilon)^{2}}\,dz.\end{split} \tag{5.3}\]
Recall the definition of the smooth cutoff function \(\Phi\) below (4.1). We have the following bounds for \(\partial_{\lambda}\psi^{\iota}_{k,\epsilon}(y,\lambda)\) when \(\lambda\in\Sigma\backslash\Sigma_{\delta_{0}/2}\).
**Lemma 5.2**.: _For \(\lambda\in\Sigma\backslash\Sigma_{\delta_{0}/2},k\in\mathbb{Z}\backslash\{0 \},\iota\in\{\pm\}\) and \(0<\epsilon<\epsilon_{0}\), \(\partial_{\lambda}\psi^{\iota}_{k,\epsilon}(y,\lambda)\) satisfies the following decomposition_
\[\begin{split}\partial_{\lambda}\psi^{\iota}_{k,\epsilon}(y, \lambda)=&\bigg{[}\frac{b^{\prime}(y_{0})\omega_{0}^{k}(y)}{|b^{ \prime}(y)|^{2}}-\frac{b^{\prime\prime}(y)\psi^{\iota}_{k,\epsilon}(y, \lambda)}{|b^{\prime}(y)|^{2}}\bigg{]}(1-\Phi(y))\log\left(b(y)-\lambda+i \iota\epsilon\right)\\ &+\sum_{\sigma=0,1}\omega_{0}^{k}(\sigma)\Psi^{\iota}_{k,\sigma, \epsilon}(y,\lambda)\log\left(b(\sigma)-\lambda+i\iota\epsilon\right)+\mathcal{ R}^{\iota}_{\sigma,k,y_{0},\epsilon}(y).\end{split} \tag{5.4}\]
_In the above for \(\sigma\in\{0,1\}\), \(\iota\in\{\pm\}\), \(0<\epsilon<\epsilon_{0}\), and \(\lambda\in\Sigma\backslash\Sigma_{\delta_{0}/2}\),_
\[\big{\|}\mathcal{R}^{\iota}_{\sigma,k,y_{0},\epsilon}\big{\|}_{H^{1}_{k}(I)} \lesssim|k|^{1/2}\|\omega_{0k}\|_{H^{2}_{k}(I)},\quad\big{\|}\Psi^{\iota}_{k,\sigma,\epsilon}(\cdot,\lambda)\big{\|}_{H^{1}_{k}(I)}\lesssim|k|^{-1/2}. \tag{5.5}\]
Proof.: The basic idea is to expand the right hand side of (5.3) using integration by parts, and apply Lemma 4.2 after removing the most singular parts. Indeed, denoting schematically,
\[\mathcal{U}:=\int_{0}^{1}G_{k}(y,z)\frac{\omega_{0}^{k}(z)}{(b(z)-\lambda+i \iota\epsilon)^{2}}\,dz-\int_{0}^{1}G_{k}(y,z)\frac{b^{\prime\prime}(z)\psi^{ \iota}_{k,\epsilon}(z,\lambda)}{(b(z)-\lambda+i\iota\epsilon)^{2}}\,dz, \tag{5.6}\]
we note that \(\partial_{\lambda}\psi^{\iota}_{k,\epsilon}(y,\lambda)-\mathcal{U}\) satisfies the equation (recalling (4.1) for the definition of \(T_{k,\lambda,\iota\epsilon}\)),
\[(I+T_{k,\lambda,\iota\epsilon})\big{[}\partial_{\lambda}\psi^{\iota}_{k,\epsilon}(y,\lambda)-\mathcal{U}\big{]}=-T_{k,\lambda,\iota\epsilon}\mathcal{U}. \tag{5.7}\]
The term \(T_{k,\lambda,\iota\epsilon}\mathcal{U}\) belongs to \(H^{1}_{k}(I)\) (noting however that for the boundary terms we need to track the singular coefficient \(\log\left(b(\sigma)-\lambda+i\iota\epsilon\right),\sigma\in\{0,1\}\)), and we can apply Lemma 4.2 to (5.7) in order to obtain the desired conclusions. We refer to [14] for the detailed proof.
To obtain bounds on \(\partial_{\lambda}^{2}\psi^{\iota}_{k,\epsilon}(y,\lambda)\) for \(\lambda\in\Sigma\backslash\Sigma_{\delta_{0}/2}\), we take two derivatives in (2.8) and obtain that
\[\begin{split}&(k^{2}-\partial_{y}^{2})\partial_{\lambda}^{2}\psi^{\iota}_{k,\epsilon}(y,\lambda)+\frac{b^{\prime\prime}(y)\partial_{\lambda}^{2}\psi^{\iota}_{k,\epsilon}(y,\lambda)}{b(y)-\lambda+i\iota\epsilon}\\ &=2\frac{\omega_{0}^{k}(y)}{(b(y)-\lambda+i\iota\epsilon)^{3}}-2\frac{b^{\prime\prime}(y)\psi^{\iota}_{k,\epsilon}(y,\lambda)}{(b(y)-\lambda+i\iota\epsilon)^{3}}+\frac{b^{\prime\prime}(y)\partial_{\lambda}\psi^{\iota}_{k,\epsilon}(y,\lambda)}{(b(y)-\lambda+i\iota\epsilon)^{2}},\end{split} \tag{5.8}\]
for \(y\in I\) with zero boundary value at \(y\in\{0,1\}\). We can reformulate (5.8) in the integral form for \(y\in I\), as
\[\begin{split}&\partial_{\lambda}^{2}\psi_{k,\epsilon}^{\iota}(y,\lambda)+\int_{0}^{1}G_{k}(y,z)\frac{b^{\prime\prime}(z)\partial_{\lambda}^{2}\psi_{k,\epsilon}^{\iota}(z,\lambda)}{b(z)-\lambda+i\iota\epsilon}\,dz\\ &=\int_{0}^{1}G_{k}(y,z)\bigg{[}2\frac{\omega_{0}^{k}(z)}{(b(z)-\lambda+i\iota\epsilon)^{3}}-2\frac{b^{\prime\prime}(z)\psi_{k,\epsilon}^{\iota}(z,\lambda)}{(b(z)-\lambda+i\iota\epsilon)^{3}}+\frac{b^{\prime\prime}(z)\partial_{\lambda}\psi_{k,\epsilon}^{\iota}(z,\lambda)}{(b(z)-\lambda+i\iota\epsilon)^{2}}\bigg{]}\,dz.\end{split} \tag{5.9}\]
We have the following bounds on \(\partial_{\lambda}^{2}\psi_{k,\epsilon}^{\iota}(y,\lambda)\) for \(\lambda\in\Sigma\backslash\Sigma_{\delta_{0}/2}\).
**Lemma 5.3**.: _For \(k\in\mathbb{Z}\backslash\{0\},\iota\in\{\pm\}\) and \(0<\epsilon<\epsilon_{0}\), we have the following bound_
\[\begin{split}&\bigg{\|}\partial_{\lambda}^{2}\psi_{k,\epsilon}^{\iota}(y,\lambda)-\frac{\omega_{0}^{k}(1)\Phi_{k,\epsilon}^{1\iota}(y,\lambda)}{b(1)-\lambda+i\iota\epsilon}-\frac{\omega_{0}^{k}(0)\Phi_{k,\epsilon}^{0\iota}(y,\lambda)}{b(0)-\lambda+i\iota\epsilon}-\frac{b^{\prime\prime}(y)\psi_{k,\epsilon}^{\iota}(y,\lambda)-\omega_{0}^{k}(y)}{|b^{\prime}(y)|^{2}(b(y)-\lambda+i\iota\epsilon)}\bigg{\|}_{L^{2}(y\in I,\lambda\in\Sigma\backslash\Sigma_{\delta_{0}/2})}\\ &\lesssim|k|^{3/2}\|\omega_{0k}\|_{H^{3}_{k}(I)}.\end{split} \tag{5.10}\]
_In the above the functions \(\Phi_{k,\epsilon}^{\sigma\iota},\sigma\in\{0,1\}\) satisfy the equation for \(y\in I\)_
\[\begin{split}&(I+T_{k,\lambda,\iota\epsilon})\Phi_{k,\epsilon}^{1 \iota}=\frac{\sinh{(ky)}}{|b^{\prime}(1)|^{2}\sinh{k}},\\ &(I+T_{k,\lambda,\iota\epsilon})\Phi_{k,\epsilon}^{0\iota}= \frac{\sinh{(k(1-y))}}{|b^{\prime}(0)|^{2}\sinh{k}}.\end{split} \tag{5.11}\]
Proof.: The main idea of the proof is to expand the right side of (5.9) and apply Lemma 4.2 after removing the most singular terms. Indeed, denoting schematically,
\[\mathcal{U}^{*}:=\int_{0}^{1}G_{k}(y,z)\bigg{[}2\frac{\omega_{0}^{k}(z)}{(b(z)-\lambda+i\iota\epsilon)^{3}}-2\frac{b^{\prime\prime}(z)\psi_{k,\epsilon}^{\iota}(z,\lambda)}{(b(z)-\lambda+i\iota\epsilon)^{3}}+\frac{b^{\prime\prime}(z)\partial_{\lambda}\psi_{k,\epsilon}^{\iota}(z,\lambda)}{(b(z)-\lambda+i\iota\epsilon)^{2}}\bigg{]}\,dz, \tag{5.12}\]
we have
\[(I+T_{k,\lambda,\iota\epsilon})\Big{[}\partial_{\lambda}^{2}\psi_{k,\epsilon}^{\iota}(y,\lambda)-\mathcal{U}^{*}+T_{k,\lambda,\iota\epsilon}\mathcal{U}^{*}\Big{]}=\big{[}T_{k,\lambda,\iota\epsilon}\big{]}^{2}\mathcal{U}^{*}. \tag{5.13}\]
We note that \(\partial_{\lambda}^{2}\psi_{k,\epsilon}^{\iota}(y,\lambda)-\mathcal{U}^{*}+T_{k,\lambda,\iota\epsilon}\mathcal{U}^{*}\in H^{1}_{k}(I)\) (however, we again need to track the singularities in \(\lambda\) in the boundary terms, involving \(\log(b(\sigma)-\lambda+i\iota\epsilon)\) and \(1/(b(\sigma)-\lambda+i\iota\epsilon)\) for \(\sigma\in\{0,1\}\)), and we can apply Lemma 4.2 in order to obtain the desired conclusions. We refer to [14] for the detailed proof.
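For orientation, the right-hand sides of (5.11) are, up to the factors \(|b^{\prime}(\sigma)|^{-2}\), the frequency-\(k\) harmonic extensions of boundary data: a direct computation gives

\[(k^{2}-\partial_{y}^{2})\,\frac{\sinh(ky)}{\sinh k}=0,\qquad\frac{\sinh(ky)}{\sinh k}\bigg{|}_{y=0}=0,\qquad\frac{\sinh(ky)}{\sinh k}\bigg{|}_{y=1}=1.\]

Thus \(\Phi^{\sigma\iota}_{k,\epsilon}\) is the resolvent-corrected profile generated by the boundary point \(\sigma\), and this is how the boundary values \(\omega_{0}^{k}(0),\omega_{0}^{k}(1)\) enter the singular part of (5.10).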
## 6. Bounds on \(\psi_{k,\epsilon}^{\iota}\): the degenerate case
In this section we use the limiting absorption principle to study the Rayleigh equation (2.8) for \(\lambda\in\Sigma_{\delta_{0}}\). More precisely, write for \(k\in\mathbb{Z}\backslash\{0\},\iota\in\{\pm\},\lambda\in\Sigma_{\delta_{0}},0 <\epsilon<\epsilon_{0}\), (recall the definition of \(\epsilon_{0}\) from Lemma 4.4)
\[\psi_{k,\epsilon}^{\iota}(y,\lambda)=\phi_{k,\epsilon}^{\iota}(y,\lambda)+\Psi(y)\frac{1}{b^{\prime\prime}(y)}\omega_{0k}(y), \tag{6.1}\]

where \(\Psi\in C_{c}^{\infty}(S_{3\delta_{0}})\) and \(\Psi\equiv 1\) on \(S_{2\delta_{0}}\). Recall from (3.11) that \(S_{d}=[y_{*}-d,y_{*}+d]\) for \(d>0\). Then \(\phi_{k,\epsilon}^{\iota}(y,\lambda)\) satisfies for \(y\in I\),

\[(k^{2}-\partial_{y}^{2})\phi_{k,\epsilon}^{\iota}(y,\lambda)+\frac{b^{\prime\prime}(y)}{b(y)-\lambda+i\iota\epsilon}\phi_{k,\epsilon}^{\iota}(y,\lambda)=g_{k,\epsilon}^{\iota}(y,\lambda), \tag{6.2}\]
where for \(k\in\mathbb{Z}\backslash\{0\},\iota\in\{\pm\},\lambda\in\Sigma_{\delta_{0}},0<\epsilon <\epsilon_{0}\)
\[g^{\iota}_{k,\epsilon}(y,\lambda):=\frac{1-\Psi(y)}{b(y)-\lambda+i\iota\epsilon} \omega_{0k}(y)-(k^{2}-\partial_{y}^{2})\Big{[}\frac{\Psi(y)}{b^{\prime\prime}(y )}\omega_{0k}(y)\Big{]}. \tag{6.3}\]
Our main results are bounds for the functions \(\phi^{\iota}_{k,\epsilon}(y,\lambda)\). We begin with the following preliminary bounds.
**Lemma 6.1**.: _Assume that \(k\in\mathbb{Z}\backslash\{0\},\lambda\in\Sigma_{\delta_{0}}\) and let \(\phi^{\iota}_{k,\epsilon}(y,\lambda)\) with \(\iota\in\{\pm\},\epsilon\in(0,\epsilon_{0})\) be as defined in (6.1)-(6.2). Recall from (3.14) and (4.13) that_
\[\delta:=\delta(\lambda)=8\sqrt{|\lambda-b(y_{*})|/|b^{\prime\prime}(y_{*})|}, \quad d_{k}=d_{k}(\lambda,\epsilon):=\big{[}|\lambda-b(y_{*})|^{1/2}+|\epsilon |^{1/2}\big{]}\wedge\frac{1}{|k|}. \tag{6.4}\]
_We have the bounds for \(k\in\mathbb{Z}\backslash\{0\},\epsilon\in(0,\epsilon_{0}),\iota\in\{\pm\}, \lambda\in\Sigma_{\delta_{0}}\),_
\[\begin{split}&\sum_{\alpha\in\{0,1\}}\big{\|}d_{k}^{-7/4+\alpha} \partial_{y}^{\alpha}\phi^{\iota}_{k,\epsilon}(y,\lambda)\big{\|}_{L^{2} \big{(}[y_{*}-3(\delta+|\epsilon|^{1/2}),y_{*}+3(\delta+|\epsilon|^{1/2})] \big{)}}(\delta+|\epsilon|^{1/2})^{-1/2}\\ &+\sum_{\alpha\in\{0,1\}}\big{\|}(|y-y_{*}|\wedge d_{k})^{-7/4+ \alpha}\partial_{y}^{\alpha}\phi^{\iota}_{k,\epsilon}(y,\lambda)\big{\|}_{L^{ \infty}\big{(}[0,1]\backslash[y_{*}-3(\delta+|\epsilon|^{1/2}),y_{*}+3(\delta+ |\epsilon|^{1/2})]\big{)}}\\ &\lesssim|k|^{5/2}\big{\|}\omega_{0k}\big{\|}_{H^{3}_{k}(I)}. \end{split} \tag{6.5}\]
_Define for \(y\in[0,1],k\in\mathbb{Z}\backslash\{0\},\lambda\in\Sigma_{\delta_{0}} \backslash\{b(y_{*})\}\),_
\[\psi_{k}(y,\lambda):=\lim_{\epsilon\to 0+}\Big{[}\psi^{+}_{k,\epsilon}(y, \lambda)-\psi^{-}_{k,\epsilon}(y,\lambda)\Big{]}=\lim_{\epsilon\to 0+} \Big{[}\phi^{+}_{k,\epsilon}(y,\lambda)-\phi^{-}_{k,\epsilon}(y,\lambda) \Big{]}. \tag{6.6}\]
_Then we have the bounds for \(\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\),_
\[\begin{split}&\sum_{\alpha\in\{0,1\}}\big{\|}(\delta\wedge|k|^{-1 })^{-7/4+\alpha}\partial_{y}^{\alpha}\psi_{k}(y,\lambda)\big{\|}_{L^{2}([y_{*} -3\delta,y_{*}+3\delta])}\delta^{-1/2}\\ &+\sum_{\alpha\in\{0,1\}}\big{\|}(\delta\wedge|k|^{-1})^{-11/4}(|y -y_{*}|\wedge\frac{1}{|k|})^{1+\alpha}\partial_{y}^{\alpha}\psi_{k}(y,\lambda) \big{\|}_{L^{\infty}([0,1]\backslash[y_{*}-3\delta,y_{*}+3\delta]))}\\ &\lesssim|k|^{5/2}\big{\|}\omega_{0k}\big{\|}_{H^{3}_{k}(I)}. \end{split} \tag{6.7}\]
Proof.: It follows from (6.3) and our assumptions on the initial data \(\omega_{0k}\) that we have the bound for \(k\in\mathbb{Z}\backslash\{0\},\iota\in\{\pm\},0<\epsilon<\epsilon_{0}\) and \(\lambda\in\Sigma_{\delta_{0}}\),
\[\big{\|}g^{\iota}_{k,\epsilon}(y,\lambda)\big{\|}_{C^{2}_{k}(I)}\lesssim|k|^{ 1/2}\|\omega_{0k}\|_{H^{3}_{k}(I)}. \tag{6.8}\]
We can reformulate equation (6.2) in integral form as (recall the definition of \(T_{k}^{*}(\lambda+i\epsilon)\) from (4.16))
\[\phi^{\iota}_{k,\epsilon}(y,\lambda)+T^{*}_{k}(\lambda+i\iota\epsilon)\phi^{ \iota}_{k,\epsilon}(y,\lambda)=\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i \iota\epsilon)g^{\iota}_{k,\epsilon}(z,\lambda)dz, \tag{6.9}\]
for \(y\in I\). By Lemma 4.4, we obtain the bound
\[\big{\|}\phi^{\iota}_{k,\epsilon}(\cdot,\lambda)\big{\|}_{X_{N,\varrho_{k}}(I) }\lesssim\Big{\|}\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\iota\epsilon)g^{ \iota}_{k,\epsilon}(z,\lambda)dz\Big{\|}_{X_{N,\varrho_{k}}}\lesssim|k|^{5/2} \|\omega_{0k}\|_{H^{3}_{k}(I)}, \tag{6.10}\]
which, by the definition of the space \(X_{N,\varrho_{k}}\), see (4.14), implies the desired bounds (6.5).
For applications below, where we isolate the singularity at the set \(\{y\in I:b(y)=\lambda\}\), we fix \(\varphi_{\delta}(y)\in C^{\infty}_{c}(S_{2\delta})\) as

\[\varphi_{\delta}(y):=\varphi\big{(}\frac{y-y_{*}}{\delta}\big{)}\Big{[}1-\varphi\big{(}\frac{y-y_{*}}{\delta^{\prime}}\big{)}\Big{]}, \tag{6.11}\]
for \(y\in I\), with \(\delta^{\prime}:=\delta/M\) and an \(M\gg 1\) sufficiently large such that \(|b(y)-\lambda|\approx|\lambda-b(y_{*})|\) for \(|y-y_{*}|<\delta/M\).
To prove (6.7), we note from (6.2) that \(\phi^{+}_{k,\epsilon}(y,\lambda)-\phi^{-}_{k,\epsilon}(y,\lambda)\) satisfies the following equation for \(y\in I\):
\[\begin{split}&(k^{2}-\partial_{y}^{2})\big{[}\phi^{+}_{k, \epsilon}(y,\lambda)-\phi^{-}_{k,\epsilon}(y,\lambda)\big{]}+\frac{b^{\prime \prime}(y)}{b(y)-\lambda+i\epsilon}\big{[}\phi^{+}_{k,\epsilon}(y,\lambda)- \phi^{-}_{k,\epsilon}(y,\lambda)\big{]}\\ &=g^{+}_{k,\epsilon}(y,\lambda)-g^{-}_{k,\epsilon}(y,\lambda)- \Big{[}\frac{b^{\prime\prime}(y)}{b(y)-\lambda+i\epsilon}-\frac{b^{\prime \prime}(y)}{b(y)-\lambda-i\epsilon}\Big{]}\phi^{-}_{k,\epsilon}(y,\lambda). \end{split} \tag{6.12}\]
Denote for \(\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\), \(\epsilon\in(0,\epsilon_{0})\) and \(y\in I\) the function \(h_{k,\epsilon}(y,\lambda)\) as the solution to
\[(k^{2}-\partial_{y}^{2})h_{k,\epsilon}(y,\lambda)+\frac{b^{\prime\prime}(y)}{ b(y)-\lambda+i\epsilon}h_{k,\epsilon}(y,\lambda)=\varphi_{\delta}(y)\Big{[} \frac{b^{\prime\prime}(y)}{b(y)-\lambda-i\epsilon}-\frac{b^{\prime\prime}(y)}{ b(y)-\lambda+i\epsilon}\Big{]}\phi^{-}_{k,\epsilon}(y,\lambda), \tag{6.13}\]
with zero Dirichlet boundary condition. Then it is clear that for \(\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\},y\in I\),
\[\psi_{k}(y,\lambda)=\lim_{\epsilon\to 0+}h_{k,\epsilon}(y,\lambda). \tag{6.14}\]
We can reformulate (6.13) as the following integral equation for \(\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\},y\in I\),
\[\begin{split}& h_{k,\epsilon}(y,\lambda)+T^{*}_{k}(\lambda+i \epsilon)h_{k,\epsilon}(y,\lambda)\\ &=-\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\varphi_{ \delta}(z)\Big{[}\frac{b^{\prime\prime}(z)}{b(z)-\lambda+i\epsilon}-\frac{b^{ \prime\prime}(z)}{b(z)-\lambda-i\epsilon}\Big{]}\phi^{-}_{k,\epsilon}(z, \lambda)\,dz.\end{split} \tag{6.15}\]
It follows from the bound (6.5) that for \(|\epsilon|\lesssim(\delta\wedge\frac{1}{|k|})^{4}\),
\[\bigg{\|}\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\varphi_{\delta}( z)\Big{[}\frac{b^{\prime\prime}(z)}{b(z)-\lambda+i\epsilon}-\frac{b^{\prime\prime}(z)}{ b(z)-\lambda-i\epsilon}\Big{]}\phi^{-}_{k,\epsilon}(z,\lambda)\,dz\bigg{\|}_{X_{L, \varrho_{k}}}\lesssim(\delta\wedge\frac{1}{|k|})^{7/4}. \tag{6.16}\]
The desired bound (6.7) then follows from Lemma 4.4 with \(X=X_{L,\varrho_{k}}\).
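Underlying (6.13)-(6.16) is the algebraic identity

\[\frac{b^{\prime\prime}(z)}{b(z)-\lambda+i\epsilon}-\frac{b^{\prime\prime}(z)}{b(z)-\lambda-i\epsilon}=\frac{-2i\epsilon\,b^{\prime\prime}(z)}{(b(z)-\lambda)^{2}+\epsilon^{2}}.\]

Heuristically, as \(\epsilon\to 0+\) the right-hand side concentrates like a delta mass of size \(\approx 1/|b^{\prime}|\approx\delta^{-1}\) at the roots of \(b(z)=\lambda\) inside \(\operatorname{supp}\varphi_{\delta}\); the smallness in (6.16) then reflects the bound (6.5), which makes \(\phi^{-}_{k,\epsilon}\) of size \((\delta\wedge|k|^{-1})^{7/4}\) at the critical layer. The rigorous version of this heuristic is precisely (6.16).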
To obtain higher order regularity bounds (in \(\lambda\)) of \(\phi^{\iota}_{k,\epsilon}(\cdot,\lambda)\), we take the derivative \(\partial_{\lambda}\) in (6.2). It follows that \(\partial_{\lambda}\phi^{\iota}_{k,\epsilon}(y,\lambda)\) satisfies for \(y\in I\),
\[\Big{[}k^{2}-\partial_{y}^{2}+\frac{b^{\prime\prime}(y)}{b(y)-\lambda+i\iota\epsilon}\Big{]}\partial_{\lambda}\phi^{\iota}_{k,\epsilon}(y,\lambda)=-\frac{b^{\prime\prime}(y)}{(b(y)-\lambda+i\iota\epsilon)^{2}}\phi^{\iota}_{k,\epsilon}(y,\lambda)+\partial_{\lambda}g^{\iota}_{k,\epsilon}(y,\lambda), \tag{6.17}\]
with zero Dirichlet boundary condition.
Recall the definition of \(\varphi_{\delta}\) from (6.11). We have the following bounds on \(\partial_{\lambda}\phi^{\iota}_{k,\epsilon}(y,\lambda)\).
**Lemma 6.2**.: _Assume that \(k\in\mathbb{Z}\backslash\{0\},\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\). Let \(\psi^{\iota}_{k,\epsilon}(y,\lambda)\) and \(\phi^{\iota}_{k,\epsilon}(y,\lambda)\) with \(\iota\in\{\pm\},0<\epsilon<\min\{|\lambda-b(y_{*})|,\epsilon_{0}\}\) be as defined in (2.8) and (6.1) respectively. Recall from (3.14) that_
\[\delta:=\delta(\lambda)=8\sqrt{|\lambda-b(y_{*})|/|b^{\prime\prime}(y_{*})|}. \tag{6.18}\]
_Denote for \(y\in[0,1],\iota\in\{\pm\},\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\},0< \epsilon<\min\{|\lambda-b(y_{*})|,\epsilon_{0}\}\),_
\[\begin{split}\Lambda^{\iota}_{1,\epsilon}(y,\lambda)&: =\,\phi^{\iota}_{k,\epsilon}(y,\lambda)\varphi_{\delta}(y)\frac{b^{\prime \prime}(y)}{(b^{\prime}(y))^{2}}\log\frac{b(y)-\lambda+i\iota\epsilon}{\delta^ {2}},\\ \Lambda_{1}(y,\lambda)&:=\,\psi_{k}(y,\lambda) \varphi_{\delta}(y)\frac{b^{\prime\prime}(y)}{(b^{\prime}(y))^{2}}\log\frac{b(y )-\lambda}{\delta^{2}}.\end{split} \tag{6.19}\]
_We have the bounds for \(0<\epsilon<\min\{|\lambda-b(y_{*})|,\epsilon_{0}\},\iota\in\{\pm\}\), and \(\lambda\in\Sigma_{\delta_{0}}\) that_
\[\begin{split}&\sum_{\alpha\in\{0,1\}}\Big{\|}(\delta\wedge|k|^{-1 })^{1/4+\alpha}\partial_{y}^{\alpha}\Big{[}\partial_{\lambda}\phi^{\iota}_{k, \epsilon}(y,\lambda)-\Lambda^{\iota}_{1,\epsilon}(y,\lambda)\Big{]}\Big{\|}_ {L^{2}([y_{*}-3\delta,y_{*}+3\delta])}\delta^{-1/2}\\ &+\sum_{\alpha\in\{0,1\}}\Big{\|}(\delta\wedge|k|^{-1})^{2}(|y-y_ {*}|\wedge\frac{1}{|k|})^{-7/4+\alpha}\partial_{y}^{\alpha}\partial_{\lambda} \phi^{\iota}_{k,\epsilon}(y,\lambda)\Big{\|}_{L^{\infty}([0,1]\backslash[y_{*} -3\delta,y_{*}+3\delta]))}\\ &\lesssim|k|^{5/2}\big{\|}\omega_{0k}\big{\|}_{H^{3}_{k}(I)}. \end{split} \tag{6.20}\]
_In addition, we have the bounds for \(\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\) and \(k\in\mathbb{Z}\backslash\{0\}\),_
\[\begin{split}&\sum_{\alpha\in\{0,1\}}\big{\|}(\delta\wedge|k|^{- 1})^{1/4+\alpha}\partial_{y}^{\alpha}\Big{[}\partial_{\lambda}\psi_{k}(y, \lambda)-\Lambda_{1}(y,\lambda)\Big{]}\Big{\|}_{L^{2}([y_{*}-3\delta,y_{*}+3 \delta])}\delta^{-1/2}\\ &+\sum_{\alpha\in\{0,1\}}\big{\|}(\delta\wedge|k|^{-1})^{-3/4}(|y -y_{*}|\wedge\frac{1}{|k|})^{1+\alpha}\partial_{y}^{\alpha}\partial_{\lambda} \psi_{k}(y,\lambda)\big{\|}_{L^{\infty}([0,1]\backslash[y_{*}-3\delta,y_{*}+3 \delta]))}\\ &\lesssim|k|^{5/2}\big{\|}\omega_{0k}\big{\|}_{H^{3}_{k}(I)}. \end{split} \tag{6.21}\]
Proof.: Define for \(k\in\mathbb{Z}\backslash\{0\},\iota\in\{\pm\},\lambda\in\Sigma_{\delta_{0}} \backslash\{b(y_{*})\},0<\epsilon<\min\{|\lambda-b(y_{*})|,\epsilon_{0}\},y\in I\),
\[\partial_{\lambda}\phi^{\iota}_{k,\epsilon}(y,\lambda):=\phi^{\iota}_{k, \epsilon}(y,\lambda;1)+\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\iota\epsilon) \Big{[}\frac{-b^{\prime\prime}(z)}{(b(z)-\lambda+i\iota\epsilon)^{2}}\phi^{ \iota}_{k,\epsilon}(z,\lambda)+\partial_{\lambda}g^{\iota}_{k,\epsilon}(z, \lambda)\Big{]}\,dz. \tag{6.22}\]
It follows from (6.17) that \(\phi^{\iota}_{k,\epsilon}(y,\lambda;1)\) satisfies for \(y\in I\),
\[\begin{split}&\phi^{\iota}_{k,\epsilon}(y,\lambda;1)+T^{*}_{k}( \lambda+i\iota\epsilon)\phi^{\iota}_{k,\epsilon}(y,\lambda;1)\\ &=-T^{*}_{k}(\lambda+i\iota\epsilon)\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\iota\epsilon)\Big{[}-\frac{b^{\prime\prime}(z)}{(b(z)-\lambda+i \iota\epsilon)^{2}}\phi^{\iota}_{k,\epsilon}(z,\lambda)+\partial_{\lambda}g^{ \iota}_{k,\epsilon}(z,\lambda)\Big{]}\,dz.\end{split} \tag{6.23}\]
Denote for \(k\in\mathbb{Z}\backslash\{0\},\iota\in\{\pm\},\lambda\in\Sigma_{\delta_{0}} \backslash\{b(y_{*})\},0<\epsilon<\min\{|\lambda-b(y_{*})|,\epsilon_{0}\},z\in I\),
\[\begin{split}& h^{\iota}_{k,\epsilon}(z,\lambda;1):=\!\frac{b^{ \prime\prime}(z)}{(b(z)-\lambda+i\iota\epsilon)^{2}}\varphi_{\delta}(z)\phi^{ \iota}_{k,\epsilon}(z,\lambda),\\ & h^{\iota}_{k,\epsilon}(z,\lambda;2):=\!\frac{b^{\prime\prime}(z) }{(b(z)-\lambda+i\iota\epsilon)^{2}}(1-\varphi_{\delta}(z))\phi^{\iota}_{k, \epsilon}(z,\lambda),\quad h^{\iota}_{k,\epsilon}(z,\lambda;3):=\partial_{ \lambda}g^{\iota}_{k,\epsilon}(z,\lambda).\end{split} \tag{6.24}\]
It follows from the bound (6.5) and Lemma 3.1 that for \(j\in\{2,3\}\)
\[\big{\|}T^{*}_{k}(\lambda+i\iota\epsilon)\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\iota\epsilon)h^{\iota}_{k,\epsilon}(z,\lambda;j)\,dz\big{\|}_{X_{N,\varrho_{k}}}\lesssim(\delta\wedge|k|^{-1})^{-2}|k|^{5/2}\|\omega_{0k}\|_{H^{3}_{k}(I)}. \tag{6.25}\]
Using an integration by parts argument similar to (4.29)-(4.30), we also have

\[\left\|T_{k}^{*}(\lambda+i\iota\epsilon)\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\iota\epsilon)h_{k,\epsilon}^{\iota}(z,\lambda;1)\,dz\right\|_{X_{N,\varrho_{k}}}\lesssim(\delta\wedge|k|^{-1})^{-2}|k|^{5/2}\big{\|}\omega_{0k}\big{\|}_{H_{k}^{3}(I)}. \tag{6.26}\]
It follows from (6.25)-(6.26) and Lemma 4.4 that for \(\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\),
\[\big{\|}\phi_{k,\epsilon}^{\iota}(y,\lambda;1)\big{\|}_{X_{N,\varrho_{k}}} \lesssim(\delta\wedge|k|^{-1})^{-2}|k|^{5/2}\big{\|}\omega_{0k}\big{\|}_{H_{k}^ {3}(I)}. \tag{6.27}\]
The desired bound (6.20) follows as a consequence of (6.27) and (6.22).
Using (6.17), we get that for \(y\in I\),
\[\begin{split}&\Big{[}k^{2}-\partial_{y}^{2}+\frac{b^{\prime \prime}(y)}{b(y)-\lambda+i\epsilon}\Big{]}\big{[}\partial_{\lambda}\phi_{k, \epsilon}^{+}(y,\lambda)-\partial_{\lambda}\phi_{k,\epsilon}^{-}(y,\lambda) \big{]}\\ &=-\bigg{[}\frac{b^{\prime\prime}(y)}{(b(y)-\lambda+i\epsilon)^{2} }\phi_{k,\epsilon}^{+}(y,\lambda)-\frac{b^{\prime\prime}(y)}{(b(y)-\lambda-i \epsilon)^{2}}\phi_{k,\epsilon}^{-}(y,\lambda)\bigg{]}+\big{[}\partial_{ \lambda}g_{k,\epsilon}^{+}(y,\lambda)-\partial_{\lambda}g_{k,\epsilon}^{-}(y, \lambda)\big{]}\\ &\quad-\bigg{[}\frac{b^{\prime\prime}(y)}{b(y)-\lambda+i\epsilon }-\frac{b^{\prime\prime}(y)}{b(y)-\lambda-i\epsilon}\bigg{]}\partial_{\lambda} \phi_{k,\epsilon}^{-}(y,\lambda),\end{split} \tag{6.28}\]
with zero Dirichlet boundary condition.
Denoting for \(\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\) and \(y\in I\), \(D\phi_{k,\epsilon}(y,\lambda)\) as the solution to
\[\begin{split}&\Big{[}k^{2}-\partial_{y}^{2}+\frac{b^{\prime \prime}(y)}{b(y)-\lambda+i\epsilon}\Big{]}D\phi_{k,\epsilon}(y,\lambda)\\ &=-\varphi_{\delta}(y)\bigg{[}\frac{b^{\prime\prime}(y)}{(b(y)- \lambda+i\epsilon)^{2}}\phi_{k,\epsilon}^{+}(y,\lambda)-\frac{b^{\prime \prime}(y)}{(b(y)-\lambda-i\epsilon)^{2}}\phi_{k,\epsilon}^{-}(y,\lambda) \bigg{]}\\ &\quad-\varphi_{\delta}(y)\bigg{[}\frac{b^{\prime\prime}(y)}{b(y)- \lambda+i\epsilon}-\frac{b^{\prime\prime}(y)}{b(y)-\lambda-i\epsilon}\bigg{]} \partial_{\lambda}\phi_{k,\epsilon}^{-}(y,\lambda),\end{split} \tag{6.29}\]
for \(y\in I\) with zero Dirichlet boundary condition.
We notice the identity that for \(y\in I,\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\),
\[\partial_{\lambda}\psi_{k}(y,\lambda)=\lim_{\epsilon\to 0+}D\phi_{k, \epsilon}(y,\lambda). \tag{6.30}\]
We can reformulate (6.29) as the integral equation for \(y\in I\),
\[\begin{split}& D\phi_{k,\epsilon}(y,\lambda)+T_{k}^{*}(\lambda+i \epsilon)D\phi_{k,\epsilon}(y,\lambda)\\ &=-\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\varphi_{ \delta}(z)\bigg{[}\frac{b^{\prime\prime}(z)}{(b(z)-\lambda+i\epsilon)^{2}} \phi_{k,\epsilon}^{+}(z,\lambda)-\frac{b^{\prime\prime}(z)}{(b(z)-\lambda-i \epsilon)^{2}}\phi_{k,\epsilon}^{-}(z,\lambda)\bigg{]}\,dz\\ &\quad-\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\varphi_{ \delta}(z)\bigg{[}\frac{b^{\prime\prime}(z)}{b(z)-\lambda+i\epsilon}-\frac{b^{ \prime\prime}(z)}{b(z)-\lambda-i\epsilon}\bigg{]}\partial_{\lambda}\phi_{k, \epsilon}^{-}(z,\lambda)\,dz\\ &:=R_{k,\epsilon}(y,\lambda).\end{split} \tag{6.31}\]
We can write for \(y\in I,\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\},0<\epsilon<\min\{| \lambda-b(y_{*})|,\epsilon_{0}\}\),
\[D\phi_{k,\epsilon}(y,\lambda):=R_{k,\epsilon}(y,\lambda)+D\phi_{k,\epsilon}(y, \lambda;1). \tag{6.32}\]
Then \(D\phi_{k,\epsilon}(y,\lambda;1)\) satisfies for \(y\in I,\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\},0<\epsilon<\min\{| \lambda-b(y_{*})|,\epsilon_{0}\}\), the equation
\[D\phi_{k,\epsilon}(y,\lambda;1)+T_{k}^{*}(\lambda+i\epsilon)D\phi_{k,\epsilon}( y,\lambda;1)=-T_{k}^{*}(\lambda+i\epsilon)R_{k,\epsilon}(y,\lambda). \tag{6.33}\]
The desired bounds (6.21) follow from (6.31)-(6.33) and Lemma 4.4 with \(X=X_{L,\varrho_{k}}\).
Lastly we turn to the highest order derivative \(\partial_{\lambda}^{2}\psi_{k,\epsilon}^{\iota}(y,\lambda)\) that we need to control. To study \(\partial_{\lambda}^{2}\psi_{k,\epsilon}^{\iota}(y,\lambda)\), we take the derivative \(\partial_{\lambda}\) in (6.17) and obtain that

\[\begin{split}\Big{[}k^{2}-\partial_{y}^{2}+\frac{b^{\prime\prime}(y)}{b(y)-\lambda+i\iota\epsilon}\Big{]}\partial_{\lambda}^{2}\phi_{k,\epsilon}^{\iota}(y,\lambda)=&-\frac{2b^{\prime\prime}(y)}{(b(y)-\lambda+i\iota\epsilon)^{2}}\partial_{\lambda}\phi_{k,\epsilon}^{\iota}(y,\lambda)\\ &-\frac{2b^{\prime\prime}(y)}{(b(y)-\lambda+i\iota\epsilon)^{3}}\phi_{k,\epsilon}^{\iota}(y,\lambda)+\partial_{\lambda}^{2}g_{k,\epsilon}^{\iota}(y,\lambda).\end{split} \tag{6.34}\]
**Lemma 6.3**.: _Assume that \(k\in\mathbb{Z}\backslash\{0\},\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\) and let \(\phi_{k,\epsilon}^{\iota}(y,\lambda)\) with \(\iota\in\{\pm\},0<\epsilon<\min\{|\lambda-b(y_{*})|,\epsilon_{0}\}\) be as defined in (6.2). Recall that_

\[\delta:=\delta(\lambda)=8\sqrt{|\lambda-b(y_{*})|/|b^{\prime\prime}(y_{*})|}. \tag{6.35}\]

_Denoting for \(y\in[0,1],\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\),_
\[\begin{split}\Lambda_{2}(y,\lambda):=&-\psi_{k}(y, \lambda)\varphi_{\delta}(y)\frac{b^{\prime\prime}(y)}{(b^{\prime}(y))^{2}}\lim _{\epsilon\to 0+}\frac{1}{b(y)-\lambda+i\epsilon}\\ &-\varphi_{\delta}(y)\frac{b^{\prime\prime}(y)}{(b^{\prime}(y))^{ 2}}\lim_{\epsilon\to 0+}\Big{[}\frac{1}{b(y)-\lambda+i\epsilon}-\frac{1}{b(y)- \lambda-i\epsilon}\Big{]}\phi_{k,\epsilon}^{-}(y,\lambda),\end{split} \tag{6.36}\]
_then we have the bounds for \(\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\),_

\[\begin{split}&\sum_{\alpha\in\{0,1\}}\Big{\|}(\delta\wedge|k|^{-1})^{9/4+\alpha}\partial_{y}^{\alpha}\Big{[}\partial_{\lambda}^{2}\psi_{k}(y,\lambda)-\Lambda_{2}(y,\lambda)\Big{]}\Big{\|}_{L^{2}([y_{*}-3\delta,y_{*}+3\delta])}\delta^{-1/2}\\ &+\sum_{\alpha\in\{0,1\}}\Big{\|}(\delta\wedge|k|^{-1})^{5/4}(|y-y_{*}|\wedge\frac{1}{|k|})^{1+\alpha}\partial_{y}^{\alpha}\partial_{\lambda}^{2}\psi_{k}(y,\lambda)\Big{\|}_{L^{\infty}([0,1]\backslash[y_{*}-3\delta,y_{*}+3\delta])}\lesssim|k|^{5/2}\|\omega_{0k}\|_{H^{3}_{k}(I)}.\end{split} \tag{6.37}\]
Proof.: Denote for \(k\in\mathbb{Z}\backslash\{0\},\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\), \(\iota\in\{\pm\},0<\epsilon<\min\{|\lambda-b(y_{*})|,\epsilon_{0}\}\) and \(z\in I\),

\[\begin{split} h^{\iota}_{k,\epsilon}(z,\lambda;4):=&-\frac{2b^{\prime\prime}(z)}{(b(z)-\lambda+i\iota\epsilon)^{2}}\varphi_{\delta}(z)\partial_{\lambda}\phi_{k,\epsilon}^{\iota}(z,\lambda),\\ h^{\iota}_{k,\epsilon}(z,\lambda;5):=&-\frac{2b^{\prime\prime}(z)}{(b(z)-\lambda+i\iota\epsilon)^{3}}\varphi_{\delta}(z)\phi_{k,\epsilon}^{\iota}(z,\lambda),\\ h^{\iota}_{k,\epsilon}(z,\lambda;6):=&-\frac{2b^{\prime\prime}(z)}{(b(z)-\lambda+i\iota\epsilon)^{2}}(1-\varphi_{\delta}(z))\partial_{\lambda}\phi_{k,\epsilon}^{\iota}(z,\lambda),\\ h^{\iota}_{k,\epsilon}(z,\lambda;7):=&-\frac{2b^{\prime\prime}(z)}{(b(z)-\lambda+i\iota\epsilon)^{3}}(1-\varphi_{\delta}(z))\phi_{k,\epsilon}^{\iota}(z,\lambda),\qquad h^{\iota}_{k,\epsilon}(z,\lambda;8):=\partial_{\lambda}^{2}g_{k,\epsilon}^{\iota}(z,\lambda).\end{split} \tag{6.38}\]
Define for \(k\in\mathbb{Z}\backslash\{0\},\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\), \(\iota\in\{\pm\},0<\epsilon<\min\{|\lambda-b(y_{*})|,\epsilon_{0}\}\) and \(y\in I\),
\[\begin{split}\partial_{\lambda}^{2}\phi_{k,\epsilon}^{\iota}(y,\lambda):=&\,\phi_{k,\epsilon}^{\iota}(y,\lambda;2)+\sum_{j=4}^{8}\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\iota\epsilon)h_{k,\epsilon}^{\iota}(z,\lambda;j)\,dz\\ &-\sum_{j=4}^{8}T_{k}^{*}(\lambda+i\iota\epsilon)\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\iota\epsilon)h_{k,\epsilon}^{\iota}(z,\lambda;j)\,dz.\end{split} \tag{6.39}\]
It follows from (6.34) that \(\phi_{k,\epsilon}^{\iota}(y,\lambda;2)\) satisfies for \(y\in I\),
\[\phi_{k,\epsilon}^{\iota}(y,\lambda;2)+T_{k}^{*}(\lambda+i\iota\epsilon)\phi_{k,\epsilon}^{\iota}(y,\lambda;2)=\sum_{j=4}^{8}\left[T_{k}^{*}(\lambda+i\iota\epsilon)\right]^{2}\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\iota\epsilon)h_{k,\epsilon}^{\iota}(z,\lambda;j)\,dz. \tag{6.40}\]
It follows from Lemma 6.2 and Lemma 3.1 that for \(j\in\{6,7,8\}\)
\[\left\|\left[T_{k}^{*}(\lambda+i\iota\epsilon)\right]^{2}\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\iota\epsilon)h_{k,\epsilon}^{\iota}(z,\lambda;j)\,dz\right\|_{X_{N,\varrho_{k}}}\lesssim(\delta\wedge|k|^{-1})^{-4}|k|^{5/2}\|\omega_{0k}\|_{H_{k}^{3}(I)}. \tag{6.41}\]
Using an integration by parts argument similar to (4.29)-(4.30), we also have, for \(j\in\{4,5\}\),

\[\left\|\left[T_{k}^{*}(\lambda+i\iota\epsilon)\right]^{2}\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\iota\epsilon)h_{k,\epsilon}^{\iota}(z,\lambda;j)\,dz\right\|_{X_{N,\varrho_{k}}}\lesssim(\delta\wedge|k|^{-1})^{-4}|k|^{5/2}\|\omega_{0k}\|_{H_{k}^{3}(I)}. \tag{6.42}\]
It follows from (6.38)-(6.42) and Lemma 4.4 that for \(\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\), \(\iota\in\{\pm\},0<\epsilon<\min\{|\lambda-b(y_{*})|,\epsilon_{0}\}\),

\[\left\|\phi_{k,\epsilon}^{\iota}(y,\lambda;2)\right\|_{X_{N,\varrho_{k}}}\lesssim(\delta\wedge|k|^{-1})^{-4}|k|^{5/2}\|\omega_{0k}\|_{H_{k}^{3}(I)}. \tag{6.43}\]
Using (6.34), we get that for \(y\in I\),
\[\begin{split}\bigg{[}k^{2}-\partial_{y}^{2}+\frac{b^{\prime \prime}(y)}{b(y)-\lambda+i\epsilon}\bigg{]}\big{(}\partial_{\lambda}^{2}\phi_ {k,\epsilon}^{+}(y,\lambda)-\partial_{\lambda}^{2}\phi_{k,\epsilon}^{-}(y, \lambda)\big{)}=\sum_{j=4}^{8}\Big{[}h_{k,\epsilon}^{+}(y,\lambda;j)-h_{k, \epsilon}^{-}(y,\lambda;j)\Big{]}.\end{split} \tag{6.44}\]
Denoting by \(D^{2}\phi_{k,\epsilon}(y,\lambda)\), for \(y\in I,\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\), the solution to
\[\Big{[}k^{2}-\partial_{y}^{2}+\frac{b^{\prime\prime}(y)}{b(y)-\lambda+i \epsilon}\Big{]}D^{2}\phi_{k,\epsilon}(y,\lambda)=\sum_{j=4}^{5}\Big{[}h_{k, \epsilon}^{+}(y,\lambda;j)-h_{k,\epsilon}^{-}(y,\lambda;j)\Big{]}, \tag{6.45}\]
for \(y\in I\) with zero Dirichlet boundary condition.
We note the identity that for \(y\in I,\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\),
\[\partial_{\lambda}^{2}\psi_{k}(y,\lambda)=\lim_{\epsilon\to 0+}D^{2}\phi_{k, \epsilon}(y,\lambda). \tag{6.46}\]
We can reformulate (6.45) as the integral equation for \(y\in I\)
\[\begin{split}& D^{2}\phi_{k,\epsilon}(y,\lambda)+T_{k}^{*}(\lambda+i \epsilon)D^{2}\phi_{k,\epsilon}(y,\lambda)\\ &=\int_{0}^{1}\mathcal{G}_{k}(y,z;\lambda+i\epsilon)\varphi_{ \delta}(z)\sum_{j=4}^{5}\left[h_{k,\epsilon}^{+}(z,\lambda;j)-h_{k,\epsilon}^ {-}(z,\lambda;j)\right]dz:=R_{k,\epsilon}^{*}(y,\lambda).\end{split} \tag{6.47}\]
We can write for \(\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\},0<\epsilon<\min\{|\lambda- b(y_{*})|,\epsilon_{0}\},y\in I\),
\[D^{2}\phi_{k,\epsilon}(y,\lambda):=D^{2}\phi_{k,\epsilon}(y,\lambda;2)+R_{k, \epsilon}^{*}(y,\lambda)-T_{k}^{*}(\lambda+i\epsilon)R_{k,\epsilon}^{*}(y, \lambda). \tag{6.48}\]
Then \(D^{2}\phi_{k,\epsilon}(y,\lambda;2)\) satisfies for \(y\in I,\lambda\in\Sigma_{\delta_{0}}\backslash\{b(y_{*})\}\),
\[D^{2}\phi_{k,\epsilon}(y,\lambda;2)+T_{k}^{*}(\lambda+i\epsilon)D^{2}\phi_{k, \epsilon}(y,\lambda;2)=\big{[}T_{k}^{*}(\lambda+i\epsilon)\big{]}^{2}R_{k, \epsilon}^{*}(y,\lambda). \tag{6.49}\]
The desired bounds (6.37) follow from (6.47)-(6.49), and Lemma 3.2 with \(X=X_{L,\varrho_{k}}\), using also the bound
\[\big{\|}\big{[}T_{k}^{*}(\lambda+i\epsilon)\big{]}^{2}R_{k,\epsilon}^{*}( \cdot,\lambda)\big{\|}_{X_{L,\varrho_{k}}}\lesssim\Big{(}\delta\wedge\frac{1}{ |k|}\Big{)}^{-4}|k|^{5/2}\|\omega_{0k}\|_{H_{k}^{3}(I)}. \tag{6.50}\]
## 7. Proof of Theorem 1.2
In this section, we prove Theorem 1.2. We can assume that \(t\geq 1\). We first give the proof of (1.8)-(1.9). Using the representation formula (2.7), we have
\[\begin{split}\psi_{k}(t,y)&=\frac{1}{2\pi i}\lim_{ \epsilon\to 0+}\int_{\Sigma}e^{-ik\lambda t}\Big{[}\psi_{k,\epsilon}^{+}(y, \lambda)-\psi_{k,\epsilon}^{-}(y,\lambda)\Big{]}\,d\lambda\\ &=-\frac{1}{2\pi ik^{2}t^{2}}\lim_{\epsilon\to 0+}\int_{ \Sigma}e^{-ik\lambda t}\Big{[}\partial_{\lambda}^{2}\psi_{k,\epsilon}^{+}(y, \lambda)-\partial_{\lambda}^{2}\psi_{k,\epsilon}^{-}(y,\lambda)\Big{]}\,d \lambda.\end{split} \tag{7.1}\]
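The second equality in (7.1) is the standard oscillatory-integral gain of \(t^{-2}\). As a sketch of this step (assuming, as part of this sketch, that the boundary contributions on \(\partial\Sigma\) vanish), writing \(e^{-ik\lambda t}=-(ikt)^{-1}\partial_{\lambda}e^{-ik\lambda t}\) and integrating by parts twice in \(\lambda\) gives, for suitable \(F\),

\[\int_{\Sigma}e^{-ik\lambda t}F(\lambda)\,d\lambda=\frac{1}{ikt}\int_{\Sigma}e^{-ik\lambda t}\partial_{\lambda}F(\lambda)\,d\lambda=-\frac{1}{k^{2}t^{2}}\int_{\Sigma}e^{-ik\lambda t}\partial_{\lambda}^{2}F(\lambda)\,d\lambda,\]

applied here with \(F=\psi_{k,\epsilon}^{+}-\psi_{k,\epsilon}^{-}\).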
Fix \(\Phi^{*}\in C_{0}^{\infty}(\Sigma_{\delta_{0}})\) with \(\Phi^{*}\equiv 1\) on \(\Sigma_{2\delta_{0}/3}\). We can decompose for \(t\geq 1,y\in[0,1]\),
\[\psi_{k}(t,y):=\psi_{k}^{1}(t,y)+\psi_{k}^{2}(t,y), \tag{7.2}\]
where
\[\begin{split}&\psi_{k}^{1}(t,y):=-\frac{1}{2\pi ik^{2}t^{2}}\lim_{ \epsilon\to 0+}\int_{\Sigma}e^{-ik\lambda t}(1-\Phi^{*}(\lambda))\Big{[}\partial_{ \lambda}^{2}\psi_{k,\epsilon}^{+}(y,\lambda)-\partial_{\lambda}^{2}\psi_{k, \epsilon}^{-}(y,\lambda)\Big{]}\,d\lambda,\\ &\psi_{k}^{2}(t,y):=-\frac{1}{2\pi ik^{2}t^{2}}\lim_{\epsilon\to 0+} \int_{\Sigma}e^{-ik\lambda t}\Phi^{*}(\lambda)\Big{[}\partial_{\lambda}^{2} \psi_{k,\epsilon}^{+}(y,\lambda)-\partial_{\lambda}^{2}\psi_{k,\epsilon}^{-}(y,\lambda)\Big{]}\,d\lambda.\end{split} \tag{7.3}\]
For (1.8), it suffices to prove that for \(\sigma\in\{1,2\}\), \(k\in\mathbb{Z}\backslash\{0\}\) and \(t\geq 1\),
\[\big{\|}\psi_{k}^{\sigma}(t,\cdot)\big{\|}_{L^{2}([0,1])}\lesssim\frac{|k|^{3}} {t^{2}}\|\omega_{0k}\|_{H_{k}^{3}([0,1])}. \tag{7.4}\]
The case \(\sigma=1\) in (7.4), corresponding to the non-degenerate case, is analogous to the case of monotonic shear flows, see [14], and follows from Lemma 5.1-Lemma 5.3. We focus on the main new case \(\sigma=2\) in (7.4). Denote for \(k\in\mathbb{Z}\backslash\{0\}\),
\[M_{k}:=|k|^{5/2}\|\omega_{0k}\|_{H_{k}^{3}([0,1])}. \tag{7.5}\]
Our main tools are Lemma 6.1, Lemma 6.2 and Lemma 6.3, which imply the following bounds for \(y\in[0,1],\lambda\in\Sigma_{\delta_{0}}\).
* If \(|\lambda-b(y_{*})|^{1/2}<|y-y_{*}|/20\), then \[\begin{split}&|\psi_{k}(y,\lambda)|\lesssim\big{(}\min\big{\{}|\lambda-b(y_{*})|^{1/2},|k|^{-1}\big{\}}\big{)}^{11/4}(|y-y_{*}|^{-1}+|k|)M_{k},\\ &|\partial_{\lambda}^{2}\psi_{k}(y,\lambda)|\lesssim\big{(}\min\big{\{}|\lambda-b(y_{*})|^{1/2},|k|^{-1}\big{\}}\big{)}^{-5/4}(|y-y_{*}|^{-1}+|k|)M_{k};\end{split}\] (7.6)
* If \(|y-y_{*}|/20<|\lambda-b(y_{*})|^{1/2}<20|y-y_{*}|\), then \[\begin{split}&|\psi_{k}(y,\lambda)|\lesssim\big{(}\min\big{\{}| \lambda-b(y_{*})|^{1/2},|k|^{-1}\big{\}}\big{)}^{5/4}|\lambda-b(y_{*})|^{1/4} M_{k},\\ &|\psi_{k}(y,\lambda)-\psi_{k}(y,b(y))|\lesssim|\lambda-b(y)|^{1 /2}|\lambda-b(y_{*})|^{3/8}M_{k},\\ &\big{\|}\partial_{\lambda}^{2}\psi_{k}(\cdot,\lambda)-\Lambda_ {2}(\cdot,\lambda)\big{\|}_{L^{2}(|y-y_{*}|\approx|\lambda-b(y_{*})|^{1/2})} \lesssim(|\lambda-b(y_{*})|^{-1/2}+|k|)^{9/4}|\lambda-b(y_{*})|^{1/4}M_{k}; \end{split}\] (7.7)
* If \(|\lambda-b(y_{*})|^{1/2}>20|y-y_{*}|\), then \[\begin{split}&|\psi_{k}(y,\lambda)|\lesssim|\lambda-b(y_{*})|^{1 /4}\big{(}\min\big{\{}|\lambda-b(y_{*})|^{1/2},|k|^{-1}\big{\}}\big{)}^{5/4}M_ {k},\\ &\big{\|}\partial_{\lambda}^{2}\psi_{k}(\cdot,\lambda)-\Lambda_ {2}(\cdot,\lambda)\big{\|}_{L^{2}(|y-y_{*}|<|\lambda-b(y_{*})|^{1/2}/20)} \lesssim(|\lambda-b(y_{*})|^{-1/2}+|k|)^{9/4}|\lambda-b(y_{*})|^{1/4}M_{k}. \end{split}\] (7.8)
It follows from (7.6)-(7.8) that for \(y\in[0,1],t\geq 1\),
\[\Big{|}\int_{\mathbb{R}}e^{-ik\lambda t}\Phi^{*}(\lambda)\Lambda_{2}(y, \lambda)d\lambda\Big{|}\lesssim|y-y_{*}|^{-1/4}\max\big{\{}1,|k|^{1/2}|y-y_{*} |^{1/2}\big{\}}M_{k}, \tag{7.9}\]
and, by considering the cases \(|\lambda-b(y_{*})|\ll|y-y_{*}|^{2}\), \(|\lambda-b(y_{*})|\approx|y-y_{*}|^{2}\) and \(|\lambda-b(y_{*})|\gg|y-y_{*}|^{2}\), also that for \(y\in[0,1],t\geq 1\),
\[\Big{\|}\int_{\mathbb{R}}e^{-ik\lambda t}\Phi^{*}(\lambda)\big{[}\partial_{ \lambda}^{2}\psi_{k}(y,\lambda)-\Lambda_{2}(y,\lambda)\big{]}d\lambda\Big{\|}_ {L^{2}([0,1])}\lesssim|k|^{9/4}M_{k}. \tag{7.10}\]
The desired bound (7.4) for \(\sigma=2\) follows from (7.9)-(7.10).
The proof of (1.9) is similar to the proof of (1.8), using Lemma 6.1 and Lemma 6.2.
We now turn to the proof of the depletion bounds (1.11). Assume that \(k\in\mathbb{Z}\backslash\{0\}\). Applying \(-k^{2}+\partial_{y}^{2}\) to \(\psi_{k}(t,y)\) in (2.7), and using (2.8), we get that for \(y\in[0,1],t\geq 1\),
\[\omega_{k}(t,y)=\omega_{k}^{*}(t,y)+\omega_{k}^{**}(t,y), \tag{7.11}\]
where
\[\begin{split}&\omega_{k}^{*}(t,y)\\ &:=\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{\Sigma}e^{-ik\lambda t}(1-\Phi^{*}(\lambda))\bigg{[}\frac{b^{\prime\prime}(y)\psi_{k,\epsilon}^{+}(y,\lambda)-\omega_{0k}(y)}{b(y)-\lambda+i\epsilon}-\frac{b^{\prime\prime}(y)\psi_{k,\epsilon}^{-}(y,\lambda)-\omega_{0k}(y)}{b(y)-\lambda-i\epsilon}\bigg{]}\,d\lambda,\\ &\omega_{k}^{**}(t,y):=\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{\Sigma}e^{-ik\lambda t}\Phi^{*}(\lambda)\bigg{[}\frac{b^{\prime\prime}(y)\psi_{k,\epsilon}^{+}(y,\lambda)-\omega_{0k}(y)}{b(y)-\lambda+i\epsilon}-\frac{b^{\prime\prime}(y)\psi_{k,\epsilon}^{-}(y,\lambda)-\omega_{0k}(y)}{b(y)-\lambda-i\epsilon}\bigg{]}\,d\lambda.\end{split} \tag{7.12}\]
We have the bound for \(t\geq 1\),
\[\big{\|}\omega_{k}^{*}(t,y)\big{\|}_{L^{\infty}([0,1])}\lesssim|k|^{2}M_{k}. \tag{7.13}\]
For \(|y-y_{*}|<\delta_{0}/10,t\geq 1\), since \(b(y)-\lambda+\iota i\epsilon\) with \(\iota\in\{\pm\}\) is not singular in this case, we have in addition, by integration by parts, that
\[|\omega_{k}^{*}(t,y)|\lesssim|k|^{2}\frac{1}{t}M_{k}. \tag{7.14}\]
We now turn to \(\omega_{k}^{**}(t,y)\). Using (6.1), we can write for \(y\in[0,1],t\geq 1\),
\[\begin{split}& 2\pi i\,\omega_{k}^{**}(t,y)\\ &=\lim_{\epsilon\to 0+}\int_{\mathbb{R}}e^{-ik\lambda t}\Phi^{*}( \lambda)\bigg{[}\frac{\phi_{k,\epsilon}^{+}(y,\lambda)-(1-\Psi(y))\omega_{0k} (y)}{b(y)-\lambda+i\epsilon}-\frac{\phi_{k,\epsilon}^{-}(y,\lambda)-(1-\Psi(y ))\omega_{0k}(y)}{b(y)-\lambda-i\epsilon}\bigg{]}\,d\lambda\\ &=\lim_{\epsilon\to 0+}\int_{\mathbb{R}}e^{-ik\lambda t}\Phi^{*}( \lambda)\bigg{[}\frac{\phi_{k,\epsilon}^{+}(y,\lambda)}{b(y)-\lambda+i \epsilon}-\frac{\phi_{k,\epsilon}^{-}(y,\lambda)}{b(y)-\lambda-i\epsilon} \bigg{]}\,d\lambda+W_{k}(t,y),\end{split} \tag{7.15}\]
where \(W_{k}(t,y)\) satisfies the bound for \(t\geq 1\),
\[\|W_{k}(t,\cdot)\|_{L^{\infty}([0,1])}\lesssim t^{-1}M_{k}, \tag{7.16}\]
which follows from a simple integration by parts argument. We decompose for \(y\in[0,1]\backslash\{y_{*}\}\),
\[\begin{split}\omega_{k}^{**}(t,y)-\frac{W_{k}(t,y)}{2\pi i}& =\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{\mathbb{R}}e^{-ik\lambda t}\Phi^{*}( \lambda)\bigg{[}\frac{\psi_{k}(y,\lambda)}{b(y)-\lambda+i\epsilon}\bigg{]}\,d \lambda\\ &+\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{\mathbb{R}}e^{-ik \lambda t}\Phi^{*}(\lambda)\phi_{k,\epsilon}^{-}(y,\lambda)\bigg{[}\frac{1}{b (y)-\lambda+i\epsilon}-\frac{1}{b(y)-\lambda-i\epsilon}\bigg{]}\,d\lambda. \end{split} \tag{7.17}\]
It follows from (7.6)-(7.8) that
\[\bigg{|}\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{\mathbb{R}}e^{-ik \lambda t}\Phi^{*}(\lambda)\phi_{k,\epsilon}^{-}(y,\lambda)\bigg{[}\frac{1}{ b(y)-\lambda+i\epsilon}-\frac{1}{b(y)-\lambda-i\epsilon}\bigg{]}\,d\lambda \bigg{|}\lesssim|y-y_{*}|^{7/4}M_{k}. \tag{7.18}\]
For \(\gamma\in(1,\infty)\) to be fixed below, by considering the three ranges (I) \(|\lambda-b(y_{*})|\lesssim|y-y_{*}|^{2}\), (II) \(|\lambda-b(y_{*})|\geq\gamma|y-y_{*}|^{2}\), and (III) \(|y-y_{*}|^{2}\ll|\lambda-b(y_{*})|<\gamma|y-y_{*}|^{2}\), and using Lemma 6.2 and Lemma 6.3, we get that
\[\begin{split}&\bigg{|}\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{\mathbb{R}}e^{-ik\lambda t}\Phi^{*}(\lambda)\bigg{[}\frac{\psi_{k}(y,\lambda)}{b(y)-\lambda+i\epsilon}\bigg{]}\,d\lambda\bigg{|}\\ &\lesssim\Big{[}|y-y_{*}|^{7/4}\big{(}1+|k|^{1/2}|y-y_{*}|^{1/2}\big{)}+\frac{1}{|k|t}(|k|^{1/2}+\gamma^{-1/8}|y-y_{*}|^{-1/4})+\gamma^{7/8}|y-y_{*}|^{7/4}\Big{]}M_{k}.\end{split} \tag{7.19}\]
In the above, we used integration by parts to get decay in \(t\) in range (II). Optimizing in \(\gamma\), we get that for \(t\geq 1\),
(i) if \(t|y-y_{*}|^{2}\lesssim 1\),
\[\begin{split}&\bigg{|}\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{\mathbb{R}}e^{-ik\lambda t}\Phi^{*}(\lambda)\bigg{[}\frac{\psi_{k}(y,\lambda)}{b(y)-\lambda+i\epsilon}\bigg{]}\,d\lambda\bigg{|}\\ &\lesssim\Big{[}t^{-1}+|k|^{1/2}|y-y_{*}|^{7/4}+t^{-7/8}\Big{]}M_{k};\end{split} \tag{7.20}\]
(ii) if \(t|y-y_{*}|^{2}\gg 1\),
\[\begin{split}&\bigg{|}\frac{1}{2\pi i}\lim_{\epsilon\to 0+}\int_{\mathbb{R}}e^{-ik\lambda t}\Phi^{*}(\lambda)\bigg{[}\frac{\psi_{k}(y,\lambda)}{b(y)-\lambda+i\epsilon}\bigg{]}\,d\lambda\bigg{|}\\ &\lesssim\bigg{[}|y-y_{*}|^{7/4}\big{(}1+|k|^{1/2}|y-y_{*}|^{1/2}\big{)}+\frac{1}{|k|^{1/2}t^{7/8}}+|y-y_{*}|^{7/4}\bigg{]}M_{k}.\end{split} \tag{7.21}\]
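For the reader's convenience, the optimization in \(\gamma\) behind (7.20)-(7.21) can be made explicit (a sketch; the admissibility constraint \(\gamma\in(1,\infty)\) is what separates the two regimes above). Balancing the two \(\gamma\)-dependent terms in (7.19),

\[\gamma^{7/8}|y-y_{*}|^{7/4}=\frac{\gamma^{-1/8}|y-y_{*}|^{-1/4}}{|k|t}\quad\Longleftrightarrow\quad\gamma=\frac{1}{|k|t\,|y-y_{*}|^{2}},\]

at which value both terms equal \((|k|t)^{-7/8}\leq|k|^{-1/2}t^{-7/8}\) for \(|k|\geq 1\).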
The desired depletion bounds (1.11) now follow from (7.13)-(7.14), (7.16), (7.18), and (7.20)-(7.21). Theorem 1.2 is now proved.
|
2310.20619 | Effect of Energy Conservation Law, Space Dimension, and Problem Symmetry
on the Poynting Vector Field Singularities | A brief review is given of the author recent achievements in classifying
singular points of the Poynting vector patterns in electromagnetic fields of
complex configuration. The deep connection between the topological structure of
the force lines pattern and the law of energy conservation, the symmetry of the
problem, and the dimension of the space has been unveiled | Michael I. Tribelsky | 2023-10-31T16:52:14Z | http://arxiv.org/abs/2310.20619v1 | # Effect of Energy Conservation Law, Space Dimension, and Problem Symmetry
###### Abstract
A brief review is given of the author's recent achievements in classifying singular points of the Poynting vector patterns in electromagnetic fields of complex configuration. The deep connection between the topological structure of the force lines pattern and the law of energy conservation, the symmetry of the problem, and the dimension of the space has been unveiled.
**DOI:** 10.1134/S0021364023601859
## I Introduction
Singular points of an electromagnetic field significantly impact its topological structure. The type and location of these points are crucial to the overall structure of the entire field pattern. For this reason, many researchers are interested in studying this topic. Recent advances in experimental setups for generating and analyzing complex light-beam configurations have further increased attention to this issue; see, e.g., [1; 2; 3; 4; 5; 6] and references therein.
The above is fully applied to the Poynting vector field \(\mathbf{S}(\mathbf{r})\). In particular, the singularities of this field are essential in resonance scattering of light by sub-wavelength objects. For example, in light scattering by nanoparticles, the topological structure of the Poynting vector field determines the complex energy circulation in the vicinity of and within the light-scattering particle, while the divergence of this field determines the dissipation of electromagnetic energy.
In addition to the purely academic interest, these issues are of great practical importance for various nanotechnologies, as nanoscale controlled energy release is a unique way to achieve a localized effect on various materials. The Poynting vector field also contributes to the ponderomotive forces [4], which is essential for manipulating nanoobjects using electromagnetic field; see, e.g., review [7] for details.
At the same time, experimental measurement of the Poynting vector field is a very difficult task [8]. Moreover, the measurements are hardly possible in the practically important case of the field inside the solid-state scattering object. In such a situation, the theoretical description of this field is the only tool for its study.
Since the pioneering work of Bohren [9] many authors have carried out such study [10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. In particular, work [2] presents a detailed classification of the Poynting vector field singularities.
However, most studies relate to examining features of complex configuration fields. Many interesting, important, and often unexpected results have been obtained in these studies. They include the singularities of non-paraxial Bessel beams [4]; singularities in superoscillating fields [20]; singularities due to interference with toroidal modes [21]; the appearance of a reverse (relative to the incident wave) energy flow near the axis of the sharply focused vortex beam [22; 23], etc. As a rule, the theoretical description of these features implies cumbersome computations involving such concepts as topological charges [24; 25], geometric Pancharatnam-Berry phase [26; 27], etc.
Meanwhile, the most straightforward description in terms of the traditional Poincare classification (saddle, node, etc.) of the spontaneously emerging singularities at the scattering of a plane, linearly polarized electromagnetic wave does not require cumbersome calculations and the introduction of new concepts. Nevertheless, such a description is still far from complete. In particular, the fact that the field of the Poynting vector satisfies the energy conservation law, which, together with the dimension of space and the symmetry of the problem, imposes significant constraints on both the type of the singular points and the bifurcation scenario of their formation (annihilation), has not yet received sufficient attention. The author's recent results [28; 29; 30], briefly discussed in the present paper, have partly filled this gap. The mini-review format does not allow other aspects of the problem to be discussed in detail here; the interested reader is addressed to the other works cited in this paper.
Note also that in terms of the theoretical description of the problem, the solution of Maxwell's equations defines the fields \(\mathbf{E}\) and \(\mathbf{H}\). As for the field \(\mathbf{S}\), it is _calculated_ based on the obtained fields \(\mathbf{E}\) and \(\mathbf{H}\), and, in this sense, it is secondary. As we will see later, this fact plays an essential role in understanding the physical meaning of the singularities of such a field.
The paper has the following structure: Sec. II gives the problem formulation; Sec. III deals with singularities in
a non-dissipative environment based on the exact solutions for a sphere (III.1) and a cylinder (III.2); Sec. IV discusses the dissipative effects; Sec. V focuses on the effects of controlled symmetry breaking; Sec. VI formulates some unsolved problems, contains conclusions and acknowledgments.
## II Problem formulation
It is well known that the local topological properties of a given singularity determine the field's behavior in its neighborhood. The global behavior of this field at a distance only weakly affects it. For this reason, the phenomenological theory discussed below is universal. It is insensitive to changes in the scattered beam's geometry, its shape, and the optical properties of the scattering object. However, concrete examples illustrating this theory used below are based on calculations with the help of the exact solutions of the corresponding problems of scattering of a plane linearly polarized electromagnetic wave by a sphere of radius \(R\) or an infinite right circular cylinder of cross-sectional radius \(R\). These solutions are well known; see, e.g., [31]. Such an approach made it relatively easy to find the singular points themselves and to control the calculations' accuracy to be sufficient for a safe resolution of all the necessary details of the phenomena under discussion.
The magnetic permeability of the scattering body \(\mu\) is assumed to be equal to unity (which corresponds to optical frequencies); and its permittivity \(\varepsilon=\varepsilon^{\prime}+i\varepsilon^{\prime\prime}=const\), where \(\varepsilon^{\prime\prime}>0\). If necessary, the results obtained can be easily generalized to the case of \(\mu\) other than unity and/or \(\varepsilon^{\prime\prime}<0\) (active scattering body with population inversion).
In the theoretical description of scattering problems, the field outside the scattering object is usually presented as the sum of the incident wave field and the radiation field scattered by the object. To avoid misunderstandings, we emphasize that anywhere in the present paper, the fields \(\mathbf{E}\) and \(\mathbf{H}\) mean the full fields equal to the indicated sum.
The Poynting vector is assumed to be real. It is defined in the usual way [32]:
\[\mathbf{S}=\frac{c}{16\pi}([\mathbf{E}^{*}\mathbf{H}]+[\mathbf{E}\mathbf{H}^{ *}]), \tag{1}\]
where \(c\) is the speed of light in a vacuum, and the asterisk designates complex conjugation. Note, however, that in some cases it is expedient to introduce the complex Poynting vector \(\hat{\mathbf{S}}=\frac{c}{8\pi}[\mathbf{E}^{*}\mathbf{H}]\), whose imaginary part has the meaning of an oscillating stored energy flow [33]. This quantity plays a vital role in some problems of the interaction of light with matter [34; 35; 36; 37; 38; 39; 40]. As a rule, the results discussed below, obtained for real \(\mathbf{S}\), can easily be generalized to the case of complex \(\hat{\mathbf{S}}\).
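As a minimal numerical sketch of Eq. (1) (not taken from Refs. [28; 29; 30]; the array layout and the convention \(c=1\) are illustrative assumptions), the real Poynting vector is just the real part of a complex cross product:

```python
import numpy as np

def poynting_real(E, H, c=1.0):
    """Real (time-averaged) Poynting vector of Eq. (1).

    E, H are complex field phasors given as arrays of shape (..., 3).
    Uses S = (c/16pi)(E* x H + E x H*) = (c/8pi) Re(E* x H)."""
    return c / (8.0 * np.pi) * np.real(np.cross(np.conj(E), H))
```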
The pattern of the Poynting vector field consists of a set of force lines, which, by analogy with hydrodynamics, we will also call streamlines. The tangents to these lines at each point correspond to the direction of the \(\mathbf{S}\) vector at this point. It is convenient to specify the streamline equation in the parametric form \(\mathbf{r}=\mathbf{r}(t)\). Here, \(t\) is a dimensionless parameter, generally speaking, in no way connected with the actual time. With this representation, the "velocity" \(d\mathbf{r}/dt\) will be directed tangentially to the streamline, i.e., proportional to the vector \(\mathbf{S}(\mathbf{r})\). By properly scaling \(t\), we can always set the corresponding proportionality factor to unity. Then, the equation that determines the streamlines takes the form:
\[\frac{d\mathbf{r}}{dt}=\mathbf{S}(\mathbf{r}). \tag{2}\]
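The streamline pattern can be traced directly from Eq. (2). Below is a minimal sketch (the solver settings and the toy field are illustrative assumptions, not code from the cited works):

```python
import numpy as np
from scipy.integrate import solve_ivp

def trace_streamline(S, r0, t_max=50.0):
    """Integrate dr/dt = S(r), Eq. (2), from a seed point r0.

    S is any callable mapping a length-3 array r to the real Poynting
    vector at r; t is the dimensionless parameter of Eq. (2)."""
    return solve_ivp(lambda t, r: S(r), (0.0, t_max),
                     np.asarray(r0, dtype=float),
                     dense_output=True, rtol=1e-8, atol=1e-10)

# A toy saddle-like field (purely illustrative, not a Maxwell solution):
toy = lambda r: np.array([r[0], -r[1], 0.0])
line = trace_streamline(toy, [0.1, 1.0, 0.0])  # line.sol(t) interpolates
```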
Next, we introduce dimensionless variables, normalizing the coordinate \(\mathbf{r}\) to \(R\), and the fields \(\mathbf{E},\mathbf{H}\), and \(\mathbf{S}\) to the corresponding values in the incident wave. Since only dimensionless variables are in use below, we will keep the same notations for them since this cannot lead to misunderstandings.
The vector \(\mathbf{S}(\mathbf{r})\) singularities, as well as those of any vector field, correspond to the intersection points of its lines of force. Since, as already noted, the tangents to the lines of force determine the direction of the vector \(\mathbf{S}(\mathbf{r})\), their intersection at some point \(\mathbf{r}_{s}\) means that the vector \(\mathbf{S}(\mathbf{r}_{s})\) is not uniquely defined. On the other hand, the fundamental physical principles require that the field \(\mathbf{S}(\mathbf{r})\) must be uniquely defined. The only way to reconcile these two facts is to suppose that \(\mathbf{S}(\mathbf{r}_{s})=0\).
Obviously, there are three possibilities for fulfilling this condition: (a) \(\mathbf{E}(\mathbf{r}_{s})=0\), (b) \(\mathbf{H}(\mathbf{r}_{s})=0\) and (c) \(\mathbf{S}(\mathbf{r}_{s})=0\), when neither (a) nor (b) holds (this case corresponds to the formation of a standing wave). In accordance with the terminology introduced in Ref. [2], we will call such singular points _E-induced_, _H-induced_ and _polarization-induced_, respectively. It is also convenient to call the first two types collectively _field-induced_. We will call this classification electromagnetic to distinguish it from the standard Poincare classification (node, focus, etc.).
It is important to emphasize that if in field-induced singularities, the entire vector product \([\mathbf{E}^{*}\mathbf{H}]\) equals zero, the polarization-induced singularities require only its real part to vanish. As a result, field-induced singularities for the real and imaginary parts of the complex Poynting vector \(\hat{\mathbf{S}}\) coincide, but generally speaking, polarization-type singularities do not.
## III Non-dissipative limit
Note that the standard problem formulation for light scattering by a material object, in which the incident wave comes from infinity and the scattered one goes to infinity [31], is physically adequate only if this object is embedded in a non-dissipative medium, a particular case of which is a vacuum. Otherwise, the incident wave attenuates before reaching the scattering object.
For this reason, all singularities outside the scattering body must be considered in a non-dissipative medium. In other words, the absence of dissipation in the vicinity of the singularity is essential for describing scattering by any, even strongly absorbing, particle. Since the scattering problem for a particle in a medium with an arbitrary value of \(\varepsilon>0\) is reduced to the case of scattering in a vacuum by a trivial scale transformation [31], without losing the generality of the consideration, we can assume that the permittivity of the environment is equal to unity, which will be supposed below.
### Sphere
We begin our analysis of singularities by discussing the case of light scattering by a finite object with a plane of symmetry. Then, we assume that the plane of polarization of the incident wave coincides with the plane of symmetry. The simplest example is scattering by a sphere. However, we emphasize that although the specific results presented below refer to such an example, this is done only for the sake of simplicity of calculations. The developed phenomenological theory is valid for a body of any shape with a plane of symmetry.
Following the traditional problem formulation [31], we choose a coordinate system with the \(z\)-axis directed along the wave vector of the incident wave \(\mathbf{k}\), and the \(x\)-axis coinciding with the direction of oscillations of the vector \(\mathbf{E}\) of this wave. Then, the vector \(\mathbf{H}\) of the incident electromagnetic wave occurs to be parallel to the \(y\)-axis.
In the general case, in this problem, the Poynting vector streamlines are essentially three-dimensional. However, it follows from the symmetry that the \(xz\) plane is invariant: the streamlines belonging to this plane are two-dimensional. If the scattering body, in addition to the mirror symmetry against the \(xz\) plane, is also symmetric with respect to the \(yz\) plane (as is the case for a sphere), then this plane is also invariant. Nevertheless, although no restrictions on the positions of the singularities of the \(\mathbf{S}(\mathbf{r})\) field are known (and there are reasons to expect that such restrictions do not exist), so far only singularities belonging to an invariant plane parallel to the polarization plane have been observed [10; 11; 12; 13; 14; 28; 29]. Therefore, only these singularities will be discussed below.
Choosing the coordinate system origin at a singular point, taking into account that, due to the invariance of the \(xz\) plane, the component \(S_{y}(x,0,z)\equiv 0\), and expanding the components of the vector \(\mathbf{S}\) into a series in small departures from the singularity, we conclude that in this case the equation (2) takes the form:
\[\frac{dx}{dt} = S_{x}(x,y,z)\approx s_{x}^{(x)}x+s_{x}^{(y)}y+s_{x}^{(z)}z, \tag{3}\] \[\frac{dy}{dt} = S_{y}(x,y,z)\approx s_{y}^{(y)}y,\] (4) \[\frac{dz}{dt} = S_{z}(x,y,z)\approx s_{z}^{(x)}x+s_{z}^{(y)}y+s_{z}^{(z)}z, \tag{5}\]
where \(s_{x_{n}}^{(x_{m})}\equiv\left(\frac{\partial S_{x_{n}}}{\partial x_{m}} \right)_{s}\). Here index \(s\) means that we calculate the derivative at the singular point, and \(x_{m}\) designates any of the three components of vector \(\mathbf{r}\).
In the standard way, looking for the solution of the system of equations (3)-(5) in the form \(x_{n}=x_{n0}\exp(\kappa t)\), \(x_{n0}=const_{n}\) and equating the determinants of the resulting system of algebraic equations to zero, we obtain the roots of the characteristic equation
\[\kappa_{1,2}=\gamma\pm\alpha,\ \ \kappa_{3}=s_{y}^{(y)}, \tag{6}\] \[\gamma=\frac{s_{x}^{(x)}+s_{z}^{(z)}}{2},\,\alpha=\frac{\sqrt{ \left(s_{x}^{(x)}-s_{z}^{(z)}\right)^{2}+4s_{x}^{(z)}s_{z}^{(x)}}}{2}. \tag{7}\]
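For completeness, here is the short computation behind (6)-(7). Since Eq. (4) decouples, \(\kappa_{3}=s_{y}^{(y)}\) is immediate, while \(\kappa_{1,2}\) are the eigenvalues of the \(2\times 2\) block of the Jacobian acting in the invariant plane:

\[\kappa^{2}-\big{(}s_{x}^{(x)}+s_{z}^{(z)}\big{)}\kappa+\big{(}s_{x}^{(x)}s_{z}^{(z)}-s_{x}^{(z)}s_{z}^{(x)}\big{)}=0,\qquad\kappa_{1,2}=\gamma\pm\sqrt{\gamma^{2}-\big{(}s_{x}^{(x)}s_{z}^{(z)}-s_{x}^{(z)}s_{z}^{(x)}\big{)}}=\gamma\pm\alpha,\]

because \(\gamma^{2}-\big{(}s_{x}^{(x)}s_{z}^{(z)}-s_{x}^{(z)}s_{z}^{(x)}\big{)}=\frac{1}{4}\big{(}s_{x}^{(x)}-s_{z}^{(z)}\big{)}^{2}+s_{x}^{(z)}s_{z}^{(x)}=\alpha^{2}\).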
Note that \(\kappa_{3}\) is always a real quantity, which in the vicinity of the considered singularity corresponds to the exponential repulsion of streamlines from the invariant plane at \(\kappa_{3}>0\) and their exponential approach to it at \(\kappa_{3}<0\).
Let us apply the energy conservation law, according to which \(\mathrm{div}\;\mathbf{S}=-Q\), where \(Q\) is the density of the dissipated electromagnetic energy. Since we are now discussing the non-dissipative case, \(Q=0\). As for \(\mathrm{div}\,\mathbf{S}\), in the considered approximation, \(\mathrm{div}\,\mathbf{S}\approx s_{x}^{(x)}+s_{y}^{(y)}+s_{z}^{(z)}\). In this case, it follows from expressions (6)-(7) that \(\kappa_{3}\) and \(\gamma\) have opposite signs. In other words, if, in the invariant plane, the energy flow is directed toward the singularity, then the streamlines go away from the singular point in the perpendicular direction and vice versa. This property is consistent with the integral form of the law of conservation of energy, which states that for a stationary problem in a non-dissipative medium, the energy flux through any closed surface (including a surface surrounding the singularity) must vanish.
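Explicitly, the step just made reads

\[\operatorname{div}\mathbf{S}\approx s_{x}^{(x)}+s_{y}^{(y)}+s_{z}^{(z)}=2\gamma+\kappa_{3}=0\quad\Longrightarrow\quad\kappa_{3}=-2\gamma,\]

so the transverse exponent \(\kappa_{3}\) and the in-plane rate \(\gamma\) indeed have opposite signs.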
Now, we can discuss the types of singularities in the invariant plane. As an example, Fig. 1 shows the field pattern that occurs in this case for \(\varepsilon=-2.17\) and the size parameter \(q\equiv kR=0.3\), where \(k=\omega/c\) stands for the wavenumber of the incident wave in a vacuum and \(\omega\) is the circular frequency of the incident wave.
Fig. 1a exhibits six singular points belonging to the \(xz\) invariant plane. Two of them (numbered 4 and 5) are stable foci; the others are saddles. We emphasize that, as is clear from the discussion above, in three-dimensional space (3D) foci 4 and 5, though stable in the invariant plane, are unstable in the direction perpendicular to it, i.e., they are complex saddle-focus singular points.
The calculations show that the Poynting vector vanishes at all singular points, as it should. To understand which electromagnetic type (field-induced or polarization-induced) these singularities belong to, we superimpose the streamlines of Fig. 1a with the profile of the fields \(|\mathbf{E}|^{2}\) and \(|\mathbf{H}|^{2}\), see Fig. 1b,c. It can be seen that the foci are \(H\)-induced singularities, and the saddles are polarization-induced.
To interpret this result, we note that polarization-induced singularities correspond to the local formation of standing waves when pairs of waves of the same amplitude pass the singularity in opposite directions. It implies
that the singularity has singled-out directions. A saddle has such directions: these are the "whiskers" (stable and unstable manifolds) of the separatrices. Unlike saddles, foci have no preferred directions, which makes it difficult for standing waves to form near a focus (see footnote 1). Note that such a distinction between foci and saddles is topologically stable, since these singularities are not topologically equivalent, i.e., a transformation of the coordinate system cannot reduce them to each other [41].
Footnote 1: An important comment on this argument is given in Conclusions.
We particularly emphasize that the discussed area of standing wave formation has essentially sub-wavelength size, which can be clearly seen in Fig. 1 where, for clarity, the wavelength of the incident radiation is shown at the same scale as that for the field structures.
An essential feature of the results discussed is that the field-induced singularities (foci) are associated with the vanishing of that field, which is _perpendicular_ to the invariant plane. It is \(\mathbf{H}\) in the above case. However, such a structure of field-induced singularities is generic for singularities belonging to invariant planes. If the polarization and invariant planes are perpendicular, the singularities occur \(E\)-induced. To verify this fact, we consider the scattering of light by a cylinder.
### Cylinder
As in the previous section, the developed phenomenological theory is valid for an infinite cylinder with an arbitrary cross-section shape. The examples employed for a right circular cylinder are given only to simplify the calculations. Let us start with the case of perpendicular incidence and the two independent polarizations of the incident wave, TE and TM; see Fig. 2, where the orientation of the coordinate axes and the \(\mathbf{k}\)-vector corresponds to the generally accepted one, see, e.g., [31].
In this case, the dependence of the fields on the \(z\)-coordinate vanishes, the problem becomes two-dimensional (2D), the expansions for the Poynting vector components near the singularity take the form \(S_{x}(x,y)\approx s_{x}^{(x)}x+s_{x}^{(y)}y;\ S_{y}(x,y)\approx s_{y}^{(x)}x+s_{y}^{(y)}y\), and the condition \(\operatorname{div}\mathbf{S}=0\) reduces to \(s_{x}^{(x)}+s_{y}^{(y)}=0\). As a result, for the roots of the characteristic equation, instead of (6), (7), we get the following expression:
\[\kappa_{1,2}=\pm\sqrt{\left(s_{x}^{(x)}\right)^{2}+s_{x}^{(y)}s_{y}^{(x)}} \tag{8}\]
Figure 1: Exact solution of Maxwell’s equations. Scattering by a sphere of a plane linearly polarized monochromatic wave in a vacuum. Invariant plane \(xz\). Streamlines of the field of the Poynting vector, the values of the logarithm of the square modulus of the Poynting vector (a), electric (b), and magnetic (c) fields (shown in color). The incident wave polarization plane coincides with the one of the figure. The wave vector \(\mathbf{k}\) of the incident wave is parallel to the \(z\)-axis. Size parameter \(q=0.3\); \(\varepsilon=-2.17\). The solid green line designates the surface of the sphere. The patterns of all fields are symmetric against the plane \(x=0\) (perpendicular to the plane of the figure). Crosses (x) mark the position of singular points of the Poynting vector field. Points 4 and 5 are foci; other singularities are saddles; \(|\mathbf{S}|^{2}=0\) at all singular points. The characteristic scale of the field structure is much smaller than the radiation wavelength. The latter’s size is shown in the upper part of the figure [28]. See text for details.
Figure 2: Mutual orientation of the cylinder, coordinate axes, and vectors \(\mathbf{k}\), \(\mathbf{E}\), \(\mathbf{H}\) of the incident wave. (a) TE polarization, (b) TM polarization [28; 31].
Thus, in the 2D case, the law of conservation of energy leads to a significant reduction in the possible types of singularities, namely: for \(\left(s_{x}^{(x)}\right)^{2}+s_{x}^{(y)}s_{y}^{(x)}>0\) they are saddles, and for \(\left(s_{x}^{(x)}\right)^{2}+s_{x}^{(y)}s_{y}^{(x)}<0\) the singularities are centers. The existence of other singular points is forbidden. The case \(\left(s_{x}^{(x)}\right)^{2}+s_{x}^{(y)}s_{y}^{(x)}=0\) is degenerate. To analyze this case, higher-order terms in the \(S_{x,y}(x,y)\)-expansion must be considered.
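This classification is easy to automate. The following is a minimal sketch (the function name, tolerance, and array layout are our illustrative assumptions, not code from Refs. [28; 29; 30]):

```python
import numpy as np

def classify_2d(J, tol=1e-12):
    """Classify a 2D singular point of a divergence-free Poynting field.

    J = [[s_x^x, s_x^y], [s_y^x, s_y^y]] is the Jacobian at the
    singularity; the non-dissipative constraint s_x^x + s_y^y = 0 must
    hold, so kappa_{1,2} = +-sqrt(disc) as in Eq. (8)."""
    J = np.asarray(J, dtype=float)
    assert abs(J[0, 0] + J[1, 1]) < 1e-9 * (1.0 + abs(J[0, 0])), "trace must vanish"
    disc = J[0, 0] ** 2 + J[0, 1] * J[1, 0]
    if disc > tol:
        return "saddle"      # real kappa of opposite signs
    if disc < -tol:
        return "center"      # purely imaginary kappa
    return "degenerate"      # higher-order terms are needed
```

For instance, `classify_2d([[1.0, 2.0], [0.5, -1.0]])` returns `"saddle"`.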
As an example, Figs. 3 and 4 show the structures of the fields and the streamlines of the Poynting vector for the scattering of TE- and TM-polarized radiation by a cylinder. These examples are in complete agreement with the general considerations above.
## IV Dissipative effects
As has been said above, the medium embedding the scattering particle must be non-dissipative. Thus, only the particle itself can have dissipative properties. Therefore, the singularities discussed in this section must be situated inside such a particle. How does dissipation affect the properties of these singularities?
First, note that the streamlines cannot form a family of closed-loop lines, since dissipative losses now accompany the motion along them. Because of that, all center-type singularities turn into foci (see footnote 2).
Footnote 2: This is entirely true in 2D. In 3D, the field \(\mathbf{S}(\mathbf{r})\), in principle, may have such a pattern that the energy inflow along the directions transverse to this plane exactly compensates the dissipative losses for trajectories in the invariant plane. In this exceptional situation, the existence of closed-loop trajectories becomes possible.
In the quantitative description of the problem under consideration, the only difference between dissipative and nondissipative media is the formulation of the energy conservation law, which in a dissipative medium is of the form \(\mathrm{div}\,\mathbf{S}=-Q<0\). For \(Q\), in the chosen dimensionless variables, the following formula holds: \(Q=\varepsilon^{\prime\prime}q|\mathbf{E}|^{2}\) (remember: \(q=kR\) is the size parameter).
Note that the dissipation is related only to the electric
Figure 4: The same as that in Fig. 3 with TM polarization of incident radiation and \(q=0.3\), \(\varepsilon=16\). In this case, the centers (points 1, 4) correspond to \(E\)-induced singularities [28].
Figure 3: Scattering of a plane TE-polarized monochromatic electromagnetic wave by a right circular cylinder. Streamlines of the Poynting vector, as well as fields \(|\mathbf{S}|^{2}\), \(|\mathbf{E}|^{2}\), \(|\mathbf{H}|^{2}\) on a logarithmic scale; \(q=0.1\); \(\varepsilon=-1\). The incident plane wave propagates in the negative direction of the \(x\) axis; see Fig. 2. The field structures are mirror symmetric against the plane \(y=0\) perpendicular to the plane of the figure. The crosses (x) denote the singularities of the field \(\mathbf{S}\). A green circle marks the cylinder surface. The centers (points 1,2,7,8) correspond to \(H\)-induced singularities. Saddles (points 3–6) are polarization-induced [28].
field since the magnetic permeability at optical frequencies equals unity. This leads to asymmetry between the \(\mathbf{E}\) and \(\mathbf{H}\) fields. In particular, the effect of dissipation on \(H\)-induced singularities is expected to be much larger than that on \(E\)-induced singularities.
Indeed, in the equation \(\operatorname{div}\mathbf{S}(\mathbf{r})=-Q(\mathbf{r})\), it is necessary for the right-hand and left-hand sides to be of the same order of magnitude. In the approximation being considered, \(\operatorname{div}\mathbf{S}\approx\sum_{n}s_{x_{n}}^{(x_{n})}=const\). Consequently, the right-hand side of the equation should be represented as \(Q(\mathbf{r})\approx Q(\mathbf{r}_{s})=const\), where \(\mathbf{r}_{s}\) denotes the coordinates of the singular point. Hereafter, \(Q\) will specifically denote \(Q(\mathbf{r}_{s})\). In \(H\)- and polarization-induced singularities, the electric field does not turn to zero; thus, \(Q\neq 0\). Further analysis for a sphere and cylinder is convenient to conduct separately.
### Sphere
For a sphere, the characteristic equation roots are still given by Eqs. (6), (7). However, now instead of the expression \(\sum_{n=1}^{3}s_{x_{n}}^{(x_{n})}=0\), the condition \(\sum_{n=1}^{3}s_{x_{n}}^{(x_{n})}=-Q\) should be used. This additional condition imposes one constraint on the three independent coefficients \(s_{x_{n}}^{(x_{n})}\). Therefore, the problem still has enough "degrees of freedom," so that this constraint does not change the problem qualitatively. Specifically, in expressions (6), (7), the parameter \(\gamma\) can have both positive and negative signs, indicating that streamlines can either approach or move away from the singularity. Escaping trajectories do not contradict the presence of dissipative losses since the energy inflow occurring along directions transversal to the invariant plane
Figure 5: The Poynting vector field and its streamlines at light scattering by a germanium cylinder in a vacuum. The plane linearly polarized monochromatic incident wave propagates in the negative direction of the \(x\)-axis. A green circle designates the cylinder surface. The wavelength is \(\lambda=\lambda_{1}=1590\) nm, and the complex permittivity \(\varepsilon(\lambda_{1})\approx 17.775+i0.024\). The size parameter \(q=1.62\). The field pattern exhibits symmetry against the plane \(y=0\), which is perpendicular to the plane of the figure. Panels (a)-(c) correspond to TE polarization of the incident radiation, while panels (d) and (e) represent TM polarization. Panel (b) offers a closer view of the singularity region, marked in panel (a) with a small black rectangle. The same applies to panels (d) and (e). Panel (c) provides a zoomed-in view of the region marked in panel (b) with a rectangle. See text for details. Note the significant difference in scales between panels (c) and (e) [29].
compensates for these losses.
It is important to note that the vanishing of \(\gamma\) at \(s_{x}^{(x)}+s_{z}^{(z)}=0\) does not imply that the streamlines are closed-loop, as higher-order terms, not considered in the linear approximation, affect their behavior. A more detailed discussion on this matter is presented below; see Sec. IV.2.
Additionally, note that the integral form of the energy conservation law makes it possible to derive a simple formula relating the various components of the Poynting vector near the singularity. To obtain it, we consider an imaginary right circular cylinder surrounding the singularity. Its bases with a small radius \(r\) are situated symmetrically against the invariant plane at distances \(\pm z\) (\(z\sim r\)), while its axis passes through the given singular point perpendicular to the invariant plane. We introduce a local cylindrical coordinate system whose center
Figure 7: Mutual orientation of the cylinder, coordinate frame, and vectors \(\mathbf{k}\), \(\mathbf{E}\), \(\mathbf{H}\) of the incident electromagnetic wave at an arbitrary orientation of its polarization plane against the cylinder axis [30].
coincides with the singularity, and whose axis is aligned along the axis of the imaginary cylinder. Taking into account that the Poynting vector flux through the entire surface of the cylinder balances the power dissipated within its volume, we readily obtain
\[\langle S_{r}\rangle\approx-\frac{r\big{(}s_{y}^{(y)}+Q\big{)}}{2}, \tag{9}\]
where \(\langle S_{r}\rangle\) is the angular-averaged radial component of the Poynting vector in the local coordinate system. Expression (9) is valid for any type of singular point belonging to the invariant plane.
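A sketch of the flux balance behind (9): with \(S_{y}\approx s_{y}^{(y)}y\), \(Q\approx const\), lateral area \(2\pi r\cdot 2z\), two bases of area \(\pi r^{2}\) at \(y=\pm z\), and volume \(2\pi r^{2}z\),

\[4\pi rz\,\langle S_{r}\rangle+2\pi r^{2}z\,s_{y}^{(y)}=-2\pi r^{2}z\,Q\quad\Longrightarrow\quad\langle S_{r}\rangle\approx-\frac{r\big{(}s_{y}^{(y)}+Q\big{)}}{2}.\]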
### Cylinder
In contrast to a sphere, the impact of dissipation is much more pronounced in the case of scattering by an infinite cylinder. This happens owing to the reduction of the spatial dimension from 3D to 2D. We begin the consideration with the case of pure TE or TM polarization of the incident radiation. First, it should be noted that the expressions for the roots of the characteristic equation are now analogous to the expressions for \(\kappa_{1,2}\) for the sphere; see Eq. (6). In this case, \(\gamma=-Q/2\) [29]. The value of \(\alpha\) can be either purely real or purely imaginary. For \(Q>0\) and a TE-polarized incident wave, where all singular points are \(H\)-induced, only saddles, stable foci, and stable nodes can occur as singularities. These quantitative results agree perfectly with the qualitative arguments mentioned earlier.
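In this dissipative 2D case, the classification can again be automated from the trace condition \(\operatorname{tr}J=-Q\). A minimal sketch (names are our illustrative assumptions; degenerate borderline cases are not handled):

```python
import numpy as np

def classify_dissipative_2d(J):
    """Classify a 2D singular point when div S = -Q < 0 (tr J = -Q).

    Then kappa_{1,2} = -Q/2 +- alpha, so only saddles, stable foci,
    and stable nodes may occur, as stated in the text."""
    eig = np.linalg.eigvals(np.asarray(J, dtype=float))
    # complex eigenvalues of a real 2x2 matrix come as a conjugate pair
    if np.iscomplexobj(eig) and abs(eig[0].imag) > 0:
        return "stable focus"    # complex pair with Re kappa = -Q/2 < 0
    k1, k2 = np.sort(eig.real)
    if k1 < 0.0 < k2:
        return "saddle"
    return "stable node"         # both roots negative since tr J < 0
```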
For TM polarization of the incident radiation, the electric field at a singular point turns to zero (\(Q=0\)). The case looks equivalent to the non-dissipative limit. However, it is not. The critical distinction is that the dissipation is zero only at the singularity itself. In its vicinity, the amplitude of the electric field is small but not equal to zero. It is described by higher-order terms dropped in the discussed linear theory. Accounting for these terms results in closed-loop streamlines (singularity of the center type) transforming into spiral-shaped ones that converge towards the singular point, similar to the TE polarization case. The difference, however, is that the dissipation is extremely weak now. It gives rise to a much smaller pitch for these spirals than that at the same \(\varepsilon\) for TE polarization.
As an example, Figs. 5 and 6 depict the fields and streamlines calculated for a germanium cylinder at the wavelengths \(\lambda_{1}=1590\) nm and \(\lambda_{2}=1494\) nm, respectively. In the calculations, we employ the actual permittivity values for these wavelengths [42]. They are as follows:
Figure 8: The field of the Poynting vector and its streamlines inside a Germanium cylinder. The wave vector of an incident plane linearly polarized wave is antiparallel to the \(x\) axis; see Fig. 7. The axis of the cylinder makes angle \(\alpha=45.403^{\circ}\) with the polarization plane; \(\varepsilon=17.775+0.024i\); \(q=1.62\); (a) 2D projection of the streamlines onto the plane perpendicular to the cylinder axis. The projection has three singularities marked with black dots: two saddles (\(\mathbf{r}_{1,2}\)) belonging to the \(x\)-axis and a stable focus outside it (\(\mathbf{r}_{3}\)). The latter is a false singularity: it is regular in 3D space. This is clearly seen in panel (b), which shows a 3D image of a part of the separatrix whisker emerging in panel (a) from saddle \(\mathbf{r}_{1}\) towards \(\mathbf{r}_{3}\). It is a spiral asymptotically going to infinity, which winds on a straight line parallel to the cylinder axis. The projection of this line onto the \(xy\) plane gives the point \(\mathbf{r}_{3}\)[30]. See text for details. Note the significantly different scales of the \(x\)-, \(y\)-, and \(z\)-axis in panel (b).
\(\varepsilon(\lambda_{1})\approx 17.775+i0.024\), while \(\varepsilon(\lambda_{2})\approx 17.983+i0.483\). In other words, the real parts of \(\varepsilon(\lambda_{1,2})\) are close to each other, while their imaginary parts differ by more than twenty times. Such a choice facilitates an exploration of the dissipation effects at different values of the dissipative constant while keeping other parameters of the problem practically fixed.
The size parameter \(q\) for both values of \(\lambda\) is the same and equal to 1.62, which, for the given values of \(\varepsilon(\lambda_{1,2})\), corresponds to the vicinity of the dipole resonance for both TE and TM polarizations. This fact ensured the similarity of the \(\mathbf{S}(\mathbf{r})\) field patterns for both polarizations. Note also that \(R/\lambda_{1,2}\approx 0.26\), i.e., for the incident radiation, such a cylinder is essentially a sub-wavelength particle. Although the singularities marked in Figs. 5a, 5d, 6a, and 6c look like centers, zooming in shows that they are, in fact, stable foci. In agreement with the above discussion, the pitch of the helical streamlines in the case of TE polarization turns out to be much larger than in the case of TM. In both cases, the pitch increases with the increasing value of the imaginary part of the permittivity; cf. Figs. 5 and 6.
## V Symmetry breaking effects
So far, we have considered highly symmetric solutions. However, in an actual experiment, symmetry is always violated. In this context, the topological stability of the results under symmetry breaking is essential. The key question here is: How universal are the results discussed above, and how do the properties of the singularities change when the symmetry is broken? To elucidate this issue, we have to distinguish between weak symmetry breaking due to a non-ideal shape of the laser beam and/or the scattering particle, fluctuations in permittivity, etc., and substantial violations, for example, when a particle of an arbitrary shape scatters light.
As for substantial symmetry breaking, although the author is unaware of any reliable example of singularities of the Poynting vector field in such problems, there is no reason why such singularities could not occur. If such a singularity does occur, it must be essentially three-dimensional because of the lack of symmetry. In general, the Jacobian \(J\equiv\left(\frac{\partial(S_{x},S_{y},S_{z})}{\partial(x,y,z)}\right)_{s}\) has nine real nonzero entries. The corresponding cubic characteristic equation has three roots \(\kappa_{1,2,3}\), of which either all three are real, or one is real, and two are complex conjugate. In this case, the single condition \(Sp\{J\}=-Q\) imposed on the Jacobian entries does not lead to restrictions on the signs of the roots of the characteristic equation. The situation is the same as that described above in the case of a sphere. If three real roots have the same sign, it is a node (stable for \(\kappa_{1,2,3}<0\) and unstable for the opposite sign). If one of the roots has a sign opposite to the other two, this is a saddle-node. Finally, if two roots are complex, the singularity is a saddle-focus. This is all we can say about a general case singularity at substantial symmetry breaking.
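The coarse 3D classification just described can be summarized in a few lines (a sketch mirroring the text's terminology; names and degenerate-case handling are our assumptions):

```python
import numpy as np

def classify_3d(J):
    """Classify a generic 3D singular point from the full Jacobian
    J = d(S_x, S_y, S_z)/d(x, y, z); degenerate cases with vanishing
    real parts or repeated zero roots are not handled here."""
    eig = np.linalg.eigvals(np.asarray(J, dtype=float))
    if np.iscomplexobj(eig) and np.any(np.abs(eig.imag) > 0):
        return "saddle-focus"    # one real root plus a complex pair
    signs = np.sign(eig.real)
    if np.all(signs < 0):
        return "stable node"
    if np.all(signs > 0):
        return "unstable node"
    return "saddle-node"         # one real root of the opposite sign
```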
The case of weak symmetry breaking is much more interesting. This issue is inspected in paper [30]. Its problem formulation differs from the ones discussed above in that, while the wave vector \(\mathbf{k}\) still remains perpendicular to the cylinder axis, the latter makes an arbitrary angle \(\alpha\) with the polarization plane; see Fig. 7. This case is one of the simplest versions of the problem with several symmetry groups, some of which are violated in a controlled manner while others remain non-broken. Specifically, in the example under consideration, one can control the violation of mirror symmetry against the \(xy\) plane by changing the angle \(\alpha\) while preserving the symmetry against arbitrary translations along the cylinder axis.
Because of the mentioned translational symmetry, the fields \(\mathbf{E}\), \(\mathbf{H}\), and \(\mathbf{S}\) can depend only on \(x\) and \(y\) but not on \(z\). Since all three components of the Poynting vector vanish at a singular point, its coordinates \(x,y\) must satisfy _three_ independent equations: \(S_{x}(x,y)=S_{y}(x,y)=S_{z}(x,y)=0\). However, _two_ variables \(x\) and \(y\) cannot satisfy three independent equations simultaneously. Such a system is overdetermined and has no solutions.
Seemingly, it leads to the conclusion that the problem has no singularities. However, this is not quite the case. The point is that one or more components of the Poynting vector can identically vanish because of the remaining problem symmetry. It reduces the number of equations in the system, which determine the position of the singularity, making them compatible.
A detailed analysis of the symmetry of this problem for a right circular cylinder, taking into account the restrictions imposed by the boundary conditions on its surface, shows that the components of the Poynting vector satisfy the following relations [30]:

\[(S_{x}(x,y),\,S_{y}(x,y),\,S_{z}(x,y))=(S_{x}(x,-y),\,-S_{y}(x,-y),\,-S_{z}(x,-y)). \tag{10}\]
These relations, in particular, show that \(S_{z}\equiv 0\) at \(y=0\). Then, singular points may appear on the \(x\)-axis. Note that such singularities can only be nodes and saddles since the topological structure of centers and foci does not satisfy the symmetry of the \(S_{x}\) and \(S_{y}\) components against the \(y\rightarrow-y\) transformation.
However, now, along with true singularities, "false" singularities appear. These points are singular in the 2D projection of streamlines onto the \(xy\) plane but have a non-zero \(z\)-component of the Poynting vector, so in 3D, they are regular points, the set of which creates a vertical line parallel to the \(z\) axis. Figure 8 shows an example of such a false singularity and two true singular points belonging to the \(x\)-axis.
Note that similar "false" singularities are well known and often encountered, for example, in the theory of paraxial beams. These points are singular only in the projection of streamlines onto a plane perpendicular to
the beam axis. In contrast, the component of the Poynting vector along the beam axis remains finite, i.e., in 3D, such "singularities" are regular points; see, for example, publications [43; 44].
The essential difference between such "singularities" and those discussed in this section is that for the former the beam symmetry imposes the direction in which the Poynting vector component does not vanish. This singled-out direction is the beam axis: the non-vanishing component of the Poynting vector either coincides with the average direction of propagation of the incident radiation or, in some extraordinary situations, it has the opposite direction. In the cases discussed here, this restriction does not exist. In particular, the nonzero component of the Poynting vector at the point \(\mathbf{r}_{3}\) in Fig. 8a is _perpendicular_ to the wave vector of the incident plane wave, oriented antiparallel to the \(x\)-axis. It is clearly seen in Fig. 8b.
Fig. 8b also shows that the depicted three-dimensional streamline has no translational symmetry along the \(z\)-axis. This is easy to understand. Indeed, in the general case, at a regular point, all three components of the vector \(\mathbf{S}\) have different nonzero values. Then, as follows from Eq. (2), for the streamline emerging from this regular point, the dependence on "time" \(t\) is individual for each of the three spatial coordinates. This results in a substantially three-dimensional shape of the given line. On the other hand, since the projection of this line onto the \(xy\) plane corresponds to a converging spiral, see Fig. 8a, such a line cannot be invariant against translations along the \(z\)-axis. From these considerations, it is evident that three-dimensionality and the absence of translational symmetry are typical properties of streamlines in such a problem, except for straight lines parallel to the \(x\)-axis lying in the \(xz\) plane, which, due to the conditions (10), are also exact solutions of Eq. (2).
The conclusion about the absence of translational symmetry for three-dimensional streamlines seems contradictory to the above-noted invariance of the \(\mathbf{S}\) field against arbitrary translations along the \(z\)-axis. Actually, there is no contradiction. The replacement \(z\to z+const\) transforms the streamline into _another_ streamline, the shape of which is identical to the original one.
These results show that the behavior of streamlines in the vicinity of the singularities discussed here is topologically stable against weak symmetry breaking. Although such a symmetry violation leads to the regularization of singular points due to the appearance of a small non-zero component of the Poynting vector transversal to the original invariant plane, the projection of the streamline pattern onto the original invariant plane not only remains qualitatively unchanged (cf. Figs. 5d and 8a), but the quantitative changes also stay small as long as the symmetry breaking is weak. For more detail, see Ref. [30].
The constructive use of the problem symmetry also makes it possible to develop a phenomenological theory explaining the sequence of bifurcations leading to the emergence (annihilation) of both false and true singularities when the varying bifurcation parameter is either the angle \(\alpha\) or any other parameter of the problem (e.g., \(q\) or \(\varepsilon\)). However, the author believes that these results should be of interest only to experts in this specific field. Therefore, they will not be discussed here. Readers interested in these issues may find their discussion in Ref. [30].
## VI Conclusions
In conclusion, we note that since polarization-induced singularities are associated with the formation of standing waves in a small neighborhood of a singular point, this implies the existence of singled-out directions in this region, along which pairs of waves propagate in opposite directions. Further reasoning is essentially different in 2D and 3D cases.
In 2D, there are no such directions for centers and foci. Therefore, these singularities belong to the field-induced type. As for nodes and saddles, there are no restrictions here, and both types of electromagnetic singularities are possible for them. However, the fields \(\mathbf{E}\) and \(\mathbf{H}\) enter the problem symmetrically. The vanishing of one of them at a singularity distinguishes that field relative to the other and breaks the \(\mathbf{E}-\mathbf{H}\) symmetry. Then, one should expect that if the vanishing does not follow from the topological properties of the singularity (as it does in the case of centers and foci), nodes and saddles should be polarization-induced in the general case. Field-induced singularities for nodes and saddles are exceptional cases associated with a certain degeneracy. One way or another, all the saddles and nodes that we have observed so far in various cases are polarization-induced.
As for 3D, the cubic characteristic equation always has at least one purely real root. Then, in the vicinity of such a singularity, there is a singled-out direction of the eigenvector corresponding to this purely real root. Along this direction, on opposite sides of the singularity, the energy flows are aligned opposite each other. For this reason, the topological structure of the singularity does not impose any restrictions on its electrodynamic type. Nevertheless, in the case of scattering by a sphere, all foci belonging to the invariant plane turned out to be of the field-induced type. It may be due to the symmetry violation between \(\mathbf{E}\) and \(\mathbf{H}\), introduced by the coincidence of the invariant plane with the plane of polarization.
Note also the problem of the topological charge calculation for these singularities and the issue of its conservation at bifurcations discussed in Ref. [30]. Although there are no fundamental difficulties in finding answers to these questions, at the moment, they remain open and can be considered as issues for future study.
In conclusion, we again emphasize that the singularities discussed here are not related to any peculiarities of the incident laser beam. They appear spontaneously in the scattering of a plane linearly polarized wave with no singularities and have essentially sub-wavelength characteristic scales.
Thus, the paper reveals the deep connection between the topological structure of the singularities of the Poynting vector field, their electromagnetic type, the law of energy conservation, the symmetry of the problem, and the dimension of space. These results shed new light on the problem of electromagnetic energy circulation in fields of complex configurations and create a basis for their practical implementations in various applications, particularly in tailoring energy flows of a given sub-wavelength structure, which is essential for many nanotechnologies.
The author is grateful to B. Ya. Rubinshtein for the discussion of this article and valuable comments. This work was financially supported by the Russian Science Foundation under project no. 21-12-00151 (analytical research) and by the Ministry of Science and Higher Education of the Russian Federation under project no. 075-15-2022-1150 (computer calculations and computer graphics). The influence of symmetry effects on the properties of singular points of the Poynting vector was studied with the support of the Russian Science Foundation grant No. 23-72-00037.
|
2303.17833 | AI-Oriented Two-Phase Multi-Factor Authentication in SAGINs: Prospects
and Challenges | Space-air-ground integrated networks (SAGINs), which have emerged as an
expansion of terrestrial networks, provide flexible access, ubiquitous
coverage, high-capacity backhaul, and emergency/disaster recovery for mobile
users (MUs). While the massive benefits brought by SAGIN may improve the
quality of service, unauthorized access to SAGIN entities is potentially
dangerous. At present, conventional crypto-based authentication is facing
challenges, such as the inability to provide continuous and transparent
protection for MUs. In this article, we propose an AI-oriented two-phase
multi-factor authentication scheme (ATMAS) by introducing intelligence to
authentication. The satellite and network control center collaborate on
continuous authentication, while unique spatial-temporal features, including
service features and geographic features, are utilized to enhance the system
security. Our further security analysis and performance evaluations show that
ATMAS has proper security characteristics which can meet various security
requirements. Moreover, we shed light on lightweight and efficient
authentication mechanism design through a proper combination of
spatial-temporal factors. | Bin Yang, Shanyun Liu, Tao Xu, Chuyu Li, Yongdong Zhu, Zipeng Li, Zhifeng Zhao | 2023-03-31T06:56:40Z | http://arxiv.org/abs/2303.17833v1 | # AI-Oriented Two-Phase Multi-Factor Authentication in SAGINs: Prospects and Challenges
###### Abstract
Space-air-ground integrated networks (SAGINs), which have emerged as an expansion of terrestrial networks, provide flexible access, ubiquitous coverage, high-capacity backhaul, and emergency/disaster recovery for mobile users (MUs). While the massive benefits brought by SAGIN may improve the quality of service, unauthorized access to SAGIN entities is potentially dangerous. At present, conventional crypto-based authentication is facing challenges, such as the inability to provide continuous and transparent protection for MUs. In this article, we propose an AI-oriented two-phase multi-factor authentication scheme (ATMAS) by introducing intelligence to authentication. The satellite and network control center collaborate on continuous authentication, while unique spatial-temporal features, including service features and geographic features, are utilized to enhance the system security. Our further security analysis and performance evaluations show that ATMAS has proper security characteristics which can meet various security requirements. Moreover, we shed light on lightweight and efficient authentication mechanism design through a proper combination of spatial-temporal factors.
With the prosperity of the fifth-generation (5G) mobile communication networks, the number of 5G subscribers has already reached an enormous scale, leading to a significant shift in people's daily lives. Nowadays, academia and industry are turning their attention to emerging next-generation systems. It is envisioned that the sixth-generation (6G) communications will support five application scenarios: Enhanced Mobile Broadband Plus, Big Communications, Secure Ultra-Reliable Low-Latency Communications, Three-Dimensional Integrated Communications, and Unconventional Data Communications [1]. Space-air-ground integrated networks (SAGINs) leverage the advantages of satellite networks (including geosynchronous Earth orbit (GEO) satellites, medium Earth orbit (MEO) satellites, low Earth orbit (LEO) satellites, and their mutual links), high altitude platforms, and terrestrial communication systems. SAGINs are therefore considered as a promising architecture to support ubiquitous, seamless, reliable and high-data-rate services anytime and anywhere in 6G communication networks. Remarkably, practical experiments and ambitious projects have been initiated to offer ubiquitous Internet services through SAGINs, such as Google Loon, Thales Stratobus, OneWeb, and SpaceX.
SAGIN is a highly heterogeneous and multidimensional network compared to conventional terrestrial or satellite networks. It has wide coverage, broadcast channels, and various network entities, including networking infrastructure and connected Internet of Things (IoT) devices. However, its wide coverage brings trust and safety issues. Specifically, the wireless signal in SAGIN propagates mainly in free space. In other words, not only can authorized users receive the information, but malicious users can also capture the wireless signal and retrieve secure information from its power leakage. Besides, intrinsic trust and data reliability issues may arise in SAGIN during multi-hop transmissions among distrustful entities within each segment or across segments. Due to the lack of unified security precautions, interconnected intelligent devices can be vulnerable to various cyber attacks, and traceable data provenance is difficult [2]. Therefore, merits including security, privacy, and intelligence have become the crux of determining whether SAGIN can continue to evolve healthily.
As the first security perimeter, authentication is a pivotal method to identify the legitimacy of IoT devices that access the network, thereby enabling real-time monitoring and promoting collaborative sharing [3, 4]. Authentication can be classified into one-shot authentication and continuous authentication based on the duration of the authentication process. One-shot authentication is an elementary authentication mechanism that identifies IoT devices using crypto-based techniques such as passcodes, PINs, and fingerprints. However, it only protects the security of SAGIN during the initial access process and cannot guarantee security during the operational phase. With the proliferation of intelligent IoT devices, especially wearable devices like fitness bands and augmented reality (AR) glasses, personal behavioral and physiological biometrics can be easily collected by built-in sensors, which promotes continuous authentication from concept to implementation. As an enhancement and supplement to one-shot authentication, continuous authentication defends against attacks in the background without user intervention by constantly validating access devices according to the historical features of users. To cope with the new features of SAGINs, it becomes increasingly essential to combine one-shot authentication for quick access and continuous authentication for sustainable security together, supporting convenient access and security robustness of various kinds of communication entities [5].
To adequately utilize the collected data from IoT devices, adopting multi-factor authentication (MFA) is promising. This means that multiple heterogeneous validation methods can be combined intelligently to grant or deny access reliably [6]. In MFA, three types of factor groups, i.e., knowledge factor, ownership factor and biometric factor, are available to connect an individual with the established credentials. Knowledge factor, ownership factor and biometric factor refer to factors like passwords, tokens and behavioral patterns, respectively. Specifically, spatial-temporal features, such as geographical position, Doppler shift and traffic volume, are likely to be utilized in the continuous authentication phase due to the critical roles of SAGINs in providing varieties of vertical applications by connecting enormous heterogeneous devices, machines, and industrial processes. Through MFA by artificial intelligence (AI) techniques, including supervised learning, unsupervised learning, and reinforcement learning, trusted communications and services can be accomplished to adapt to dynamic environments.
Driven by the limitations of conventional authentication mechanisms and the induced demands in an integrated network, this article presents a novel authentication framework for SAGINs, where an AI-oriented two-phase multi-factor authentication scheme (ATMAS) is proposed. In ATMAS, a machine-learning-based continuous authentication, referred to as 'Phase II', is performed to intelligently grant or deny access reliably by capturing user profiles and traits, following a conventional cryptographic authentication named 'Phase I'. The contributions of this article are summarized as follows:
* We propose the ATMAS based on the characteristics of SAGINs. For the ATMAS, four communication entities, i.e., mobile user (MU), base station (BS), satellite, and network control center (NCC) are considered, in which satellite and NCC cooperate with each other to implement authentication.
* Through the design of ATMAS, we leverage the unique spatial-temporal features of SAGIN in Phase II, including service and geographic features. This allows us to ensure the security of the SAGIN from login to logout.
* To evaluate the robustness and validate the security of the proposed scheme, we conduct security analysis by analyzing the security characteristics. Moreover, from the perspective of performance evaluation, we shed light on lightweight and efficient authentication mechanism design by a proper combination of spatial-temporal factors.
The remainder of this article is organized as follows. Challenges of authentication for the SAGINs are first introduced. Then we present the advantages of AI-enhanced multi-factor continuous authentication. The design of the proposed ATMAS and its security analysis are described in the following Sections. In addition, simulation results are presented to evaluate the performance of authentication accuracy. Finally, concluding remarks and future research directions are provided.
## Challenges of Authentication for the SAGIN
Comprised of inter-satellite and satellite-ground links, the SAGIN is a highly open wireless system. As satellites are exposed to the open environment, it is easy to mount attacks such as link hijacking and entity counterfeiting, which seriously affect the secure communication of the SAGIN. Reliable access control plays a vital role in distinguishing the source nodes and addressing identity-based attacks such as spoofing and Sybil attacks. In general, access control can be decomposed into user authentication and authorization. Authentication is a process that verifies that someone or something is who they say they are, and authorization is the security process that determines a user or service's level of access. The latter can be accomplished by methods like enforcing allow or deny rules based on the user's authorization level, e.g., general user, super user and administrator, which is relatively less complicated in the implementation stage. However, authentication brings about a number of SAGIN-specific research questions that should be carefully considered in this article.
**Long Propagation and Processing Latencies.** In SAGIN, the introduction of high-altitude satellites leads to unidirectional propagation delays of around 15 ms, even for the links between LEO satellites and IoT devices. This results in round-trip delays of approximately 50 ms for IoT devices, which significantly reduces quality of service (QoS) in latency-sensitive scenarios. Additionally, due to high mobility, the propagation delay is not permanent and varies with the satellite's position. Apart from propagation delay, conventional cryptographic authentication requires increased communication and computation overhead to cope with the increasing requirements of security, leading to long processing latencies [7]. Such prolonged latencies are intolerable for scenarios like disaster rescue and military missions. Therefore, lightweight and selectable authentication mechanisms are urgently needed for these latency-sensitive applications.
**Insufficiency of Authentication Uniformity.** As mentioned before, the SAGIN is a highly heterogeneous network supported by different communication protocols. Each network entity in the SAGIN encompasses a tremendous number of IoT devices with various interfaces for control and management, which requires a unified and pragmatic authentication mechanism when IoT devices access the SAGIN from different locations. While a unified authentication framework has been proposed for both the Third Generation Partnership Project (3GPP) and non-3GPP access in [8, 9], sensitive data-related authentication must be enforced to be implemented in the core network, which mitigates efficiency. As a result, a new unified authentication scheme is necessary to adapt to the heterogeneity of the SAGIN.
**Inability to Achieve Real-time and Transparent Authentication.** The traditional crypto-based authentication verifies the legitimacy of IoT devices only at the beginning of login based on a password, a personal identification number (PIN), or a secret pattern. However, these methods are susceptible to guessing, video capture, and spoofing as users often choose a simple password for convenience. Moreover, traditional one-shot authentication mechanisms cannot verify whether the logged-on user is the initially authenticated one due to a lack of physiological and behavioral biometrics. To overcome these shortcomings, users have to identify themselves by re-entering the password periodically, which declines the experience and satisfaction of services. In short, traditional authentication mechanisms lead to either security vulnerabilities or inconvenience for the SAGIN. To achieve privacy protection in the background, new mechanisms should be designed for real-time and transparent security provisioning.
To put it succinctly, it is imperative to enhance the current authentication mechanism for the SAGIN-specific system.
## AI-Enhanced Multi-Factor Continuous Authentication
In the era of IoT, the SAGIN system has become more complex than traditional information and communication technology platforms. To ensure the SAGIN security from login to logout during IoT device activity, a continuous authentication mechanism is indispensable to fulfill security requirements without user intervention. Typically, continuous authentication validates legitimate users based on behavioral and physiological biometrics using built-in sensors on IoT devices. In recent years, due to advancements in storage and computational resources, and the increasing diversity of sensors, continuous authentication has become more effective and accurate by collecting, storing, and analyzing massive amounts of data [10].
The current authentication mechanisms can be divided into two categories: single-factor authentication and MFA, based on the features or authentication factors applied. Single-factor authentication is a standard, low-security method of authentication that requires matching only one factor, such as a password, to a username to get access to a system. However, single-factor authentication provides a limited level of security because it is susceptible to guessing, video capture, and spoofing. MFA is an enhanced authentication mechanism that jointly conducts authentication utilizing multiple factors. Factors in authentication can be classified into five categories: **i**. something a user knows, **ii**. something a user possesses, **iii**. something a user is, **iv**. something a user does and **v**. somewhere a user is [11]. With proper factor selection and combination strategies, an efficient and high-security authentication mechanism can be achieved.
Figure 1: Network Architecture of the SAGIN. The user/device segment shows the trajectory of a MU and the changes of its location, traffic and biometric. The ground/air/space access segment illustrates all kinds of access entities, including BSs, high-altitude platforms and satellites.
By extracting the unique features of users from the abundant collected data, AI algorithms including machine learning and deep learning are leveraged to enhance the security of the SAGIN system. As illustrated in Fig. 2, the pipeline of AI-based authentication mechanisms mainly consists of data acquisition, data preprocessing, feature extraction, classification and decision. Data acquisition involves sensors that sample real-world parameters and convert them to electrical signals, which are then converted to digital values. Data preprocessing is a critical procedure that distills high-quality data from raw data, which is generally incomplete, noisy, inconsistent, and redundant. To reduce noise and align output data, methods like clipping, Z-score, and scaling are necessary [12]. Feature extraction means extracting user features that represent one's identity or behavior from the collected dataset. For classification, the goal is to learn a mapping function that predicts label information for a given behavior sequence with minimal biases. Finally, through evaluations of a verification system, the ultimate decision can be made to grant access to legitimate users and deny access to impostors. In this article, we envision ATMAS to address challenges for the SAGIN. The advantages of ATMAS are summarized as follows.
**Continuous Security.** The proposed ATMAS not only authenticates at the initial stage of login but also provides seamless protection to legitimate devices in the background by adopting AI-enhanced algorithms.
**High Flexibility.** We utilize a conventional cryptographic authentication named 'Phase I' to quickly grant or deny service requester access, and an AI-aided continuous authentication named 'Phase II' to intelligently verify legitimacy during service operation. The SAGIN system operator is flexible in choosing whether to use only 'Phase I' or to select a suitable number of authentication factors based on security requirements.
**High Robustness.** By utilizing multiple factors in the procedure of continuous authentication, it becomes difficult for adversaries to imitate or crack all selected features in a single round of communication based on received signals and observations.
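Returning to the AI pipeline of Fig. 2, the following is a minimal sketch of the preprocessing stage described there, assuming raw sensor readings arrive as a NumPy array; the clipping threshold and feature count are illustrative choices rather than values taken from this article.

```python
import numpy as np

def preprocess(raw, clip_sigma=3.0):
    """Clip outliers, then Z-score normalize each feature column so that
    readings from heterogeneous sensors become comparable."""
    mu, sd = raw.mean(axis=0), raw.std(axis=0) + 1e-12
    clipped = np.clip(raw, mu - clip_sigma * sd, mu + clip_sigma * sd)
    return (clipped - clipped.mean(axis=0)) / (clipped.std(axis=0) + 1e-12)

# Toy run: 200 samples of 3 behavioral features, one simulated sensor glitch.
rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 3))
raw[0, 0] = 50.0
features = preprocess(raw)
print(features.mean(axis=0).round(3), features.std(axis=0).round(3))
```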
### Design of the Proposed ATMAS in SAGINs
Figure 2: Authentication Phase of the ATMAS. In Phase I, a crypto-based authentication is performed. In Phase II, an AI-based continuous authentication including enrollment phase and authentication phase is proposed.
Before introducing the detailed design of the proposed ATMAS, we would like to provide an overview of the architecture of the SAGIN, as illustrated in Fig. 1 and Fig. 2. The SAGIN system consists of three segments: the user/device segment, the ground/air/space access segment and the authentication segment. The user/device segment is an assemblage of various end mobile users and IoT devices, and it is assumed that users and devices have the ability to sense their locations, traffic and biometrics. The ground/air/space access segment consists of all kinds of access entities, such as terrestrial BSs, high-altitude platforms and satellites. The authentication segment is responsible for one-shot authentication and continuous authentication. In this article, the focus is on MEO/LEO satellites, which have relatively shorter transmission delays compared to the GEO satellites. This allows satellites to provide large-scale coverage, and beams of BSs or other relays ensure high-precision user targeting [13]. The authentication procedure of the proposed ATMAS is shown in Fig. 3, in which four network entities are involved: MU, BS, satellite, and NCC. In the following subsections, the authentication procedure will be discussed in detail.
### Phase I: One-shot Authentication
We divide the authentication of Phase I into three procedures: initialization and enrollment, registration, and one-shot authentication.
**Initialization and Enrollment:** The authentication in Phase I is cryptography-based. NCC uses ElGamal encryption technology [14] to generate a private key and a corresponding public key. The private key is stored by the NCC locally, and the public key and its related key parameters are published to other entities in SAGINs. Based on the key parameters, other entities can generate their own private keys and public keys, respectively.
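As a rough illustration of this step, the snippet below performs textbook ElGamal key generation. The 64-bit prime and the generator are toy assumptions made only for readability; a real deployment would use a standardized group of at least 2048 bits.

```python
import secrets

p = 2**64 - 59   # a 64-bit prime (toy size, for illustration only)
g = 5            # assumed generator of a large subgroup of Z_p*

def elgamal_keygen():
    x = secrets.randbelow(p - 2) + 1   # private key, uniform in [1, p-2]
    y = pow(g, x, p)                   # public key y = g^x mod p
    return x, y

ncc_priv, ncc_pub = elgamal_keygen()   # NCC keeps ncc_priv, publishes (p, g, ncc_pub)
sat_priv, sat_pub = elgamal_keygen()   # each entity derives its own pair from (p, g)
print(hex(ncc_pub), hex(sat_pub))
```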
**Registration:** Before participating in the authentication procedure, entities including MUs, BSs and satellites must register with the NCC. The BS and the satellite have a similar registration procedure and we take satellite registration as an example. After receiving a registration request from a satellite, the NCC generates both a unique identity and a temporary identity for the satellite. Authentication parameters, such as satellite identities and shared public keys, are stored locally on the satellite. Next, the satellite generates a timestamp and sends it, along with authentication parameters, to the NCC. Upon receiving this information, the NCC first verifies whether the transmission delay (i.e., the difference between the current time and the sent timestamp) is less than the predefined delay threshold. If the transmission delay does not meet the requirement, the NCC rejects the registration request. Otherwise, the NCC checks whether the authentication parameters sent by the satellite coincide with parameters stored in the NCC's local database. If the parameters do not match, the satellite registration request is rejected. Otherwise, another timestamp is generated and sent together with the authentication parameters back to the satellite. When the satellite receives the message from the NCC, it first verifies whether the transmission delay from the NCC to the satellite meets the requirements. If so, the satellite also verifies the authentication parameters received from the NCC. The satellite registration procedure is considered validated if all checks pass.
For MU registration, we assume that MUs and IoT devices have the ability to collect raw biometric data. Firstly, the user selects an identity and a high-entropy password, and utilizes the probabilistic key-generation algorithm of a fuzzy extractor to extract a key from the biometric information. The MU uses a hash function to generate authentication parameters from the combination of identity, password and biometric features. The authentication parameters are transmitted to the NCC. After receiving the information, the NCC verifies whether the authentication parameters are the same as those stored locally. If so, the authentication parameters will be calculated by a hash function and sent to the MU. The MU registration procedure is accomplished if the authentication parameters are validated.
**One-Shot Authentication:** In this phase, when the MU requests to access the SAGIN, the MU must provide the identity information, a password, and biometric information through a handheld device. Then, the device generates a timestamp and obtains the MU's current location. A hash function is utilized to calculate the key based on the combination of the above information. The key is then sent to the satellite through the BS. After receiving the key, the satellite first verifies whether the transmission delay is within the threshold. If not, the MU access request is denied. Otherwise, the satellite sequentially verifies the identity, location, password, and biometric information. If the validation succeeds, authentication parameters and a timestamp are generated and transmitted to the MU through the BS. Similarly, the MU checks the timestamp and authentication parameters. If matched, the MU is marked as a legal user and allowed to access the SAGIN.
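The hash-and-timestamp logic above can be sketched as follows; the field layout, the 0.2 s delay threshold, and the helper names are hypothetical, chosen only to mirror the verification order described in the text.

```python
import hashlib, time

DELAY_THRESHOLD = 0.2   # seconds; illustrative, not a value from this article

def make_request(identity, password, bio_key, location):
    ts = time.time()
    digest = hashlib.sha256(
        f"{identity}|{password}|{bio_key}|{location}|{ts}".encode()).hexdigest()
    return {"id": identity, "loc": location, "ts": ts, "digest": digest}

def verify(request, stored_password, stored_bio_key):
    if time.time() - request["ts"] > DELAY_THRESHOLD:
        return False   # stale timestamp: reject (possible replay)
    expected = hashlib.sha256(
        f"{request['id']}|{stored_password}|{stored_bio_key}|"
        f"{request['loc']}|{request['ts']}".encode()).hexdigest()
    return expected == request["digest"]

req = make_request("MU-42", "s3cret", "f3a1c0de", (30.27, 120.16))
print(verify(req, "s3cret", "f3a1c0de"))   # True when fresh and parameters match
```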
### Phase II: Continuous Authentication
After the user gets access to the SAGIN, the NCC initiates a continuous authentication request and a biometric/behavioral data acquisition request to the MU. The collected data is then sent to the NCC through the BS. In the MU registration phase, the NCC trains an AI-based model and extracts features of the MU. In the continuous authentication phase, the NCC compares the real-time extracted features with the stored features from the registration phase. Based on the security level of the MU, the NCC determines the legality of the user, and the transmission terminates if it is an illegal user.
Moreover, due to the latency requirement of the authentication, it is arduous to use high-complexity algorithms. In Phase II, a binary classification is introduced where both spoofing actions and spoofers are seen as spoofing attacks. For the sake of simplicity, we focus on this scenario as a quintessential example to validate the proposed authentication scheme.
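A minimal sketch of such a binary spoofing classifier is given below, using a random forest on synthetic spatial-temporal features; the feature distributions and the 0.5 decision level are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Synthetic features (e.g., traffic volume, position offset, ...): legitimate
# sessions cluster near the enrolled profile, spoofing drifts away from it.
legit = rng.normal(0.0, 1.0, size=(500, 4))
spoof = rng.normal(2.0, 1.5, size=(500, 4))
X = np.vstack([legit, spoof])
y = np.hstack([np.zeros(500), np.ones(500)])   # 1 = spoofing attack

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Continuous phase: score each incoming feature window in the background and
# terminate the transmission once the spoofing probability crosses the level.
window = rng.normal(0.0, 1.0, size=(1, 4))
p_spoof = clf.predict_proba(window)[0, 1]
print("terminate" if p_spoof > 0.5 else "keep session", p_spoof)
```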
### Security Analysis
This section evaluates the security of the proposed authentication scheme and demonstrates its ability to prevent security threats. In our proposed ATMAS, we have implemented five security characteristics: mutual authentication, forward security, resistance to replay attack, resistance to man-in-the-middle attack, and data confidentiality and integrity. In addition, to demonstrate the effectiveness of our proposed authentication scheme, we compare it with existing counterparts in terms of security requirements and authentication features. By doing so, we can show how our scheme offers improved security and more robust authentication capabilities in comparison to existing solutions.
Figure 3: Authentication Procedure of the ATMAS. The ATMAS consists of four procedures, i.e., initialization and enrollment, registration, one-shot authentication, and continuous authentication.
* **Mutual Authentication:** Mutual authentication implies that two participants can authenticate each other. In the key agreement phase described in one-shot authentication, the satellite can verify the legitimacy of the BS by checking whether the hash result is consistent with the credential information saved locally. Adversaries cannot get knowledge of the private key from public values due to the properties of the hash function. Thus, only the legitimate BS who owns the private key can be authenticated by the satellite. Similarly, the BS authenticates the satellite based on the authentication messages from the satellite, which are encrypted by the satellite's private key. The secure mutual authentication between a MU and a BS can be guaranteed using a similar analysis. Therefore, our scheme could provide secure mutual authentication.
* **Forward Security:** Forward secrecy can ensure that session keys will not be compromised even if the long-term secrets used in the session key exchange are compromised. In our scheme, if an adversary learns the current token and wants to derive the token used in a previous session, they would need to know the previously generated random numbers. However, the previously generated random numbers cannot be obtained from the previously eavesdropped messages as the hash is a one-way function. Therefore, our proposed scheme has the property of forward secrecy.
* **Resistance to Replay Attack:** Replay attack mainly refers to when a malicious adversary uses its message regeneration ability to generate and replay a message, thereby compromising protocol security. However, this attack can be prevented by successfully authenticating the message through checking the validity of its timestamp value. Additionally, because the hash function used in initial authentication is unidirectional, an adversary is unable to fake the message by modifying the timestamp value. Therefore, our proposed scheme is capable of resisting replay attack.
* **Resistance to Man-in-the-Middle Attack:** A man-in-the-middle attack means that an adversary intercepts and selectively modifies communicated data to masquerade as one entity involved in a communication session. In our scheme, it is impossible for the adversary in the middle to register with the role that already exists in the SAGIN. Besides, the secret keys of any entities in the system cannot be obtained. As a result, the adversary cannot modify or manipulate transmitting messages to invade the existing connection. Therefore, the scheme would not be exposed to man-in-the-middle attacks.
* **Data Confidentiality and Integrity:** Data confidentiality and integrity imply that a message receiver can ensure the message has not been tampered with during transmission. In our proposed scheme, the private information is encrypted with the public key of the target entity. An adversary cannot decrypt the information without the corresponding private key. Therefore, our proposed scheme achieves the data confidentiality and integrity properties.
Moreover, we also compare our proposed ATMAS with its counterparts against security requirements and authentication features in Table 1. As the table shows, the works in [15]-[17] adopt crypto-based authentication schemes, among which the schemes in [15] and [16] have no authentication at the user device side. The work in [17] and our scheme use a two-phase authentication method, but the second phase in [17] is still crypto-based, which is less robust than ours.
### Case Study and Performance Evaluations
In this section, we first provide the factors/features used in our analysis. Then, a comparison of the performance of classic AI-based algorithms using selected factors is presented. Lastly, we discuss the effect of factor selection strategy on authentication accuracy.
In our case study, we consider three kinds of services: conversational, streaming, and interactive. We assume that the BSs in the SAGIN are fixed and uniformly distributed, and the coverage range of a BS is 20 \(km\). The satellites in the SAGIN orbit at a height of 20000 \(km\) and have a maximum beam scanning range of \(\pm 11.64^{\circ}\). Table 2 provides the factors used in our proposed ATMAS, including identities, trajectory, and communication attributes.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Li et al. [15] & Deng et al. [16] & Badhib et al. [17] & Our Scheme \\ \hline \hline Authentication Phase & One & One & Two & Two \\ \hline Method & Crypto & Crypto & Crypto & Crypto, AI \\ \hline Dynamic & \(\times\) & \(\times\) & \(\surd\) & \(\surd\) \\ \hline Mutual Authentication & \(\times\) & \(\times\) & \(\surd\) & \(\surd\) \\ \hline Forward Security & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ \hline Resistance to Replay Attack & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ \hline Resistance to Man-in-the-Middle Attack & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ \hline Data Confidentiality and Integrity & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ \hline \end{tabular}
\end{table}
Table 1: **Comparisons of Security Requirements and Authentication Features.**
We compare the authentication accuracy of different AI-based algorithms with respect to the percentage of illegal access. The authentication accuracy is defined as \(ACC=\left(TP+TN\right)/\left(TP+FP+TN+FN\right)\), where \(TP\), \(FP\), \(TN\), and \(FN\) denote true positives, false positives, true negatives, and false negatives, respectively. As shown in Fig. 4, with an increase in the percentage of illegal access, the authentication accuracy decreases slightly, by less than 8%. Besides, based on the random forest algorithm, the authentication accuracy can achieve over 92%. By using the random forest algorithm, the proposed authentication scheme is robust even if the percentage of illegal access is high.
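For reference, the accuracy metric is a one-liner over confusion counts; the counts below are illustrative.

```python
def authentication_accuracy(tp, fp, tn, fn):
    """ACC = (TP + TN) / (TP + FP + TN + FN), as defined above."""
    return (tp + tn) / (tp + fp + tn + fn)

print(authentication_accuracy(tp=460, fp=35, tn=470, fn=35))  # 0.93
```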
Fig. 5 illustrates the authentication accuracy using different numbers of factors. F[1] means that we only choose Factor 1 and F[1-9] means that all factors are selected. The results show that as the number of selected factors increases, the authentication accuracy also increases. However, the increase rate decreases, such that the authentication accuracy using 8 factors is almost the same as that using 9 factors. This indicates that not every factor has the same contribution to the increase in authentication accuracy. Therefore, choosing the proper combination of factors makes lightweight and effective authentication possible. In Fig. 6, we present the authentication accuracy of different combinations of 3 factors. The results show that the combination with Factor 1, Factor 2 and Factor 4 has the highest authentication accuracy. Besides, combinations with Factor 1 can achieve an average accuracy of more than 95%, which indicates that in our simulation scenario, Factor 1 contributes the most to the ATMAS.
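The factor-selection study can be reproduced in spirit with an exhaustive search over 3-factor subsets; the synthetic data below (with columns 0, 1 and 3 made informative) is an assumption standing in for the real SAGIN traces.

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, n_factors = 600, 9
X = rng.normal(size=(n, n_factors))
y = (X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 3]
     + 0.3 * rng.normal(size=n) > 0).astype(int)

def score(cols):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, list(cols)], y, cv=3).mean()

best = max(combinations(range(n_factors), 3), key=score)
print("best 3-factor combination (0-indexed columns):", best, round(score(best), 3))
```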
\begin{table}
\begin{tabular}{|c|c|} \hline Items & Factors \\ \hline Factor 1 & Traffic Volume \\ \hline Factor 2 & Service Type \\ \hline Factor 3 & Uplink Rate \\ \hline Factor 4 & Sinuosity of the MU \\ \hline Factor 5 & Index of the BS \\ \hline Factor 6 & Distance between the BS and the MU \\ \hline Factor 7 & Position of the MU \\ \hline Factor 8 & Heading azimuth of the MU \\ \hline Factor 9 & Elevation between the BS and the Satellite \\ \hline \end{tabular}
\end{table}
Table 2: **Descriptions of Factors in the ATMAS**.
Figure 4: A Comparison of Authentication Accuracy with Classic AI-based Algorithms.
Figure 5: Authentication Accuracy of the SAGIN Using a Different Number of Factors.
Figure 6: Authentication Accuracy of the SAGIN Using Randomly Selected Factors.
## Conclusion
In this article, an ATMAS is proposed in which spatial-temporal features are taken into account, including traffic volume and geographic location. An AI-based continuous authentication, referred to as 'Phase II', is performed to intelligently grant or deny access reliably by capturing user profiles and traits, following a conventional cryptographic authentication named 'Phase I'. Security analysis is conducted to evaluate the robustness and validate the security of the proposed scheme. Performance evaluation shows that the mean authentication accuracy achieves over \(92\%\) by ATMAS. Additionally, we analyze the importance of the factors used in our case study, shedding light on lightweight and efficient authentication mechanism design. However, there are still several research opportunities for future study.
**Authentication Centers Deployment.** In our proposed ATMAS, authentication is completed by the NCC on the ground. However, this increases transmission delays since all authentication information must be transmitted to the NCC. To address this, choosing MEO satellites as partial authentication centers is a promising solution. However, communication and computation resources on the satellite are limited, which requires optimization of the allocation of authentication tasks between MEO satellites and the NCC.
**Efficient AI-based Authentication.** AI-based authentication algorithms have intensive computation and communication costs and require a large amount of training data as well as a complicated feature-extraction process [18]. Besides, algorithms with low computation and communication overhead should be investigated to enable the deployment of authentication centers on satellites.
**Blockchain-based Intelligent Authentication.** Blockchain is a distributed ledger technology with characteristics of decentralization, security, interoperation, and trust establishment [19]. As the SAGIN becomes increasingly complex, the system may suffer from false authentications leading to potential privacy leakage and security risks. Blockchain-based techniques can be utilized to track past security breaches and provide the necessary log analysis.
## Acknowledgments
This work was supported in part by the National Key Research and Development Program of China under Grant 2020YFB1804800 and Grant 2021YFB2900200, in part by the Key Research and Development Program of Zhejiang Province under Grant 2021C01197, in part by the National Natural Science Foundation of China under Grant 62101509, in part by the Natural Science Foundation of Zhejiang Province under Grant LQ22F010018. The corresponding authors of this article are Yongdong Zhu and Zipeng Li.
|
2305.19548 | Semi-device-independently characterizing quantum temporal correlations | We develop a framework for characterizing quantum temporal correlations in a
general temporal scenario, in which an initial quantum state is measured, sent
through a quantum channel, and finally measured again. This framework does not
make any assumptions on the system nor on the measurements, namely, it is
device-independent. It is versatile enough, however, to allow for the addition
of further constraints in a semi-device-independent setting. Our framework
serves as a natural tool for quantum certification in a temporal scenario when
the quantum devices involved are uncharacterized or partially characterized. It
can hence also be used for characterizing quantum temporal correlations when
one assumes an additional constraint of no-signalling in time, there are upper
bounds on the involved systems' dimensions, rank constraints -- for which we
prove genuine quantum separations over local hidden variable models -- or
further linear constraints. We present a number of applications, including
bounding the maximal violation of temporal Bell inequalities, quantifying
temporal steerability, bounding the maximum successful probability in quantum
randomness access codes. | Shin-Liang Chen, Jens Eisert | 2023-05-31T04:29:21Z | http://arxiv.org/abs/2305.19548v3 | # (Semi-)device independently characterizing quantum temporal correlations
###### Abstract
We develop a framework for characterizing quantum temporal correlations in a general temporal scenario, in which an initial quantum state is measured, sent through a quantum channel, and finally measured again. This framework does not make any assumptions on the system nor on the measurements, namely, it is device-independent. It is versatile enough, however, to allow for the addition of further constraints in a semi-device-independent setting. Our framework serves as a natural tool for quantum certification in a temporal scenario when the quantum devices involved are uncharacterized or partially characterized. It can hence also be used for characterizing quantum temporal correlations when one assumes an additional constraint of no-signalling in time, there are upper bounds on the involved systems' dimensions, rank constraints - for which we prove genuine quantum separations over local hidden variable models - or further linear constraints. We present a number of applications, including bounding the maximal violation of temporal Bell inequalities, quantifying temporal steerability, bounding the maximum successful probability in a scenario of quantum randomness access codes.
Quantum mechanics features correlations between spatially separated systems that are stronger than attainable in physical systems following classical laws. Bell's theorem [1] limits correlations that classical local-hidden-variable models can exhibit. This feature of quantum mechanics, also referred to as _non-locality_[2], is not only the defining feature that sets apart quantum from classical mechanics, it can also be exploited in technological-minded applications. Notably, it can be used in new modes of quantum certification that do not require any (possibly unwarranted) assumptions on the underlying states nor on the measurements involved. In such _device-independent_ (DI) quantum certification [2; 3; 4], interestingly, data alone can be seen as being sufficient to certify properties. Along this line of thought, randomness certification [5], entanglement verification [6; 7] and estimation [8], quantum state certification [9], steerability witnessing [10; 11], and measurement incompatibility certification [12] have all been obtained through the observed non-local correlations only and no assumption has to be made on the shared quantum state nor the measurement involved. The _Navascues-Pironio-Acin_ hierarchy [8; 13; 14; 15] - building on earlier work [16; 17] - has been a key tool in these efforts. The framework of device independence is compelling, in that one learns about properties of quantum systems without having to make assumptions about the devices with which these properties are being assessed.
That said, the original Bell scenario referring to spatial correlations is by no means the only setting that certifies quantum features beyond what classical local-hidden-variable models can deliver. It has been extended to include temporal correlations, making reference to non-macro-realistic temporal correlations of single systems between two instances in time [18; 19]. Leggett and Garg [20] have shown that, in quantum theory, there exists temporal correlations that are not macro-realistic, i.e., they do not admit the joint assumption of macroscopicity and non-invasive measurability. The original Leggett-Garg scenario is as follows: A quantum state is initially prepared and sent through a quantum channel. During the dynamics, the same measurement is performed at some, at least three, points in time. This has then been generalized to an identical preparation step, but followed by multiple choices of measurements at each point of time [21; 22]. Such a setting has been dubbed _temporal Bell scenario_, since one may view it as a temporal analogue of the standard Bell scenario. Unlike the Leggett-Garg scenario, in a temporal Bell scenario, measurement outcomes between _two_ points of time are sufficient to observe non-macroscopic correlations. Like the situation in the Bell scenario, researchers are searching for a practical way to characterize quantum temporal correlations. The question is, given observed statistics in a temporal scheme, do there exist quantum states and measurements which reproduce such statistics? Steps have been taken to characterize quantum temporal correlations in the standard Leggett-Garg scenario [23]. Nevertheless, characterizing quantum temporal correlations in the temporal Bell scenario remains an open problem, again with implications for device-independence. Indeed, it is not even known whether such an approach can be pursued at all.
In this work, we develop a framework based on what we call _instrument moment matrices_ (IMMs) to characterize quantum temporal correlations in a temporal Bell scenario. The IMMs are matrices of expectation values of the post-measurement states, where measurements are described by _instruments_. By construction, if the initial state and the measurements follow quantum theory, the IMMs are positive semi-definite. As such, quantum temporal correlations can be characterized by semi-definite programming [24]. Besides, the characterization will be more accurate when the size of IMMs becomes larger (see Refs. [13; 14] for the original idea behind such a hierarchical characterization and Refs. [8; 10; 11; 12; 25; 26; 27; 28] for some variants). Our characterization is implemented both in a fully _device-independent_ (DI) and _semi-DI_ fashion that incorporates partial knowledge about the devices: We generalize the reading of semi-DI settings of Ref. [29] and advocate--complementing similarly motivated steps closer to the setting of fully specified devices of "semi-device-dependent" characterization [30]--that this _intermediate regime_ is highly reasonable and important. By DI we mean that the results are based on the _observed_ temporal correlations only, and no measurements or channels have to be specified a priori. In the temporal scenario, there is no way to rule out the possibility of sending information from an earlier time; therefore, we assume there are no side channels in our setting. In other words, we assume that we are not in an adversarial scenario such as in that of quantum key distribution. However, since the space of temporal correlations is so abundant that temporal quantum correlations can, in general, be realized by classical ones [31; 32], we have to add additional constraints to reveal quantum advantages. For this reason, we further consider 1) the constraint of _no-signaling in time_, 2) the constraint on the system's dimension, and 3) the constraint on the system's rank. We show that IMMs allow us to characterize several quantum resources and tasks in a DI or semi-DI scenario. These include computing an upper bound on the maximal violation of a temporal Bell inequality, estimating the minimum degree of temporal steerability, computing the maximum successful probability in a scenario of quantum randomness access codes, and identifying quantum state preparation. Regarding the rank constraint, to the best of our knowledge, this is the first work to enforce an additional constraint, beyond the dimensional one, in a device-independent scenario. We would like to stress that in Ref. [33], the general idea of characterizing temporal correlations has been proposed. The difference is that Ref. [33] has focused on the prepare-and-measure scenario while we consider a two-time-measurement scenario (see Fig. 1). Building on this, we demonstrate several explicit applications.
_The scenario._ First, we introduce the notion of an _instrument_. An instrument \(\{\mathcal{J}_{a}^{\mathrm{A_{1}\to A_{2}}}\}\): \(\mathcal{L}(\mathcal{H}_{\mathrm{A_{1}}})\to\mathcal{L}(\mathcal{H}_{\mathrm{ A_{2}}})\) is a set of _completely positive_ (CP) and trace non-increasing maps which maps a quantum state \(\rho^{\mathrm{A_{1}}}\) to a post-measurement state \(\mathcal{J}_{a}^{\mathrm{A_{1}\to A_{2}}}(\rho^{\mathrm{A_{1}}})\) where \(a\in\mathcal{A}=\{0,1,2,\dots\}\) can be treated as the assigned outcome associated with the state \(\mathcal{J}_{a}^{\mathrm{A_{1}\to A_{2}}}(\rho^{\mathrm{A_{1}}})\). The probability of obtaining the outcome \(a\), denoted by \(P(a)\), can be computed via \(P(a)=\mathrm{tr}(\mathcal{J}_{a}^{\mathrm{A_{1}\to A_{2}}}(\rho^{\mathrm{A_{1}} }))\), therefore one has \(\mathrm{tr}\sum_{a}\mathcal{J}_{a}^{\mathrm{A_{1}\to A_{2}}}(\rho^{\mathrm{A_{1 }}})=\mathrm{tr}(\rho^{\mathrm{A_{1}}})\) due to the normalization.
In our scenario, we can choose different instruments to measure the state. We use the notation \(\{\mathcal{J}_{a|x}^{\mathrm{A_{1}\to A_{2}}}\}\) to denote the collection of instruments, where \(x\in\mathcal{X}=\{0,1,2,\dots\}\) labels the choice of measurement settings (see Fig. 1). The post-measurement state \(\mathcal{J}_{a|x}^{\mathrm{A_{1}\to A_{2}}}(\rho^{\mathrm{A_{1}}})\) is then submitted into a quantum channel \(\Lambda^{\mathrm{A_{2}\to B_{1}}}\): \(\mathcal{L}(\mathcal{H}_{\mathrm{A_{2}}})\to\mathcal{L}(\mathcal{H}_{\mathrm{ B_{1}}})\). Finally, the evolved state is measured by another measurement. At this stage, we only care about the outcome, and hence the measurements can be described by _positive operator-valued measures_ (POVMs) \(\{E_{b|y}^{\mathrm{B_{1}}}\}\) that are positive semi-definite \(E_{b|y}^{\mathrm{B_{1}}}\succeq 0\) and normalized as \(\sum_{b}E_{b|y}^{\mathrm{B_{1}}}=\openone\), where \(b\in\mathcal{B}\) and \(y\in\mathcal{Y}\) denote the measurement outcome and setting, respectively. By repeating the above experiment many rounds, we will observe a set of probabilities \(\{P(a,b|x,y):=P(b|a,x,y)P(a|x)\}\), termed _temporal correlations_. The temporal correlations can be obtained by applying the Born rule
\[P(a,b|x,y) = \mathrm{tr}\left\{E_{b|y}^{\mathrm{B_{1}}}\left[\Lambda^{\mathrm{A_{2}\to B_{1}}}\left(\mathcal{J}_{a|x}^{\mathrm{A_{1}\to A_{2}}}(\rho^{\mathrm{A_{1}}})\right)\right]\right\} \tag{1}\] \[= \mathrm{tr}\big{[}E_{b|y}^{\mathrm{B_{1}}}\,\mathcal{I}_{a|x}^{\mathrm{A_{1}\to B_{1}}}(\rho^{\mathrm{A_{1}}})\big{]}\]
where \(\{\mathcal{I}_{a|x}^{\mathrm{A_{1}\to B_{1}}}:=\Lambda^{\mathrm{A_{2}\to B_{1} }}\circ\mathcal{J}_{a|x}^{\mathrm{A_{1}\to A_{2}}}\}_{a}\) is a valid instrument for each \(x\). In a temporal scenario, there exists an inherent constraint that a futural observer can not send any information to the past, i.e., the constraint of _arrow of time_, yielding \(\sum_{b}P(a,b|x,y)=\sum_{b}P(a,b|x,y^{\prime})\) for all \(y\neq y^{\prime}\).
_The instrument moment matrices and their DI formulation._ The instrument moment matrices (IMMs) are constructed by applying CP maps \(\mathcal{E}\): \(\mathcal{L}(\mathcal{H}_{\mathrm{B_{1}}})\to\mathcal{L}(\mathcal{H}_{\mathrm{B_{1}}})\) on the post-measurement states \(\mathcal{I}_{a|x}^{\mathrm{A_{1}\to B_{1}}}(\rho^{\mathrm{A_{1}}})\), i.e., \(\mathcal{E}(\mathcal{I}_{a|x}^{\mathrm{A_{1}\to B_{1}}}(\rho^{\mathrm{A_{1}}}))=\sum_{n}K_{n}[\mathcal{I}_{a|x}^{\mathrm{A_{1}\to B_{1}}}(\rho^{\mathrm{A_{1}}})]K_{n}^{\dagger}\) with \(K_{n}:=\sum_{i}|i\rangle\langle n|S_{i}\) being the Kraus operators. Here, \(\{|i\rangle\}\) and \(\{|n\rangle\}\) are orthonormal bases for the output and input copies of \(\mathcal{H}_{\mathrm{B_{1}}}\), respectively. Following Ref. [8], given a level \(\ell\) we choose \(\{S_{i}\}\) as \(\openone\cup\mathcal{S}^{(1)}\cup\mathcal{S}^{(2)}\cup\dots\cup\mathcal{S}^{(\ell)}\), where \(\mathcal{S}^{(\ell)}\) is composed of the \(\ell\)th order products of the operators in the set \(\{E_{b|y}^{\mathrm{B_{1}}}\}_{b=1,\dots,|\mathcal{B}|-1}^{y=1,\dots,|\mathcal{Y}|}\). The \(\ell\)th-level IMMs can be defined as
\[\chi_{a|x}^{(\ell)}:=\mathcal{E}[\mathcal{I}_{a|x}(\rho^{\mathrm{A_{1}}})]=\sum _{i,j}|i\rangle\langle j|\,\mathrm{tr}\left[\mathcal{I}_{a|x}(\rho^{\mathrm{A_{ 1}}})S_{j}^{\dagger}S_{i}\right]. \tag{2}\]
Therefore, the entry in the \(i\)th row and \(j\)th column of \(\chi_{a|x}^{(\ell)}\) can be treated as the "expectation value" of the product of \(S_{j}^{\dagger}\) and \(S_{i}\) given the state \(\mathcal{I}_{a|x}^{\mathrm{A_{1}\to B_{1}}}(\rho^{\mathrm{A_{1}}})\). In Appendix A, we explicitly provide an example of IMMs for dichotomic measurement settings and outcomes. Note that the IMMs are positive semi-definite whenever \(\mathcal{I}_{a|x}\), \(\rho\), \(E_{b|y}^{\mathrm{B_{1}}}\) are quantum realizable: The set of constraints of positive semi-definiteness \(\chi_{a|x}^{(\ell)}\succeq 0\ \forall a,x\) serves as a natural characterization of the quantum set of temporal correlations \(\{P(a,b|x,y)\}\). The characterization is improved when the level \(\ell\) increases. Depending on the scenario under consideration, the improvement becomes hard to observe beyond some level \(\ell_{c}\), and we say that \(\chi_{a|x}^{(\ell_{c})}\) provides a proper approximation of the quantum set of temporal correlations. We will from now on use the notation \(\chi_{a|x}\) to simply denote \(\chi_{a|x}^{(\ell)}\).
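As a sanity check of Eq. (2), the snippet below builds a level-1 IMM numerically for a given (subnormalized) post-measurement state; the operator set \(\{\openone, E_{0|0}, E_{0|1}\}\) and the state are illustrative choices.

```python
import numpy as np

def imm(sigma, S):
    """chi_ij = tr(sigma S_j^dagger S_i), cf. Eq. (2); positive semi-definite
    for any valid quantum realization, being a Gram matrix in the
    Hilbert-Schmidt inner product."""
    n = len(S)
    return np.array([[np.trace(sigma @ S[j].conj().T @ S[i])
                      for j in range(n)] for i in range(n)])

I2 = np.eye(2, dtype=complex)
E00 = np.array([[1, 0], [0, 0]], dtype=complex)        # projector |0><0|
E01 = np.array([[1, 1], [1, 1]], dtype=complex) / 2    # projector |+><+|
sigma = 0.5 * np.array([[0.5, 0.25], [0.25, 0.5]])     # trace-1/2 state
chi = imm(sigma, [I2, E00, E01])
print(np.round(np.linalg.eigvalsh((chi + chi.conj().T) / 2), 6))  # all >= 0
```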
Figure 1: The scenario considered in this work.
When focusing on temporal correlations, quantum systems do not "outperform" classical systems, in that a classical system with a sufficiently high dimension carries all the information that an observer at a later time needs. The simplest scheme is that an observer at an earlier time can just send all the information about the measurement settings and outcomes to an observer at a later time; then the correlation space will be filled by such a strategy. To let quantum systems demonstrate their superior performance, a constraint is to limit the dimension of the underlying system. By doing so, it has been shown that quantum systems outperform classical systems with the same dimension [34]. If we require that the entire system is embedded in dimension _at most_ \(d\), we have \(P(a,b|x,y)=\mathrm{tr}\{E^{\mathrm{B_{1}}}_{b|y}[\mathcal{I}^{\mathrm{A_{1}\to B_{1}}}_{a|x}(\rho^{\mathrm{A_{1}}})]\},\) with \(\rho^{\mathrm{A_{1}}}\in\mathcal{L}(\mathcal{H}^{\mathrm{A_{1}}}_{d})\), \(\mathcal{I}^{\mathrm{A_{1}\to B_{1}}}_{a|x}:\mathcal{L}(\mathcal{H}^{\mathrm{A_{1}}}_{d})\to\mathcal{L}(\mathcal{H}^{\mathrm{B_{1}}}_{d})\), and \(E^{\mathrm{B_{1}}}_{b|y}\in\mathcal{L}(\mathcal{H}^{\mathrm{B_{1}}}_{d})\). Following the idea of Ref. [33], the set of probabilities \(P(a,b|x,y)\) generated by \(d\)-dimensional systems can be characterized by embedding IMMs into dimension-restricted IMMs, namely, \(\{\chi_{a|x}\}_{a,x}\in\mathcal{G}_{d}\) where \(\mathcal{G}_{d}\) is the set of IMMs composed of \(d\)-dimensional quantum systems.
The second kind of constraints we would like to impose is an upper bound on the rank of Bob's measurements. To this end, when generating Bob's \(d\)-dimensional POVMs \(E^{\mathrm{B}_{1}}_{b|y}\), we generate \(E^{\mathrm{B}_{1}}_{b|y}\) with rank \(k\) only, namely, \(\text{Rk}(E^{\mathrm{B}_{1}}_{b|y})=k\ \ \forall b,y,\) where \(\text{Rk}(\cdot)\) denotes the rank. We denote with \(\mathcal{G}^{k}_{d}\) the set of IMMs with such a construction, i.e., \(\{\chi_{a|x}\}_{a,x}\in\mathcal{G}^{k}_{d}\). In our method, the rank constraint cannot be considered alone without the dimensional constraint. The reason is that when generating the POVM elements \(E^{\mathrm{B}_{1}}_{b|y}\), their dimension is automatically defined. In the same sense, in the typical dimension-constraint scenario, one implicitly sets the upper bound on the rank of measurements to be full rank. The final constraint we would like to consider is the so-called _no signaling in time_ (NSIT). Such a constraint states that the observer at the earlier time cannot transmit information by changing the measurement settings, i.e., \(\sum_{a}P(a,b|x,y)=\sum_{a}P(a,b|x^{\prime},y)\) for all \(x\neq x^{\prime}\), yielding \(\sum_{a}\chi_{a|x}=\sum_{a}\chi_{a|x^{\prime}}\,\forall x\neq x^{\prime}\). Since no information is transmitted between two observers at different points of time, the NSIT constraint in the temporal scenario is in general the same as in the typical (i.e., spatial) Bell scenario.
Depending on different circumstances, we have four types of constraints used for characterizing quantum sets of temporal correlations: the device-independent (DI) constraint, DI \(+\) dimensional constraint, DI \(+\) rank constraint, and NSIT constraint. They are respectively denoted as
* DI\(:\chi_{a|x}\succeq 0\ \forall a,x\),
* DI\(+\)Dim.\(:\chi_{a|x}\succeq 0\ \forall a,x\), \(\{\chi_{a|x}\}_{a,x}\in\mathcal{G}_{d}\).
* DI\(+\)Dim.\(+\)Rank: \(\chi_{a|x}\succeq 0\ \forall a,x\), \(\{\chi_{a|x}\}_{a,x}\in\mathcal{G}^{k}_{d}\).
* NSIT: \(\chi_{a|x}\succeq 0\ \forall a,x\), \(\sum_{a}\chi_{a|x}=\sum_{a}\chi_{a|x^{\prime}}\ \forall x\neq x^{\prime}\).
When we mention _semi-device-independent_ (semi-DI) scenarios, we include the second to fourth types of constraints.
_Quantum upper bounds on temporal Bell inequalities._ To demonstrate that the IMMs provide a proper characterization, we first show that the IMMs can be used to compute an upper bound on the maximal quantum violation of a temporal Bell inequality. To simplify the problem, we consider the temporal _Clauser-Horne-Shimony-Holt_ (CHSH) scenario [21, 22, 35, 36], i.e., the scenario with binary settings and outcomes. The generalization to arbitrary scenarios can be straightforwardly obtained. The temporal CHSH inequality is written as
\[K_{\mathrm{CHSH}}:=\langle A_{0}B_{0}\rangle+\langle A_{0}B_{1}\rangle+\langle A _{1}B_{0}\rangle-\langle A_{1}B_{1}\rangle\leq 2, \tag{3}\]
where \(\langle A_{x}B_{y}\rangle:=P(a=b|x,y)-P(a\neq b|x,y)\). The bound with the value of \(2\) is obtained from the so-called _macroscopic realistic model_ [18, 19]. As is known, the inequality can be violated since quantum physics does not admit a macroscopic realistic model. A quantum upper bound on the inequality can be computed via the _semi-definite program_ (SDP) [24] \(\max\{K_{\mathrm{CHSH}}|\chi_{a|x}\succeq 0,\ \forall a,x\}\). The solution gives us the value of \(4\), the maximal algebraic value. This coincides with one of the results in Ref. [37], which states that any correlation admitting the arrow of time can always be realized by quantum theory [38]. Even when we consider the dimensional constraint, the tight quantum upper bound on \(K_{\mathrm{CHSH}}\) is still \(4\) and can be computed by the SDP
\[\max\Big{\{}K_{\mathrm{CHSH}}\Big{|}\chi_{a|x}\succeq 0,\quad\{\chi_{a|x}\}_{a,x} \in\mathcal{G}_{d=2}\Big{\}}. \tag{4}\]
It is easy to find a quantum realization achieving the bound; therefore, the bound is tight. It is interesting to note that if we further restrict Bob's POVMs to be rank \(1\) and solve the SDP
\[\max\Big{\{}K_{\mathrm{CHSH}}\Big{|}\chi_{a|x}\succeq 0,\quad\{\chi_{a|x}\}_{a,x} \in\mathcal{G}^{k=1}_{d=2}\Big{\}}, \tag{5}\]
the upper bound on \(K_{\mathrm{CHSH}}\) will be around \(2.8284\), matching \(2\sqrt{2}\) within the numerical precision, i.e., the same as the Tsirelson bound [39] in the spatial CHSH scenario. Finally, if we consider the NSIT constraint, the scenario will be the same as that of the spatial CHSH; that is, two-way communication is forbidden. The upper bound on \(K_{\mathrm{CHSH}}\) we obtain is around \(2.8284\), which again agrees with the Tsirelson bound [39], \(2\sqrt{2}\), within the numerical precision. It is computed by the SDP
\[\max\Big{\{}K_{\mathrm{CHSH}}\Big{|}\chi_{a|x}\succeq 0,\quad\sum_{a}\chi_{a|x}= \sum_{a}\chi_{a|x^{\prime}}\Big{\}}. \tag{6}\]
_Bounding the degree of temporal steerability._ The idea of temporal steerability was first proposed in Ref. [40]. The authors have shown that, under the assumption of non-invasive measurement at the earlier point of time, there exists a temporal analogue of a steering inequality [41], while quantum theory can violate such a temporal steering inequality. The works of Refs. [42, 43, 44] have reformulated the classical model by introducing the hidden state model [45]. In our formulation, the hidden state model indicates that the post-measurement states obey the hidden-state model (see also Ref. [46]): \(\mathcal{I}_{a|x}(\rho)=\sum_{\lambda}P(\lambda)P(a|x,\lambda)\sigma_{\lambda}\), where \(P(\lambda)\), \(P(a|x,\lambda)\) are probabilities and \(\sigma_{\lambda}\) are quantum states. The equation above tells us that the post-measurement states \(\mathcal{I}_{a|x}(\rho)\) are simply a classical post-processing of the set of fixed states \(\sigma_{\lambda}\). In quantum theory, there exist instruments \(\mathcal{I}_{a|x}\) such that the post-measurement states \(\mathcal{I}_{a|x}(\rho)\) do not admit a hidden-state model. The incompatibility with a hidden-state model is called _temporal steering_, the degree of which is measured by the _temporal steering robustness_ [47] and the _temporal steerable weight_ [42].
Here, we show that by observing the statistics \(P(a,b|x,y)\), we are still capable of bounding the degree of temporal steerability in DI and semi-DI scenarios. For the DI result, the method is similar to the work of Ref. [10], where the authors employed moment matrices induced by a bipartite system to quantify steerability; here, we use the moment matrices induced by a single system to quantify temporal steerability. Consider the _temporal steering robustness_[47], which is defined as the minimal ratio \(t\) of noisy post-measurement states \(\mathcal{J}_{a|x}(\rho)\) one has to mix with \(\mathcal{I}_{a|x}(\rho)\) before the mixture admits the hidden-state model. That is, \(R_{\rm ts}=\min\{t\Big{|}(\mathcal{I}_{a|x}(\rho)+t\mathcal{J}_{a|x}(\rho))/(1+t)=\sum_{\lambda}P(\lambda)P(a|x,\lambda)\sigma_{\lambda}\}\), with \(\mathcal{J}_{a|x}(\rho)\succeq 0\) and \(\operatorname{tr}\sum_{a}\mathcal{J}_{a|x}(\rho)=1\). This gives
\[\min_{\tilde{\sigma}_{\lambda}\succeq 0}\Big{\{}\operatorname{tr}\sum_{ \lambda}\tilde{\sigma}_{\lambda}-1\Big{|}\sum_{\lambda}\delta_{a,\lambda(x)} \tilde{\sigma}_{\lambda}-\mathcal{I}_{a|x}(\rho)\succeq 0\Big{\}}, \tag{7}\]
where each \(\lambda\) is a vector whose \(x\)th element assigns a measurement outcome \(a\), describing a deterministic strategy of observing outcome \(a\) with choice \(x\). In a DI scenario, no assumption is made on \(\mathcal{I}_{a|x}\) or on \(\rho\); therefore, the above SDP cannot be computed directly. However, by applying the IMMs to the above SDP, some elements, such as the temporal correlations contained in the IMMs, can be characterized, and the resulting SDP becomes solvable. The new constraints are more relaxed (since we drop the characterization of \(\mathcal{I}_{a|x}(\rho)\)); therefore, the solution of the relaxed SDP is a lower bound on \(R_{\rm ts}\). We present the relaxed SDP and the numerical results in Appendix B. For other semi-DI results, we add the associated constraints.
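As an illustration of the underlying optimization, the following sketch solves the device-dependent SDP of Eq. (7) for a concrete temporal assemblage. Both the assemblage (Lüders instruments of the Pauli \(Z\) and \(X\) measurements acting on \(\rho=\openone/2\)) and the use of the cvxpy package are our own illustrative choices, not taken from this work:

```python
import numpy as np
import cvxpy as cp

# Temporal assemblage I_{a|x}(rho) from Lueders instruments of Pauli Z (x = 0)
# and Pauli X (x = 1) acting on rho = identity/2 (an illustrative choice).
k0 = np.array([[1.0], [0.0]]); k1 = np.array([[0.0], [1.0]])
kp = (k0 + k1) / np.sqrt(2.0); km = (k0 - k1) / np.sqrt(2.0)
proj = lambda k: k @ k.conj().T
sigma = {(0, 0): proj(k0) / 2, (1, 0): proj(k1) / 2,   # keys are (a, x)
         (0, 1): proj(kp) / 2, (1, 1): proj(km) / 2}

lams = [(a0, a1) for a0 in (0, 1) for a1 in (0, 1)]    # deterministic strategies
sig_t = {lam: cp.Variable((2, 2), hermitian=True) for lam in lams}
constraints = [s >> 0 for s in sig_t.values()]
for x in (0, 1):
    for a in (0, 1):
        constraints.append(
            sum(sig_t[lam] for lam in lams if lam[x] == a) - sigma[(a, x)] >> 0)
objective = cp.Minimize(cp.real(cp.trace(sum(sig_t.values()))) - 1)
problem = cp.Problem(objective, constraints)
print("R_ts =", problem.solve())   # strictly positive: temporal steering
```

A strictly positive optimal value certifies that the chosen assemblage is temporally steerable; the DI relaxation described above replaces the explicit assemblage by the corresponding entries of the IMMs.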
_Characterization of quantum random access codes_. In the \(n\to 1\)_random access code_ (RAC) scenario, an observer, called Alice, has \(n\) bits of information, denoted by \(\vec{x}=(x_{0},x_{1},\ldots x_{y},\ldots,x_{n-1})\) with \(x_{i}\in\{0,1\}\). She then encodes them into a single bit and sends it to the other observer, called Bob, who is queried for guessing Alice's \(y\)th bit. Their goal is to maximize Bob's guessing probability, i.e., \(P(b=x_{y}|\vec{x},y)\), where \(b\) is Bob's guess (see Fig. 2). We denote with \(\mathcal{P}^{\rm C}_{n\to 1}\) the maximum average (over all \(\vec{x}\) and \(y\)) successful probability achievable by a classical strategy. It has been shown that \(\mathcal{P}^{\rm C}_{2\to 1}=\mathcal{P}^{\rm C}_{3\to 1}=3/4\). In quantum theory, Alice's \(n\) bits of information are encoded in the quantum state preparation, i.e., for each given \(\vec{x}\), she sends the associated quantum state \(\rho_{\vec{x}}\) to Bob. Bob then performs his \(y\)th quantum measurement, described by a POVM \(\{E_{b|y}\}_{b}\), on the state. The quantum realization of the guessing probability is \(P(b=x_{y}|\vec{x},y)=\operatorname{tr}(E_{b|y}\rho_{\vec{x}})\). Denoting by \(\mathcal{P}^{\rm Q}_{n\to 1}\) the maximum average successful probability achievable by a quantum strategy, it has been shown that \(\mathcal{P}^{\rm Q}_{2\to 1}=\frac{1}{2}(1+1/\sqrt{2})\approx 0.8536\) and \(\mathcal{P}^{\rm Q}_{3\to 1}=\frac{1}{2}(1+1/\sqrt{3})\approx 0.7887\). We now show how to use the framework of IMMs to recover these quantum bounds.
First, note that the post-measurement states depicted in our scenario (i.e., Fig. 1) can be regarded as the set of states \(\rho_{\vec{x}}\) prepared in a QRAC scenario. As such, the formulation of moment matrices for \(\rho_{\vec{x}}\) will be \(\chi_{\vec{x}}=\sum_{i,j}|i\rangle\langle j|\operatorname{tr}(\rho_{\vec{x}}S ^{\dagger}_{j}S_{i})\). The accessible data \(P(a^{\prime},b^{\prime}|x^{\prime},y^{\prime})\) in a general temporal scenario is associated with the average successful probability \(P(b|\vec{x},y)\). In fact, such a transformation can always be made by choosing \(a^{\prime}=x_{0}\), \(x^{\prime}=(x_{1},x_{2},\ldots,x_{n-1})\), \(b^{\prime}=b\in\{0,1\}\), and \(y^{\prime}=y\in\{0,1,\ldots,n-1\}\). Consequently, for unknown states and measurements, the constraint of \(\chi_{\vec{x}}\succeq 0\) naturally provides a characterization of the quantum set of \(P(b|\vec{x},y)\). For instance, the four prepared states \(\rho_{x_{0},x_{1}}\) in the \(2\to 1\) scenario can be directly treated as the four post-measurement states \(\{\mathcal{I}_{a^{\prime}|x^{\prime}}(\rho)\}_{a^{\prime},x^{\prime}}\) by choosing \(a^{\prime}=x_{0}\) and \(x^{\prime}=x_{1}\). The average successful probability for the \(2\to 1\) scenario is given by \(\mathcal{P}_{2\to 1}:=(1/8)\sum_{x_{0},x_{1},y}P(b=x_{y}|x_{0},x_{1},y)\) for \(x_{i},b,y\in\{0,1\}\). An upper bound on the maximum value of \(\mathcal{P}_{2\to 1}\) for quantum strategies can be computed via
\[\max\Big{\{}\mathcal{P}_{2\to 1}\Big{|}\chi_{x_{0},x_{1}}\succeq 0,\quad\{\chi_{x_{0}, x_{1}}\}_{x_{0},x_{1}}\in\mathcal{G}^{k=1}_{d=2}\Big{\}}. \tag{8}\]
We assume the measurements in the qubit-QRAC scenario to be projective, which is equivalent to requiring the POVMs to be rank-one. The result matches the quantum bound of \(\mathcal{P}^{\rm Q}_{2\to 1}:=(1+1/\sqrt{2})/2\) within the numerical precision for the first level of the hierarchy of the IMMs (i.e., \(\mathcal{S}=\{\openone,E_{1|1},E_{1|2}\}\)).
For the \(3\to 1\) scenario, there are eight prepared states \(\rho_{x_{0},x_{1},x_{2}}\) with \(x_{i}\in\{0,1\}\). The correspondence with the general temporal scenario can be made by choosing \(a^{\prime}=x_{0}\), \(x^{\prime}=(x_{1},x_{2})\), \(b^{\prime}=b\in\{0,1\}\), and \(y^{\prime}=y\in\{0,1,2\}\). The average successful probability is defined as \(\mathcal{P}_{3\to 1}:=\frac{1}{24}\sum_{x_{0},x_{1},x_{2},y}P(b=x_{y}|x_{0},x_{1},x_{2},y)\). Similarly to Eq. (8), a quantum upper bound on \(\mathcal{P}_{3\to 1}\) can be computed. The result matches \(\mathcal{P}^{\rm Q}_{3\to 1}:=\frac{1}{2}(1+1/\sqrt{3})\) for the first level of the hierarchy; the bound is therefore tight as well.
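For reference, the classical value \(\mathcal{P}^{\rm C}_{2\to 1}=3/4\) quoted above can be checked by brute force. The short script below is our own illustration (assuming a standard Python environment); it enumerates all \(16\) classical encodings and all pairs of decodings:

```python
# Classical 2 -> 1 RAC: maximize the average success probability over all
# deterministic encodings e(x0, x1) -> bit and decodings d_y(bit) -> guess.
from itertools import product

bits = (0, 1)
best = 0.0
for e in product(bits, repeat=4):
    enc = {(x0, x1): e[2 * x0 + x1] for x0 in bits for x1 in bits}
    for dec in product(product(bits, repeat=2), repeat=2):  # dec[y][m] = guess
        hits = sum(dec[y][enc[(x0, x1)]] == (x0, x1)[y]
                   for x0 in bits for x1 in bits for y in bits)
        best = max(best, hits / 8.0)
print(best)  # prints 0.75, i.e., P^C_{2->1} = 3/4
```

Shared classical randomness cannot help here, since the average success probability is linear in the strategy, so the deterministic maximum is the classical bound.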
_Self-testing quantum states in a prepare-and-measure scenario_. Finally, we show that the IMMs can be used for verifying a set of quantum states in a semi-DI way. More explicitly, we consider the QRAC scenario of the last section and uniquely (up to some isometries) identify the underlying set of states \(\rho_{\vec{x}}\) from the observed probabilities \(P(b|\vec{x},y)\) only. Such identification, called _self-testing in a prepare-and-measure scenario_, has been proposed in Refs. [48; 49; 50]. We here provide an alternative approach to achieve the task. A robust self-testing of quantum states can be defined as follows [48; 51]. Given an upper bound \(d\) on the dimension of the systems involved, we say that the observed correlation \(\vec{P}:=\{P(b|\vec{x},y)\}_{b,\vec{x},y}\) robustly self-tests, in a prepare-and-measure scenario, the reference set of states \(\vec{\rho}_{\rm ref}:=\{\rho_{\vec{x}}^{\rm ref}\}_{\vec{x}}\) at least with a fidelity \(f\) if for each set of states \(\vec{\rho}:=\{\rho_{\vec{x}}\in\mathcal{H}_{d}\}_{\vec{x}}\) compatible with \(\vec{P}\) there exists a _completely positive and trace-preserving_ (CPTP) map \(\Lambda\), such that \(F(\vec{\rho}_{\rm ref},\Lambda(\vec{\rho}))\geq f\). Here, \(\Lambda(\vec{\rho})\) stands for \(\Lambda(\rho_{\vec{x}})\) for all \(\vec{x}\), and \(F(\vec{\rho},\vec{\sigma})\) is the fidelity between two sets of states \(\vec{\rho}\) and \(\vec{\sigma}\), namely [52],
\[F(\vec{\rho},\vec{\sigma}):=\frac{1}{2^{n}}\sum_{\vec{x}}F^{\rm UJ}(\rho_{\vec{x}},\sigma_{\vec{x}})=\frac{1}{2^{n}}\sum_{\vec{x}}\operatorname{tr}(\rho_{\vec{x}}\sigma_{\vec{x}}), \tag{9}\]
where \(F^{\rm UJ}\) is the _Uhlmann-Jozsa fidelity_[53; 54] and the second equality holds when \(\rho_{\vec{x}}\) or \(\sigma_{\vec{x}}\) are pure.
Figure 2: The \(n\to 1\)_quantum random access codes_ (QRACs).

To compute \(F(\vec{\rho}_{\rm ref},\Lambda(\vec{\rho}))\) in a DI way, we use a method similar to that of Ref. [55], where the authors self-test steering assemblages. Correcting a flaw in the method of Ref. [55] and building on insights of a corrected method [56], here we compute bounds on the fidelity (see Appendix C). The idea is to express the _Choi-Jamiolkowski_ (CJ) matrix reflecting the channel in terms of Bob's observables. The fidelity can then be written as a polynomial where each monomial is of the form \(\mathrm{tr}(\rho_{\vec{x}}S_{j}^{\dagger}S_{i})\), with \(S_{i}\) being Bob's observables or their products. Given the observed correlation \(\vec{P}\), a DI bound on \(F(\vec{\rho}_{\mathrm{ref}},\Lambda(\vec{\rho}))\), denoted as \(F^{\mathrm{DI}}\), can be computed as
\[\min\Big{\{}F^{\mathrm{DI}}(\vec{\rho}_{\mathrm{ref}},\Lambda(\vec{\rho}))\Big{|}\chi_{\vec{x}}\succeq 0,\quad\chi_{\vec{x}}\in\mathcal{G}_{d}^{k}\Big{\}}. \tag{10}\]
We consider the example of a \(2\to 1\) scenario, where the reference preparation is chosen as unitarily equivalent to \(\{|0\rangle,|1\rangle,|+\rangle,|-\rangle\}\), implying \(d=2\). We assume the measurement to be projective (as most works do), so that \(k=1\). The result is presented by the blue solid line in Fig. 3. The observed correlation \(\vec{P}\) is represented by the average successful probability \(\mathcal{P}_{2\to 1}:=\frac{1}{8}\sum_{x_{0},x_{1},y}P(b=x_{y}|x_{0},x_{1},y)\). Given the maximal quantum value \(\mathcal{P}_{2\to 1}=\mathcal{P}^{\rm Q}_{2\to 1}\), we perfectly self-test the reference set of states with fidelity equal to 1. When \(\mathcal{P}_{2\to 1}\) is below around \(0.8232\), we no longer have a self-testing statement, since the fidelity falls below the classical fidelity \(0.8536\) (see Appendix D). The optimal bounds on the fidelity have been proposed in Ref. [48], i.e., the black dashed line in Fig. 3. It is an open question how to find the best expression of the CJ matrix to make our bounds optimal.
_Summary and discussion._ In this work, we have established a general temporal scenario and developed a method, dubbed _instrument moment matrices_ (IMMs), to characterize quantum temporal correlations generated by such a scenario. The method of IMMs can be implemented in a fully DI scenario, but we can also include additional constraints (such as the dimension and rank of the system) when this information is accessible. Along the way, we contribute to exploring the "room in the middle" between the (precise, but very restrictive) DI and device-specific scenarios: in contrast to Ref. [30], which is close to device-dependence and is hence dubbed _semi-device-dependent_, we are here close to the DI regime, in the _semi-device-independent_ setting. We explicitly provide several DI and semi-DI examples, including bounding the maximal value of temporal Bell inequalities and the minimum degree of temporal steerability. Moreover, a variant of the method allows us to compute the maximal successful probability and certify the set of quantum states in a QRAC scenario.
Our work invites a number of questions for future research. First, the temporal scenario considered in this work is composed of two moments of time; there would be more significant applications in the field of quantum networks if the framework could be generalized to multiple moments of time. Second, since the construction of the IMMs includes the measurements and channels, we expect that the method of IMMs can be used for certifying properties of quantum measurements and channels, e.g., incompatible measurements or non-entanglement-breaking channels, or even for self-testing measurements and channels. Finally, it is interesting to see if the IMMs can also be used for self-testing a set of complex-valued states.
_Acknowledgements._ We thank Nikolai Miklin, Costantino Budroni, Yeong-Cherng Liang, and Armin Tavakoli for fruitful discussions. S.-L. C. acknowledges the support of the National Science and Technology Council (NSTC) Taiwan (Grant No. NSTC 111-2112-M-005-007-MY4) and National Center for Theoretical Sciences Taiwan (Grant No. NSTC 112-2124-M-002-003). J. E. acknowledges support by the BMBF (QR.X), the Munich Quantum Valley (K-8), and the Einstein Foundation.
|
2308.16687 | DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew | We present DictaBERT, a new state-of-the-art pre-trained BERT model for
modern Hebrew, outperforming existing models on most benchmarks. Additionally,
we release three fine-tuned versions of the model, designed to perform three
specific foundational tasks in the analysis of Hebrew texts: prefix
segmentation, morphological tagging and question answering. These fine-tuned
models allow any developer to perform prefix segmentation, morphological
tagging and question answering of a Hebrew input with a single call to a
HuggingFace model, without the need to integrate any additional libraries or
code. In this paper we describe the details of the training as well as the
results on the different benchmarks. We release the models to the community,
along with sample code demonstrating their use. We release these models as part
of our goal to help further research and development in Hebrew NLP. | Shaltiel Shmidman, Avi Shmidman, Moshe Koppel | 2023-08-31T12:43:18Z | http://arxiv.org/abs/2308.16687v2 | # DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew
###### Abstract
We present DictaBERT, a new state-of-the-art pre-trained BERT model for modern Hebrew, outperforming existing models on most benchmarks. Additionally, we release three fine-tuned versions of the model, designed to perform three specific foundational tasks in the analysis of Hebrew texts: prefix segmentation, morphological tagging and question answering. These fine-tuned models allow any developer to perform prefix segmentation, morphological tagging and question answering of a Hebrew input with a single call to a HuggingFace model, without the need to integrate any additional libraries or code. In this paper we describe the details of the training as well as the results on the different benchmarks. We release the models1 to the community, along with sample code demonstrating their use. We release these models as part of our goal to help further research and development in Hebrew NLP.
Footnote 1: Exact license can be found at [https://creativecommons.org/licenses/by-sa/4.0/](https://creativecommons.org/licenses/by-sa/4.0/)
## 1 Introduction
In recent years, Hebrew NLP research has made major progress with the release of various pre-trained language models with Hebrew support. The first was the multi-lingual transformer model mBERT, based on the BERT architecture Devlin et al. (2019), which was then followed by several other models of a similar architecture (HeBERT Chriqui and Yahav (2022), AlephBERT Seker et al. (2021), AlephBERTGimmel Gueta et al. (2023), and HeRo Shalumov and Haskey (2023)). Earlier this year it was shown that, when fine-tuned, the much larger mT5 Xue et al. (2021) models perform very well on the various Hebrew benchmarks Eyal et al. (2022), resulting in SOTA scores for most of the experiments.
We present our model DictaBERT, which is based on the BERT architecture with minor modifications to the training parameters, as well as an improved training set and an improved procedure for preprocessing training samples. The model outperforms previous BERT models on most benchmarks, with the most noticeable improvement seen on the QA task. This gain is significant, as the QA task requires more complex syntactic understanding than other tasks. The performance gain on this task may indicate increased language comprehension in the model. Notably, the model's performance on the QA task is essentially equivalent2 to that of the mT5-XL model, a model with over 10 times as many parameters as DictaBERT.
Footnote 2: The QA task is scored with two measures: EM (exact match) and F1. DictaBERT achieves a slightly higher score for the EM, whereas mT5-XL achieves a slightly higher score for the F1.
We also present DictaBERT-morph, DictaBERT-seg and DictaBERT-heq, customized models fine-tuned for morphological tagging, prefix segmentation and question answering, respectively. The models are released with sample code for easy integration.3 These fine-tuned models allow any developer to perform prefix segmentation, morphological tagging and question answering of a Hebrew input with a single call to a HuggingFace model, without the need to integrate any additional libraries or code. The details of the fine-tuning are described in Section 4.
Footnote 3: The sample code can be found in the appendix.
## 2 Approach
### Tokenizer
We use the Word-Piece tokenization method proposed by Song et al. (2021) with the default normalizers and preprocessors suggested by HuggingFace, with the following modification:
We added a pre-tokenizer to handle the usage of quotation marks and apostrophes in Modern Hebrew. We make sure to keep as single tokens words containing an internal quotation mark used to mark an abbreviation (the Hebrew examples given here did not survive the text extraction and are omitted).

We also compare our results to the much larger mT5-XL model (3.7B parameters).
Following previous publications, we tested our model on the following tasks:
**Morphology** We follow the multi-task configuration setup and evaluation procedure detailed in Seker et al. (2021) in order to fine-tune for segmentation, part of speech, and fine-grained morphological feature prediction, and in order to calculate the MultiSet (mset) scores for evaluation. We train the model with the same training & test data used in previous works for this task. The results for this task are listed in Table 1. DictaBERT outperforms all previous models.
**Named Entity Recognition (NER)** We train on the NEMO dataset presented by Bareket and Tsarfaty (2021), which contains 9 categories and 6,220 sentences (7,713 entities). We report the token F1 score for each model. The results for this task are listed in Table 2. DictaBERT outperforms all previous models.
**Sentiment Analysis** We evaluate our model on the sentiment analysis dataset presented in Amram et al. (2018), based on 12K social media comments. The results for this task are also listed in Table 2. DictaBERT outperforms all previous models.
**Question Answering** For this task we use the newly released HeQ dataset by Cohen et al. (2023), which contains 30K high quality samples from the GeekTime newsfeed and from Wikipedia. The results for this task are listed in Table 3. As discussed above, DictaBERT outperforms all previous models of similar parameter size, and performs essentially equivalently to the much larger mT5-XL model5.
Footnote 5: We are also pleased to share the weights of our fine-tuned version of mT5-XL, after fine-tuning it on the HeQ corpus. The model is available for use on HuggingFace here: [https://huggingface.co./dicta-il/mt5-xl-heq](https://huggingface.co./dicta-il/mt5-xl-heq)
## 4 Fine-tuned Models
With this publication, we are also releasing three fine-tuned models, with sample code, in order to allow easy integration of DictaBERT's segmentation, morphological analysis, and question answering capabilities into any product or experiment.
We wish to emphasize that for the morph analysis and segmentation models, although the evaluation above used the exact training and test data used in previous works in order to ensure fair comparison of DictaBERT's capabilities with previous BERT models, the fine-tuned models that we are currently releasing were fine-tuned on much larger corpora and with a simpler output format for easy integration, in order to provide maximum practical value to the community.
### DictaBERT-seg
This model was trained on a prefix segmentation task, wherein, given a sentence, we aim to identify the letters that function as proclitics at the beginnings of the words, segmenting them from the essential word unit. For example, the Hebrew word meaning "that went" would be segmented into the proclitic meaning "that" and the base word meaning "went" (the Hebrew forms are garbled in this extraction and are omitted). Note that this model does not segment suffixes, if any, at the ends of the words.
**Training Data** The training data for the fine
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **Seg** & **POS** & **Features** \\ \hline
**mBERT** & 96.07 & 93.14 & 92.68 \\ \hline
**AlephBert** & 97.88 & 95.81 & 95.27 \\ \hline
**ABG** & 98.09 & 96.22 & 95.76 \\ \hline
**HeRo** & 97.86 & 96.05 & 95.61 \\ \hline
**mT5-Base** & 96.34 & 95.9 & - \\ \hline
**DictaBERT** & **98.16** & **96.25** & **96.1** \\ \hline \end{tabular}
\end{table}
Table 1: Performance Comparison of Different Models on the Morphology task
\begin{table}
\begin{tabular}{|c|c|c|} \hline & **HeQ-EM** & **HeQ-F1** \\ \hline
**mBERT** & 62.7 & 71.42 \\ \hline
**AlephBert** & 57.91 & 67.66 \\ \hline
**AlephBertGimmel** & 57.12 & 67.37 \\ \hline
**HeRo** & 61.89 & 71.3 \\ \hline
**mT5-Base** & 53.65 & 65.03 \\ \hline
**mT5-XL** & 63.4 & **73.5** \\ \hline
**DictaBERT** & **63.6** & 72.9 \\ \hline
**ChatGPT** & 9.38 & 32.19 \\ \hline \end{tabular}
\end{table}
Table 3: Performance Comparison of Different Models on the QA task (HeQ)
tuning of DictaBERT-seg consists of various modern Hebrew texts including Wikipedia, blogs, and more. The prefix segmentation training set was initially derived automatically from Dicta's in-house diacritized corpus, and subsequently reviewed and corrected by an expert human annotator. The training data consists of 52K sentences with a total of 1.2M words.
**Architecture** The architecture of the model is as follows: we run each word of the sentence through eight classifiers in order to predict the probability of each of the eight different possible prefix functions a Hebrew proclitic may serve. We train the model to predict the relevant prefix classes for each word given its sentence context. During inference, we limit the predictions to valid sets of proclitic functions given the initial letters of the word. When words are comprised of multiple word pieces, we train on the first word piece alone, because the proclitics will virtually always be contained within the initial word piece (although we determine the valid sets of proclitic functions based on the whole word).
Sample code and output are displayed in Appendix A.
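This multi-head design can be sketched in a few lines of PyTorch. The following is our own simplified illustration of the described idea (a shared backbone with eight independent binary prefix classifiers read off the first word piece of every word); it is not the released implementation:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class PrefixSegHead(nn.Module):
    """Eight independent binary prefix-function classifiers per word."""
    def __init__(self, backbone_name="dicta-il/dictabert", n_prefix=8):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(backbone_name)
        self.classifier = nn.Linear(self.backbone.config.hidden_size, n_prefix)

    def forward(self, input_ids, attention_mask, first_piece_index):
        # first_piece_index: (batch, n_words) index of each word's first piece
        h = self.backbone(input_ids=input_ids,
                          attention_mask=attention_mask).last_hidden_state
        idx = first_piece_index.unsqueeze(-1).expand(-1, -1, h.size(-1))
        first = torch.gather(h, 1, idx)        # (batch, n_words, hidden)
        return torch.sigmoid(self.classifier(first))
```

Training such a head would minimize a binary cross-entropy over the eight prefix-function labels; at inference, as described above, the predictions would additionally be restricted to valid proclitic sets given each word's initial letters.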
### DictaBERT-morph
This model was trained on a morphology task, where given a sentence, we aim to tag the fine-grained morphological features of each word. Specifically, DictaBERT-morph predicts part-of-speech, as well as gender, number, person, and tense, wherever relevant. Similarly to the previous model, this model also identifies proclitic functions. Additionally, this model also identifies whether there is a suffix appended to the word, and if so, which function it serves, as well as the gender, number, and person of the suffix.
**Training Data** We used the UD Treebank (Sade et al., 2018) which consists of 5K tagged sentences in the train split, as well an additional 35K sentences from the IAHLT6 UD corpus of tagged Hebrew sentences (Zeldes et al., 2022).
Footnote 6: We would like to express our thanks to IAHLT for this tagged corpus. For more information regarding the resources curated and made available by IAHLT, see: [https://github.com/IAHLT/iahlt.github.io/blob/main/index.md](https://github.com/IAHLT/iahlt.github.io/blob/main/index.md)
**Architecture** For every word in a sentence, we train five classifiers to predict the morphological features of that word. For cases where a word is broken up into multiple word pieces, we perform the predictions on the first word piece. The five classifiers are as follows:
* Prediction of the POS for the main word
* Prediction of the prefix functions of the word (e.g., the Hebrew word meaning "from the meal" has both an ADP and a DET prefix; the Hebrew form is garbled in this extraction)
* Prediction of the fine-grained morphological features of the word (gender, number, person, and tense)
* Prediction of whether there is a suffix and which function it serves
* Predictions of fine-grained features of the suffix (gender, number, and person), if the previous classifier predicts that there is a suffix
Sample code and output are displayed in Appendix B.
### DictaBERT-heq
This model was trained on the question answering task, where given a context and a question, we aim to extract the answer to the question from the context. This model uses the standard HuggingFace BertForQuestionAnswering architecture. The model was trained on the HeQ train dataset, as listed in the experiments section.
Sample code and output are displayed in Appendix C.
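Since DictaBERT-heq uses the standard BertForQuestionAnswering head, it should be usable through the stock HuggingFace question-answering pipeline. The minimal sketch below is our own (the Hebrew strings are placeholders, and the actual released sample code may differ):

```python
from transformers import pipeline

# Extractive QA with the fine-tuned model; inputs are Hebrew strings.
qa = pipeline("question-answering", model="dicta-il/dictabert-heq")
result = qa(question="<Hebrew question>", context="<Hebrew passage>")
print(result["answer"], result["score"])
```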
## 5 Conclusion
We are happy to release these models to the public to help further research and development in Hebrew.7
Footnote 7: The base model DictaBERT is available at [https://huggingface.co./dicta-il/dictabert](https://huggingface.co./dicta-il/dictabert)
The fine-tuned segmentation model DictaBERT-seg is available at [https://huggingface.co./dicta-il/dictabert-seg](https://huggingface.co./dicta-il/dictabert-seg)
The fine-tuned morphology model DictaBERT-morph is available at [https://huggingface.co./dicta-il/dictabert-morph](https://huggingface.co./dicta-il/dictabert-morph)
The fine-tuned QA model DictaBERT-heq is available at [https://huggingface.co./dicta-il/dictabert-heq](https://huggingface.co./dicta-il/dictabert-heq).
|
2309.09511 | Surface directed spinodal decomposition of fluids confined in
cylindrical pore | The surface directed spinodal decomposition of a binary liquid confined
inside a cylindrical pore is investigated using molecular dynamics simulation.
One component of the liquid wets the pore surface while the other remains
neutral. A variety of wetting conditions are studied. For the partial wetting
case, after an initial period of phase separation, the domains organize
themselves into a plug-like structure and the system enters a metastable
state. Therefore, a complete phase separation is never achieved. Analysis of
domain growth and the structure factor suggests a one-dimensional growth
dynamics for the partial wetting case. As the wetting interaction is increased
beyond a critical value, a transition from the plug-like to tube-like domain
formation is observed which corresponds to the full wetting morphology. Thus, a
complete phase separation is achieved as the wetting species moves towards the
pore surface and forms layers enclosing the non-wetting species residing around
the axis of the cylinder. The coarsening dynamics of both the species are
studied separately. The wetting species is found to follow a two-dimensional
domain growth dynamics with a growth exponent 1/2 in the viscous hydrodynamic
regime. This was substantiated by the Porod tail of the structure factor. On
the other hand, the domain grows linearly with time for the non-wetting
species. This suggests that the non-wetting species behaves akin to a
three-dimensional bulk system. An appropriate reasoning is presented to justify
the given observations. | Daniya Davis, Bhaskar Sen Gupta | 2023-09-18T06:44:04Z | http://arxiv.org/abs/2309.09511v1 | # Surface directed spinodal decomposition of fluids confined in cylindrical pore
###### Abstract
The surface directed spinodal decomposition of a binary liquid confined inside a cylindrical pore is investigated using molecular dynamics simulation. One component of the liquid wets the pore surface while the other remains neutral. A variety of wetting conditions are studied. For the partial wetting case, after an initial period of phase separation, the domains organize themselves into a plug-like structure and the system enters a metastable state. Therefore, a complete phase separation is never achieved. Analysis of domain growth and the structure factor suggests a one-dimensional growth dynamics for the partial wetting case. As the wetting interaction is increased beyond a critical value, a transition from plug-like to tube-like domain formation is observed, which corresponds to the full wetting morphology. Thus, a complete phase separation is achieved as the wetting species moves towards the pore surface and forms layers enclosing the non-wetting species residing around the axis of the cylinder. The coarsening dynamics of both species are studied separately. The wetting species is found to follow a two-dimensional domain growth dynamics with a growth exponent \(1/2\) in the viscous hydrodynamic regime. This is substantiated by the Porod tail of the structure factor. On the other hand, the domain grows linearly with time for the non-wetting species. This suggests that the non-wetting species behaves akin to a three-dimensional bulk system. An appropriate reasoning is presented to justify these observations.
## I Introduction
The kinetics of phase separation of fluids in confinement is of high importance in scientific research [1; 2; 3; 4] as well as in industry [5]. Phase-separating fluids in confinement have countless applications; in particular, the oil, gasoline, and natural gas extraction industries rely heavily on these phenomena [6]. Nonetheless, many possibilities are still unexplored and plenty of questions regarding phase separation in such systems remain unanswered. In this context, it is of paramount importance to study the transformation of single- as well as multi-component phase-separating fluid mixtures.
When a homogeneous binary liquid system is rapidly cooled within the miscibility gap, it loses thermodynamic stability and undergoes phase separation, forming distinct regions or domains. Over time, these domains grow and evolve until a state of local equilibrium or saturation is reached [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. However, a system under confinement behaves differently from its bulk counterpart due to the presence of additional factors like restriction, surface effects, and system size. For instance, under confinement, the emergence of anisotropic domain growth becomes apparent. This phenomenon primarily arises from the constraints imposed by the limited number of particles that can occupy a confined space. The salient features of phase separation in such systems are metastability and the lack of observable macroscopic phase separation. In real experiments, physical systems are often enclosed within containers or possess exposed surfaces, which typically exhibit a preferential attraction towards one of the species of the mixture. This selective affinity can significantly influence the rate of phase separation. This phenomenon, referred to as the wetting effect, entails a continual and persistent competition between phase separation and interactions with the surface or wall.
The coarsening process is always affected by the nature of the system. Usually a single time dependent length scale \(\ell(t)\) characterizes the domain morphology [20]. This is obtained from the equal time correlation function \(C(\vec{r},t)\), where \(\vec{r}\) is the distance between two spatial points and \(t\) is the time after quench. The average domain size of the system follows the power law \(\ell(t)\sim t^{\alpha}\), where \(\alpha\) is the growth exponent. The value of \(\alpha\) is determined by corresponding coarsening mechanism that drives the phase separation.
For the phase separation in solid-solid mixtures, diffusion takes precedence, and the growth exponent is \(\alpha=1/3\)[21]. However, in fluid systems, the hydrodynamic effect becomes significant and the growth exponents change accordingly. In fluid-fluid mixtures the diffusive phase is short-lived and the system quickly transitions to the hydrodynamic regime. Here, we have two exponents, corresponding to viscous hydrodynamics (\(\alpha=1\)) and inertial hydrodynamics (\(\alpha=2/3\)). The results mentioned above refer to bulk systems [8; 9].
For the phase separation of fluids in confined geometry, existing studies predominantly employ two primary methods of analysis. The first one involves utilization of the random pore Ising model [3; 4; 22], which maps the system onto a network of random pores. The second method, known as the single pore model [23], is a widely accepted model for studying phase separation of liquids inside porous media, and does not rely on mapping to any specific model or randomness. For the latter, theoretical studies were conducted, focusing on the wetting behavior of a binary fluid system inside a cylindrical pore [23]. This study introduced the benchmark single pore model, which allowed one to examine the phase separation in confined space, particularly applicable to scenarios such as binary fluid segregation within vycor glasses, where the random pore Ising model is not suitable due to the low porosity. The transition of the liquid structure from a plug-like to a tube-like form was illustrated via a wetting phase diagram. In between, there exists an intermediate capsule-like structure, which occurs only when the radius of the pore is relatively large.
The domain growth was found to slow down when it became comparable to the pore size.
The phase separation of a binary liquid inside a two-dimensional porous medium was studied by numerically integrating the Cahn-Hilliard equation with and without wetting effects [24]. While the random field Ising model failed to explain the slowing down of the domain growth and the breakdown of scaling laws in such systems, the single pore model successfully explained the source of the slow growth. Subsequent work on a binary liquid inside a two-dimensional strip geometry involving the numerical study of the Cahn-Hilliard-Cook equation [25] further confirmed the validity of the single pore model. Later on, this work was extended to study the effect of a variety of asymmetric pores, i.e., a simple strip pore, an uneven single pore, and a junction made out of two pores [26]. The single pore method was explored further to study liquid-liquid phase separation using molecular dynamics simulations with a neutral pore wall (no wetting) [27; 28].
The surface directed spinodal decomposition in a binary liquid mixture was studied in the bulk system using mesoscopic-level modeling in terms of coupled Vlasov-Boltzmann equations with long-range interactions [29]. The effect of weak and strong surface fields on the domain growth was analysed. A two-dimensional study of how the wetting of the mobile and immobile particles in a binary fluid system affects the phase separation of the latter was conducted in Ref. [30]. Similarly, a surface-field study was performed numerically, with attention paid to obtaining the standard growth laws in the bulk region of the system [31]. Recently, molecular dynamics simulations were carried out on a binary fluid inside a cylindrical nanopore with a neutral wall, and the growth of the domains was studied before the system attained a metastable state [32]. An early-time diffusive growth was observed, and the later-time growth exponent was found to match the inertial hydrodynamic growth in the two-dimensional bulk system.
However, the evolution of domains of segregating fluids inside a single-pore cylindrical tube in the presence of a wetting interaction between a preferred component of the liquid and the confining wall has not been addressed properly until now. In particular, the effect on the domain structures and the growth laws when the wetting interaction is systematically changed is missing. As previously mentioned, this model becomes more representative of experimental observations when the influence of wetting effects is taken into account. In this paper we use extensive molecular dynamics simulations to study the kinetics of the persistent interplay between phase separation and wetting, deep within the coexistence curve.
## II Models and methods
In this study we investigate a binary AB liquid mixture confined inside a cylindrical pore using molecular dynamics simulation. The fluid particles interact with each other via the Lennard-Jones (LJ) potential
\[U_{\alpha\beta}(r)=4\epsilon_{\alpha\beta}\left[\left(\frac{\sigma_{\alpha \beta}}{r_{ij}}\right)^{12}-\left(\frac{\sigma_{\alpha\beta}}{r_{ij}}\right)^{ 6}\right] \tag{1}\]
where \(\epsilon\) is the interaction strength, \(\sigma\) is the particle diameter, \(r_{ij}=|r_{i}-r_{j}|\) is the scalar distance between the two particles \(i\) and \(j\), and \(\alpha\), \(\beta\in\) A, B. The phase separation between the two types of particles is assured by assigning the interparticle diameters as \(\sigma_{AA}=\sigma_{BB}=\sigma_{AB}=1.0\) and the interaction parameters as \(\epsilon_{AA}=\epsilon_{BB}=2\epsilon_{AB}=\epsilon\). This choice of parameters can be mapped to the Ising model. The computational load is reduced by assigning a cut-off at \(r=r_{c}=2.5\) for the LJ potential. This cut-off introduces a discontinuity in the potential and the force term. This is resolved by modifying the potential as
\[u(r)=U(r)-U(r_{c})-(r-r_{c})\left(\frac{dU}{dr}\right)|_{r=r_{c}} \tag{2}\]
The final term in Eq. 2 avoids the abrupt jumps in the force at \(r_{c}\). The system mentioned above is characterized in bulk by a critical temperature of \(T_{c}=1.421\) and a critical density of \(\rho_{c}=N/V=1\) in three dimensions [33]. Here \(N\) is the number of particles in the system and \(V\) is the volume. We measure the temperature and length in units of \(\epsilon/k_{B}\) and \(\sigma\), respectively. For convenience \(\epsilon\), \(k_{B}\) and the mass of each particle \(m\) are set to unity.
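For clarity, the shifted-force potential of Eq. 2 can be written out explicitly. The following is a small illustrative NumPy implementation of ours (not the simulation code used in this work):

```python
import numpy as np

def lj(r, eps=1.0, sig=1.0):
    """Bare Lennard-Jones potential of Eq. 1."""
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def dlj_dr(r, eps=1.0, sig=1.0):
    """Radial derivative dU/dr of the bare LJ potential."""
    sr6 = (sig / r) ** 6
    return -24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r

def u_shifted(r, rc=2.5, eps=1.0, sig=1.0):
    """Shifted-force potential of Eq. 2: u(rc) = 0 and du/dr(rc) = 0."""
    return np.where(r < rc,
                    lj(r, eps, sig) - lj(rc, eps, sig)
                    - (r - rc) * dlj_dr(rc, eps, sig),
                    0.0)
```

By construction, both the potential and the force vanish continuously at \(r_{c}\), removing the discontinuity mentioned above.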
A cylindrical tube with a large length-to-diameter ratio is considered, which serves as a confining structure containing the binary mixture. The axis of the cylinder is chosen to be the x-axis. A periodic boundary condition is applied along the length of the cylinder. The wall of the cylinder is constructed with closely packed particles similar to the fluid particles. The wetting effect is incorporated by introducing a preferential attraction of one type of particles, say type A, towards the wall via the LJ potential given in Eq. 2, and no interaction at all for the other species. The interaction strength between the wall and type A particles, denoted as \(\epsilon_{w}\), is tuned over a wide range of values, and the effect of this wetting strength on the phase separation is studied. We vary \(\epsilon_{w}\) in the range (0.1, 0.8).
Molecular dynamics (MD) simulations are performed in the canonical ensemble. Since our system is in the liquid state, it is important to take into consideration the effect of hydrodynamics. Therefore, the Nosé-Hoover thermostat is used, which controls the temperature and at the same time preserves the hydrodynamics of the system [34]. The velocity-Verlet algorithm is used in the MD simulation to compute the positions and velocities of the particles with a timestep of \(\Delta t=0.005\)[35]. Here time is measured in units of \((m\sigma^{2}/\epsilon)^{1/2}\).
The cylindrical pore we consider has a radius \(R=10\) and length \(L=200\). It is filled with the binary liquid of number density \(\rho=0.8\), where \(50\%\) of the particles are type A and \(50\%\) type B. The system is first equilibrated at a high temperature of \(T_{i}=10\) to prepare a homogeneous mixture and then suddenly quenched to a temperature \(T_{f}=0.8\) well below \(T_{c}\). Finally, the time evolution of the system towards the thermodynamically favored state at \(T_{f}=0.8\) is studied. The results are averaged over 80 independent initial configurations.
To study the domain growth and coarsening dynamics of the segregating liquid inside the cylindrical pore, we use the so-called two-point equal time correlation function \(C(\vec{r},t)\) given by
\[C(\vec{r},t)=\langle\psi(0,t)\psi(\vec{r},t)\rangle-\langle\psi(0,t)\rangle \langle\psi(\vec{r},t)\rangle. \tag{3}\]
The angular brackets represent the ensemble averaging. \(\psi(\vec{r},t)\) is the order parameter of the system defined in terms of the local density fluctuations as
\[\psi(\vec{r},t)=\frac{\rho_{A}(\vec{r},t)-\rho_{B}(\vec{r},t)}{\rho_{A}(\vec{r },t)+\rho_{B}(\vec{r},t)} \tag{4}\]
Here \(\rho_{A}(\vec{r},t)\) and \(\rho_{B}(\vec{r},t)\) are the local concentrations of A and B particles at time it around the position \(\vec{r}\). For the domain structure related studies, we resort to the static structure factor \(S(\vec{k},t)\), obtained from the Fourier transformation of the correlation function given by
\[S(\vec{k},t)=\int d\vec{r}\,exp(i\vec{k}.\vec{r})\ C(\vec{r},t) \tag{5}\]
where \(k\) is the wave vector [20]. In the large-\(k\) limit in \(d\) dimensions, \(S(\vec{k},t)\) follows the Porod law, given by

\[S(k,t)\sim k^{-(d+1)} \tag{6}\]
A detailed description of computing the order parameter \(\psi(\vec{r},t)\) under different wetting conditions is provided in the next section.
## III Results
It is well-established that in a bulk system, when our symmetric binary liquid is quenched below the critical temperature, it completely phase separates into two domains of type A and type B. But when the same liquid is considered inside a cylindrical pore, after the sudden quench, phase segregation commences with the growth of tiny isotropic domains. With time, these domains grow and organize themselves into stripes along the axis of the cylinder in a periodic pattern. Therefore, plug-like domains are formed in the absence of wetting interactions between the cylinder wall and the fluid particles [23; 32]. Finally, the system attains a metastable state and a complete macroscopic phase separation is never achieved. The scenario remains the same even far inside the coexistence region. The width of these domains is found to be insensitive to the length of the cylinder but varies linearly with the pore diameter.
Nevertheless, when the wetting effect is considered, the growth behavior is quite intricate. In our present study, we systematically analyze the wetting effect on the domain growth dynamics over a wide range of wetting strengths \(\epsilon_{w}\), from partial to full wetting. The preferential attraction of type A particles implies that the said particles have a comparatively lower surface tension \(\gamma_{A}\) with the wall than that of the other type, \(\gamma_{B}\)[36]. Hence, if \(\theta\) is the contact angle between the fluid and wall interface, then according to Young's condition [37]\(\gamma_{AB}\)cos\(\theta=\gamma_{B}-\gamma_{A}\), where \(\gamma_{AB}\) is the surface tension between the A and B interface. The conditions for partial wetting and complete wetting are deduced from this criterion [36]. When \(\gamma_{B}-\gamma_{A}<\gamma_{AB}\), both A and B species are in contact with the surface and the system is only partially wet. On the other hand, when \(\gamma_{B}-\gamma_{A}>\gamma_{AB}\), the Young's condition is not valid and the B phase is expelled from the wall, resulting in the complete wetting of the wall with phase A.
Following the rapid cooling process, the phase segregation begins as small isotropic domains start to form inside the pore. The interplay of phase separation and wetting, known as surface-directed spinodal decomposition, involves a dynamic competition between these two kinetic processes. In Fig. 1 we show the time evolution of the domain structures of our system for the wetting interaction \(\epsilon_{w}=0.1\). The outcome closely resembles the phase separation of a binary liquid inside a nanopore without wetting [25; 32]. The system freezes into a multi-domain metastable state and no further domain evolution is observed with time. The reason can be attributed to the following: when the adjacent stripes are separated by more than a characteristic distance, the length scale saturates as a result of a weak contact between the fronts of the neighboring stripes. Plug-like structures are formed and the metastable state is stabilized.
In Fig. 2 we show the domain structures corresponding to the longest possible simulation time for wetting interactions \(\epsilon_{w}\) in the range 0.1 to 0.5. It clearly depicts how the metastable state varies with \(\epsilon_{w}\). The width of the striped domains appears to increase as the wetting strength increases. This is because the pore wall acts as a bridge between the alternating stripes, which facilitates phase separation with increasing \(\epsilon_{w}\). We find a critical field strength \(\epsilon_{w}=0.5\) up to which the metastable phase separation takes place with the formation of stripes. Therefore, \(\epsilon_{w}\leq 0.5\) corresponds to partial wetting.
As the wetting strength is increased further, stripe formation no longer occurs. Instead, the transition from a plug-like to a tube-like domain is observed, which corresponds to the full wetting morphology. In Fig. 3 we show the time evolution of the domain structure for the highest interaction strength
Figure 1: Time evolution of the segregating binary liquid mixture confined inside cylindrical pore for the partial wetting interaction \(\epsilon_{w}=0.1\). A and B type of particles are represented by red and blue colors respectively.
\(\epsilon_{w}=0.8\) chosen in our simulation. We clearly observe a complete phase separation of the binary liquid inside the pore. For better visualization, the cross sectional view of the system after the complete phase separation is shown in Fig. 4. This surface field value satisfies the complete wetting condition mentioned above. Correspondingly, the type A particles interact with the pore wall, forming a layer near it. On the other hand, the neutral B type particles are pushed towards the axis of the cylinder [23]. Thus our simulation confirms that, when the wetting interaction is above a particular threshold value (in our case \(\epsilon_{w}>0.5\)), we find a tube-like domain along the axis of the cylinder formed by the non-wetting particles, whereas the wetting species coats the inner surface of the pore. Therefore, a complete phase separation is achieved for the full wetting case.
Next, we examine the dynamical properties of the system for the partial wetting case, that exhibits stripe formation. Since the geometrical confinement imposed on the system results in the stripe patterned domains, growth is analyzed along the axial direction [27; 28]. Therefore, the order parameter takes the form
\[\psi(x,t)=\frac{\rho_{A}(x,t)-\rho_{B}(x,t)}{\rho_{A}(x,t)+\rho_{B}(x,t)} \tag{7}\]
To compute \(\psi(x,t)\) we divide the cylinder along its axis into sections of equal width \(\Delta x=2.0\). The \(\rho_{A}(x,t)\) and \(\rho_{B}(x,t)\) are calculated for each section, and thus \(\psi(x,t)\) is obtained from Eq. 7.
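In practice, this binning amounts to two histograms per snapshot. A minimal NumPy sketch (our own illustration, not the analysis code of this work) reads:

```python
import numpy as np

def order_parameter(xA, xB, L=200.0, dx=2.0):
    """Binned order parameter of Eq. 7 from the axial coordinates of A and B."""
    edges = np.arange(0.0, L + dx, dx)
    nA, _ = np.histogram(xA, bins=edges)
    nB, _ = np.histogram(xB, bins=edges)
    tot = nA + nB
    # psi = (rho_A - rho_B) / (rho_A + rho_B), guarding against empty bins
    return np.where(tot > 0, (nA - nB) / np.maximum(tot, 1), 0.0)
```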
To study the domain growth dynamics we compute the two-point equal-time correlation function given by Eq. 3 along the x axis. For the wetting strength \(\epsilon_{w}\leq 0.5\), the observation of a consistent self-similarity pattern in stripe formation suggests that our system is likely to adhere to the scaling law \(C(x,t)\equiv\bar{C}(x/\ell(t))\), where \(\bar{C}\) is a time independent master scaling function [20]. The identification of this scaling law enables the definition of a time-dependent length scale \(\ell(t)\) based on the decay of \(C(x,t)\). Throughout the paper, we utilize the first zero-crossing of \(C(x,t)\) as a reliable measure of \(\ell(t)\).
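Both \(C(x,t)\) and the first zero crossing can be computed compactly from the binned \(\psi(x,t)\) using a periodic autocorrelation. The sketch below (ours, assuming NumPy, with a linear interpolation of the crossing) follows Eq. 3 and the first-zero-crossing measure of \(\ell(t)\) adopted here:

```python
import numpy as np

def correlation(psi):
    """Periodic autocorrelation C(x,t) of the binned order parameter (Eq. 3)."""
    f = psi - psi.mean()
    C = np.fft.ifft(np.abs(np.fft.fft(f)) ** 2).real / f.size
    return C[: f.size // 2]            # one half suffices, by periodicity

def length_scale(C, dx=2.0):
    """l(t) from the first zero crossing of C(x,t), linearly interpolated."""
    sb = np.signbit(C)
    crossings = np.where(sb[:-1] != sb[1:])[0]
    if crossings.size == 0:
        return np.nan                  # no zero crossing yet
    i = crossings[0]
    return (i + C[i] / (C[i] - C[i + 1])) * dx
```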
Fig. 5 confirms the scaling law of the correlation where \(C(x,t)\) is plotted vs \(x/\ell(t)\) for different strength of wall interactions corresponding to partial wetting. The data collapse is highly evident, except for the case of \(\epsilon_{w}=0.5\), where the impact of wetting is close to disrupting the barrier responsible
Figure 4: The cross sectional view of the fully phase separated liquid inside the pore.
Figure 3: Representative snapshots for the phase separating binary liquid mixture inside the cylindrical pore for the full wetting interaction \(\epsilon_{w}=0.8\).
Figure 2: Final configurations of our binary liquid system forming plug like domain structure for different wetting interaction \(\epsilon_{w}\).
for stripe formation and maintaining a metastable equilibrium. So, we can generalize that the scaling of the correlation function holds for partial wetting. The inset of the figure shows the scaling of the correlation for a particular choice of \(\epsilon_{w}=0.2\) at different times. We observe an excellent data collapse. A similar scaling behavior is observed for the other \(\epsilon_{w}\) values as well (not shown here).
To examine patterns and investigate domain structures in both simulations and experiments, it is a common practice to calculate the structure factor. The one-dimensional analogue of Eq. 5 is used to calculate it here. Fig. 6 shows the scaled structure factor. The decaying part of the tail exhibits a power law \(S(k,t)\sim k^{-2}\), showing the Porod law behavior and supporting the one-dimensional growth in the system, as defined in Eq. 6. It is also evident that the structure factor associated with \(\epsilon_{w}=0.5\) exhibits a slight deviation from the Porod tail, indicating the critical limit for stripe formation, where the interfaces are less distinct and rough. Additionally, the \(S(k,t)\sim k^{2}\) behavior in the small-\(k\) limit further supports the argument of one-dimensional domain growth in the system.
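For completeness, the one-dimensional structure factor can be obtained directly from the binned order parameter via the Wiener-Khinchin relation. The following sketch (our own illustration) mirrors the discrete analogue of Eq. 5:

```python
import numpy as np

def structure_factor(psi, dx=2.0):
    """Discrete 1D analogue of Eq. 5: S(k,t) from the binned psi(x,t)."""
    f = psi - psi.mean()
    S = (np.abs(np.fft.rfft(f)) ** 2) / f.size * dx   # FFT of the periodic C(x,t)
    k = 2.0 * np.pi * np.fft.rfftfreq(f.size, d=dx)
    return k, S
```

Averaging \(S(k,t)\) over independent runs and plotting \(S\ell^{-1}\) against \(k\ell\) then exposes the Porod tail discussed above.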
The correlation and structure factor clearly show an onset of deviation for \(\epsilon_{w}=0.5\), as they start to diverge from the one-dimensional growth; the wetting effect becomes dominant and tends to shift the morphology towards the capsule-like structure. The pore diameter not being large enough, we do not observe proper capsule formation. Instead, a direct transition occurs from plug-like to tube-like domains as \(\epsilon_{w}\) is increased further. A more detailed discussion of this phenomenon will follow.
Subsequently, our attention turns to quantifying the growth of the stripes along the axial direction in terms of the length scale \(\ell(t)\). As mentioned earlier, this quantity is computed from the first zero crossing of the correlation function, \(C(x=\ell,t)=0\). The time evolution of \(\ell(t)\) is shown in Fig. 7. The dashed lines in the graph show the power-law correspondence at different stages of the growth. The transport mechanism in the system decides the rate of domain growth. During the initial stage, the system exhibits diffusive behavior, adhering to the Lifshitz-Slyozov growth law as \(t^{1/3}\). This is followed by a crossover to the inertial hydrodynamic growth characterized by a power law of \(t^{2/3}\)[38]. The same growth exponents were obtained when the pore wall was considered neutral (no wetting). It is worth noting that the wetting strength of \(\epsilon_{w}=0.5\) is evidently a critical scenario where, despite the presence of stripe domains, the dynamical properties deviate significantly from the typical behavior observed for the partial wetting case.
Next, we shift our attention to investigate the phase sepa
Figure 5: Scaled correlation function \(C(x,t)\) vs \(x/\ell(t)\) for different partial wetting interaction \(\epsilon_{w}\). In the inset we show the scaling plot of \(C(x,t)\) vs \(x/\ell(t)\) for \(\epsilon_{w}=0.2\) for different times.
Figure 6: The scaled structure factor \(S(k,t)\ell^{-1}\) vs. \(k\ell\) for the different partial wetting strength \(\epsilon_{w}\). The dashed lines are the guide line for the Porod law.
Figure 7: The time evolution of the length scale \(\ell(t)\) for the partial wetting case with different \(\epsilon_{w}\). The dashed lines are the reference for the power law growth.
ration dynamics for the full wetting case, i.e. \(\epsilon_{w}>0.5\). A complete phase separation is achieved here via the formation of tube-like domains. A typical domain morphology is displayed in Fig. 3. It is crucial to emphasize that during the phase separation of domains inside the pore for the complete wetting case, the correlation is assessed radially rather than axially. Hence the order parameter is calculated accordingly from Eq. 4. For that, the whole system is divided into small cubic boxes of size \((2\sigma)^{3}\) and the local density fluctuations are computed over these boxes. Finally, we calculate the correlation function along the radial direction from Eq. 3.
During the coarsening process, the two species evolve individually following the surface directed spinodal decomposition. The wetting species undergoes surface enrichment while the other is expelled from the surface. This results in complete phase separation, as shown in Fig. 3. The correlation function for both species is calculated separately to study their individual domain growth. Fig. 8 corresponds to the scaled correlation of the wetting particles. We observe a satisfactory data collapse for different interaction strengths \(\epsilon_{w}\). The inset shows the scaled correlation for the maximum interaction strength \(\epsilon_{w}=0.8\) at different times. They exhibit a perfect data collapse as well. Hence, the surface directed migration of particles in our confined system persists and upholds the presence of superuniversality and the Porod law [39; 40; 41]. The same exercise is repeated for the non-wetting species (not shown), and a similar scaling behavior is observed.
Considering the rationale mentioned earlier, it is prudent to compute the structure factor independently for each of the species. The results are shown in Fig. 9 for three different \(\epsilon_{w}\) at a particular time \(t=110\). The dotted lines correspond to the power-law reference. The results clearly demonstrate that the trailing section of the structure factor exhibits distinct power laws for the two species. According to Eq. 6, wherein \(d\) represents the dimension of the domain growth, \(k^{-3}\) pertains to growth in two dimensions, while \(k^{-4}\) refers to growth in three dimensions. This suggests that the wetting species experiences two-dimensional domain growth, while the other undergoes three-dimensional growth. This can be clearly understood from Figs. 3 and 4, where the type A particles form a layer on the inner surface of the pore wall, resembling a curved two-dimensional plane. Therefore, the structure obtained for the wetting particles is two-dimensional, providing a rationale for the Porod law exponent. On the other hand, type B particles that congregate around the axis of the cylindrical pore behave akin to a bulk system. This three-dimensional structure of the non-wetting particles is affirmed by the Porod tail behavior observed in the structure factor.
The time dependence of the characteristic domain growth for the two types of particles is computed separately for three different \(\epsilon_{w}\). In Fig. 10 we show \(\ell(t)\) for both species. The dotted lines indicate the power law. The domain growth of the non-wetting species resembles that of liquids in a three
Figure 8: Scaled correlation function \(C(r,t)\) vs \(r/\ell(t)\) for different \(\epsilon_{w}\) corresponding to the full wetting case. In the inset we show the scaling plot of \(C(r,t)\) vs \(r/\ell(t)\) for \(\epsilon_{w}=0.8\) for different times.
Figure 9: The scaled structure factor \(S(k,t)\ell^{-1}\) vs. \(k\ell\) graph for different \(\epsilon_{w}\) corresponding to the full wetting case at a fix time \(t=110\) for the (a) wetting species (A particles), (b) non-wetting species (B particles). The dashed lines are the guide line for the Porod law.
-dimensional bulk system. After an initial transition period, the domain size grows as \(\ell(t)\sim t\), which corresponds to the bulk viscous hydrodynamic growth. This result is consistent with the structure factor, which shows a three-dimensional Porod tail of \(k^{-4}\).
For the wetting species, the growth law is found to be \(t^{1/2}\), which resembles the domain growth of liquids on a two-dimensional surface. This can be understood as follows. The wetting particles interact with the pore wall and form layers on its inner surface that enclose the non-wetting particles. Therefore, this structure is analogous to a curved two-dimensional surface. It is well known that a binary liquid phase separates with a growth law exponent of \(1/2\) on a two-dimensional plane. Hence the domain growth of the wetting particles can be explained analogously. This is further endorsed by the structure factor in Fig. 9, which shows a Porod tail behavior of \(k^{-3}\).
## IV Conclusion
In summary, we have studied the surface directed spinodal decomposition of a segregating binary liquid mixture confined inside a cylindrical pore using comprehensive molecular dynamics simulations. One of the species of the liquid adheres to the pore surface, whereas the other remains inert. A wide range of wetting interactions is considered, encompassing both partial and full wetting. For the partial wetting case, the domain structure resembles the no-wetting scenario. After the initial domain growth, phase separation is halted via the formation of plug-like structures. The growth exponent of the domain is estimated to be 2/3, which suggests a one-dimensional growth dynamics. This is further confirmed from the Porod law tail of the structure factor.
The scenario changes completely as the wetting interaction is increased beyond a critical value (\(\epsilon_{w}>0.5\)). The plug-like structure breaks down and cylindrical domains emerge for the full wetting case. Hence, a complete phase segregation is observed when the wetting substance migrates toward the pore surface and creates layers that encompass the non-wetting species located around the axis of the cylinder. The wetting substance is observed to adhere to a two-dimensional domain growth pattern, characterized by the growth exponent \(\alpha=1/2\) in the viscous hydrodynamic regime. This is supported by the Porod tail of the structure factor. On the other hand, the non-wetting species is found to experience linear domain growth over time. This implies that the non-wetting species behaves similarly to a three-dimensional bulk system. This behavior was additionally affirmed through an examination of the tail section of the structure factor. Our work provides a comprehensive understanding of the kinetics of phase separation in confined liquids under different wetting conditions. It will be interesting to extend this work to confinements with complex topology, e.g., random porous media [42].
_Acknowledgement.--_ B. Sen Gupta acknowledges Science and Engineering Research Board (SERB), Department of Science and Technology (DST), Government of India (no. CRG/2022/009343) for financial support. Daniya Davis acknowledges VIT for doctoral fellowship.
|
2310.00330 | A DSP shared is a DSP earned: HLS Task-Level Multi-Pumping for
High-Performance Low-Resource Designs | High-level synthesis (HLS) enhances digital hardware design productivity
through a high abstraction level. Even if the HLS abstraction prevents
fine-grained manual register-transfer level (RTL) optimizations, it also
enables automatable optimizations that would be unfeasible or hard to automate
at RTL. Specifically, we propose a task-level multi-pumping methodology to
reduce resource utilization, particularly digital signal processors (DSPs),
while preserving the throughput of HLS kernels modeled as dataflow graphs
(DFGs) targeting field-programmable gate arrays. The methodology exploits the
HLS resource sharing to automatically insert the logic for reusing the same
functional unit for different operations. In addition, it relies on multi-clock
DFGs to run the multi-pumped tasks at higher frequencies. The methodology
scales the pipeline initiation interval (II) and the clock frequency
constraints of resource-intensive tasks by a multi-pumping factor (M). The
looser II allows sharing the same resource among M different operations, while
the tighter clock frequency preserves the throughput. We verified that our
methodology opens a new Pareto front in the throughput and resource space by
applying it to open-source HLS designs using state-of-the-art commercial HLS
and implementation tools by Xilinx. The multi-pumped designs require up to 40%
fewer DSP resources at the same throughput as the original designs optimized
for performance (i.e., running at the maximum clock frequency) and achieve up
to 50% better throughput using the same DSPs as the original designs optimized
for resources with a single clock. | Giovanni Brignone, Mihai T. Lazarescu, Luciano Lavagno | 2023-09-30T10:28:47Z | http://arxiv.org/abs/2310.00330v1 | # A DSP shared is a DSP earned:
###### Abstract
High-level synthesis (HLS) enhances digital hardware design productivity through a high abstraction level. Even if the HLS abstraction prevents fine-grained manual register-transfer level (RTL) optimizations, it also enables automatable optimizations that would be unfeasible or hard to automate at RTL. Specifically, we propose a task-level multi-pumping methodology to reduce resource utilization, particularly digital signal processors (DSPs), while preserving the throughput of HLS kernels modeled as dataflow graphs (DFGs) targeting field-programmable gate arrays. The methodology exploits the HLS resource sharing to automatically insert the logic for reusing the same functional unit for different operations. In addition, it relies on multi-clock DFGs to run the multi-pumped tasks at higher frequencies. The methodology scales the pipeline initiation interval (II) and the clock frequency constraints of resource-intensive tasks by a multi-pumping factor (\(M\)). The looser II allows sharing the same resource among \(M\) different operations, while the tighter clock frequency preserves the throughput. We verified that our methodology opens a new Pareto front in the throughput and resource space by applying it to open-source HLS designs using state-of-the-art commercial HLS and implementation tools by Xilinx. The multi-pumped designs require up to 40% fewer DSP resources at the same throughput as the original designs optimized for performance (i.e., running at the maximum clock frequency) and achieve up to 50% better throughput using the same DSPs as the original designs optimized for resources with a single clock.
Dataflow architectures, FPGA, high-level synthesis, multi-pumping, resource sharing
## I Introduction
High-level synthesis (HLS) raises the abstraction level of electronic design automation tools to improve the digital hardware designer's productivity. The high abstraction precludes some low-level manual optimizations, making the quality of results (QoR) of HLS circuits inferior to those manually optimized at the register-transfer level (RTL), especially for the area and maximum clock frequency [1]. On the other hand, we deem the HLS description introduces new optimization opportunities at a high level.
We focus on HLS designs modeled as dataflow graphs (DFGs) (e.g., with _dataflow_ in Xilinx Vivado/Vitis HLS [2], _hierarchy_ in Siemens Catapult HLS [3], or _task functions_ in Intel HLS compiler [4]). Modeling HLS designs as DFGs proved its effectiveness both in industrial [5, 6] and academic [7, 8] projects.
A DFG is a set of parallel computational tasks (C/C++ functions in HLS) communicating asynchronously through first-in-first-out (FIFO) queues. HLS tools typically implement DFGs as single-clock dataflow graphs (SCDFGs), where all the tasks share the same clock signal. Many modern HLS tools do not support multi-clock designs [2, 4]. Nevertheless, we can generalize SCDFGs to multi-clock dataflow graphs (MCDFGs) by assigning each task to a dedicated clock domain. The generalization enhances the tasks' flexibility and maximum frequency, limited only by the critical timing path local to the task rather than the global one. Clock architectures of modern field-programmable gate array (FPGA) system-on-chips (SoCs) seamlessly support multiple clocks, and the area overhead for safe clock domain crossing (CDC) is negligible since the tasks already communicate through FIFOs, which can be configured with independent read and write clocks with comparable resource utilization [9].
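For readers unfamiliar with the coding style, a minimal HLS-style sketch of such a DFG is shown below (illustrative names and sizes, not taken from the cited designs): two tasks run in parallel under the dataflow directive and communicate through a FIFO stream.

```cpp
#include "hls_stream.h"

// Producer task: writes processed samples into the FIFO.
static void producer(const int in[64], hls::stream<int>& fifo) {
    for (int i = 0; i < 64; i++)
        fifo.write(in[i] * 2);
}

// Consumer task: reads from the FIFO and produces the output.
static void consumer(hls::stream<int>& fifo, int out[64]) {
    for (int i = 0; i < 64; i++)
        out[i] = fifo.read() + 1;
}

void top(const int in[64], int out[64]) {
#pragma HLS dataflow  // tasks become parallel modules linked by a FIFO
    hls::stream<int> fifo("fifo");
    producer(in, fifo);
    consumer(fifo, out);
}
```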
Multiple clock domains allow optimizations like multi-pumping, which reduces the area while preserving the throughput by reusing \(M\) times a resource, usually a digital signal processor (DSP) unit in the FPGA context, clocked at a frequency \(M\) times larger than the rest of the system. Designers typically apply the technique at RTL by manually inserting the custom logic to share the resource and safely perform CDC.
In this work, we achieve multi-pumping at the task level by tuning only the high-level parameters of the tasks, in particular the pipeline initiation interval (II), i.e., the clock cycles between the start of successive pipeline executions, and the clock constraint at the task granularity, taking advantage of the MCDFG. The HLS resource sharing algorithm automatically builds the logic for sharing the resource within a dataflow task. At the same time, the inter-task FIFOs allow safe CDC. We focused on DSPs, which are critical in compute-intensive kernels and can run at high frequencies. However, the technique can multi-pump any shareable resource, including entire sub-functions.
For example, consider a 2D Convolution HLS kernel by Xilinx [10], implemented as an SCDFG, as shown in Fig. 1a,
where the rectangular nodes and the arrows represent the tasks and the FIFOs, respectively. At each iteration, the Filter2D task processes a convolution window of up to \(15\times 15\) elements, which requires computing \(225\) multiply and accumulate (MAC) operations bound to DSPs. Thus, an II of \(1\,\mathrm{cycle}\) requires \(225\) DSPs. On the other hand, scaling the II to \(2\,\mathrm{cycles}\) implies that a new pipeline iteration starts every two clock cycles. Therefore, the pipeline has two cycles to compute the 225 operations. Hence, thanks to resource sharing, the HLS binding allocates only \(\lceil 225/2\rceil=113\) DSPs, each of which computes two MACs. Assume that we target a throughput of \(250\,\mathrm{M}\mathrm{S}\mathrm{a}\mathrm{/}\mathrm{s}\). With the state-of-the-art SCDFG flow (Fig. 1b), we set the clock frequency of the whole DFG, including the Filter2D task that allocates 225 DSPs at \(250\,\mathrm{M}\mathrm{H}\mathrm{z}\). On the other hand, with our multi-pumping approach (Fig. 1c), we optimize the Filter2D task by scaling its II to \(2\,\mathrm{cycles}\), to save half of the DSPs, and its clock frequency to \(500\,\mathrm{M}\mathrm{H}\mathrm{z}\), to preserve the throughput.
This paper proposes an area-minimization methodology that preserves the throughput via task-level multi-pumping for FPGA HLS designs described as DFGs. Its effectiveness is validated on open-source designs using the workflow shown in Fig. 2, which generates an optimized multi-pumped intellectual property (IP) block from C/C++ source code using state-of-the-art Xilinx commercial tools [11].
To the best of our knowledge, this is the first work that combines multiple clock domains with resource sharing in HLS of DFGs for task-level multi-pumping. The empirical results show that a new Pareto front opens in the power, performance, and area (PPA) space, with circuits that use up to \(60\,\%\) fewer resources at maximum throughput or achieve up to \(50\,\%\) higher throughput with the same resources.
## II Related work
Our work is mainly related to QoR improvement of HLS designs by tuning the HLS directives (i.e., the instructions for the HLS compiler to control hardware optimizations such as loop pipelining), focusing on multi-clock designs.
Several works [12, 13, 14, 15, 16] optimize for performance the HLS directives applied to plain software code not intended for HLS via design-space exploration (DSE). However, the goals of their works differ from ours since we optimize for resources while preserving the throughput of source code already optimized for HLS. In addition, our methodology avoids time-consuming DSEs and analytically computes the multi-pumping factor and, consequently, the corresponding II and clock frequency constraints. Finally, they all consider only single clock designs, except for Liang _et al._[16] (discussed further in Section II-A).
HLS design optimizations based on multiple clock domains work at the _operation level_, assigning domains at the low-level resource (e.g., adder or multiplier) granularity, typically during scheduling [17, 18, 19], or at the _task level_, assigning domains at function granularity (i.e., MCDFGs) [16, 20].
### _Operation-level multi-clock in high-level synthesis_
Lhairech-Lebreton _et al._[17] use multiple clock domains in HLS to reduce power consumption while preserving the throughput by halving the operating frequency of two-cycle operations. We instead focus on area and performance optimizations because power is only a secondary quality metric for FPGA designs after performance and area.
Canis _et al._[18] and Ronak _et al._[19] design double-pumped DSP modules and use them in HLS with custom resource-sharing algorithms. Theoretically, Xilinx Vitis HLS supports double-pumped MAC operations through user-callable functions from the dsp_builtins library, but it is undocumented and faulty [21]. Our approach produces similar results when double-pumping a task. However, our task-level solution does not require custom modules, changes to the HLS sharing algorithm, or changes to the source code. In addition, it can select multi-pumping factors greater than two, resulting in larger resource savings.
### _Task-level multi-clock in high-level synthesis_
Ragheb _et al._[20] focus on extending the LegUp HLS tool to support MCDFGs synthesis but leave the selection of the clock frequencies to a suboptimal, time-consuming profiling-based approach. Our work focuses instead on a general methodology for exploiting the multiple clock domains. The workflow we define for building MCDFGs, based on state-of-the-art Xilinx tools, is just a means to apply our methodology.
Liang _et al._[16] propose a DSE methodology for maximizing the throughput under area constraints for HLS of MCDFG designs. They iteratively push the HLS loop directives of the bottleneck tasks toward higher performance. If a task is still a bottleneck after maximally pushing the directives (e.g., when the pipeline II constraint is \(1\,\mathrm{cycle}\)), they relax the directives of every task, increase the clock frequency of the bottleneck task, and restart the procedure. The goal of our work is different since we minimize the area while preserving the throughput. The optimization approaches differ, too, since we optimize all the resource-intensive tasks independently of whether they are bottlenecks, and we never push the II constraints, which the
Fig. 1: Task-level multi-pumping saves resources at equal throughput for HLS of dataflow graphs (DFGs). The Filter2D task from a 2D Convolution kernel [10] (a) is double-pumped (c) by doubling its clock frequency and II to save half of the multipliers of the single-clock solution (b).
HLS compiler may fail to meet (e.g., due to data dependencies).
## III Background
Given the DFG throughput model defined in Section III-A, our multi-pumping methodology exploits the resource sharing executed by the HLS binding step to build the sharing logic. The relaxed timing mode for the HLS scheduling step ensures that the II of the pipelines is independent of the target clock frequency, as explained in Section III-B.
### _Dataflow graph_
A DFG \(G(V,E)\) is a set of tasks \(v\in V\) running in parallel and communicating asynchronously through FIFO channels \(e\in E\).
In HLS, each task is described as a C/C++ function whose core computational part typically consists of a pipelined loop. Given a task \(v_{i}\) clocked at frequency \(f_{i}\) and whose core loop is scheduled with initiation interval \(\mathit{II}_{i}\), an approximation of its throughput is
\[\Phi_{i}\coloneqq\frac{f_{i}}{\mathit{II}_{i}}. \tag{1}\]
The maximum external dynamic random access memory (DRAM) bandwidth can also limit throughput. However, this is out of the scope of our methodology since it does not change the overall throughput and, consequently, the DRAM bandwidth requirements.
The overall DFG throughput matches that of the _bottleneck task_ (i.e., the task with the lowest throughput)
\[\Phi_{G}\coloneqq\min_{v_{i}\in V}\Phi_{i}. \tag{2}\]
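As a concrete illustration of (1) and (2), the following minimal sketch (with hypothetical task parameters echoing the Filter2D example of Section I) evaluates the model; the struct and values are ours, not part of any tool API:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Throughput model of a dataflow task, Eq. (1): Phi_i = f_i / II_i.
struct Task {
    double f_hz;  // clock frequency of the task [Hz]
    double ii;    // pipeline initiation interval [cycles]
    double throughput() const { return f_hz / ii; }
};

// Overall DFG throughput, Eq. (2): the minimum over all tasks.
double dfg_throughput(const std::vector<Task>& tasks) {
    double phi = tasks.front().throughput();
    for (const Task& t : tasks) phi = std::min(phi, t.throughput());
    return phi;
}

int main() {
    // Hypothetical DFG: Filter2D double-pumped (500 MHz, II = 2)
    // next to two I/O tasks at the 250 MHz base clock with II = 1.
    std::vector<Task> dfg = {{250e6, 1}, {500e6, 2}, {250e6, 1}};
    std::printf("DFG throughput: %.0f MSa/s\n",
                dfg_throughput(dfg) / 1e6);  // prints 250
    return 0;
}
```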
According to (1), the high-level knobs for tuning the throughput of task \(v_{i}\) are its clock frequency (\(f_{i}\)) and initiation interval (\(\mathit{II}_{i}\)). All tasks share the same clock in single-clock dataflow graphs (SCDFGs). Thus, \(f_{i}\) is the same for all tasks and is bounded by the global (i.e., among all tasks) critical path. Therefore, only the \(\mathit{II}_{i}\) can be tuned independently for each task. In an MCDFG, on the other hand, the clock frequency can be set individually for each task. This additional degree of freedom allows for higher flexibility and tasks frequencies than SCDFG since the clock frequency of a task is limited only by its local critical path and not the one of the whole DFG.
### _High-level synthesis_
Our multi-pumping technique relies on two key concepts of the HLS tools: (1) the minimum pipeline II is independent of the clock frequency constraint when the scheduler works in _relaxed timing_ mode (i.e., the clock frequency is subordinate to meet the II constraints), and (2) the level of _resource sharing_ is directly dependent on the II. Both are implemented by the HLS back-end that generates the hardware description. The timing model is used during _scheduling_ and the resource sharing is done during _binding_[2].
#### III-B1 Scheduling
Scheduling assigns operations to specific clock cycles; thus, it also implements loop and function _pipelining_. Designers can constrain the II of the pipelines, which is lower bound by the resource constraints and the data dependencies. Consider the data dependence graph (DDG) modeling the data dependencies in a kernel. Given a cycle \(\theta\) in the DDG, we define \(\mathit{delay}_{\theta}\) as the sum of the delays of the operations along \(\theta\) and \(\mathit{dist}_{\theta}\) as the total loop-carried dependence distance along \(\theta\). The lower bound of the II is
\[\mathit{II}^{\text{min}}\coloneqq\max_{\theta\in\mathrm{DDG}}\left(\frac{ \mathit{delay}_{\theta}}{\mathit{dist}_{\theta}}\right). \tag{3}\]
The associated cycle is called _critical_[22].
For example, consider the following loop to be scheduled:
```cpp
for (int i = 0; i < N; i++)
    a = a + b;
```
The read-after-write dependency on a, produced at the \(i\)-th iteration and consumed at the \(i+1\)-th iteration, introduces a cycle \(\theta\) in the DDG. \(\mathit{delay}_{\theta}\) is the latency of the adder computing a+b. \(\mathit{dist}_{\theta}\) is 1 since a is consumed at the iteration after it is produced. Therefore, (3) implies that the minimum II for this loop equals the latency of the adder.
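As a concrete (hypothetical) instance: if the adder in this example has a latency of four cycles, then (3) yields

\[\mathit{II}^{\text{min}}=\frac{\mathit{delay}_{\theta}}{\mathit{dist}_{\theta}}=\frac{4\ \text{cycles}}{1}=4\ \text{cycles},\]

so the pipeline can start a new iteration at most every four cycles, regardless of the clock constraint.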
The clock constraints determine how many operations fit within a clock cycle, thus affecting the depth of the pipelines. The pipeline depth determines the latencies of its operations, impacting the critical cycle and, in turn, the II lower bound. However, the II constraints take precedence over clock constraints in _relaxed timing_ mode, yielding lower II pipelines in exchange for potential HLS timing violations. These are usually acceptable at HLS time since HLS timing estimations
Fig. 2: Given the C/C++ source code of a dataflow graph (DFG) application and its base clock frequency, the proposed workflow builds the optimized multi-pumped IP by (a) analyzing the DFG (_DFG charact_.), (b) optimizing the multi-pumping factors (\(\mathcal{M}\)_opt_.), and (c) synthesizing the multi-pumped IP (_MCDFG synth._).
may be overly pessimistic [1], and downstream implementation steps may resolve them.
#### III-B2 Binding
Binding assigns each operation to a compatible functional unit, depending on resource and performance (e.g., clock frequency, II) constraints.
_Resource sharing_ is a crucial binding optimization that maps operations of the same type to the same functional unit, scheduled on different clock cycles or under mutually exclusive conditions (e.g., on different if-then-else branches). The II constraints directly affect the degree of resource sharing. In particular, if a pipeline scheduled with an II of \(\text{II}_{i}\,\text{cycles}\) computes \(N_{i}^{\text{OP}}\) operations (OPs) of the same kind at each iteration, the binding step allocates \(N_{i}^{\text{FU}}\) functional units (FUs), with
\[N_{i}^{\text{FU}}\coloneqq\left\lceil\frac{N_{i}^{\text{OP}}}{\text{II}_{i}} \right\rceil. \tag{4}\]
Note that the operations can be either computations or memory accesses. The functional units associated with the memory operations are ports proportional to the partitioning factors (i.e., the number of submemories into which a memory resource is divided to increase its parallelism). Therefore, larger II values result in fewer functional units and smaller memory partitioning factors.
Consider the Filter2D task from the 2D Convolution kernel introduced in Section I, whose source code is in Fig. 3(a). Assuming a filter of size \(2\times 2\) (i.e., \(\text{FILTER\_V\_SIZE}=\text{FILTER\_H\_SIZE}=2\)), with the schedule with an II of \(1\,\text{cycle}\) (shown in Fig. 3(b), where the nodes represent the operations, and the edges the data dependencies), at the steady-state, four multiplications are computed in parallel on different data within the same clock cycle (highlighted by the red rectangle), thus requiring four DSP-mapped multipliers. With an II of \(2\,\text{cycles}\) instead (see Fig. 3(c)), only two multiplications are computed per clock cycle. Therefore, the binding step allocates only two multipliers and shares these among two multiplications.
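For illustration, a minimal HLS-style sketch of such a convolution core follows; it is not the actual Xilinx source, and the identifiers and sizes are placeholders. Changing the II constraint in the pipeline pragma is all that is needed to trade DSPs for initiation interval, cf. (4):

```cpp
// Minimal sketch (not the actual Xilinx source): the core
// multiply-accumulate of a 2D convolution. With II=1, all
// FILTER_V_SIZE*FILTER_H_SIZE multiplications execute every cycle;
// with II=M, the HLS binding shares each DSP among M of them.
#define FILTER_V_SIZE 2
#define FILTER_H_SIZE 2

void filter2d_core(const short window[FILTER_V_SIZE][FILTER_H_SIZE],
                   const short coeffs[FILTER_V_SIZE][FILTER_H_SIZE],
                   int *out) {
#pragma HLS pipeline II=2  // II=2: half the multipliers of II=1
    int acc = 0;
    for (int i = 0; i < FILTER_V_SIZE; i++) {
        for (int j = 0; j < FILTER_H_SIZE; j++) {
#pragma HLS unroll  // expose all MACs to the scheduler
            acc += window[i][j] * coeffs[i][j];
        }
    }
    *out = acc;
}
```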
## IV Task-level multi-pumping
We multi-pump the resources of task \(v_{i}\) by simultaneously scaling by a multi-pumping factor \(M_{i}\) the II and the clock frequency of \(v_{i}\).
The underlying principles of our approach are:
* (2) allows tuning each task independently without reducing the overall DFG throughput, as long as the throughput of the task does not get lower than the bottleneck task one.
* As discussed in Section III-B2, scaling the II of a pipelined loop by a factor \(M_{i}\) allows reusing the same functional unit for \(M_{i}\) operations in different clock cycles.
* (1) implies that the task throughput is unchanged if we scale by \(M_{i}\) the task clock frequency together with the II.
Assume that \(v_{i}\) meets the timing constraints up to \(f_{i}^{\text{max}}\) and computes \(N_{i}^{\text{OP}}\) operations mapped to DSPs. Moreover, the non-multi-pumped tasks are clocked at \(f_{\text{base}}\) (i.e., the clock constraint given by the designer). The maximum multi-pumping factor for task \(v_{i}\) is
\[M_{i}^{\text{max}}\coloneqq\min\left(\left\lfloor\frac{f_{i}^{\text{max}}}{f_ {\text{base}}}\right\rfloor,N_{i}^{\text{OP}}\right). \tag{5}\]
It is worth noting that our task-level multi-pumping _changes only the HLS directives while using the HLS tool as a black box and without requiring manual source code restructuring_. The automation of this step will be the subject of future work.
Fig. 3: The pipeline initiation interval (II) directly affects the resource sharing. For example, in the Filter2D task (a), the pipeline with \(\text{II}=1\,\text{cycle}\) (b) computes four multiplications per clock cycle in steady state, while the one with \(\text{II}=2\,\text{cycles}\) (c) only two. Thus, the latter datapath allocates half of the multipliers.
## V Multi-pumping workflow
To validate our task-level multi-pumping, we define a workflow from the C/C++ source code to an optimized MCDFG IP block compatible with Xilinx tools [11], as shown in Fig. 2. The main steps of the workflow are (A) _DFG characterization_ to extract the maximum clock frequency and the number of DSP operations of each task, needed by the later steps, (B) _multi-pumping factor optimization_ to select the multi-pumping factor of each task, and (C) _MCDFG synthesis_ to generate the multi-pumped IP.
### _Dataflow graph characterization_
For each task in the DFG \(G(V,E)\), we collect the number of DSP operations (\(\mathcal{N}=\{N_{i}^{\text{OP}},\forall v_{i}\in V\}\)) from the reports of the standard SCDFG HLS. We collect the maximum frequency meeting the timing constraints (\(\mathcal{F}=\{f_{i}^{\text{max}},\forall v_{i}\in V\}\)) from the post-implementation reports of the SCDFG. We execute the implementation with a tight clock constraint (e.g., \(500\,\mathrm{MHz}\)) and at the lowest pipeline II, which is the worst case for the critical cycle (defined in Section III-B). Indeed, when multi-pumping increases the II, it relaxes the critical cycle, allowing deeper pipelines and shorter critical paths, thus higher clock frequencies.
We do not extract \(\mathcal{F}\) from the earlier-available HLS clock frequency estimations since they are unreliable [1]. We run the SCDFG implementation only once, so the overhead is usually acceptable. However, if a fast flow is required (e.g., in early design phases), we can run only the logic synthesis step without placement and routing. The timing estimations at the logic synthesis step are more accurate than the one of the HLS compiler since they have access to lower-level information. When the estimated maximum frequency is less than the actual one, we miss chances of saving resources because of lower multi-pumping factors, as per (5). On the contrary, if the frequency is overestimated, the timing fails during implementation.
### _Multi-pumping factor optimization_
We select the multi-pumping factors (\(\mathcal{M}=\{M_{i},\forall v_{i}\in V\}\)) that minimize the DSP utilization. If \(v_{i}\) contains operations mapped to DSP, we set \(M_{i}=M_{i}^{\text{max}}\), as defined by (5). Otherwise, we do not apply multi-pumping to \(v_{i}\).
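The selection step itself is simple enough to state as code; the following sketch (hypothetical data types, not our tool implementation) applies (5) task by task:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Per-task characterization data (Section V-A), hypothetical layout.
struct TaskInfo {
    double f_max_hz;  // max frequency meeting timing, from implementation
    int    n_op;      // DSP-mapped operations per pipeline iteration
};

// Multi-pumping factor selection (Section V-B), following Eq. (5):
// M_i = min(floor(f_i_max / f_base), N_i_OP) if the task uses DSPs,
// M_i = 1 (no multi-pumping) otherwise.
std::vector<int> select_factors(const std::vector<TaskInfo>& tasks,
                                double f_base_hz) {
    std::vector<int> m;
    for (const TaskInfo& t : tasks) {
        if (t.n_op == 0) { m.push_back(1); continue; }
        int m_max = static_cast<int>(std::floor(t.f_max_hz / f_base_hz));
        m.push_back(std::max(1, std::min(m_max, t.n_op)));
    }
    return m;
}
```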
### _Multi-clock dataflow graph synthesis_
Xilinx Vitis HLS cannot synthesize MCDFGs directly since it supports only one clock domain per design. However, the dataflow directive generates several independent modules, one for each task, and interconnects them in a top-level module. Thus, we run a _split_ HLS, synthesizing each task separately (i.e., setting it as the top module) with its clock constraint.
The Xilinx HLS binding algorithm guarantees optimal resource sharing if guided by resource constraints only. Therefore, we constrain the number of DSPs according to (4). For instance, if we multi-pump with a factor \(M_{i}\) a task \(v_{i}\) that originally uses \(N_{i}^{\text{DSP}}\), we constrain its DSPs to \(\left\lceil N_{i}^{\text{DSP}}/M_{i}\right\rceil\).
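Concretely, a double-pumped MAC task can be constrained as in the hedged sketch below; note that the exact allocation pragma syntax is an assumption here and differs across Vivado/Vitis HLS versions:

```cpp
// Hedged sketch (not the actual tool flow): capping the multiplier
// instances of a double-pumped task. With N_DSP = 225 and M = 2,
// the limit is ceil(225 / 2) = 113, so the HLS binding shares each
// DSP between two MACs.
void mac_task(const short a[225], const short b[225], long long *out) {
#pragma HLS pipeline II=2
#pragma HLS allocation operation instances=mul limit=113
    long long acc = 0;
    for (int i = 0; i < 225; i++) {
#pragma HLS unroll
        acc += (long long)(a[i]) * b[i];
    }
    *out = acc;
}
```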
In principle, we could also scale down the memory partitioning factors by \(M_{i}\) to reduce on-chip memory resource usage, namely block random access memories (BRAMs) and registers. However, we cannot apply this optimization to the test cases considered in Section VI with Xilinx HLS. Indeed, the tool ignores the coarser partitioning directives and automatically partitions the memories, presumably to minimize the pipelines II, regardless of the provided directives. We plan to revisit the issue as a more recent version of the HLS tool is available.
Finally, we interconnect the tasks synthesized separately using the Vivado intellectual property integrator (IPI).
The Xilinx HLS tools use FIFOs as inter-task communication channels when data are produced and consumed in the same order; otherwise, ping-pong buffers. Our method could support both, but since the Xilinx IPI flow does not provide a configurable multi-clock ping-pong buffer, we currently only support FIFO channels using the Xilinx FIFO generator [23]. FIFOs are configured with independent clocks for read and write ports when interconnecting tasks assigned to different clock domains.
## VI Evaluation
We verify the applicability and the advantages in the PPA space of our task-level multi-pumping workflow, described in Section V, by applying it to open-source HLS designs.
Our experiments target the embedded platform Zynq Ultra-Scale+ FPGA SoC hosted by the Avnet Ultra96v1 board [24]. We use Vitis HLS 2022.2 [2] and Vivado HLS 2019.2 [25] for the synthesis and Vivado 2022.2 [11] for the implementation.
We collect the resource utilization from the post-implementation reports and the power estimations from the post-implementation static power analysis. We verify that the throughput (i.e., the number of output samples produced in the unit of time) matches the theoretical one by measuring the time for \(10\,000\) executions in auto-restart mode [2] (to make the time overhead for control negligible) of the kernels in hardware, using the PYNQ application programming interfaces [26].
We apply our flow to some open-source HLS designs, including (a) the _2D Convolution_ from the Vitis Tutorials [10] already introduced in Section I, (b) the _Optical Flow_ from the Rosetta suite [8], and (c) the _virtual molecule screening (VMS)_[27], a drug discovery accelerator.
For each design, we compare the multi-pumped implementations (_M-Pump_) with the original ones (_Base_) and with the best SCDFG implementations without source code changes (_S-Pump_). For the _S-Pump_ implementations, we apply our flow without the generalization to MCDFG. Thus, if task \(v_{i}\) is "single-pumped" by a factor \(S_{i}\), we scale by \(S_{i}\) its II, as with our original workflow, and the clock frequency of the whole kernel. The maximum "single-pumping" factor for each task is lower than the corresponding maximum multi-pumping factor (defined by (5)) since it is at most
\[S_{i}^{\text{max}}\coloneqq\left\lfloor\frac{\min_{\forall v_{i}\in V}f_{i}^{ \text{max}}}{f_{\text{base}}}\right\rfloor\,. \tag{6}\]
Figure 4 shows the tradeoffs between the DSP utilization and the throughput obtained by varying the base clock frequency within the range allowed by the critical path of the designs. The dashed lines represent computation throughputs that exceed the memory throughput. Thus, the effective throughputs are, in practice, clipped to the maximum non-dashed value, corresponding to the maximum memory throughput.
The number of DSPs used by _Base_ designs is independent of the clock frequency. The plots of the _Pump_ designs are characterized by a step shape, whose discontinuities correspond to the IIs, which only assume integer values. The _Pump_ solutions provide different tradeoffs in the throughput versus DSP space, thanks to the tuning of the pipelines' IIs. The additional degree of freedom of the _M-Pump_ implementations (i.e., the task clock frequency) makes them always Pareto optimal.
Both _M-Pump_ and _S-Pump_ designs degenerate to _Base_ designs (i.e., all the pumping factors equal to one and no resource savings) at the highest throughputs since they need the lowest II to reach the best performance. Note that the _M-Pump_ designs consistently degenerate to _Base_ at throughputs higher than _S-Pump_ since the multiple clock domains let the multi-pumped tasks run at the maximum frequency their local critical path allows. Therefore, the _M-Pump_ designs achieve up to \(50\,\%\) higher throughput than _S-Pump_ with the same DSPs in the 2D Convolution test case. Moreover, with the Optical Flow benchmark, the _M-Pump_ reaches the maximum effective throughput using \(40\,\%\) fewer DSPs than the _Base_.
Table I reports the post-implementation PPA data for the design points marked with the dots in Fig. 4. We select those points since their throughputs are the upper extremes of the last steps of _M-Pump_ and _S-Pump_ within the memory bound.
Comparing the _M-Pump_ designs with the _Base_ ones, the consistent DSP saving (\(54\,\%\) on average) implies power and flip-flops (FFs) overheads. The additional power (\(24\,\%\) on average) is because the multi-pumped tasks are characterized by greater switching activity due to higher resource reuse and clock frequencies. The additional FFs (\(33\,\%\) on average) are inserted by the tool in the multi-pumped tasks to build deeper pipelines and reach higher clock frequencies.
As expected [9], the PPA overhead for CDC in _M-Pump_ is negligible. The overhead for routing multiple clocks is also marginal, as each additional clock domain allocates only \(1.4\,\%\) of the available clock routing resources.
In general, the _M-Pump_ solutions Pareto dominate the _S-Pump_ ones. In fact, at the same throughput, they allocate fewer DSPs, similar look-up tables (LUTs) and FFs, and consume less power. This is because the _M-Pump_ designs take advantage of the multiple clock domains to increase the clock frequency of the multi-pumped tasks only, thus reaching higher multi-pumping factors and avoiding power and FF overheads in the non-multi-pumped tasks. The VMS test case is the only exception because only a small fraction of its logic runs at the base clock frequency, while the rest is double or triple-pumped; thus, the lower-frequency tasks are not enough to balance the power and FF overhead for the multi-pumped tasks.
## VII Conclusion
We propose a task-level multi-pumping technique for saving hardware resources while maintaining the original throughput for FPGA HLS designs described as DFGs.
Given a state-of-the-art single-clock DFG, our approach first generalizes it to a multi-clock DFG. Secondly, it tunes the tasks' high-level parameters (i.e., clock frequency and pipeline II) to multi-pump their functional units. The overhead for the generalization is negligible, thanks to the DFG structure, which consists of independent blocks communicating via FIFOs, allowing for safe CDC, and modern FPGA clock architectures, which seamlessly handle multiple clock domains even if current HLS tools do not exploit them.
Fig. 4: Digital signal processors allocated for a given throughput. The _M-Pump_ designs are optimized using the proposed task-level multi-pumping technique. The _M-Pump_ designs are Pareto-optimal compared to the _Base_ designs, whose DSP utilization is constant since they are optimized by tuning the clock frequency only, and to the _S-Pump_ designs, which are optimized for area by changing both the II and the global clock frequency of the tasks. The dashed lines represent the theoretical throughputs achievable with the allocated DSPs, which are unreachable in practice due to memory bandwidth limitations. The dots show the design points implemented in hardware.
The experimental results reported in Section VI prove that our method opens a new Pareto front in the performance versus DSPs space, saving up to \(40\,\%\) of resources at maximum throughput. Moreover, our method does not require any manual architecture changes from the designer, since it acts only on the high-level parameters of the tasks and uses the HLS binding algorithm to automatically generate the resource sharing logic. Finally, the generalization to multi-clock DFGs simply requires replacing single-clock with multi-clock FIFOs. Therefore, our technique is well suited for a fully automated HLS optimization pass, which will be the subject of future work.
## Acknowledgment
This work was partially supported by the Key Digital Technologies Joint Undertaking under the REBECCA Project with grant agreement number 101097224, receiving support from the European Union, Greece, Germany, Netherlands, Spain, Italy, Sweden, Turkey, Lithuania, and Switzerland.
|
2309.15138 | Quasilocal Corrections to Bondi's Mass-Loss Formula and Dynamical
Horizons | In this work, a null geometric approach to the Brown-York quasilocal
formalism is used to derive an integral law that describes the rate of change
of mass and/or radiative energy escaping through a dynamical horizon of a
non-stationary spacetime. The result thus obtained shows - in accordance with
previous results from the theory of dynamical horizons of Ashtekar et al. -
that the rate at which energy is transferred from the bulk to the boundary of
spacetime through the dynamical horizon becomes zero at equilibrium, where said
horizon becomes non-expanding and null. Moreover, it reveals previously
unrecognized quasilocal corrections to the Bondi mass-loss formula arising from
the combined variation of bulk and boundary components of the Brown-York
Hamiltonian, given in terms of a bulk-to-boundary inflow term akin to an
expression derived in an earlier paper by the author [#huber2022remark]. For
clarity, this is discussed with reference to the Generalized Vaidya family of
spacetimes, for which derived integral expressions take a particularly simple
form. | Albert Huber | 2023-09-26T15:21:09Z | http://arxiv.org/abs/2309.15138v1 | # Quasilocal Corrections to Bondi's Mass-Loss Formula and Dynamical Horizons
###### Abstract
In this work, a null geometric approach to the Brown-York quasilocal formalism is used to derive an integral law that describes the rate of change of mass and/or radiative energy escaping through a dynamical horizon of a non-stationary spacetime. The result thus obtained shows - in accordance with previous results from the theory of dynamical horizons of Ashtekar et al. - that the rate at which energy is transferred from the bulk to the boundary of spacetime through the dynamical horizon becomes zero at equilibrium, where said horizon becomes non-expanding and null. Moreover, it reveals previously unrecognized quasilocal corrections to the Bondi mass-loss formula arising from the combined variation of bulk and boundary components of the Brown-York Hamiltonian, given in terms of a bulk-to-boundary inflow term akin to an expression derived in an earlier paper by the author [17]. For clarity, this is discussed with reference to the Generalized Vaidya family of spacetimes, for which derived integral expressions take a particularly simple form.
_Key words: quasilocal Hamiltonian, dynamical horizons, Bondi mass-loss formula_
## Introduction
To determine, within the Brown-York quasilocal formalism [10, 13], the change in mass and/or radiant energy escaping through the spatial boundary of a finitely extended gravitating physical system, it is generally necessary, as recently shown in [17], to calculate the time derivative of the total quasilocal gravitational Hamiltonian (bulk plus boundary term) rather than just that of the boundary part. The main reason for this is that the temporal variation of the ADM Hamiltonian, which corresponds to the bulk part of the total expression mentioned above, yields a non-vanishing bulk-to-boundary inflow term that leads to corrections to Einstein's quadrupole formula in the linearized weak-field approximation of general relativity.
This integral term, if different from zero (which is possible only in the non-vacuum case), has been shown to play a role in the quasilocal description of various physical phenomena, such as tidal deformation and heating processes as well as gravitoelectromagnetic effects [17]. Moreover, as has also been shown, its existence entails some remarkable consequences, perhaps the most striking of which is that the corrections it causes lead to a shift in the overall intensity of gravitational radiation emanating from compact gravitational sources such as stars and black holes. This is remarkable not least because the intensity shift in question should in principle prove to be experimentally detectable resp. observable in gravitational wave simulations, thus leading to a physical prediction that can readily be tested with modern methods of gravitational wave astronomy.
The main problem in this respect, however, is that it has not yet been clearly established whether the corrections caused by the mentioned inflow term are smaller or of the same order of magnitude as other integral terms resulting from the variation of the quasilocal Brown-York Hamiltonian. Moreover, with the exception of selected models of linearized Einstein-Hilbert gravity, the precise physical meaning of the corrections in question has remained elusive to this day.
In response to these shortcomings, the present work takes a specific approach to the subject by calculating within a bounded non-stationary spacetime the flux of mass and/or radiant energy through the dynamical horizon of the geometry, as well as its temporal variation. As a basis for these calculations, a null geometric approach to the quasilocal Brown-York formalism is pursued, which is shown to be compatible with the powerful dynamical horizon framework of Ashtekar et al. [6, 7] and Hayward's related trapping horizon approach [16]. To this end, following a previous work on the subject [11], a geometric setting is introduced that involves a spatially and temporally bounded spacetime with inner and outer boundaries, where the inner boundary is given by a dynamical horizon. Regarding this particular geometric setting, the time-flow vector field of spacetime is then chosen to coincide once with the lightlike horizon vector field of the geometry (which is generally non-tangential to said horizon) and once with the same horizon vector field plus a boundary shift vector, and the resulting total Hamiltonian is varied with respect to these same vector fields. Thereby, it is shown that the methods used naturally lead to a null geometric equivalent of the bulk-to-boundary inflow term derived in [17] and thus to a corresponding intensity shift of emitted gravitational radiation.
The latter is concluded from the fact that the resulting quasilocal corrections do not vanish even if the outer boundary of spacetime is shifted to infinity in the large sphere limit. In lieu thereof, as shown in the second section of the paper, a modification of Bondi's celebrated mass-loss formula [9, 18] results in such a case, which shows that radiative contributions at infinity can occur even if the Bondi news function is zero, and thus supposedly the time derivative of the associated mass aspect. It thus appears that, according to the quasilocal Hamiltonian formalism, there are exceptions to the generally accepted rule: _The mass of a system is constant if and only if there is no news_. As it seems, no similar result has been obtained in the literature so far. The quasilocal corrections responsible for this fact are determined explicitly in section two of
the work.
For the standard choice for the lapse function proposed in the dynamical horizon framework, the result thus obtained shows that the temporal variation of the total quasilocal Brown-York Hamiltonian vanishes once the horizon reaches a steady state of equilibrium and becomes an isolated or weakly isolated horizon in the sense of [1, 2, 3, 4, 5]. Thus, in agreement with the common expectation, the discussed model confirms that any matter and/or radiation flux (of the specified type) from the bulk to the boundary of spacetime that crosses a dynamical horizon necessarily subsides completely in the limiting case where the local horizon geometry becomes stationary and settles into a state of equilibrium, as in the case of a black hole.
To eventually assess the magnitude of the integral terms involved and to provide an explicit example of non-vanishing radiative contributions at infinity, the corresponding expressions are calculated in the third and final section of the paper with respect to models of the Generalized Vaidya spacetime family, for which the resulting integral expressions take a particularly simple form in case that the boundary of spacetime is shifted to infinity in the large sphere limit. In doing so, it is shown _i_) that radiation fields may be detected at null infinity even in cases where the Bondi news function is zero, and _ii_) that the resulting quasilocal corrections depend to a large extent on the choice of the time-flow vector field of the geometry. Potential implications of these findings are discussed towards the end of the paper.
## 1 Quasilocal Hamiltonian and Mass-Energy Transfer in Bounded Gravitational Fields
In this first preliminary section, the geometric setting to be considered is introduced, and some of the main results of [17] are recapitulated and generalized to fit this same setting. In particular, the time derivative of the quasilocal Brown-York gravitational Hamiltonian is calculated in a spacetime with interior and exterior boundaries, leading to an integral law describing how the matter and/or radiation content of a spatially and temporally bounded gravitating physical system changes with time. The bulk-to-boundary inflow term mentioned in the introduction is derived in the process, and it is shown what form some of the relevant integral expressions take with respect to the special choice of a lightlike (horizon) time-flow vector field of spacetime.
As a basis for the ensuing calculations, the present section essentially takes up the geometric setting considered in [17]. However, the latter is to be extended to comply with the dynamical horizon framework of Ashtekar et al. [6] in a manner similar to an earlier, slightly related work by Booth and Fairhurst [11]. For this purpose, let a fully dynamical, spatially compact, time orientable spacetime \((\mathcal{M},g)\) with manifold structure \(\mathcal{M}\equiv M\cup\partial M\) be considered, which is foliated by a family of \(t=const.\)-hypersurfaces \(\{\Sigma_{t}\}\). This spacetime may be envisioned as a non-stationary spacetime in a 'box', i.e., a dynamical spacetime
with a cylindrical outer boundary, the latter being later shifted to infinity. In more concrete terms, the boundary \(\partial M\) of said spacetime shall consist of two parts: an exterior part \(\partial M_{ext}\) and an interior part \(\partial M_{int}\) such that \(\partial M\equiv\partial M_{int}\cup\partial M_{ext}\). The exterior part \(\partial M_{ext}\) of the boundary shall be chosen in such a way that \(\partial M_{ext}\equiv\Sigma_{1}\cup\mathcal{B}\cup\Sigma_{2}\) applies, where \(\Sigma_{1}\) and \(\Sigma_{2}\) represent spatial boundary parts, while \(\mathcal{B}\) represents a timelike boundary portion. This timelike portion shall be assumed to be foliated by a collection of two-surfaces \(\{\Omega_{t}\}\) such that \(\mathcal{B}=\{\cup_{t}\Omega_{t}:\,t_{1}\leq t\leq t_{2}\}\). Additionally, it shall be assumed that there exists an interior boundary \(\partial M_{int}\equiv\mathcal{S}_{1}\cup\mathcal{T}\cup\mathcal{S}_{2}\), where \(\mathcal{T}\) is a spacelike hypersurface representing a (canonical) dynamical horizon in the sense of Ashtekar et al. That is to say, \(\mathcal{T}\) is assumed to be a smooth, three-dimensional, spacelike submanifold of spacetime that exhibits a foliation \(\{\mathcal{S}_{t}\}\) by marginally trapped surfaces such that relative to each leaf of the foliation there exist two null normals \(l^{a}\) and \(k^{a}\) and two associated null expansion scalars \(\Theta=q^{ab}\nabla_{a}l_{b}\) and \(\Xi=q^{ab}\nabla_{a}k_{b}\), where \(q^{ab}\) is the inverse of the induced metric \(q_{ab}=g_{ab}+l_{a}k_{b}+k_{a}l_{b}\), one of which vanishes locally and the other of which is strictly negative, i.e. \(\Theta=0\) and \(\Xi<0\) on \(\mathcal{T}\).
Taking these assumptions as a starting point, the results of [17] shall be recapitulated in the following. To this end, the conventions of the mentioned work shall be adopted and adapted to the given geometric setting. To start with, a future-directed time evolution vector field \(t^{a}=Nn^{a}+N^{a}\) shall be considered, which, at the timelike boundary \(\mathcal{B}\), takes the form \(t^{a}=\mathcal{N}v^{a}+\mathcal{N}^{a}\), where \(N\) and \(N^{a}\) are the corresponding lapse function and shift vector field as usual, \(n^{a}\) is the normalized timelike generator leading to the spacelike slicing of
Figure 1: A schematic three-dimensional representation of the spacetime manifold \(M\) along with its boundaries.
\((\mathcal{M},g)\), \(v^{a}\) is some timelike vector field tangent to \(\mathcal{B}\) and orthogonal to \(\Omega_{t}\) and \(\mathcal{N}\) and \(\mathcal{N}^{a}\) are the corresponding boundary lapse function and boundary shift vector field, respectively. Given this vector field and the related conventions, the corresponding three-metric at \(\Sigma_{t}\) reads \(h_{ab}=g_{ab}+n_{a}n_{b}\). To define the induced three-metric \(\gamma_{ab}\) at \(\mathcal{B}\), on the other hand, one may additionally consider a spatial unit vector field \(u^{a}\) which is perpendicular to \(\mathcal{B}\) and thus orthogonal to the temporal unit vector field \(v^{a}\) tangent to \(\mathcal{B}\). With respect to the latter, the three-metric then is \(\gamma_{ab}=g_{ab}-u_{a}u_{b}\). Moreover, considering a further spacelike vector field \(s^{a}\) that is orthogonal to the timelike generator \(n^{a}\) of the spacelike foliation \(\{\Sigma_{t}\}\), but generally non-orthogonal to \(v^{a}\) (in contrast to \(u^{a}\), which is generally non-orthogonal to \(n^{a}\)), one finds that the induced two-metric \(q_{ab}\) at \(\Omega_{t}\) takes the form \(q_{ab}=g_{ab}-u_{a}u_{b}+v_{a}v_{b}=g_{ab}+n_{a}n_{b}-s_{a}s_{b}\). Using the latter relation in combination with the decompositions \(n^{a}=\frac{1}{\sqrt{2}}(l^{a}+k^{a})\) and \(s^{a}=\frac{1}{\sqrt{2}}(l^{a}-k^{a})\) of \(n^{a}\) and \(s^{a}\), where \(l^{a}\) and \(k^{a}\) are null normals reducing locally to those associated with a given leaf \(\mathcal{S}_{t}\) of the foliation \(\{\mathcal{S}_{t}\}\) of the dynamical horizon \(\mathcal{T}\), it then becomes clear that the induced metric at said horizon takes the previously claimed form \(q_{ab}=g_{ab}+l_{a}k_{b}+k_{a}l_{b}\). With respect to this induced metric, the boundary shift vector can be written in the form \(\mathcal{N}^{a}=q^{a}_{\ c}N^{c}\).
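As a quick consistency check, the equivalence of the two forms of \(q_{ab}\) follows directly from the decompositions just given, since

\[n_{a}n_{b}-s_{a}s_{b}=\frac{1}{2}(l_{a}+k_{a})(l_{b}+k_{b})-\frac{1}{2}(l_{a}-k_{a})(l_{b}-k_{b})=l_{a}k_{b}+k_{a}l_{b},\]

so that \(q_{ab}=g_{ab}+n_{a}n_{b}-s_{a}s_{b}=g_{ab}+l_{a}k_{b}+k_{a}l_{b}\), as claimed.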
The consideration of all the foregoing definitions proves to be beneficial for setting up the quasilocal Brown-York-Hamiltonian
\[H=H_{0}+H_{h}, \tag{1}\]
which in the given context consists of two parts \(H_{0}\) and \(H_{h}\), both of which themselves consist of bulk and boundary parts such that
\[H_{0}=H_{0}^{Bulk}+H_{0}^{Boundary};\quad H_{h}=H_{h}^{Bulk}+H_{h}^{Boundary}, \tag{2}\]
where
\[H_{0}^{Bulk}=\int\limits_{\Sigma_{t}}(N\mathcal{H}+\mathcal{H}_{a}N^{a})\,\omega_{h};\quad H_{0}^{Boundary}=\int\limits_{\Omega_{t}}\mathfrak{H}\,\omega_{q}, \tag{3}\]

and \(H_{h}^{Bulk}\) and \(H_{h}^{Boundary}\) are defined in full analogy with respect to the inner boundary, i.e., with respect to \(\mathcal{T}\) and the leaves \(\mathcal{S}_{t}\) of its foliation. Here, \(\mathcal{H}\) and \(\mathcal{H}_{a}\) denote the Hamiltonian and momentum constraints of the theory, \(\omega_{h}\) and \(\omega_{q}\) are the volume forms associated with \(h_{ab}\) and \(q_{ab}\), and \(\mathfrak{H}=\frac{\sqrt{q}}{8\pi}\mathcal{N}\mathfrak{K}\) is the boundary Hamiltonian density, with \(\mathfrak{K}\) denoting the trace of the extrinsic curvature of \(\mathcal{B}\), calculated with respect to \(u^{a}\). Using the decomposition \(s^{a}=\lambda(u^{a}+\eta n^{a})\), given in terms of the parameter \(\eta=u_{a}n^{a}=-s_{a}v^{a}\) measuring the non-orthogonality of \(n^{a}\) and \(u^{a}\) as well as \(v^{a}\) and \(s^{a}\) and a related boost parameter \(\lambda=\frac{1}{\sqrt{1+\eta^{2}}}\), one finds here that \(\mathfrak{H}\) can alternatively be specified in terms of \(n^{a}\) and \(s^{a}\). In particular, the identity
\[\mathcal{N}\mathfrak{K}=Nk-(K_{ab}-Kh_{ab})N^{a}s^{b}-\lambda(\mathcal{ND}) \eta-(\mathcal{ND})v_{a}u^{a} \tag{4}\]
is found to be valid in this context, which will play a role later in determining changes in the matter and/or radiation content of the system at infinity as well as through the dynamical horizon \(\mathcal{T}\). The quantity \(k=q^{ab}k_{ab}=q^{ab}\mathcal{D}_{a}s_{b}\) entering this identity is, of course, the extrinsic curvature of \(\Omega_{t}\), calculated with respect to \(s^{a}\).
As a supplementary remark, it may be noted that \(\mathcal{H}\) and \(\mathcal{H}_{a}\) as well as \(\mathfrak{h}\) and \(\mathfrak{h}_{a}\) could be specified completely analogously at \(\mathcal{T}\) and \(\mathcal{S}_{t}\) in terms of the corresponding conjugate momenta (even if only with respect to the first and second fundamental forms of said hypersurface and surface, respectively). However, taking such a step will not prove necessary in the following and will therefore not be undertaken at this point. Instead, direct recourse will be made to the null geometric framework used in [6] in the second section of the paper, rendering a specification of said quantities obsolete.
That said, the next step will be to calculate the time derivative \(\frac{dH}{dt}\) of the quasilocal Hamiltonian by performing a variation \(\mathfrak{L}_{t}\) of each of the integral expressions in (3), where \(\mathfrak{L}_{t}\) denotes the Lie derivative along the spacelike section \(\Sigma_{t}\) with respect to the time evolution vector \(t^{a}\). This yields a result of the form \(\frac{dH}{dt}=\frac{dH_{0}}{dt}+\frac{dH_{h}}{dt}=\mathfrak{L}_{t}H_{0}+ \mathfrak{L}_{t}H_{h}\).
The emphasis is here first put on the calculation of the former term \(\frac{dH_{0}}{dt}\) of the time derivative of \(H\) and only then on that of the second term \(\frac{dH_{h}}{dt}\), which proceeds completely analogously. For the purpose of calculating said term, one may take into account that the expressions split once more into bulk and boundary parts, i.e. \(\frac{dH_{0}}{dt}=\frac{dH_{0}^{Bulk}}{dt}+\frac{dH_{0}^{Boundary}}{dt}\), and use the identity
\[\mathfrak{L}_{t}\mathcal{H}+\mathfrak{L}_{t}\mathcal{H}_{a}\cdot N ^{a} =\frac{N}{2}\mathcal{Q}_{ab}\mathfrak{L}_{t}h^{ab}-(NK+D_{a}N^{a} )(N\mathcal{H}+\mathcal{H}_{b}N^{b})+ \tag{5}\] \[+D_{a}[(N\mathcal{H}+\mathcal{H}_{b}N^{b})N^{a}+N^{2}\mathcal{H} ^{a}+2N\mathcal{Q}_{\ b}^{a}N^{b}],\]
in combination with \(\mathfrak{L}_{t}\sqrt{h}=\sqrt{h}(NK+D_{a}N^{a})\) to obtain the integral expressions
\[\frac{dH_{0}^{Bulk}}{dt}=\int\limits_{\Sigma_{t}}(\dot{N}\mathcal{H}+\mathcal{H}_{a}\dot{N}^{a}+\frac{N}{2}\mathcal{Q}_{ab}\dot{h}^{ab})\,\omega_{h}+\int\limits_{\Omega_{t}}\Pi\,\omega_{q}, \tag{6}\]
and
\[\frac{dH_{0}^{Boundary}}{dt}=\int\limits_{\Omega_{t}}(\dot{\mathcal{N}} \mathfrak{h}+\mathfrak{h}_{a}\dot{\mathcal{N}}^{a}+\frac{\mathcal{N}}{2} \mathfrak{Q}_{ab}\dot{q}^{ab})\omega_{q}, \tag{7}\]
where
\[\mathcal{Q}_{ab} =\frac{\sqrt{h}}{8\pi}\{{{}^{(3)}R_{ab}-2K_{ac}K^{c}_{\ b}+KK_{ab}- \frac{1}{N}\left[\dot{K}_{ab}+(ND)K_{ab}+D_{a}D_{b}N\right]-} \tag{8}\] \[-\frac{1}{2}h_{ab}\left({{}^{(3)}R+K^{2}-K_{cd}K^{cd}-\frac{2}{N} \left[\dot{K}+(ND)K+D_{a}D^{a}N\right]}\right)\};\] \[\mathfrak{Q}_{ab} =\frac{\sqrt{q}}{8\pi}(\mathfrak{K}_{ab}-(\mathfrak{K}-b_{a}u^{a })q_{ab})\]
applies by definition. To obtain the above integral expressions, as should be noted, the definitions \(\mathcal{Q}_{ab}=h_{a}^{\ c}h_{b}^{\ d}G_{cd}\), \(\mathfrak{Q}_{ab}=q_{a}^{\ c}q_{b}^{\ d}\rho_{cd}\), \(b^{a}\equiv(v\nabla)v^{a}\), \(\dot{N}=\mathfrak{L}_{t}N\), \(\dot{N}^{a}=\mathfrak{L}_{t}N^{a}\), \(\dot{h}^{ab}=\mathfrak{L}_{t}h^{ab}\) as well as \(\dot{\mathcal{N}}=\mathcal{L}_{t}\mathcal{N}\), \(\dot{\mathcal{N}}^{a}=\mathcal{L}_{t}\mathcal{N}^{a}\), \(\dot{q}^{ab}=\mathcal{L}_{t}q^{ab}\) have been used, where \(\mathcal{L}_{t}\) denotes the induced Lie-derivative at \(\Omega_{t}\) pointing along \(t^{a}\).
As can be seen, the result thus obtained now decomposes into three terms: a bulk term, a boundary term and the bulk-to-boundary inflow term already mentioned in the introduction. The latter term occurring in (6) results from a total divergence and is given with respect to an integrand of the form
\[\Pi=\frac{1}{8\pi}[(N\mathcal{H}+\mathcal{H}_{b}N^{b})N_{a}s^{a}+N^{2} \mathcal{H}_{a}s^{a}+N\mathcal{Q}_{ab}N^{a}s^{b}]. \tag{9}\]
From this, it is found that relations (2), (3), (6) and (7) give rise to a power functional of the form
\[\mathscr{P}_{0}=\frac{dH_{0}^{Boundary}}{dt}+\underset{\Omega_{t}}{\int}\Pi \omega_{q}=\underset{\Omega_{t}}{\int}\mathcal{I}\omega_{q}, \tag{10}\]
where the occurring intensity expression \(\mathcal{I}\) reads
\[\mathcal{I}=\mathcal{I}_{0}+\Pi \tag{11}\]
with \(\mathcal{I}_{0}:=\dot{\mathcal{N}}\mathfrak{h}+\mathfrak{h}_{a}\dot{\mathcal{ N}}^{a}+\frac{\mathcal{N}}{2}\mathfrak{Q}_{ab}\dot{q}^{ab}\) and \(\frac{dH_{0}^{Boundary}}{dt}=\mathcal{L}_{t}H_{0}^{Boundary}\). The default candidate \(\mathcal{I}_{0}\) for such an intensity, previously derived in [10, 12], is therefore shifted by a \(\Pi\)-term of the form (9), which is zero only in the vacuum case. In all other cases, this term is generally different from zero, which implies that the corresponding surface integral in (6) and (10) does not vanish even if the outer boundary of spacetime is shifted to infinity in the large sphere limit; a point from which it was concluded in [17] that quasilocal corrections of Einstein's quadrupole formula arise in the same limit.
This, of course, applies to any choice of the time-flow vector field \(t^{a}\). In particular, it applies to a class of such vector fields arising from an orthogonal \(2+1\)-decomposition \(N^{a}=Os^{a}+\mathcal{N}^{a}\) of the shift vector with respect to the surface \(\Omega_{t}\), which yields a bulk-to-boundary inflow term with an integrand of the form
\[\Pi=\Pi_{0}+\Pi_{N}; \tag{12}\]
where the definitions \(\Pi_{0}:=\frac{N^{2}}{8\pi}\mathcal{H}_{a}s^{a}\) and \(\Pi_{N}:=\frac{1}{8\pi}[O(N\mathcal{H}+O\mathcal{H}_{a}s^{a}+\mathcal{H}_{a}\mathcal{N}^{a})+O\cdot N\mathcal{Q}_{ab}s^{a}s^{b}+N\mathcal{Q}_{ab}s^{a}\mathcal{N}^{b}]\) have been used.
As may be noted, the foregoing results can be generalized in the sense that one may choose a linear combination of the form \(\xi^{a}=t^{a}+\Omega\varphi^{a}\) as time evolution vector field of spacetime, where \(\varphi^{a}\) is an angular vector field tangential to all the cross-sections of \(\Sigma_{t}\), i.e. a vector field that coincides with a corresponding Killing field in the regime in which the dynamical horizon framework tends to the isolated horizon framework; a regime in which spacetime typically exhibits global generators of time translations and rotations. In this case, the form of the corresponding quasilocal corrections can be straightforwardly determined from the above, using the fact that the vector field \(\xi^{a}\) can be decomposed in the form \(\xi^{a}=\tilde{N}n^{a}+\tilde{N}^{a}\) with \(\tilde{N}=N-\Omega n_{b}\varphi^{b}\) and \(\tilde{N}^{a}=N^{a}+\Omega h^{a}_{\ b}\varphi^{b}=\tilde{O}s^{a}+\tilde{\mathcal{N}}^{a}\) and making the replacements \(N\rightarrow\tilde{N}\), \(O\rightarrow\tilde{O}\) and \(\mathcal{N}^{a}\rightarrow\tilde{\mathcal{N}}^{a}\) in (12), which yields the analogous expression
\[\tilde{\Pi}=\tilde{\Pi}_{0}+\tilde{\Pi}_{\tilde{N}} \tag{13}\]
with \(\tilde{\Pi}_{0}:=\frac{\tilde{N}^{2}}{8\pi}\mathcal{H}_{a}s^{a}\). Accordingly, the problem of determining quasilocal corrections in the rotating case can be handled exactly along the same lines as in the non-rotating case; with the only difference being that \(\tilde{N}\) and \(\tilde{N}^{a}\) are now the associated shifted versions of the lapse function and the shift vector field of the spacetime metric. Otherwise, there is no difference in the treatment of these cases.
That said, let it be noted that there is an important special case resulting from (12), which arises when the time-flow vector of the geometry is chosen to be \(t^{a}=\sqrt{2}Nl^{a}\), with \(l^{a}=\frac{1}{\sqrt{2}}(n^{a}+s^{a})\) being a null vector field that reduces locally to the horizon vector field of spacetime at \(\mathcal{T}\). The latter follows directly from (9) for the case that \(O\equiv N\), \(\mathcal{N}^{a}\equiv 0\) and thus \(N^{a}=Ns^{a}\) is chosen to be satisfied. Given precisely this choice, the identity \(\mathcal{H}+2\mathcal{H}_{a}s^{a}+\mathcal{Q}_{ab}s^{a}s^{b}=G_{ab}n^{a}n^{b}+ 2G_{ab}n^{a}s^{b}+G_{ab}s^{a}s^{b}=2G_{ab}l^{a}l^{b}\) can be used to convert the integrand of the bulk-to-boundary integral term in (6), leading to the result
\[\Pi=\frac{N^{2}}{4\pi}G_{ab}l^{a}l^{b}. \tag{14}\]
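As a quick consistency check (added here as an aside), inserting \(O\equiv N\) and \(\mathcal{N}^{a}\equiv 0\) into the decomposition (12) reproduces exactly this integrand:
\[\Pi_{0}+\Pi_{N}=\frac{N^{2}}{8\pi}[\mathcal{H}+2\mathcal{H}_{a}s^{a}+\mathcal{Q}_{ab}s^{a}s^{b}]=\frac{N^{2}}{4\pi}G_{ab}l^{a}l^{b},\]
where the second step uses the identity for \(2G_{ab}l^{a}l^{b}\) quoted above.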
Moreover, the further identity \(K-K_{ab}s^{a}s^{b}+k=\sqrt{2}\Theta\) can be used to convert the boundary Hamiltonian density \(\mathfrak{H}\) into
\[\mathfrak{H}=\frac{\sqrt{2q}N\Theta}{8\pi}. \tag{15}\]
This makes it clear that the boundary Hamiltonian vanishes for the given choice of time-flow vector field whenever \(\Theta=0\) applies at the exterior spatial boundary \(\mathcal{B}\) of spacetime (that is, in particular, when said boundary constitutes a timelike dynamical horizon); quite in contrast to the temporal variation of said term with respect to the null time evolution vector field \(t^{a}=\sqrt{2}Nl^{a}\), which is generally different from zero.
As will be shown in the subsequent section of this work, the latter proves to be particularly important in that said variation of the boundary Hamiltonian - after being combined with a bulk-to-boundary inflow term with an integrand of the form (14) - leads to Bondi's result for mass loss in gravitating systems due to gravitational radiation; while other choices typically lead to quasilocal corrections to Bondi's formula. This is the reason why, in order to capture deviations from Bondi's mass-loss formula and simultaneously quantify the strength of the aforementioned quasilocal corrections, it will prove useful in the following to make an ansatz of the form \(t^{a}=\sqrt{2}Nl^{a}+V^{a}\) for the time evolution vector of spacetime, where \(V^{a}=-Ns^{a}+N^{a}=(O-N)s^{a}+\mathcal{N}^{a}\) must be satisfied for the sake of consistency.
Provided that this is indeed the case, the ansatz mentioned proves to be fully compatible with all foregoing results, yielding a \(\Pi\)-term of the form (12) and a boundary Hamiltonian density given by the expression
\[\mathfrak{H}=\frac{\sqrt{q}}{8\pi}\left[\sqrt{2}N\Theta-\varGamma_{V}\right], \tag{16}\]
where the quantity \(\varGamma_{V}:=(K_{ab}-Kh_{ab})s^{a}V^{b}+\lambda(\mathcal{ND})\eta\) has been introduced; a quantity that proves consistent with that of the null Brown-York tensor given in [14] for \(O=\eta=0\) and \(\Omega_{a}=q^{c}_{\ a}K_{bc}s^{b}\), where \(\Omega_{a}=-q_{a}^{\ b}k^{c}\nabla_{b}l_{c}\) is the Hajíček one-form.
As may be noted, in order to return to the case of a rotating black hole, just the replacements \(N\to\tilde{N}\), \(O\to\tilde{O}\), and \(\mathcal{N}^{a}\to\tilde{\mathcal{N}}^{a}\) need to be made in this context. Such a transition has the interesting consequence that part of the boundary component \(H_{0}^{Boundary}\) of the quasilocal Hamiltonian gives rise to an integral expression of the form
\[J_{0}^{\Omega\varphi}=\frac{1}{8\pi}\underset{\Omega_{t}}{\int}[\Omega(K_{ab} -Kh_{ab})s^{a}\varphi^{b}]\omega_{q}, \tag{17}\]
which can be converted to agree with a generalized version of Komar's angular momentum integral by applying Gauss' theorem and using the momentum constraint equation. Hence, as a direct consequence, it is found that the exterior boundary part of the Hamiltonian splits up into two parts, i.e. \(H_{0}^{Boundary}=H_{0,red}^{Boundary}-J_{0}^{\Omega\varphi}\), where \(J_{0}^{\Omega\varphi}\) coincides with the ADM angular momentum associated with \(\Omega\varphi^{b}\). Accordingly, the variation of \(J_{0}^{\Omega\varphi}\) with respect to \(t^{a}\) necessarily yields a torque term \(M_{0}^{\Omega\varphi}\equiv\frac{dJ_{0}^{\Omega\varphi}}{dt}\) which fully characterizes the power of the rotational motion of the system at the boundary of spacetime, especially for spacetimes with asymptotic rotational symmetry when the latter is shifted to infinity.
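To make the conversion step more explicit, the following sketch is added here; the matter current \(j_{b}\), defined through the momentum constraint \(D^{a}(K_{ab}-Kh_{ab})=8\pi j_{b}\), is an auxiliary quantity whose normalization is assumed and may differ from conventions used elsewhere in this work. Applying Gauss' theorem on \(\Sigma_{t}\) (and suppressing, for simplicity, the analogous contribution from the inner boundary), one finds
\[\frac{1}{8\pi}\underset{\Omega_{t}}{\int}(K_{ab}-Kh_{ab})s^{a}\varphi^{b}\omega_{q}=\underset{\Sigma_{t}}{\int}j_{a}\varphi^{a}\omega_{h}+\frac{1}{16\pi}\underset{\Sigma_{t}}{\int}(K^{ab}-Kh^{ab})\mathfrak{L}_{\varphi}h_{ab}\omega_{h},\]
so that for an axial Killing field (\(\mathfrak{L}_{\varphi}h_{ab}=0\)) the boundary integral collapses to a Komar-type volume integral over the matter current.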
With this now being clarified, it may next be noted that, for all the foregoing applies in a similar way to fields at the interior boundary of spacetime, the variation of the horizon part \(H_{h}\) of the Hamiltonian \(H\) can be calculated in exactly the same way as above. Yet, since \(\mathcal{T}\) is a dynamical horizon in the sense of Ashtekar et al. it is clear that when \(t^{a}=\sqrt{2}Nl^{a}\) is chosen to be the time-flow
vector of spacetime, the boundary part of this part of the quasilocal Hamiltonian will generally be zero, while its variation will in turn generally be different from zero. Therefore, also in the given case, a quasilocal power functional of the form
\[\mathscr{P}_{h}=\frac{dH_{h}^{Boundary}}{dt}+\underset{\mathcal{S}_{t}}{\int} \Pi\omega_{q}=\underset{\mathcal{S}_{t}}{\int}\mathcal{I}\omega_{q} \tag{18}\]
can be defined, where \(\frac{dH_{h}^{Boundary}}{dt}=\mathcal{L}_{t}H_{h}^{Boundary}\) holds by definition and the \(\Pi\)-term is given by (14).
In this case, too, it proves useful to consider the shifted horizon vector field \(t^{a}=\sqrt{2}Nl^{a}+V^{a}\), if only to be able to include again the rotating case by choosing \(V^{a}=\Omega\varphi^{a}\), where, in the case of a black hole spacetime, \(\Omega\) and \(\varphi^{a}\) represent the angular velocity of the black hole and an associated angular vector field coincident with the Killing field of the geometry when the black hole spacetime under consideration is axisymmetric. This choice is intriguing not least because it allows one to derive (with respect to a portion \(\Delta\mathcal{T}\) of the dynamical horizon \(\mathcal{T}\)) the Ashtekar-Krishnan version of the first law of black hole mechanics from [6]; a law which - similar to Hayward's first law of black hole dynamics [16], but different from the original law of Bardeen, Carter and Hawking [8] - remains valid even in the light of dynamical black hole spacetimes. Still, a more generic scenario arises in the given context if again simply the replacements \(N\rightarrow\tilde{N}\), \(O\rightarrow\tilde{O}\) and \(\mathcal{N}^{a}\rightarrow\tilde{\mathcal{N}}^{a}\) are made, yielding in full analogy to the above a splitting \(H_{h}^{Boundary}=H_{h,red}^{Boundary}-J_{h}^{\Omega\varphi}\) of the boundary Hamiltonian, where the corresponding horizon angular momentum \(J_{h}^{\Omega\varphi}\) is of the exact same form as \(J_{0}^{\Omega\varphi}\) depicted in (17), except that the latter is defined with respect to the cut \(\Omega_{t}\), while the former is defined with respect to \(\mathcal{S}_{t}\). The mentioned horizon angular momentum then leads again to a torque term \(M_{h}^{\Omega\varphi}\equiv\frac{dJ_{h}^{\Omega\varphi}}{dt}\), which characterizes the power of the rotational motion of the system along the horizon.
This in advance, it may next be noted that a large part of the upcoming section will be devoted to a more detailed characterization of equations (10) and (18), for which purpose a null geometric derivation of the surface integrals with integrands of the form (12) and (13) will be given and the derivatives \(\frac{dH_{h}^{Boundary}}{dt}\) and \(\frac{dH_{0}^{Boundary}}{dt}\) of the boundary parts of the Hamiltonian will be calculated with regard to the shifted horizon vector field \(t^{a}=\sqrt{2}Nl^{a}+V^{a}\); first for \(V^{a}=0\) and then for \(V^{a}\neq 0\). The results obtained in this way pass an important test along the way in that they are found to be fully consistent with the theory of dynamical and isolated horizons of Ashtekar et al. Moreover, it is found that the interpretation of the surface integrals over \(\Pi\) and \(\tilde{\Pi}\) as bulk-to-boundary inflow terms also proves to be absolutely tenable, not least because - given a suitable choice of the lapse function at the horizon - it can be observed that the rate of energy transfer from the bulk through the inner to the outer boundary of spacetime (and vice versa) becomes zero in the limiting case where the dynamical horizon of spacetime transitions to an isolated or weakly isolated horizon and settles into a stable equilibrium state; a state in which it would be
impossible for matter to cross the outermost, non-expanding null horizon and then escape to infinity, as in the case of a black hole. The approach taken in this paper thus reflects this particular aspect of black hole physics, as it should if an interpretation of the derived integral expressions as bulk-to-boundary inflow terms were to prove plausible. The latter is further clarified in the third and final section of this work by the concrete example of the Generalized Vaidya family of spacetimes.
## 2 Matter and Radiation Transfer through Dynamical Horizons
Having obtained a number of results applicable to quantities at both the inner and outer boundaries of spacetime, the present section is now devoted to the calculation of exactly the same quantities from a different angle; thus continuing the quasilocal description of mass and radiative energy transfer in bounded non-stationary spacetimes with dynamical horizons begun in the previous section.
For the calculation of these quantities, a null geometric approach is adopted this time, by which it is shown that the results deduced in the previous section prove to be consistent with, and are derivable within, the theory of dynamical horizons. Furthermore, it is shown that the corresponding boundary terms - depending on the choice of the time-flow vector field of the geometry - either reduce directly to the Bondi mass-loss formula or lead to quasilocal corrections from the latter when the outer boundary of spacetime is shifted to infinity.
As a first step in dealing with the above and thus linking the results of the previous section to the theory of dynamical horizons of Ashtekar et al., let a radial parameter \(R\) be considered and used as a coordinate for describing local effects at the horizon \({\cal T}\). Given this null coordinate, the choice \(N\equiv N_{R}\) can be made for the lapse function, where \(N_{R}\equiv|\partial R|\) shall by definition apply in this context. This choice for the lapse, as described in detail in the relevant literature on the subject, turns out to be favorable for several reasons; one of which is that it causes the ADM Hamiltonian \(H_{h}^{bulk}\) on the black hole horizon \({\cal T}\) (and, in fact, the entire horizon part \(H_{h}\) of the Brown-York Hamiltonian \(H\)) to vanish as soon as the latter transitions from a dynamical to an isolated or weakly isolated horizon, thereby ensuring that the rate of transferred energy becomes zero once the geometry of the black hole becomes stationary and its horizon non-expanding and null. On top of that, some of the ensuing calculations can be greatly simplified by choosing the lapse function in this particular way, which, however, also applies after relabeling \(r=r(R)\) of the level sets of the foliation of spacetime, yielding no more than the trivial rescaling \(N_{R}\to\frac{dr}{dR}N_{R}=:N_{r}\) of the lapse. Accordingly, to include this very rescaling freedom, the lapse function shall be chosen from now on as \(N\equiv N_{r}\) for determining the form of quasilocal quantities at the horizon.
As a further step, let it be assumed that the horizon null vector \(l^{a}\) can be completed to a null tetrad of the form \((l^{a},k^{a},m^{a},\bar{m}^{a})\) such that the conditions
\(-l_{a}k^{a}=m_{a}\bar{m}^{a}=1\) are met. Moreover, let it be assumed that there is a portion \(\Delta\mathcal{T}\) of \(\mathcal{T}\) that is bounded by two cross-sections \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\), given with respect to the selected radial coordinate \(r(R)\), with radii \(r_{1}=r(R_{1})\) and \(r_{2}=r(R_{2})\) such that \(r_{2}>r_{1}\). While the co-vector \(k_{a}\) associated with the transverse vector field \(k^{a}\) is usually additionally chosen as a null gradient field in the theory of dynamical and isolated horizons, which has the consequence that \(k^{a}\) is by definition geodesic in these theories, this assumption is not needed and therefore not made in the following. It could, however, still be made complementarily.
Anyway, with that set, it may be taken into account that the form of the Brown-York Hamiltonian depends on which of the choices for the time-flow vector field \(t^{a}\) proposed in the previous section is made in this context. Focusing here first on the case in which \(t^{a}=\sqrt{2}N_{r}l^{a}\), it is found that \(H_{h}\equiv H_{h}^{Bulk}\) applies by necessity at \(\mathcal{T}\) due to the fact that \(\Theta=0\) holds along the same hypersurface. Consequently, the horizon part of the Hamiltonian can be re-written in the form
\[H_{h} =H_{h}^{Bulk}=\underset{\mathcal{T}}{\int}\mathscr{H}d^{3}x= \underset{\mathcal{T}}{\int}N_{r}(\mathcal{H}+\mathcal{H}_{a}s^{a})\omega_{h}= \tag{19}\] \[=\sqrt{2}\underset{\mathcal{T}}{\int}N_{r}G_{ab}l^{a}n^{b} \omega_{h}=\underset{\mathcal{T}}{\int}N_{r}[G_{ab}l^{a}l^{b}+G_{ab}l^{a}k^{b }]\omega_{h}.\]
Hence, after using the decomposition \(\mathcal{H}+\mathcal{H}_{a}s^{a}={}^{(2)}R-\sigma_{ab}\sigma^{ab}-2\zeta_{a} \zeta^{a}+\sqrt{2}\Theta(2K-\frac{3}{2}\Theta^{2})-\sqrt{2}\mathcal{L}_{s}\Theta\) with \(\zeta^{a}:=q^{ab}(sD)l_{b}\) of the ADM Hamiltonian density, this same Hamiltonian, as shown by Ashtekar and Krishnan in [6], gives rise to an energy flux term of the form
\[\mathcal{F}_{M}:=H_{h}|_{\Delta\mathcal{T}}=\frac{1}{16\pi}\int\limits_{r_{1}}^{r_{2}}\underset{\mathcal{S}_{t}}{\int}({}^{(2)}R-\sigma_{ab}\sigma^{ab}-2\zeta_{a}\zeta^{a})\omega_{q}dr, \tag{20}\]
where \({}^{(2)}R\) is the two-dimensional Ricci scalar. By taking the Gauss-Bonnet theorem into account, it can then be shown that this term leads for the standard choice \(r(R)=R^{2}\) to an exact balance law for the area increase of a given black hole, i.e.
\[\mathcal{A}_{2}-\mathcal{A}_{1}=\mathcal{F}_{M}+\mathcal{F}_{G}, \tag{21}\]
which is given with respect to the different black hole areas \(\mathcal{A}_{j}=4\pi R_{j}^{2}\) with \(j=1,2\), and an integral expression \(\mathcal{F}_{G}=\frac{1}{16\pi}\int\limits_{r_{1}}^{r_{2}}\underset{\mathcal{S}_{t}}{\int}(\sigma_{ab}\sigma^{ab}+2\zeta_{a}\zeta^{a})\omega_{q}dr\) describing the energy flux due to gravitational radiation. Based on the fact that the right hand side of (21) is manifestly non-negative, this result thus shows that even in the fully dynamical case the area of a black hole can never decrease.
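To spell out the Gauss-Bonnet step behind this conclusion: for cross-sections \(\mathcal{S}_{t}\) of spherical topology (Euler characteristic \(\chi=2\), an assumption appropriate for black hole horizons), the curvature term integrates to a topological constant,
\[\underset{\mathcal{S}_{t}}{\int}{}^{(2)}R\,\omega_{q}=2\underset{\mathcal{S}_{t}}{\int}\mathcal{K}\,\omega_{q}=4\pi\chi=8\pi,\]
where \(\mathcal{K}\) is the Gaussian curvature. The \({}^{(2)}R\)-part of (20) therefore contributes the purely geometric amount \(\frac{1}{2}(r_{2}-r_{1})\) after the radial integration, which the manifestly non-negative flux terms \(\mathcal{F}_{M}\) and \(\mathcal{F}_{G}\) then rearrange into the area balance (21).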
Taking the above into account, it is found that the underlying ADM Hamiltonian \(H_{h}\) considered in (19) can alternatively be cast in the form
\[H_{h}=\frac{1}{16\pi}\underset{\mathcal{T}}{\int}N_{r}(\frac{1}{2}{}^{(2)}R- \sigma_{ab}\sigma^{ab}-\Omega_{a}\Omega^{a})\omega_{h}; \tag{22}\]
thereby suggesting that \(\int\limits_{\mathcal{T}}\!N_{r}(\Omega_{a}\Omega^{a}+\frac{1}{2}{}^{(2)}R) \omega_{h}=2\!\int\limits_{\mathcal{T}}\!N_{r}\zeta_{a}\zeta^{a}\omega_{h}\) is satisfied. This can be readily concluded from the fact that the null Raychaudhuri equation \(\mathfrak{L}_{l}\Theta=\kappa\Theta-\frac{1}{2}\Theta^{2}-\sigma_{ab}\sigma^{ab }+\omega_{ab}\omega^{ab}-G_{ab}l^{a}l^{b}\) can be combined with the identity \(\mathfrak{L}_{k}\Theta=-\frac{1}{2}{}^{(2)}R-\Xi\Theta+\Omega_{a}\Omega^{a}- \mathcal{D}_{a}\Omega^{a}+G_{ab}l^{a}k^{b}\) to obtain the result
\[\mathcal{H}+\mathcal{H}_{a}s^{a} =\frac{1}{2}{}^{(2)}R+(\kappa+\Xi-\frac{1}{2}\Theta)\Theta- \sigma_{ab}\sigma^{ab}+\omega_{ab}\omega^{ab}- \tag{23}\] \[-\Omega_{a}\Omega^{a}+\mathcal{D}_{a}\Omega^{a}-\sqrt{2}\mathcal{ L}_{s}\Theta,\]
which can then be used to set up equation (22). Note that it has been used in this context that the surface integral \(\int\limits_{\mathcal{S}_{t}}\!\mathcal{D}_{a}\Omega^{a}\omega_{q}\) vanishes identically, \(\mathcal{S}_{t}\) being a closed surface, and hence so does the radial integral \(\int\left[\int\limits_{\mathcal{S}_{t}}\!\mathcal{D}_{a}\Omega^{a}\omega_{q}\right]dr\). As may be noted, the quantity \(\kappa\) coincides with the surface gravity of the black hole at the horizon, where one generally has \(\kappa=\epsilon+\bar{\epsilon}\) in spin-coefficient notation.
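For completeness, the way the two quoted identities combine may be spelled out. Writing \(\mathcal{H}+\mathcal{H}_{a}s^{a}=G_{ab}l^{a}l^{b}+G_{ab}l^{a}k^{b}\) and using \(s^{a}=\frac{1}{\sqrt{2}}(l^{a}-k^{a})\), the two equations give
\[G_{ab}l^{a}l^{b}=\kappa\Theta-\frac{1}{2}\Theta^{2}-\sigma_{ab}\sigma^{ab}+\omega_{ab}\omega^{ab}-\mathfrak{L}_{l}\Theta,\]
\[G_{ab}l^{a}k^{b}=\frac{1}{2}{}^{(2)}R+\Xi\Theta-\Omega_{a}\Omega^{a}+\mathcal{D}_{a}\Omega^{a}+\mathfrak{L}_{k}\Theta,\]
whose sum, together with \(\mathfrak{L}_{k-l}\Theta=-\sqrt{2}\mathfrak{L}_{s}\Theta\), reproduces (23) term by term.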
With this clarified, it may next be noted that, although the boundary term \(H_{h}^{Boundary}\) is equal to zero, its temporal variation with respect to the time evolution vector field \(t^{a}\) is generally not. Also, the corresponding variation of the bulk part \(H_{h}^{Bulk}\) is generally unequal to zero, which has the consequence that the temporal variation of the full Hamiltonian \(H\) leads to an integral law with a form identical to that of (18) that includes a bulk-to-boundary inflow term with integrand (14) resulting from the variation of the corresponding bulk part \(H_{h}^{Bulk}\).
To see this, the Lie derivative of \(H_{h}\) with respect to \(t^{a}=\sqrt{2}N_{r}l^{a}\) along \(\mathcal{T}\) will be calculated next. For this purpose, it may be taken into account that a variation of the integrand occurring in (19) yields the result
\[\mathfrak{L}_{t}[\omega_{h}N_{r}(\mathcal{H}+\mathcal{H}_{a}s^{a })] =\sqrt{2}N_{r}\omega_{h}[\mathfrak{L}_{l}\ln N_{r}(\mathcal{H}+ \mathcal{H}_{a}s^{a})+\mathfrak{L}_{l}(\mathcal{H}+\mathcal{H}_{a}s^{a})+ \tag{24}\] \[+(\Theta+\kappa)(\mathcal{H}+\mathcal{H}_{a}s^{a})]\]
where, just as a reminder, parts of the corresponding Hamiltonian density can be written in the form \(\mathcal{H}+\mathcal{H}_{a}s^{a}=G_{ab}l^{a}l^{b}+G_{ab}l^{a}k^{b}\). Using here then the fact that
\[\mathfrak{L}_{l}\left(G_{ab}l^{a}k^{b}\right) =-\mathfrak{L}_{k}\left(G_{ab}l^{a}l^{b}\right)+G_{ab}(l\nabla)k^{ a}l^{b}+2G_{ab}(k\nabla)l^{a}l^{b}+ \tag{25}\] \[+G_{ab}(l\nabla)l^{a}k^{b}+q_{a}^{\phantom{a}c}\nabla_{c}G_{b}^{a }l^{b}\]
applies as a consequence of the contracted Bianchi identity \(\nabla_{a}G^{a}_{\ b}l^{b}=0\), it is thus found that
\[\mathfrak{L}_{l}[G_{ab}l^{a}l^{b}+ G_{ab}l^{a}k^{b}]=\sqrt{2}\mathfrak{L}_{s}\left(G_{ab}l^{a}l^{b} \right)+\mathcal{D}_{a}(G_{\phantom{a}b}^{a}l^{b})+G_{ab}\varkappa^{a}k^{b}+ \tag{26}\] \[+ 2\tilde{\kappa}G_{ab}l^{a}l^{b}-2G_{ab}\tau^{a}l^{b}+2G_{ab} \Omega^{a}l^{b}+G_{ab}\sigma^{ab}+\frac{1}{2}\Theta G_{ab}q^{ab}\]
is satisfied, where, in spin-coefficient notation, one has \(\tilde{\kappa}=\gamma+\bar{\gamma}\), \(\tau^{a}=\bar{\tau}m^{a}+\tau\bar{m}^{a}\), \(\varkappa^{a}=\bar{\varkappa}m^{a}+\varkappa\bar{m}^{a}\) and \(\Omega^{a}=(\bar{\alpha}+\beta)m^{a}+(\alpha+\bar{\beta})\bar{m}^{a}\). Given that the co-vector \(k_{a}\) is usually chosen as a gradient field in the theory of dynamical and isolated horizons, one could use at this point the fact that \(\tilde{\kappa}=0\). Yet, since (26) applies generically and remains valid regardless of whether \(\tilde{\kappa}=0\) is satisfied or not, the latter is not strictly assumed either at this or a later point of this work.
This being said, it can further be observed that
\[N_{r}^{2}\mathfrak{L}_{s}\left(G_{ab}l^{a}l^{b}\right) =\mathfrak{L}_{s}\left(N_{r}^{2}G_{ab}l^{a}l^{b}\right)-2N_{r} \mathfrak{L}_{s}N_{r}G_{ab}l^{a}l^{b}=D_{c}\left[(N_{r}^{2}G_{ab}l^{a}l^{b})s ^{c}\right]-\] \[-[N_{r}^{2}\cdot k+2N_{r}\mathfrak{L}_{s}N_{r}]G_{ab}l^{a}l^{b} \tag{27}\]
applies globally in \(\mathcal{M}\), where \(k=\frac{1}{\sqrt{2}}(\Theta-\Xi)\) is the extrinsic curvature scalar calculated with respect to \(s^{a}\). Whence, using Gauss' law once again, one is thus led to conclude that
\[\underset{\mathcal{T}}{\int}D_{c}\left[(N_{r}^{2}G_{ab}l^{a}l^{b})s^{c}\right] \omega_{h}=\underset{\mathcal{S}_{t}}{\int}N_{r}^{2}G_{ab}l^{a}l^{b}\omega_{q}. \tag{28}\]
This makes it clear that the bulk-to-boundary inflow term (14) can be derived as required in the context of the theory of dynamical black hole horizons.
Clearly, since the Lie derivative of one and the same object is calculated only in different ways, it can be confidently assumed that the remaining terms in relations (24)-(26) and (28) can be combined to agree with the integrand of the bulk integral term in equation (6). This confirms that the results of the quasilocal Brown-York Hamiltonian formalism and the dynamical horizon framework are entirely consistent with each other, and that even to a much greater extent than pointed out in [11].
For the sake of completeness, the result of the variation of the bulk part of the Hamiltonian shall be given at this point as well, reading
\[\mathfrak{L}_{t}H_{h}^{Bulk} =\frac{\sqrt{2}}{16\pi}\underset{\mathcal{T}}{\int}\omega_{h}N_{ r}\{(\mathfrak{L}_{l}N_{r}+\kappa N_{r})(^{(2)}R-\sigma_{ab}\sigma^{ab}-2 \zeta_{a}\zeta^{a})+ \tag{29}\] \[+N_{r}(\mathfrak{L}_{l}{}^{(2)}R+\Omega_{ab}\sigma^{ab}-2 \mathfrak{L}_{l}|\zeta|^{2})\},\]
where \(\Omega_{ab}=q_{a}^{\ c}q_{b}^{\ d}C_{cefd}l^{e}l^{f}\) and \(|\zeta|^{2}=\zeta_{a}\zeta^{a}\) apply by definition. Note that the identity \(\mathfrak{L}_{l}\sigma_{ab}=q_{a}^{\ c}q_{b}^{\ d}\mathfrak{L}_{l}\sigma_{cd}=\kappa\sigma_{ab}+\sigma_{cd}\sigma^{cd}q_{ab}-q_{a}^{\ c}q_{b}^{\ d}C_{cefd}l^{e}l^{f}\) has been used to obtain this form of relation (29).
\[\mathfrak{L}_{t}H_{h}^{Bulk} =\frac{\sqrt{2}}{16\pi}\underset{\mathcal{T}}{\int}\omega_{h}N_{r}[(\mathfrak{L}_{l}N_{r}+\kappa N_{r})(G_{ab}l^{a}l^{b}+G_{ab}l^{a}k^{b})-N_{r}(\Xi-2\tilde{\kappa})G_{ab}l^{a}l^{b}-\] \[-N_{r}(N_{r}G_{ab}\varkappa^{a}k^{b}-2G_{ab}(\tau^{a}-\Omega^{a})l^{b}+G_{ab}\sigma^{ab}+2\mathfrak{L}_{l-k}N_{r}G_{ab}l^{a}l^{b})]+\] \[+\frac{1}{4\pi}\underset{\mathcal{S}_{t}}{\int}\omega_{q}N_{r}^{2}G_{ab}l^{a}l^{b}, \tag{30}\]
thereby giving exactly the bulk-to-boundary inflow term derived in the previous section. In this context, it has been used that
\[\int\limits_{\mathcal{T}}\!\!N_{r}^{2}\mathcal{D}_{a}(G^{a}_{\ b}l^{b})\omega_{h} =\int drN_{r}\left[\int\limits_{\mathcal{S}_{t}}\!\!\mathcal{D}_{a}(G^{a}_{\ b}l^{b}) \omega_{q}\right]=0 \tag{31}\]
applies due to the fact that by Gauss' law the surface integral \(\int\limits_{\mathcal{S}_{t}}\!\!\mathcal{D}_{a}(G^{a}_{\ b}l^{b})\omega_{q}\) can be converted into a line integral over the boundary \(\partial\mathcal{S}_{t}\) of \(\mathcal{S}_{t}\), which is zero.
However, as the variation of the Brown-York Hamiltonian \(H_{h}\) with respect to \(t^{a}=\sqrt{2}N_{r}l^{a}\) at \(\mathcal{T}\) is not yet fully calculated, this is not the end of the story. To calculate the latter, the variation \(\mathcal{L}_{t}H^{Boundary}_{h}\) of the inner boundary part of the Brown-York Hamiltonian has to be calculated as well. Using once more the null Raychaudhuri equation, one here finds
\[\mathcal{L}_{t}H^{Boundary}_{h}= \tag{32}\] \[= \frac{1}{4\pi}\!\!\int\limits_{\mathcal{S}_{t}}\left[N_{r} \mathcal{L}_{l}N_{r}\Theta+N_{r}^{2}(\kappa\Theta-\sigma_{ab}\sigma^{ab}+ \omega_{ab}\omega^{ab}-8\pi T_{ab}l^{a}l^{b})\right]\omega_{q},\]
and thus
\[\mathcal{P}_{h}=\frac{1}{8\pi}\!\!\int\limits_{\mathcal{S}_{t}}\!\!\mathcal{I }\omega_{q}=\frac{1}{4\pi}\!\!\int\limits_{\mathcal{S}_{t}}\left[N_{r} \mathcal{L}_{l}N_{r}\Theta+N_{r}^{2}(\kappa\Theta-\sigma_{ab}\sigma^{ab}+ \omega_{ab}\omega^{ab})\right]\omega_{q} \tag{33}\]
for the power functional \(\mathcal{P}_{h}\), as follows directly from relations (18) and (30), respectively. Thus, as can be seen, the bulk-to-boundary inflow term is counterbalanced by the net flow of matter and/or radiation through the horizon. The amount of matter and radiation that flows out through the horizon therefore flows back in from the bulk, so that the net flux becomes zero at the cuts of the horizon.
This result holds in the exact same form at \(\Omega_{t}\), which is interesting to the extent that, after taking the exterior boundary of spacetime to future null infinity in the large sphere limit, it is found that
\[\mathcal{P}_{0}=-\frac{1}{4\pi}\!\int\limits_{\mathcal{S}_{\infty}}\!\!N^{2} \sigma_{ab}\sigma^{ab}\omega_{q}=-\frac{1}{4\pi}\!\int\limits_{\mathcal{S}_{ \infty}}\!\!|n|^{2}d\Omega \tag{34}\]
applies in a suitable Bondi-like chart \((u,r,\theta,\phi)\) with radial null coordinate \(r\) near null infinity [9, 18]. To see this, the asymptotic expansions \(N[\mathcal{L}_{l}N+\kappa]\Theta\omega_{q}\underset{r\rightarrow\infty}{ \longrightarrow}0\), \(N^{2}\omega_{ab}\omega^{ab}\omega_{q}\underset{r\rightarrow\infty}{ \longrightarrow}0\) as well as \(N^{2}\sigma_{ab}\sigma^{ab}\omega_{q}\underset{r\rightarrow\infty}{ \longrightarrow}|n|^{2}d\Omega\) with \(\sigma_{ab}\sigma^{ab}\omega_{q}=\frac{|n|^{2}}{r^{2}}r^{2}d\Omega\) may be taken into account, where \(n(u,\theta,\phi)\) is the Bondi news function. This news function is the retarded time derivative of the radiation strain, i.e. \(n=\partial_{u}\sigma_{0}\), where \(\sigma_{0}\) corresponds to the leading order term of the spin-coefficient \(\sigma\) of the Newman-Penrose formalism. This coefficient has been
unobtrusively incorporated into the definition of (34) inasmuch as it has been used that \(\sigma_{ab}\sigma^{ab}=|\sigma|^{2}=\frac{|n|^{2}}{r^{2}}\), where the latter holds in the vicinity of future null infinity.
In light of the above, one is led to the conclusion that the quasilocal quantity \(\mathcal{P}_{0}\) given above coincides in Bondi coordinates near future null infinity exactly with the time derivative of the Bondi mass aspect, thereby giving rise to the famous Bondi mass-loss formula [9, 18]. This can readily be concluded by taking into account that \(\frac{dm_{B}}{du}=-\frac{1}{4\pi}\int\limits_{\mathcal{S}_{\infty}}|n|^{2}d\Omega\) applies in said coordinates at future null infinity, where \(m_{B}\) is the Bondi mass. This implies that the quantity \(\mathcal{P}_{0}\) can be interpreted as one characterizing the rate of mass-loss of a spatially and temporally bounded gravitating physical system, which reduces to the standard expression given an extension of the spacetime boundary to future null infinity. Yet, as may be noted, this only holds true relative to the given choice \(t^{a}\equiv\sqrt{2}Nl^{a}\) for the time evolution vector field, but not with respect to other choices that do not lead to the same result.
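By way of a simple worked illustration (added here; the angular profile is hypothetical and not taken from [9, 18]), suppose the news takes the axisymmetric form \(n(u,\theta)=n_{0}(u)\sin^{2}\theta\). The mass-loss formula then yields
\[\frac{dm_{B}}{du}=-\frac{1}{4\pi}\underset{\mathcal{S}_{\infty}}{\int}|n|^{2}d\Omega=-\frac{|n_{0}(u)|^{2}}{4\pi}\cdot 2\pi\int_{0}^{\pi}\sin^{5}\theta\,d\theta=-\frac{8}{15}|n_{0}(u)|^{2},\]
so that the Bondi mass decreases monotonically for as long as the news does not switch off.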
To see how other choices behave differently, let the general case \(t^{a}=\sqrt{2}N_{r}l^{a}+V^{a}\) with \(V^{a}=(O-N_{r})s^{a}+\mathcal{N}^{a}\) be considered, where the situation changes completely in the sense that quasilocal corrections to \(\mathcal{P}_{0}\) and \(\mathcal{P}_{h}\) occur naturally. Here, the following is to be added: The choice \(t^{a}=\sqrt{2}N_{r}l^{a}\) made above for the time flow vector field actually proves to be a suitable choice for the study of the intrinsic and extrinsic geometry of dynamical horizons and for characterizing the latter by means of different null geometric quantities. The choice of an alternative time-flow vector field is essentially arbitrary for spacetimes lacking time translation symmetry; however, this choice should always be made taking into account important local geometric properties of the considered geometric field. On the other hand, given spacetimes that do not lack time translation (and/or rotational) symmetry, it is straightforward to make a choice for the time flow vector field of the geometry. This choice, as already emphasized in the previous section, is simply given by the Killing vector field of spacetime, i.e. by the linear combination of temporal and angular Killing vector fields, which can be written in the form \(t^{a}=\sqrt{2}Nl^{a}+V^{a}\) with \(V^{a}=\Omega\varphi^{a}\). Such a choice already leads to quasilocal corrections, as shall be shown below.
To derive said corrections and obtain a bulk-to-boundary inflow term of the form (12), one may perform a variation of the bulk part of the horizon Hamiltonian. From the perspective of an unboosted observer (for which \(\eta=0\)), this Hamiltonian takes the form
\[H_{h}=H_{h}^{Bulk}+H_{h}^{Boundary}=\int\limits_{\mathcal{T}}(\mathscr{H}+ \mathscr{R})d^{3}x+\int\limits_{\mathcal{S}_{t}}\!\mathfrak{H}d^{2}x \tag{35}\]
where the definitions \(\mathscr{H}:=\frac{\sqrt{h}}{8\pi}N_{r}(\mathcal{H}+\mathcal{H}_{a}s^{a})\) and \(\mathscr{R}:=\frac{\sqrt{h}}{8\pi}\mathcal{H}_{a}V^{a}=\frac{\sqrt{h}}{8\pi}[( O-N_{r})\mathcal{H}_{a}s^{a}+\mathcal{H}_{a}\mathcal{N}^{a}]\) have been used. Perhaps the simplest way to calculate the variation \(\mathfrak{L}_{t}H_{h}^{Bulk}\) of the bulk part is to consider the \(3+1\)-identities
\[\mathfrak{L}_{V}H^{Bulk}_{h} = \frac{1}{8\pi}{\int\limits_{\mathcal{T}}}D_{b}\{[N_{r}(\mathcal{H}+\mathcal{H}_{a}s^{a})+\mathcal{H}_{a}V^{a}]V^{b}\}\omega_{h}=\] \[= \frac{1}{8\pi}{\int\limits_{\mathcal{S}_{t}}}(O-N_{r})[N_{r}(\mathcal{H}+\mathcal{H}_{a}s^{a})+\mathcal{H}_{a}V^{a}]\omega_{q} \tag{36}\]
and
\[\nabla_{c}G^{c}_{\ b}V^{b} = -\mathfrak{L}_{n}\mathcal{H}_{b}V^{b}+\mathcal{H}a_{b}V^{b}-K\mathcal{H}_{b}V^{b}+\] \[+ \mathcal{Q}^{c}_{\ b}a_{c}V^{b}+D_{c}\mathcal{Q}^{c}_{\ b}V^{b}=0, \tag{37}\]
where the fact that \(V_{a}s^{a}=O-N_{r}\) has been used. As may be noted, the latter identity can straightforwardly be deduced using the \(3+1\)-splitting \(G^{a}_{\ b}=\mathcal{H}n^{a}n_{b}-\mathcal{H}^{a}n_{b}-n^{a}\mathcal{H}_{b}+ \mathcal{Q}^{a}_{\ b}\) of the Einstein tensor.
Taking into account the decomposition relation \(n^{a}=\frac{1}{\sqrt{2}}(l^{a}+k^{a})\), relation (37) can be recast such that
\[\mathcal{L}_{l}\mathcal{H}_{b}V^{b}=-\mathcal{L}_{k}\mathcal{H}_{b}V^{b}+D_{c} (\mathcal{Q}^{c}_{\ b}V^{b})+\Phi_{V} \tag{38}\]
with
\[\Phi_{V}=\mathcal{H}V^{b}D_{b}\ln N_{r}-K\mathcal{H}_{b}V^{b}+\mathcal{Q}^{c} _{\ b}V^{b}D_{c}\ln N_{r}-\mathcal{Q}^{c}_{\ b}D_{c}V^{b} \tag{39}\]
being satisfied. Ultimately, applying Gauss' theorem once again to convert
\[{\int\limits_{\mathcal{T}}}D_{c}(N_{r}\mathcal{Q}^{c}_{\ b}V^{b}) \omega_{h}={\int\limits_{\mathcal{S}_{t}}}\mathcal{Q}^{c}_{\ b}s_{c}V^{b} \omega_{q} \tag{40}\]
and taking (30) as well as (36) and (39) into account, one finds that for a time-flow vector field of the form \(t^{a}=\sqrt{2}N_{r}l^{a}+V^{a}\) the variation \(\mathfrak{L}_{t}H^{Bulk}_{h}\) of the bulk part of the Hamiltonian is given by (30) plus an extra term of the form
\[\frac{1}{8\pi}{\int\limits_{\mathcal{T}}}{\sqrt{2}}N_{r}[\mathcal{L}_{l}\mathcal{H}_{b}V^{b}+\mathcal{H}_{b}\mathcal{L}_{l}V^{b}]\omega_{h}+\mathfrak{L}_{V}H^{Bulk}_{h}, \tag{41}\]
whose boundary contribution, collected via (36), (38) and (40), carries the integrand
\[\Pi_{V}=\frac{1}{8\pi}\{(O-N_{r})[N_{r}(\mathcal{H}+\mathcal{H}_{a}s^{a})+ \mathcal{H}_{a}V^{a}]+N_{r}\mathcal{Q}^{c}_{\ b}s_{c}V^{b}\}. \tag{42}\]
As can readily be checked, by combining (30) and (41) one obtains again a bulk-to-boundary inflow term whose integrand, consisting of the null-choice density (14) shifted by \(\Pi_{V}\), reproduces the full inflow density \(\Pi=\Pi_{0}+\Pi_{N}\). Thus, as is to be expected, one finds the result of the first section exactly reproduced.
For the variation of the boundary part of the Hamiltonian \(H^{Boundary}_{h}\), one may take into account that \({\cal L}_{\cal N}{\int\limits_{\mathcal{S}_{t}}}(\sqrt{2}N_{r}\Theta-\Gamma_{V})\omega_{q}={\int\limits_{\mathcal{S}_{t}}}{\cal D}_{c}[(\sqrt{2}N_{r}\Theta-\Gamma_{V}){\cal N}^{c}]\omega_{q}=0\) applies in the given context. This reveals the fact that the variation \({\cal L}_{t}H^{Boundary}_{h}\) of the boundary term is given by (32) plus an extra term of the form
\[{\int\limits_{\mathcal{S}_{t}}}\Delta_{V}\omega_{q}=\frac{1}{8\pi}{\int\limits_{\mathcal{S}_{t}}}\sqrt{2}N_{r}[{\cal L}_{t}\Gamma_{V}+\Theta\Gamma_{V}-\Psi_{V}]\omega_{q} \tag{43}\]
where the definition
\[\Psi_{V}=(O-N_{r})[\sqrt{2}{\cal L}_{s}N_{r}\Theta+\sqrt{2}N_{r}{ \cal L}_{s}\Theta+(\sqrt{2}N_{r}\Theta-\Gamma_{V})k-{\cal L}_{s}\Gamma_{V}]= \tag{44}\] \[=N_{r}(O-N_{r})[{\cal L}_{l-k}\ln N_{r}\cdot\Theta+\frac{1}{2}^{( 2)}R+(\kappa+\Xi-\frac{1}{2}\Theta)\Theta+\frac{1}{\sqrt{2}}(\sqrt{2}N_{r} \Theta-\Gamma_{V})(\Theta+\Xi)-\] \[\qquad-\sigma_{ab}\sigma^{ab}+\omega_{ab}\omega^{ab}-\Omega_{a} \Omega^{a}+{\cal D}_{a}\Omega^{a}-{\cal H}-{\cal H}_{a}s^{a}-\frac{1}{\sqrt{ 2}}{\cal L}_{l-k}\Gamma_{V}]\]
has been used, which takes a simpler form at \({\cal T}\) and therefore also at \({\cal S}_{t}\) due to the fact that \(\Theta=0\) applies there. Hence, by combining the bulk-to-boundary terms resulting from (30) and (41) with (43), one obtains the power functional
\[{\mathscr{P}}_{h}={\cal P}_{h}+{\mathfrak{P}}_{h}, \tag{45}\]
for the inner boundary of spacetime, with \({\cal P}_{h}\) being depicted in (33). This power functional contains the quasilocal correction term
\[{\mathfrak{P}}_{h}={\int\limits_{\mathcal{S}_{t}}}(\Pi_{V}-\Delta_{V})\omega_{q}, \tag{46}\]
which again takes a simpler form at \({\cal S}_{t}\) due to the fact that \(\Theta=0\) is satisfied along the leaves of \({\cal T}.\) The reason why \(\Theta\) was not set equal to zero in this context is, of course, that by an analogous approach an equivalent power functional
\[{\mathscr{P}}_{0}={\cal P}_{0}+{\mathfrak{P}}_{0}, \tag{47}\]
can be derived for the outer boundary of spacetime, whose form can easily be read off from (45). In fact, the exact same expression is obtained here, only that \({\cal S}_{t}\) has to be replaced by \(\Omega_{t}\) and it cannot necessarily be assumed that \(\Theta\) is equal to zero, since the outer boundary is a timelike hypersurface that will not generally represent a dynamical horizon. Provided that, as before, the outer boundary of spacetime is shifted to null infinity in the large sphere limit, \({\cal P}_{0}\) is then again given by equation (34), thereby implying that \({\mathfrak{P}}_{0}\) describes quasilocal corrections to said formula. Note that here again the transition to the rotating case can be readily achieved by the substitutions \(N\to\tilde{N},\,O\to\tilde{O},\) and \({\cal N}^{a}\to\tilde{\cal N}^{a}.\)
As can be inferred from equations (45) and (47), the quantities \(\mathfrak{P}_{h}\) and \(\mathfrak{P}_{0}\) encode corrections to the quasilocal analog of Bondi's mass-loss formula given by equations (33) and (34), respectively. These corrections each contain one term resulting from the variation of the boundary part of the Hamiltonian and a previously unrecognized bulk-to-boundary inflow term. The existence of precisely these terms suggests that the Bondi mass-loss formula - for the given choice of the time evolution vector field - results as a special case of the Brown-York formalism only when \(\mathfrak{P}_{0}\underset{r\rightarrow\infty}{\longrightarrow}0\) is satisfied, which, however, requires specific asymptotic fall-off conditions to be satisfied. Yet, the latter may not be the case for all choices \(N^{a}=Os^{a}+\mathcal{N}^{a}\) for the shift vector of the geometry; possibly not even in case that \(N^{a}=0\), as will be explained more clearly in the following, final section of this paper.
In conclusion, given the fact that in general no preferred time-flow vector field can be distinguished in spacetimes which do not exhibit time-translation symmetry and, moreover, with \(O\) and \(\mathcal{N}^{a}\) essentially freely selectable quantities which enter the definition of the derived quasilocal corrections, it is clear that these quantities can always be chosen in such a way that the mentioned corrections are non-zero in spacetimes without the mentioned symmetry. Thus, one is led to conclude that, according to the quasilocal Hamiltonian formalism used in this work, there are integral terms that can be expected to survive the large sphere limit and so give rise to non-vanishing quasilocal corrections to Bondi's mass-loss formula. Accordingly, there should be exceptions to the generally accepted rule: _The mass of a system is constant if and only if there is no news._ Rather, in light of the above, the statement should read correctly: _The mass of a system is constant if and only if there is no news (and the bulk stress-energy tensor at the boundary of spacetime is zero)._
This small addition to Bondi's statement proves quite significant in some cases of interest, in particular those in which both electromagnetic and gravitational radiation escape continuously from the system to null infinity, which is possible in non-stationary spacetimes because both matter and radiation can simultaneously pass through a dynamical horizon (as opposed to the case of a stationary isolated or even Killing horizon, where the latter would be impossible). The reason for this is that in just such a case there should be a non-vanishing bulk-to-boundary energy inflow term and hence corrections to the Bondi mass-loss formula, which should manifest themselves in the form of a shift in the intensity of the measured radiation.
That the explicit form of such a shift can actually be calculated, at least in simpler cases, will be shown in the forthcoming concluding section of this work, using the Generalized Vaidya family of spacetimes as an example. In doing so, it will be shown that the quasilocal corrections derived in the present section must be taken into account in order to obtain the correct formula for the loss of mass and radiant energy through a dynamical horizon in a spatially and temporally bounded gravitational system whose boundary is shifted to infinity in the large sphere limit. The form of the resulting quasilocal corrections is shown to depend particularly on the choice of the shift vector field of the geometry; with certain
choices clearly showing that, contrary to general expectation, null radiation at null infinity can be detected even when both the Bondi news function and the time derivative of the associated mass aspect are zero.
## 3 A Specific Example: The Family of Generalized Vaidya Spacetimes
To illustrate applications for the integral laws derived in the previous sections, a specific class of geometric models will now be treated in the following, namely the Generalized Vaidya family of solutions of Einstein's equations. The metric of this spacetime describes the geometric field of a matter distribution with a null dust and a non-rotating null fluid part. In the ingoing case, the line element encoding the components of the corresponding metric reads
\[ds^{2}=-(1-\frac{2M}{r})dv^{2}+2dvdr+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{ 2}), \tag{48}\]
where \(M(v,r)\) is assumed to be a function of \(v\) and \(r\) with the property that its first derivative with respect to \(v\) possesses a well-defined limit in the sense that the object \(\dot{M}_{\infty}=\lim\limits_{v,r\rightarrow\infty}\dot{M}\) with \(\dot{M}=\partial_{v}M\) exists and is non-singular. Given that this is the case, one may consider the normalized geodesic null frame
\[l^{a} =\partial_{v}^{a}+\frac{N^{2}}{2}\partial_{r}^{a} \tag{49}\] \[k^{a} =-\partial_{r}^{a}\] \[m^{a} =\frac{1}{\sqrt{2}r}(\partial_{\theta}^{a}+i\csc\theta\partial_{ \phi}^{a})\] \[\bar{m}^{a} =\frac{1}{\sqrt{2}r}(\partial_{\theta}^{a}-i\csc\theta\partial_{ \phi}^{a}),\]
in relation to which the stress-energy tensor of this geometry splits into two parts
\[T_{ab}=T_{ab}^{(D)}+T_{ab}^{(F)}, \tag{50}\]
i.e., a null dust part \(T_{ab}^{(D)}=\mu k_{a}k_{b}\) of type \(I\) and a null fluid part \(T_{ab}^{(F)}=2(\rho+p)l_{(a}k_{b)}+pg_{ab}\) of type \(II\) [15], where the shorthand notation \(\mu=\frac{\dot{M}}{4\pi r^{2}}\), \(\rho=\frac{M^{\prime}}{4\pi r^{2}}\), \(p=-\frac{M^{\prime\prime}}{8\pi r}\), \(M^{\prime}:=\partial_{r}M\), \(k_{a}=-dv_{a}\) and \(l_{a}=dr_{a}-\frac{1}{2}(1-\frac{2M}{r})dv_{a}\) has been introduced; note that the sign of \(k_{a}\) is fixed by lowering \(k^{a}=-\partial_{r}^{a}\) from (49) with the metric (48). Thus, given that \(M\) is only a function of \(v\), \(T_{ab}^{(F)}\) is zero and the Vaidya metric is obtained as a special case. On the other hand, if \(M\) is chosen to be of the form \(M(v,r)=m(v)+\frac{\Lambda r^{3}}{6}\) the Vaidya-de Sitter metric is obtained as a special case, which reduces to the Kottler alias Schwarzschild-de Sitter metric in the case that \(m(v)=m_{0}=const\).
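Since the algebra behind (50) is tedious but mechanical, a symbolic cross-check is easy to set up. The following minimal sympy sketch (an illustrative addition with ad hoc variable names, not code from any cited reference) computes the Einstein tensor of the metric (48) and verifies \(G_{ab}=8\pi T_{ab}\) for the decomposition (50), including the lowered sign \(k_{a}=-dv_{a}\) noted above:

```python
import sympy as sp

v, r, th, ph = sp.symbols('v r theta phi')
M = sp.Function('M')(v, r)
f = 1 - 2*M/r

# Ingoing generalized Vaidya metric, eq. (48), in coordinates (v, r, theta, phi)
x = [v, r, th, ph]
g = sp.Matrix([[-f, 1, 0, 0],
               [1,  0, 0, 0],
               [0,  0, r**2, 0],
               [0,  0, 0, r**2*sp.sin(th)**2]])
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
         + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
         for d in range(4))/2)
         for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor R_{bd} = R^a_{bad}
def ricci(b, d):
    expr = 0
    for a in range(4):
        expr += sp.diff(Gam[a][b][d], x[a]) - sp.diff(Gam[a][b][a], x[d])
        for e in range(4):
            expr += Gam[a][a][e]*Gam[e][b][d] - Gam[a][d][e]*Gam[e][b][a]
    return sp.simplify(expr)

Ric = sp.Matrix(4, 4, ricci)
Rs = sp.simplify(sum(ginv[a, b]*Ric[a, b] for a in range(4) for b in range(4)))
G = sp.simplify(Ric - Rs*g/2)            # Einstein tensor, indices down

# Stress-energy of eq. (50): null dust plus null fluid
mu = sp.diff(M, v)/(4*sp.pi*r**2)
rho = sp.diff(M, r)/(4*sp.pi*r**2)
p = -sp.diff(M, r, 2)/(8*sp.pi*r)
k_ = sp.Matrix([-1, 0, 0, 0])            # k_a = -dv_a, the lowered form of k^a = -d/dr
l_ = sp.Matrix([-f/2, 1, 0, 0])          # l_a = dr_a - (f/2) dv_a
T = mu*k_*k_.T + (rho + p)*(l_*k_.T + k_*l_.T) + p*g

print(sp.simplify(G - 8*sp.pi*T))        # expect the 4x4 zero matrix
```

The same setup also makes it easy to check, for instance, that the null fluid part drops out whenever \(M\) depends on \(v\) alone, in accordance with the Vaidya limit mentioned above.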
As a basis for the introduction of a geometric setting, as considered in the first section, one may now identify \(N^{2}=1-\frac{2M}{r}\) as the lapse function of the
geometry and perform a rescaling of the form \(l^{a}\rightarrow\frac{\sqrt{2}}{N}l^{a}\) and \(k^{a}\rightarrow\frac{N}{\sqrt{2}}k^{a}\), which yields the timelike and spacelike vector fields \(n^{a}=\frac{1}{\sqrt{2}N}\partial_{v}^{a}\) and \(s^{a}=\frac{1}{\sqrt{2}}[\frac{1}{N}\partial_{v}^{a}+N\partial_{r}^{a}]\). Then, by another boost transformation, the related vector fields \(v^{a}=\frac{1}{\lambda}n^{a}-\eta s^{a}\) and \(u^{a}=\frac{1}{\lambda}s^{a}-\eta n^{a}\) can be constructed in the next step, so that the main ingredients for the introduction of the geometric setting considered in previous sections of this work are given. Taking then further into account that the generalized Vaidya geometry is non-stationary and thus lacks time translation symmetry, it becomes clear that there are various ways to select the time-flow vector field of spacetime, all of which are consistent with the results of the previous section. As a result, there are multiple ways to set up the part \(H_{0}\) of the quasilocal Hamiltonian \(H\), and also multiple ways to calculate the variation of this same Hamiltonian and to investigate whether quasilocal corrections arise and persist even when the boundary of spacetime is shifted to infinity; some of which will now be discussed in the following.
Obviously, one of the choices mentioned is \(t^{a}=\sqrt{2}Nl^{a}\). Since for this choice \(\kappa=-{\cal L}_{l}N\) and \(\omega_{ab}=0\) applies, it is clear that the asymptotic fall-off conditions \(N[{\cal L}_{l}N+\kappa]\Theta\omega_{q}\underset{r\rightarrow\infty}{ \longrightarrow}0\), \(N^{2}\omega_{ab}\omega^{ab}\omega_{q}\underset{r\rightarrow\infty}{ \longrightarrow}0\) are met in the case that the boundary of spacetime is moved to null infinity in the large sphere limit.
Thus, taking the results of the previous section into account, it follows that the associated power functional \({\cal P}_{0}\), which describes the rate at which energy is radiated to infinity, is given by relation (34) and that only the energy flux due to gravitational radiation reaches null infinity; thereby proving to be consistent with relation (47) in the sense that \(\mathfrak{P}_{0}=0\). Hence, no quasilocal corrections arise in the given case. Yet, since the generalized Vaidya metric is spherically symmetric, it is clear that \(\sigma_{ab}=0\) and thus \({\cal P}_{0}=\frac{dm}{dv}=0\) applies, thereby implying that the Bondi mass \(m\) of the system is constant, as to be expected.
Another possible choice for the time-flow vector field of spacetime is \(t^{a}=\frac{1}{\sqrt{2}}\partial_{v}^{a}=Nn^{a}=\frac{N}{\sqrt{2}}(l^{a}+k^{a})\); a choice according to which \(N^{a}=0\) applies. Given this comparatively simple form of \(t^{a}\), it turns out to be most straightforward to use (10) directly to determine the form of potentially occurring quasilocal corrections. To this end, it may first be concluded from (9) that the bulk-to-boundary inflow term of the geometry takes the form
\[\underset{\Omega_{v}}{\int}\Pi\omega_{q} =\frac{1}{8\pi}\underset{\Omega_{v}}{\int}N^{2}G_{ab}n^{a}s^{b} \omega_{q}=\frac{1}{2}\underset{\Omega_{v}}{\int}N^{2}T_{ab}l^{a}l^{b}\omega_ {q}=\frac{1}{2}\underset{\Omega_{v}}{\int}N^{2}\mu\omega_{q}= \tag{51}\] \[=\frac{1}{2}(1-\frac{2M}{r})\dot{M},\]
thereby yielding
\[\underset{\mathbb{S}_{\infty}}{\int}\Pi\omega_{q}=\frac{1}{2}\dot{M}_{\infty} \tag{52}\]
in the large sphere limit. Then, given that the validity of \(N^{a}=0\) implies that \(O={\cal N}^{a}=0\) and thus \(\Gamma_{V}=\sqrt{2}N\Theta-Nk\) is satisfied, one finds that
\[\mathfrak{H}=\frac{\sqrt{q}}{8\pi}Nk. \tag{53}\]
Also, by taking further into account that \(k=\frac{1}{\sqrt{2}}(\Theta-\Xi)\), \(\Theta=\frac{N^{2}}{r}\) and \(\Xi=-\frac{2}{r}\) and thus \(\Theta-\Xi=\frac{N^{2}+2}{r}\) applies in the given context, it is found that
\[\frac{dH_{0}^{Boundary}}{dt} =\frac{1}{\sqrt{2}8\pi}\underset{\Omega_{v}}{\int}[N\mathcal{L}_{n}N(\Theta-\Xi)+N^{2}\mathcal{L}_{n}(\Theta-\Xi)+\frac{N^{2}}{\sqrt{2}}(\Theta^{2}-\Xi^{2})]\omega_{q}= \tag{54}\] \[=\frac{1}{16\pi}\underset{\Omega_{v}}{\int}[\frac{(3N^{2}+2)\partial_{v}N^{2}}{2Nr}+\frac{N^{2}(N^{4}-4)}{r^{2}}]\omega_{q},\]
where, just as a reminder, \(N^{2}=1-\frac{2M}{r}\) applies by definition. Ultimately, by using the fact that \(\frac{1}{2}\partial_{v}N^{2}=-\frac{\dot{M}}{r}\) and \(\omega_{q}=r^{2}\sin\theta d\theta d\phi\) hold true by definition and taking the large sphere limit of (54), one obtains the final result
\[\mathscr{P}_{0}=\mathfrak{P}_{0}=\underset{v,r\rightarrow\infty}{\lim}\frac{ dH_{0}^{Boundary}}{dt}+\underset{\mathbb{S}_{\infty}}{\int}\Pi\omega_{q}=\frac{3}{4}(1- \dot{M}_{\infty})\neq 0. \tag{55}\]
From this, however, it can be concluded that even in the given simple case Bondi's result is no longer exactly reproduced, since the variation of the quasilocal mass \(m\) of the system does not coincide with that of the Bondi mass \(m_{B}\); the fact that both quasilocal masses coincide exactly at future null infinity notwithstanding.
Moreover, since neither \(T_{ab}n^{a}n^{b}\) nor \(T_{ab}n^{a}s^{b}\) nor \(T_{ab}s^{a}s^{b}\) are equal to zero, it is clear that one can always find a function \(O(v,r)\) and an associated time evolution vector field of the form \(t^{a}=\sqrt{2}Nl^{a}+(O-N)s^{a}+\mathcal{N}^{a}\) such that the resulting bulk-to-boundary inflow term with integrand (12) is different from zero at null infinity, thereby implying that \(\mathfrak{P}_{0}\neq 0\) and thus \(\mathscr{P}_{0}\neq 0\) applies for the quasilocal quantities occurring in (47) in such a case. Accordingly, given this particular choice of time-flow vector field, it becomes clear that, as claimed, quasilocal corrections to the Bondi mass-loss formula occur at future null infinity. These are manifestly different from zero, so that in such a case one is led to conclude that there are again deviations from Bondi's mass-loss formula caused by the resulting quasilocal corrections.
Thus, to conclude, it is found in the given non-stationary case that radiation fields can be detected at null infinity even in cases where the Bondi news function is zero. The bulk-to-boundary inflow term responsible for this fact depends to a large extent, similar to the other quasilocal corrections, on the choice of the time-flow vector field of the geometry. For special choices of the latter, as it turns out, Bondi's results can however be exactly reproduced.
That said, in order to determine the analogous power functional at the horizon, one may proceed somewhat differently; if only because the rescaled null vector field \(l^{a}=\frac{\sqrt{2}}{N}\partial_{v}^{a}+\frac{N}{\sqrt{2}}\partial_{r}^{a}\) diverges at the horizon.
A facilitating circumstance in this context is the fact that it proves sufficient to consider vector fields which are only locally lightlike at the dynamical horizon \({\cal T}\). Taking this into account, the steps taken in [6] can be applied as is to the given case and used as a basis for constructing the horizon Hamiltonian \(H_{h}\) as well as calculating its variation. To this end, one may define the function \(f(v,r):=N^{2}(v,r)=1-\frac{2M(v,r)}{r}\), choose \(N_{r}=\sqrt{|\frac{\dot{f}}{2f^{\prime}}|}=\sqrt{\frac{1}{2}\frac{\dot{M}r}{M-rM^{\prime}}}\) for the lapse function of the geometry and set up the system of vector fields \(\tilde{l}^{a}=\frac{|f^{\prime}|}{\sqrt{|\dot{f}f^{\prime}|}}\partial_{v}^{a}\), \(\tilde{k}^{a}=-\frac{|\dot{f}|}{\sqrt{|\dot{f}f^{\prime}|}}\partial_{r}^{a}\), \(\tilde{n}^{a}=\frac{1}{\sqrt{|2\dot{f}f^{\prime}|}}[f^{\prime}\partial_{v}^{a}+\dot{f}\partial_{r}^{a}]\), \(\tilde{s}^{a}=\frac{1}{\sqrt{|2\dot{f}f^{\prime}|}}[f^{\prime}\partial_{v}^{a}-\dot{f}\partial_{r}^{a}]\), with \(\tilde{l}^{a}\) being only locally lightlike. Choosing then the horizon vector field \(\tilde{t}^{a}=\sqrt{2}N_{r}\tilde{l}^{a}\) for setting up \(H_{h}\)1, it can be concluded from (33) that \(\mathscr{P}_{h}=\mathcal{P}_{h}=\mathfrak{P}_{h}=0\), which implies that the gravitational Hamiltonian of generalized Vaidya spacetime, when defined with respect to \(\tilde{t}^{a}\), is constant with respect to the Lie-flow generated by the same vector field, and thus is a locally conserved quantity at the boundary of spacetime (but not in the bulk). In the case where \(M\to const.\) and thus the geometry of spacetime approaches that of Schwarzschild spacetime, the fact that \(N_{r}\to 0\) applies in the same limit further implies that \(H_{h}\to 0\), as to be expected.
Footnote 1: In this context, it has to be ensured that the horizon vector field transitions smoothly into the residual time-flow vector field. However, this can readily be done by considering a smooth joining of the two vector fields using distributions or functions with compact support.
Consequently, given the choice \(N_{r}\) for the lapse function at the horizon, the fact that \(H_{h}\to 0\) shows that the temporal variation of the total quasilocal Brown-York Hamiltonian vanishes once the black hole horizon reaches a steady state of equilibrium and becomes an isolated or weakly isolated horizon in the sense of Ashtekar et al. Thus, the treated model confirms that any matter and/or radiation flux (of the specified type) from the bulk to the boundary of spacetime crossing a dynamical horizon necessarily subsides completely in the limiting case where the geometry of the generalized Vaidya spacetime becomes static and settles into an equilibrium state where it coincides with that of Schwarzschild spacetime. The model therefore confirms that in the case of a black hole spacetime, neither matter nor radiation can escape to infinity through the event horizon of a black hole. However, through a dynamical horizon, which does not lie within a black hole event horizon, matter and radiation can very well escape to infinity. An example of such a situation occurs in Vaidya-de Sitter spacetime for \(9\Lambda m^{2}>1\); a case in which no black hole horizon can form, but spacetime nevertheless exhibits a dynamical horizon through which matter and radiation can escape to infinity (but not to future null infinity, as the latter does not exist in said case). But there are, of course, other examples that could also be mentioned at this point.
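For orientation, the threshold just quoted can be read off from the static limit (a standard fact about Schwarzschild-de Sitter geometry, restated here as an aside): with \(m(v)=m_{0}\) the metric function of Vaidya-de Sitter spacetime becomes
\[f(r)=1-\frac{2m_{0}}{r}-\frac{\Lambda r^{2}}{3},\]
which possesses two positive roots, a black hole and a cosmological horizon, precisely when \(0<9\Lambda m_{0}^{2}<1\), whereas for \(9\Lambda m_{0}^{2}>1\) no black hole horizon exists.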
Anyway, the above should apply not only to dynamical horizons whose cross sections are spherically symmetric, but also to more general horizons that occur, for example, in non-stationary axisymmetric spacetimes. However, surprisingly, it has been found in the literature that in general it is not easy to analyze non-spherical dynamical horizons, since not too much is known about non-spherical marginally trapped surfaces.
Yet, as far as the results of the present work are concerned, this does not pose a major problem, since the derived integral laws should retain their validity for any type of dynamical black hole spacetime (and certainly beyond); even if the notion of a dynamical horizon is replaced by Hayward's more general notion of a trapping horizon. The calculated quasilocal corrections should therefore be taken into account where necessary.
## Conclusion and Outlook
In this work, the rate of change of mass and/or radiant energy escaping through the spatial boundary of a confined non-stationary spacetime was calculated using the quasilocal Brown-York formalism. In doing so, it was shown that a null geometric equivalent of the bulk-to-boundary inflow term derived in [17] results from varying the total Hamiltonian of the theory, which describes how matter and/or radiation can escape from the bulk of spacetime into its boundary region. Also, it was shown that other quasilocal corrections occur, some of which do not vanish even when the boundary of spacetime is shifted to infinity. As a result, using the example of Generalized Vaidya spacetime, it was shown that, in general, corrections to the Bondi mass-loss formula occur at null infinity, even though said formula can also be reproduced exactly - given a suitable choice of the time-flow vector field. The null geometric approach used for this purpose was found to be consistent with the theory of dynamical and isolated horizons, and it was found that the horizon part of the Hamiltonian becomes zero (for a suitable choice for the lapse function of the geometry) as soon as the dynamical horizon transitions into an isolated or weakly isolated horizon.
Remarkably, in this context, it turns out that the form of the derived bulk-to-boundary inflow term is independent of the choice of boundary conditions chosen to set up the quasilocal Hamiltonian of the theory. The reason is that this term results from the variation of the bulk part of the quasilocal Hamiltonian, i.e., the ADM Hamiltonian, and not from the variation of its boundary part. For this reason, the quasilocal corrections associated with this term always occur when the ADM Hamiltonian is considered, and are thus relatively universally applicable in general relativity. It is therefore to be expected that said quasilocal corrections play an important role in describing a large class of phenomena in Einstein-Hilbert gravity. As it stands, any further applications will be discussed in more detail elsewhere, in a future work on this subject.
**Acknowledgements:**
Great thanks to Abhay Ashtekar for pointing out an erroneous conclusion in the first draft of the manuscript. Also, I want to thank Felix Wilkens for his support in preparing the image depicted in Fig. 1 of the paper. |
2309.04035 | Generalized moving least squares vs. radial basis function finite
difference methods for approximating surface derivatives | Approximating differential operators defined on two-dimensional surfaces is
an important problem that arises in many areas of science and engineering. Over
the past ten years, localized meshfree methods based on generalized moving
least squares (GMLS) and radial basis function finite differences (RBF-FD) have
been shown to be effective for this task as they can give high orders of
accuracy at low computational cost, and they can be applied to surfaces defined
only by point clouds. However, there have yet to be any studies that perform a
direct comparison of these methods for approximating surface differential
operators (SDOs). The first purpose of this work is to fill that gap. For this
comparison, we focus on an RBF-FD method based on polyharmonic spline kernels
and polynomials (PHS+Poly) since they are most closely related to the GMLS
method. Additionally, we use a relatively new technique for approximating SDOs
with RBF-FD called the tangent plane method since it is simpler than previous
techniques and natural to use with PHS+Poly RBF-FD. The second purpose of this
work is to relate the tangent plane formulation of SDOs to the local coordinate
formulation used in GMLS and to show that they are equivalent when the tangent
space to the surface is known exactly. The final purpose is to use ideas from
the GMLS SDO formulation to derive a new RBF-FD method for approximating the
tangent space for a point cloud surface when it is unknown. For the numerical
comparisons of the methods, we examine their convergence rates for
approximating the surface gradient, divergence, and Laplacian as the point
clouds are refined for various parameter choices. We also compare their
efficiency in terms of accuracy per computational cost, both when including and
excluding setup costs. | Andrew M. Jones, Peter A. Bosler, Paul A. Kuberry, Grady B. Wright a | 2023-09-07T22:13:10Z | http://arxiv.org/abs/2309.04035v1 | Generalized moving least squares vs. radial basis function finite difference methods for approximating surface derivatives
###### Abstract
Approximating differential operators defined on two-dimensional surfaces is an important problem that arises in many areas of science and engineering. Over the past ten years, localized mesh-free methods based on generalized moving least squares (GMLS) and radial basis function finite differences (RBF-FD) have been shown to be effective for this task as they can give high orders of accuracy at low computational cost, and they can be applied to surfaces defined only by point clouds. However, there have yet to be any studies that perform a direct comparison of these methods for approximating surface differential operators (SDOs). The first purpose of this work is to fill that gap. For this comparison, we focus on an RBF-FD method based on polyharmonic spline kernels and polynomials (PHS+Poly) since they are most closely related to the GMLS method. Additionally, we use a relatively new technique for approximating SDOs with RBF-FD called the tangent plane method since it is simpler than previous techniques and natural to use with PHS+Poly RBF-FD. The second purpose of this work is to relate the tangent plane formulation of SDOs to the local coordinate formulation used in GMLS and to show that they are equivalent when the tangent space to the surface is known exactly. The final purpose is to use ideas from the GMLS SDO formulation to derive a new RBF-FD method for approximating the tangent space for a point cloud surface when it is unknown. For the numerical comparisons of the methods, we examine their convergence rates for approximating the surface gradient, divergence, and Laplacian as the point clouds are refined for various parameter choices. We also compare their efficiency in terms of accuracy per computational cost, both when including and excluding setup costs.
keywords: PDEs on surfaces, Meshfree, Meshless, RBF-FD, GMLS, Polyharmonic spline
MSC (2008): 65D05, 65D25, 65M06, 65M75, 65N06, 65N75, 41A05, 41A10, 41A15
## 1 Introduction
The problem of approximating differential operators defined on two-dimensional surfaces embedded in \(\mathbb{R}^{3}\) arises in many multiphysics models. For example, simulating atmospheric flows with Eulerian or Lagrangian numerical methods requires approximating the surface gradient, divergence, and Laplacian on the two-sphere [1; 2; 3; 4]. Similar surface differential operators (SDOs) on more geometrically complex surfaces appear in models of ice sheet dynamics [5], biochemical signaling on cell membranes [6], morphogenesis [7], texture synthesis [8], and sea-air hydrodynamics [9].
Localized meshfree methods based on generalized moving least squares (GMLS) and radial basis function finite differences (RBF-FD) have become increasingly popular over the last ten years for approximating SDOs and solving surface partial differential equations (PDEs); see, for example, [10; 11; 12; 13; 14] for GMLS and [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26] for RBF-FD. These methods can be applied to surfaces defined by point clouds, without having to form a triangulation of the surface like surface finite element methods [27] or a level-set representation of the surface like embedded finite element methods [28]. Additionally,
for the special case of the sphere, RBF-FD has been shown to be highly competitive with element based methods in terms of accuracy per degree of freedom [16, 17, 23]. While there is one study dedicated to comparing GMLS and RBF-FD for approximating functions and derivatives in \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\)[29], there are no studies that compare them for approximating SDOs. The first purpose of the present work is to fill this gap.
The RBF-FD methods referenced above use different approaches for approximating SDOs, while the GMLS methods essentially use the same approach based on (weighted) polynomial least squares. To keep the comparison to GMLS manageable, we will limit our focus to an RBF-FD method based on polyharmonic spline (PHS) kernels augmented with polynomials (or PHS+Poly) since they are most closely related to GMLS [29]. Additionally, these RBF-FD methods are becoming more and more prevalent as they can give high orders of accuracy that are controlled by the augmented polynomial degree [30] and they do not require choosing a shape parameter, which can be computationally intensive to do in an automated way.
The techniques for formulating SDOs also vary significantly in the RBF-FD methods referenced above, while the formulations used in GMLS are similar, being based on local coordinates to the surface. In this work, we limit our focus to the so-called tangent plane formulation with RBF-FD, as it provides a more straightforward technique for incorporating polynomials in RBF-FD methods than [15, 16, 18, 19, 20, 21, 24, 25] and is related to the local coordinate formulation used in GMLS (see below). Additionally, the comparison in [22] of several RBF-FD methods for approximating the surface Laplacian (Laplace-Beltrami operator) revealed the tangent plane approach to be the most computationally efficient in terms of accuracy per computational cost. The tangent plane method was first introduced by Demanet [31] for approximating the surface Laplacian using polynomial-based approximations. Suchde & Kuhnert [13] generalized this method to other SDOs using polynomial weighted least squares. Shaw [22] (see also [26]) was the first to use this method for approximating the surface Laplacian with RBF-FD, and Gunderman et al. [23] independently developed the method for RBF-FD specialized to the surface gradient and divergence on the unit two-sphere. The second purpose of the present work is to analytically compare the local coordinate formulation of SDOs used in GMLS to the tangent plane formulation and to show that these formulations are in fact identical when the tangent space for the surface is known exactly for the given point cloud.
When the tangent space is unknown, which is generally the case for surfaces represented by point clouds, it must be approximated. Little attention has been given in the RBF-FD literature to how these approximations should be computed; commonly it is assumed that they are obtained by some separate technique (e.g., [15, 18, 24]). For GMLS, by contrast, these approximations are incorporated directly in the methods (e.g., [11, 12, 14]). The third purpose of this work is to use ideas from GMLS to develop a new RBF-FD technique for approximating the tangent space directly using PHS+Poly. By combining this with the tangent plane method, we arrive at the first comprehensive PHS+Poly RBF-FD framework for approximating SDOs on point cloud surfaces.
The GMLS and RBF-FD methods both use weighted combinations of function values over a local stencil of points to approximate SDOs. They also feature a parameter \(\ell\) for controlling the degree of polynomial precision of the formulas. For the numerical comparisons of the methods, we investigate how the size of the stencils and the polynomial degree affect the convergence rates of the methods for approximating SDOs under refinement. We focus on approximations of the surface gradient, divergence, and Laplacian operators on two topologically distinct surfaces, the unit two-sphere and the torus, which are representative of a broad range of application domains. In the case of the sphere, we also study the convergence rates of the methods for different point sets, including icosahedral points that are popular in applications. Finally, we investigate the efficiency of the methods in terms of their accuracy versus computational cost, both when including and excluding setup costs.
Our numerical results demonstrate that RBF-FD and GMLS give similar convergence rates for the same choice of polynomial degree \(\ell\), but overall RBF-FD results in lower errors. We also show that the often-reported superconvergence of GMLS for the surface Laplacian only happens for highly structured, quasi-uniform point sets, and when the point sets are more general (but still possibly quasi-uniform), this convergence rate drops to the theoretical rate. Additionally, we find that the errors for RBF-FD can be further reduced with increasing stencil sizes, but that this does not generally hold for GMLS, and the errors can actually deteriorate. Finally, we find that when
setup costs are included, GMLS has an advantage in terms of efficiency, but if these are neglected then RBF-FD is more efficient.
The remainder of the paper is organized as follows. In Section 2, we provide some background and notation on stencil-based approximations and on surface differential operators. We follow this with a detailed overview of the GMLS and RBF-FD methods in Sections 3 and 4, respectively. In particular, Section 4.1 shows the equivalence of the local coordinate and tangent plane formulations of some SDOs, and Section 4.4 introduces an RBF-FD method for approximating the tangent space. A comparison of some theoretical properties of the two methods is given in Section 5, while extensive numerical comparisons are given in Section 6. We end with some concluding remarks in Section 7.
## 2 Background and notation
### Stencils
The RBF-FD and GMLS methods both discretize SDOs by weighted combinations of function values over a local _stencil_ of points. This makes them similar to traditional finite-difference methods, but the lack of a grid, a tuple indexing scheme, and inherent awareness of neighboring points requires that some different notation and concepts be introduced. In this section we review the stencil notation that will be used in the subsequent sections.
Let \(\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1}^{N}\) be a global set of points (point cloud) contained in some domain \(\Omega\). A _stencil of \(\mathbf{X}\)_ is a subset of \(n\leq N\) nodes of \(\mathbf{X}\) that are close (see discussion below for what this means) to some point \(\mathbf{x}_{\mathrm{c}}\in\Omega\), which is called the _stencil center_. In this work, the stencil center is some point from \(\mathbf{X}\), so that \(\mathbf{x}_{\mathrm{c}}=\mathbf{x}_{i}\), for some \(1\leq i\leq N\), and this point is always included in the stencil. We denote the subset of points making up the stencil with stencil center \(\mathbf{x}_{i}\) as \(\mathbf{X}^{i}\) and allow the number of points in the stencil to vary with \(\mathbf{x}_{i}\). To keep track of which points in \(\mathbf{X}\) belong to \(\mathbf{X}^{i}\), we use _index set_ notation and let \(\sigma^{i}\) denote the set of indices of the \(1<n_{i}\leq N\) points from \(\mathbf{X}\) that belong to \(\mathbf{X}^{i}\). Using this notation, we write the elements of the stencil as \(\mathbf{X}^{i}=\{\mathbf{x}_{j}\}_{j\in\sigma^{i}}\). We also use the convention that the indices are sorted by the distance the stencil points are from the stencil center \(\mathbf{x}_{i}\), so that the first element of \(\sigma^{i}\) is \(i\).
With the above notation, we can define a general stencil-based approximation method to a given (scalar) linear differential operator \(\mathcal{L}\). Let \(u\) be a scalar-valued function defined on \(\Omega\) that is smooth enough so that \(\mathcal{L}u\) is defined for all \(\mathbf{x}\in\Omega\). The approximation to \(\mathcal{L}u\) at any \(\mathbf{x}_{i}\in\mathbf{X}\) is given as
\[\mathcal{L}u|_{\mathbf{x}=\mathbf{x}_{i}}\approx\sum_{j\in\sigma^{i}}c_{ij}u( \mathbf{x}_{j}). \tag{1}\]
The weights \(c_{ij}\) are determined by the method of approximation, which in this study will be either GMLS or RBF-FD. These weights can be assembled into a sparse \(N\times N\) "stiffness" matrix, similar to mesh-based methods. Vector linear differential operators (e.g., the gradient) can be similarly defined where (1) is used for each component and \(\mathcal{L}\) is the scalar operator for that component.
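In practice, applying the discrete operator amounts to a sparse matrix-vector product. The following minimal sketch (our own, not the authors' code; `sigma` and `weights` are assumed placeholders holding the index sets and stencil weights) assembles the weights from (1) into the sparse \(N\times N\) stiffness matrix with SciPy.

```
import numpy as np
import scipy.sparse as sp

def assemble_operator(sigma, weights, N):
    """Assemble stencil weights into a sparse N-by-N operator L_h with
    (L_h u)_i = sum_{j in sigma[i]} c_ij * u_j, as in (1)."""
    rows = np.concatenate([np.full(len(s), i) for i, s in enumerate(sigma)])
    cols = np.concatenate([np.asarray(s) for s in sigma])
    vals = np.concatenate([np.asarray(w) for w in weights])
    return sp.csr_matrix((vals, (rows, cols)), shape=(N, N))

# Applying the discrete operator to samples of u is then a single product:
#   Lu = assemble_operator(sigma, weights, N) @ u_samples
```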
There are two main approaches used in the meshfree methods literature for determining the stencil points, one based on \(k\)-nearest neighbors (KNN) and one based on ball searches. These are illustrated in Figure 1 for a scattered point set \(\mathbf{X}\) in the plane. The approach that uses KNN is straightforward since it amounts to simply choosing the stencil \(\mathbf{X}^{i}\) as the subset of \(n_{i}\) points from \(\mathbf{X}\) that are closest to \(\mathbf{x}_{i}\). The approach that uses ball searches is a bit more involved, so we summarize it in Algorithm 1. Both methods attempt to select points such that the stencil satisfies polynomial unisolvency conditions (see the discussion in Section 3.1). In this work, we use the method in Algorithm 1 since
* it is better for producing stencils with symmetries when \(\mathbf{X}\) is regular, which can be beneficial for improving the accuracy of the approximations;
* it is more natural to use with the weighting kernel inherent to GMLS; and
* it produces stencils that are not biased in one direction when the spacing of the points in \(\mathbf{X}\) is anisotropic.
To measure distance in the ball search, we use the standard Euclidean distance measured in \(\mathbb{R}^{3}\) rather than distance on the surface since this is simple to compute for any surface. We also use a \(k\)-d tree to efficiently implement the method. Finally, the choice of parameters we use in Algorithm 1 is discussed in Section 3.3.
```
1:Input: Point cloud \(\mathbf{X}\); stencil center \(\mathbf{x}_{\mathrm{c}}\); number of initial stencil points \(n\); radius factor \(\tau\geq 1\)
2:Output: Indices \(\sigma^{c}\) in \(\mathbf{X}\) for the stencil center \(\mathbf{x}_{\mathrm{c}}\)
3: Find the \(n\) nearest neighbors in \(\mathbf{X}\) to \(\mathbf{x}_{\mathrm{c}}\), using the Euclidean distance
4: Compute the max distance \(h_{\mathrm{max}}\) between \(\mathbf{x}_{\mathrm{c}}\) and its \(n\) nearest neighbors
5: Find the indices \(\sigma^{c}\) of the points in \(\mathbf{X}\) contained in the ball of radius \(\tau h_{\mathrm{max}}\) centered at \(\mathbf{x}_{\mathrm{c}}\)
```
**Algorithm 1** Procedure for determining the stencil points based on ball searches.
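As an illustration, here is a Python sketch of Algorithm 1 using SciPy's \(k\)-d tree, consistent with the implementation notes above; the function name and interface are our own, and in practice the tree would be built once for \(\mathbf{X}\) and reused for all stencil centers.

```
import numpy as np
from scipy.spatial import cKDTree

def ball_search_stencil(X, xc, n, tau, tree=None):
    """Return the stencil indices sigma for center xc, per Algorithm 1."""
    tree = tree if tree is not None else cKDTree(X)
    # Step 1: the n nearest neighbors of xc (Euclidean distance in R^3).
    dists, _ = tree.query(xc, k=n)
    # Step 2: the maximum distance h_max to those neighbors.
    h_max = np.max(dists)
    # Step 3: indices of all points in the ball of radius tau * h_max.
    sigma = tree.query_ball_point(xc, tau * h_max)
    # Sort by distance from xc so the first index is the center itself
    # when xc belongs to X (the convention of Section 2.1).
    return np.array(sorted(sigma, key=lambda j: np.linalg.norm(X[j] - xc)))
```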
### Surface differential operators in local coordinates
Here we review some differential geometry concepts that will be used in the subsequent sections. We refer the reader to the books [32; 33; 34] for a thorough discussion of these concepts and the derivations of what follows.
We assume that \(\mathcal{M}\subset\mathbb{R}^{3}\) is a regular surface and let \(T_{\mathbf{x}}\mathcal{M}\) denote the set of all vectors in \(\mathbb{R}^{3}\) that are tangent to \(\mathcal{M}\) at \(\mathbf{x}\in\mathcal{M}\) (i.e., the tangent space to \(\mathcal{M}\) at \(\mathbf{x}\)). This assumption implies that for each point \(\mathbf{x}\in\mathcal{M}\) there exists a local parameterization in \(T_{\mathbf{x}}\mathcal{M}\) of a neighborhood (or patch) of \(\mathcal{M}\) containing \(\mathbf{x}\) of the form
\[\mathbf{f}(\hat{x},\hat{y})=(\hat{x},\hat{y},f(\hat{x},\hat{y})), \tag{2}\]
where \(\hat{x}\), \(\hat{y}\) are local coordinates for \(T_{\mathbf{x}}\mathcal{M}\), and \(f\) is a smooth function for the "height" of the surface patch over \(T_{\mathbf{x}}\mathcal{M}\)[34]. This local parametric representation of a surface is called a Monge patch or Monge form [33] and is illustrated for a bumpy sphere surface in Figure 2. As we see below, it is particularly well suited for computing SDOs.
Using the parameterization (2), the local metric tensor \(G\) about \(\mathbf{x}\) for the surface is given as
\[G=\begin{bmatrix}\partial_{\hat{x}}\mathbf{f}\cdot\partial_{\hat{x}}\mathbf{ f}&\partial_{\hat{x}}\mathbf{f}\cdot\partial_{\hat{y}}\mathbf{f}\\ \partial_{\hat{y}}\mathbf{f}\cdot\partial_{\hat{x}}\mathbf{f}&\partial_{\hat{y }}\mathbf{f}\cdot\partial_{\hat{y}}\mathbf{f}\end{bmatrix}=\begin{bmatrix}1+( \partial_{\hat{x}}f)^{2}&(\partial_{\hat{x}}f)(\partial_{\hat{y}}f)\\ (\partial_{\hat{x}}f)(\partial_{\hat{y}}f)&1+(\partial_{\hat{y}}f)^{2}\end{bmatrix}. \tag{3}\]
Letting \(g^{ij}\) denote the \((i,j)\) entry of \(G^{-1}\), the surface gradient operator locally about \(\mathbf{x}\) is given as
\[\widehat{\nabla}_{\mathcal{M}}=(\partial_{\hat{x}}\mathbf{f})\left(g^{11} \partial_{\hat{x}}+g^{12}\partial_{\hat{y}}\right)+(\partial_{\hat{y}}\mathbf{ f})\left(g^{21}\partial_{\hat{x}}+g^{22}\partial_{\hat{y}}\right). \tag{4}\]
Figure 1: Comparison of the two search algorithms used in this paper for determining a stencil. The nodes \(\mathbf{X}\) are marked with solid black disks and all the stencil points are marked with solid blue disks, except for the stencil center, which is marked in red.
However, this is the surface gradient with respect to the horizontal \(\hat{x}\hat{y}\)-plane (see Figure 2 (b)), and subsequently needs to be rotated so that it is with respect to \(T_{\mathbf{x}}\mathcal{M}\) in its original configuration. If \(\mathbf{\xi}^{1}\) and \(\mathbf{\xi}^{2}\) are orthonormal vectors that span \(T_{\mathbf{x}}\mathcal{M}\) and \(\mathbf{\eta}\) is the unit outward normal to \(\mathcal{M}\) at \(\mathbf{x}\), then the surface gradient in the correct orientation is given as
\[\nabla_{\mathcal{M}}=\underbrace{\begin{bmatrix}\mathbf{\xi}^{1}&\mathbf{\xi}^{2}& \mathbf{\eta}\end{bmatrix}}_{R}\widehat{\nabla}_{\mathcal{M}}. \tag{5}\]
Using this result, the surface divergence of a smooth vector \(\mathbf{u}\in T_{\mathbf{x}}\mathcal{M}\) can be written as
\[\nabla_{\mathcal{M}}\cdot\mathbf{u}=\left(g^{11}\partial_{\hat{x}}+g^{12} \partial_{\hat{y}}\right)(\partial_{\hat{x}}\mathbf{f})^{T}R^{T}\mathbf{u}+ \left(g^{21}\partial_{\hat{x}}+g^{22}\partial_{\hat{y}}\right)(\partial_{ \hat{y}}\mathbf{f})^{T}R^{T}\mathbf{u} \tag{6}\]
The surface Laplacian operator locally about \(\mathbf{x}\) is given as
\[\begin{split}\Delta_{\mathcal{M}}=\frac{1}{\sqrt{|g|}}& \bigg{(}\partial_{\hat{x}}\left(\sqrt{|g|}g^{11}\partial_{\hat{x}} \right)+\partial_{\hat{x}}\left(\sqrt{|g|}g^{12}\partial_{\hat{y}}\right)+ \\ &\partial_{\hat{y}}\left(\sqrt{|g|}g^{21}\partial_{\hat{x}} \right)+\partial_{\hat{y}}\left(\sqrt{|g|}g^{22}\partial_{\hat{y}}\right) \bigg{)},\end{split} \tag{7}\]
where \(|g|=\det(G)\). This operator is invariant to rotations of the surface in \(\mathbb{R}^{3}\), so no subsequent modifications of (7) are necessary.
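A small numerical sketch (ours, not from the paper) of the metric quantities in (3)-(7), given the Monge-patch partials \(f_{\hat{x}}\) and \(f_{\hat{y}}\) at a point:

```
import numpy as np

def metric_terms(fx, fy):
    """Metric tensor G from (3), its inverse entries g^{ij}, and |g|."""
    G = np.array([[1.0 + fx**2, fx * fy],
                  [fx * fy, 1.0 + fy**2]])
    Ginv = np.linalg.inv(G)      # entries g^{ij} used in (4), (6), (7)
    detG = np.linalg.det(G)      # |g| in (7); equals 1 + fx**2 + fy**2
    return G, Ginv, detG

# At the stencil center of a Monge patch, fx = fy = 0, so G is the identity;
# this observation underlies the equivalence result of Section 4.1 below.
```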
## 3 GMLS using local coordinates
The formulation of GMLS on a manifold was introduced by Liang & Zhao [11] and further refined by Trask, Kuberry, and collaborators [12; 14]. It uses local coordinates to approximate SDOs as defined in (5)-(7) and requires a method to also approximate the metric terms. Both approximations are computed for each \(\mathbf{x}_{i}\in\mathbf{X}\subset\mathcal{M}\) using GMLS over a local stencil of points \(\mathbf{X}^{i}\subset\mathbf{X}\). Below we give a brief overview of the method assuming that the tangent/normal vectors for the surface are known for each \(\mathbf{x}_{i}\in\mathbf{X}\). We then discuss a method for approximating these that is used in the Compadre Toolkit [35], which we use in the numerical experiments.
Figure 2: Illustration of a Monge patch parameterization for a local neighborhood of a regular surface \(\mathcal{M}\) in 3D. (a) Entire surface (in gray) together with the tangent plane (in cyan) for a point \(\mathbf{x}_{c}\) where the Monge patch is constructed (i.e., \(T_{\mathbf{x}_{c}}\mathcal{M}\)); red spheres mark a global point cloud \(\mathbf{X}\) on the surface. (b) Close-up view of the Monge patch parameterization, together with the points from a stencil \(\mathbf{X}_{c}\) (red spheres) formed from \(\mathbf{X}\) and the projection of the stencil to the tangent plane (blue spheres); the stencil center \(\mathbf{x}_{c}\) is at the origin of the axes for the \(\hat{x}\hat{y}\)-plane and is marked with a violet sphere.
We present the GMLS method through the lens of derivatives of MLS approximants, as we feel this makes the analogy to RBF-FD clearer; it is also closer to the description from [11]. Other derivations of GMLS are based on weighted least squares approximants of general linear functionals given at some set of points, e.g., [36; 37; 38]. However, both techniques produce the same result in the end [37]. For a more thorough discussion of MLS approximants, see for example [39, ch. 22] and the references therein.
### Approximating the metric terms
The metric terms are approximated from an MLS reconstruction of the Monge patch of \(\mathcal{M}\) centered at each target point \(\mathbf{x}_{i}\) using a local stencil of \(n_{i}\) points \(\mathbf{X}^{i}\subset\mathbf{X}\). This procedure is illustrated in Figure 2 and can be described as follows. First, the stencil \(\mathbf{X}^{i}\) is expressed in the form of (2) (i.e., \((\hat{x}_{j},\hat{y}_{j},f_{j})\), \(j\in\sigma^{i}\)), where \((\hat{x}_{j},\hat{y}_{j})\) are the coordinates for the stencil points in \(T_{\mathbf{x}_{i}}\mathcal{M}\), and \(f_{j}=f(\hat{x}_{j},\hat{y}_{j})\) are samples of the surface as viewed from the \(\hat{x}\hat{y}\)-plane. These can be computed explicitly as
\[\begin{bmatrix}\hat{x}_{j}\\ \hat{y}_{j}\\ f_{j}\end{bmatrix}=\underbrace{\begin{bmatrix}\boldsymbol{\xi}_{i}^{1}&\boldsymbol{\xi}_{i}^{2}&\boldsymbol{\eta}_{i}\end{bmatrix}^{T}}_{R_{i}^{T}}(\mathbf{x}_{j}-\mathbf{x}_{i}), \tag{8}\]
where \(\boldsymbol{\xi}_{i}^{1}\) and \(\boldsymbol{\xi}_{i}^{2}\) are orthonormal vectors that span \(T_{\mathbf{x}_{i}}\mathcal{M}\) and \(\boldsymbol{\eta}_{i}\) is the unit normal to \(\mathcal{M}\) at \(\mathbf{x}_{i}\). To simplify the notation that follows, we let \(\mathbf{\hat{x}}_{j}=(\hat{x}_{j},\hat{y}_{j})\) and \(\mathbf{\hat{X}}^{i}=\{\mathbf{\hat{x}}_{j}\}_{j\in\sigma^{i}}\) denote the projection of the stencil \(\mathbf{X}^{i}\) to \(T_{\mathbf{x}_{i}}\mathcal{M}\). Note that for convenience in what comes later we have shifted the coordinates so that the center of the projected stencil is \(\mathbf{\hat{x}}_{i}=(0,0)\).
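A sketch (ours) of the change of coordinates (8): the stencil is shifted to the center \(\mathbf{x}_i\) and rotated so that \(T_{\mathbf{x}_i}\mathcal{M}\) becomes the \(\hat{x}\hat{y}\)-plane. The tangent vectors `xi1`, `xi2` and normal `eta` are assumed given (or approximated as in Section 3.4).

```
import numpy as np

def to_local_coords(Xi, xi, xi1, xi2, eta):
    """Project an n_i-by-3 stencil Xi with center xi into local coordinates
    per (8); returns the projected points (xhat_j, yhat_j) and heights f_j."""
    R = np.column_stack([xi1, xi2, eta])   # rotation matrix R_i in (8)
    local = (Xi - xi) @ R                  # rows are (xhat_j, yhat_j, f_j)
    return local[:, :2], local[:, 2]
```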
In the second step, the approximate Monge patch at \(\mathbf{x}_{i}\) is constructed from a MLS approximant of the data \((\mathbf{\hat{x}}_{j},f_{j})\), \(j\in\sigma^{i}\), which can be written as
\[q(\mathbf{\hat{x}})=\sum_{k=1}^{L}b_{k}(\mathbf{\hat{x}})p_{k}(\mathbf{\hat{x} }), \tag{9}\]
where \(\{p_{1},\ldots,p_{L}\}\) is a basis for \(\mathbb{P}_{\ell}^{2}\) (the space of bivariate polynomials of degree \(\ell\)) and \(L=\dim(\mathbb{P}_{\ell}^{2})=(\ell+1)(\ell+2)/2\) is the dimension of this space. The coefficients \(b_{k}(\mathbf{\hat{x}})\) of the approximant are determined from the data according to the weighted least squares problem
\[\underline{b}^{*}(\mathbf{\hat{x}})=\operatorname*{argmin}_{\underline{b}\in \mathbb{R}^{L}}\sum_{j\in\sigma^{i}}w_{\rho}(\mathbf{\hat{x}}_{j},\mathbf{ \hat{x}})(q(\mathbf{\hat{x}}_{j})-f_{j})^{2}=\operatorname*{argmin}_{\underline {b}\in\mathbb{R}^{L}}\|W_{\rho}(\mathbf{\hat{x}})^{1/2}(P\underline{b}- \underline{f})\|_{2}^{2}, \tag{10}\]
where \(w_{\rho}:\mathbb{R}^{2}\times\mathbb{R}^{2}\rightarrow\mathbb{R}^{\geq 0}\) is a weight kernel that depends on a support parameter \(\rho\), \(W_{\rho}(\mathbf{\hat{x}})\) is the \(n_{i}\times n_{i}\) diagonal matrix \(W_{\rho}(\mathbf{\hat{x}})=\operatorname*{diag}(w_{\rho}(\mathbf{\hat{x}}_{j},\mathbf{\hat{x}}))\), and \(P\) is the \(n_{i}\times L\) Vandermonde-type matrix
\[P=\begin{bmatrix}p_{1}(\mathbf{\hat{x}}_{j})&p_{2}(\mathbf{\hat{x}}_{j})& \cdots&p_{L}(\mathbf{\hat{x}}_{j})\end{bmatrix},\;j\in\sigma^{i} \tag{11}\]
Here we use underlines to denote vectors (i.e., \(\underline{b}\) and \(\underline{f}\) denote vectors containing coefficients and data from (10), respectively). Note that the coefficients \(b_{k}\) depend on \(\mathbf{\hat{x}}\) because the kernel \(w_{\rho}\) depends on \(\mathbf{\hat{x}}\) (this is the origin of the term "moving" in MLS). We discuss the selection of the stencils and weighting kernel below, but for now it is assumed that \(n_{i}>L\) and \(\mathbf{X}^{i}\) is unisolvent on the space \(\mathbb{P}_{\ell}^{2}\) (i.e., \(P\) is full rank), so that (10) has a unique solution.
The MLS approximant \(q\) is used in place of \(f\) in the Monge patch (2) and it is used to approximate the metric terms in (5)-(7). To compute these terms, various derivatives need to be approximated at the projected stencil center \(\mathbf{\hat{x}}_{i}\). Considering, for example, \(\partial_{\hat{x}}q\), the approximation is computed as follows:
\[\partial_{\hat{x}}q\big{|}_{\mathbf{\hat{x}}_{i}}\approx\sum_{k=1}^{L}b_{k}^{* }(\mathbf{\hat{x}}_{i})\partial_{\hat{x}}(p_{k}(\mathbf{\hat{x}}))\big{|}_{ \mathbf{\hat{x}}_{i}}, \tag{12}\]
where \(b_{k}^{*}(\hat{\mathbf{x}}_{i})\) come from (10) with \(\hat{\mathbf{x}}=\hat{\mathbf{x}}_{i}\). Other derivatives of metric terms in (5)-(7) are approximated in a similar way to (12). We note that if the standard monomial basis is used for \(\{p_{1},\ldots,p_{L}\}\), then by centering the projected stencil in (8) about the origin, only one of the derivatives of \(p_{k}\) in (12) is non-zero when evaluated at \(\hat{\mathbf{x}}_{i}\).
Note that (12) is only an approximation of \(\partial_{\hat{x}}q\) because it does not include the contribution of \(\partial_{\hat{x}}(b_{k}^{*}(\hat{\mathbf{x}}))\big{|}_{\hat{\mathbf{x}}_{i}}\). This approximation is referred to as a "diffuse derivative" in the literature and is equivalent to the GMLS formulation of approximating derivatives [37]. The term "GMLS derivatives" is preferred over "diffuse derivatives" to describe (12), since the approximation is not diffuse or uncertain and has the same order of accuracy as the approximations that include the derivatives of the weight kernels [40].
### Approximating SDOs
The procedure for approximating any of the SDOs in (5)-(7) is similar to the one for approximating the metric terms, but for this task we are interested in computing stencil weights as in (1) instead of the value of a derivative at a point. Since these SDOs involve computing various partial derivatives with respect to \(\hat{x}\) and \(\hat{y}\), we can use (12) as a starting point for generating these stencil weights. If \(\{u_{j}\}_{j\in\sigma^{i}}\) are samples of a function \(u\) over the projected stencil \(\hat{\mathbf{X}}^{i}\), then we can again approximate \(\partial_{\hat{x}}u\big{|}_{\hat{\mathbf{x}}=\hat{\mathbf{x}}_{i}}\) using (12), with \(b_{k}^{*}(\hat{\mathbf{x}}_{i})\) defined in terms of the samples of \(u\). To write this in stencil form we note that (12) can be written using vector inner products as
\[\partial_{\hat{x}}u\big{|}_{\hat{\mathbf{x}}_{i}}\approx\partial_{\hat{x}}q\big{|}_{\hat{\mathbf{x}}_{i}}\approx\underbrace{\begin{bmatrix}\partial_{\hat{x}}p_{1}\big{|}_{\hat{\mathbf{x}}_{i}}&\cdots&\partial_{\hat{x}}p_{L}\big{|}_{\hat{\mathbf{x}}_{i}}\end{bmatrix}}_{(\partial_{\hat{x}}\underline{p}(\hat{\mathbf{x}}_{i}))^{T}}\underline{b}^{*}(\hat{\mathbf{x}}_{i})=\underbrace{\begin{bmatrix}c_{1}^{i}&\cdots&c_{n_{i}}^{i}\end{bmatrix}}_{(\underline{c}_{\hat{x}}^{i})^{T}}\underline{u}, \tag{13}\]
where we have substituted the solution of \(\underline{b}^{*}(\hat{\mathbf{x}}_{i})\) in (10) to obtain the term in the last equality. Using the normal equation solution for \(\underline{b}^{*}(\hat{\mathbf{x}}_{i})\), the stencil weights \(\underline{c}_{\hat{x}}^{i}\) can be expressed as
\[\underline{c}_{\hat{x}}^{i}=W_{\rho}(\hat{\mathbf{x}}_{i})P(P^{T}W_{\rho}( \hat{\mathbf{x}}_{i})P)^{-1}(\partial_{\hat{x}}p(\hat{\mathbf{x}}_{i})). \tag{14}\]
This is typically computed using a QR factorization of \(W_{\rho}(\hat{\mathbf{x}}_{i})^{1/2}P\) to promote numerical stability.
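A sketch (ours) of the weight computation (14) via the QR route mentioned above; `P` is the Vandermonde matrix (11), `w` the diagonal of \(W_{\rho}(\hat{\mathbf{x}}_{i})\), and `dp` the vector \(\partial_{\hat{x}}\underline{p}(\hat{\mathbf{x}}_{i})\) of basis derivatives.

```
import numpy as np

def gmls_weights(P, w, dp):
    """GMLS stencil weights (14), computed from a QR factorization of
    W^{1/2} P, which avoids explicitly forming P^T W P."""
    Q, R = np.linalg.qr(np.sqrt(w)[:, None] * P)   # W^{1/2} P = Q R
    # P^T W P = R^T R, so solve (P^T W P) b = dp by two triangular solves:
    b = np.linalg.solve(R, np.linalg.solve(R.T, dp))
    return w * (P @ b)                             # c = W P (P^T W P)^{-1} dp
```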
Stencil weights \(\underline{c}_{\hat{y}}^{i}\), \(\underline{c}_{\hat{x}\hat{x}}^{i}\), \(\underline{c}_{\hat{x}\hat{y}}^{i}\), and \(\underline{c}_{\hat{y}\hat{y}}^{i}\) for the other derivative operators appearing in (5)-(7) can be computed in a similar manner for each stencil \(\hat{\mathbf{X}}^{i}\), \(i=1,\ldots,N\). These can then be combined together with the approximate metric terms to define the weights \(\{c_{ij}\}\) in (1) for any of the SDOs in (5)-(7).
### Choosing the stencils and weight kernel
As discussed in Section 2.1, we use Algorithm 1 to choose the stencils. For the initial stencil size, we use \(n=L=\dim(\mathbb{P}_{\ell}^{2})\). The radius factor \(\tau\) controls the size of the stencil, with larger \(\tau\) resulting in larger stencils, and we experiment with this parameter in the numerical results section.
There are many choices for the weight kernel \(w_{\rho}\) in (10). Typically, a single radial kernel is used to define \(w_{\rho}\) as \(w_{\rho}(\mathbf{x},\mathbf{y})=w(\|\mathbf{x}-\mathbf{y}\|/\rho)\), where \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\) and \(\|\cdot\|\) is the standard Euclidean norm for \(\mathbb{R}^{d}\). In this work, we use the same family of compactly supported radial kernels as [12; 14] and implemented in [35]:
\[w_{\rho}(\mathbf{x},\mathbf{y})=\left(1-\frac{\|\mathbf{x}-\mathbf{y}\|}{\rho} \right)_{+}^{2m}, \tag{15}\]
where \(m\) is a positive integer and \((\cdot)_{+}\) is the positive part function, \((t)_{+}=\max(t,0)\). These \(C^{0}\) kernels have support over the ball of radius \(\rho\) centered at \(\mathbf{y}\). While smoother kernels, such as Gaussians, splines, or Wendland kernels [39], can be used, we have not observed any significant improvement in the accuracy of GMLS derivative approximations with them. In general, proofs on how the choice of kernel affects the accuracy of GMLS approximations have yet to be found.
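For concreteness, a direct transcription of (15) (ours):

```
import numpy as np

def w_rho(x, y, rho, m=2):
    """Compactly supported weight kernel (15); m is a positive integer."""
    r = np.linalg.norm(np.asarray(x) - np.asarray(y), axis=-1)
    return np.maximum(1.0 - r / rho, 0.0) ** (2 * m)
```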
Finally, we note that the support parameter \(\rho\) is chosen on a per stencil basis and is set equal to \(\tau h_{max}\) from Algorithm 1. Picking an optimal value for \(\tau\) to minimize the approximation error
is a difficult problem. In general, the optimal value depends locally on the point set and the function (or its derivative) being approximated [41]. While there are some algorithms that attempt to approximate this value to minimize the local pointwise error (e.g., [41; 42]), they are computationally expensive. Typically, one chooses a single \(\tau>1\) such that the minimization problem (10) is well-posed (i.e., \(P\) is full rank). This can be easily monitored for each stencil to adjust \(\tau\) appropriately.
### Approximating the tangent space
When the tangent space \(T_{\mathbf{x}_{i}}\mathcal{M}\) is unknown, a coarse approximation to it can be computed for each stencil \(\mathbf{X}^{i}\) using principal component analysis [11]. In this method, one computes the eigenvectors of the covariance matrix \(\overline{X}_{i}\overline{X}_{i}^{T}\), where \(\overline{X}_{i}\) is the 3-by-\(n_{i}\) matrix formed from the stencil points \(\mathbf{X}^{i}\) centered about their mean. The two dominant eigenvectors of this matrix are taken as a coarse approximation to \(T_{\mathbf{x}_{i}}\mathcal{M}\) and the third is taken as a coarse approximation to the normal to \(\mathcal{M}\) at \(\mathbf{x}_{i}\); we denote these by \(\boldsymbol{\hat{\xi}}_{i}^{1}\), \(\boldsymbol{\hat{\xi}}_{i}^{2}\), and \(\boldsymbol{\tilde{\eta}}_{i}\), respectively. Next, an approximate Monge patch parameterization is formed with respect to this approximate tangent space using MLS following the same procedure outlined at the beginning of Section 3.1. This procedure is illustrated in Figure 3 (a), where the coarse approximate tangent plane is given in yellow. A refined approximation to the true tangent plane and normal at the stencil center \(\mathbf{x}_{\mathrm{c}}\) can be obtained by computing the tangent plane and normal to the MLS approximant of the Monge patch at \(\mathbf{x}_{\mathrm{c}}\); this plane is given in cyan in Figure 3 (a). Once this plane is computed, a new Monge patch parameterization with respect to this refined tangent plane approximation is formed, as illustrated in Figure 3 (b). This procedure is repeated for each stencil \(\mathbf{X}^{i}\) and the refined tangent space computed for each stencil is used in the procedure described in Section 3.1 for approximating the metric terms.
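A sketch (ours) of the coarse PCA estimate described above; the stencil is stored here as an \(n_i\)-by-3 array, so the covariance matrix is formed as \(\overline{X}_{i}^{T}\overline{X}_{i}\) rather than \(\overline{X}_{i}\overline{X}_{i}^{T}\), which has the same eigenvectors of interest.

```
import numpy as np

def coarse_tangent_space(Xi):
    """PCA estimate of the tangent vectors and normal for a stencil Xi."""
    Xbar = Xi - Xi.mean(axis=0)          # center the stencil points
    C = Xbar.T @ Xbar                    # 3-by-3 covariance matrix
    _, evecs = np.linalg.eigh(C)         # eigenvalues in ascending order
    xi1, xi2 = evecs[:, 2], evecs[:, 1]  # two dominant directions ~ tangent
    eta = evecs[:, 0]                    # least-variance direction ~ normal
    return xi1, xi2, eta
```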
## 4 RBF-FD using the tangent plane
As discussed in the introduction, there are several RBF-FD methods that have been developed over the past ten years for approximating SDOs. We use the one based on the tangent plane method for formulating SDOs and PHS+Poly interpolants for approximating the derivatives that appear in this formulation. The subsections below provide a detailed overview of these respective techniques.
### Tangent plane method
The tangent plane method similarly uses local coordinates for the surface in the tangent plane formed at each \(\mathbf{x}_{i}\in\mathbf{X}\), but unlike the method from Section 3.1, it does not use approximations to
Figure 3: Illustration of the tangent plane correction method. (a) Monge patch parameterization for a local neighborhood of a regular surface \(\mathcal{M}\) (in gray) in 3D using a coarse approximation to the tangent plane (in yellow) at the center of the stencil \(\mathbf{x}_{\mathrm{c}}\) and the refined approximation to the tangent plane (in cyan). (b) Same as (a), but for the Monge patch with respect to the refined tangent plane. The red spheres denote the points from the stencil and the blue spheres mark the projection of the stencil to the (a) coarse and (b) refined tangent planes. The coarse and refined approximations to the tangent and normal vectors are given as \(\boldsymbol{\xi}_{\mathrm{c}}^{1}\), \(\boldsymbol{\xi}_{\mathrm{c}}^{2}\), and \(\boldsymbol{\eta}_{\mathrm{c}}\), respectively, with tildes on these variables denoting the coarse approximation.
the metric terms. It instead approximates the SDOs at each \(\mathbf{x}_{i}\) using the standard definitions for the derivatives in the tangent plane. So, using local coordinates (2) about \(\mathbf{x}_{i}\), the surface gradient for the tangent plane method is taken as
\[\nabla_{\mathcal{M}}=R_{i}\begin{bmatrix}\partial_{\hat{x}}\\ \partial_{\hat{y}}\\ 0\end{bmatrix}, \tag{16}\]
and the surface divergence of a smooth vector \(\mathbf{u}\in T_{\mathbf{x}_{i}}\mathcal{M}\) is taken as
\[\nabla_{\mathcal{M}}\cdot\mathbf{u}=\begin{bmatrix}\partial_{\hat{x}}& \partial_{\hat{y}}&0\end{bmatrix}R_{i}^{T}\mathbf{u}, \tag{17}\]
where \(R_{i}\) is the rotation matrix given in (8). Similarly, the surface Laplacian in the tangent plane method is
\[\Delta_{\mathcal{M}}=\partial_{\hat{x}\hat{x}}+\partial_{\hat{y}\hat{y}}. \tag{18}\]
We next show that if \(T_{\mathbf{x}_{i}}\mathcal{M}\) is known exactly for each \(\mathbf{x}_{i}\in\mathbf{X}\) and the point at which the SDOs are evaluated is \(\mathbf{x}_{i}\), then the SDOs (16)-(18) are equivalent to the corresponding ones involving metric terms (5)-(7). This was shown indirectly in [31] for the surface Laplacian using its distributional definition. Here we show the result follows explicitly for each surface differential operator (5)-(7) from the local coordinate formulation in Section 2.2.
The first step is to note that the vectors \(\partial_{\hat{x}}\mathbf{f}\big{|}_{\hat{\mathbf{x}}_{i}}\) and \(\partial_{\hat{y}}\mathbf{f}\big{|}_{\hat{\mathbf{x}}_{i}}\) from the Monge parameterization (2) are tangential to the \(\hat{x}\hat{y}\)-plane and must therefore be orthogonal to the vector \(\begin{bmatrix}0&0&1\end{bmatrix}\). This implies \(\partial_{\hat{x}}f=\partial_{\hat{y}}f=0\) at \(\hat{\mathbf{x}}_{i}\), which means the metric tensor (3) reduces to the identity matrix when evaluated at \(\hat{\mathbf{x}}_{i}\). Using this result in (4) for \(\widehat{\nabla}_{\mathcal{M}}\) means that the surface gradient formula (5) is exactly (16) when evaluated at \(\hat{\mathbf{x}}_{i}\). The equivalence of the surface divergence formulas (6) and (17) also follows immediately from this result.
The steps for showing the equivalence of the surface Laplacian operator are more involved. To simplify the notation in showing this result, we denote partial derivatives of \(f\) with subscripts. For the first step of this process, we substitute the explicit metric terms, \(|g|=(1+f_{\hat{x}}^{2})(1+f_{\hat{y}}^{2})-(f_{\hat{x}}f_{\hat{y}})^{2}\), \(g^{11}=(1+f_{\hat{y}}^{2})/|g|\), \(g^{12}=g^{21}=-(f_{\hat{x}})(f_{\hat{y}})/|g|\), and \(g^{22}=(1+f_{\hat{x}}^{2})/|g|\), into (7) and expand the derivatives. Next, we simplify to obtain the following formula:
\[\Delta_{\mathcal{M}}=\frac{1}{\left(f_{\hat{x}}^{2}+f_{\hat{y}}^{ 2}+1\right)^{2}}\bigg{(} \left(f_{\hat{y}}f_{\hat{x}\hat{y}}\left(1+2f_{\hat{x}}^{2}+f_{ \hat{y}}^{2}\right)-(f_{\hat{x}}f_{\hat{x}\hat{x}}+f_{\hat{y}}f_{\hat{x}\hat{y }})(1+f_{\hat{y}}^{2})-f_{\hat{x}}f_{\hat{y}\hat{y}}(1+f_{\hat{x}}^{2})\right) \partial_{\hat{x}}+\] \[\left(f_{\hat{x}}f_{\hat{x}\hat{y}}\left(1+2f_{\hat{y}}^{2}+f_{ \hat{x}}^{2}\right)-(f_{\hat{y}}f_{\hat{y}\hat{y}}+f_{\hat{x}}f_{\hat{x}\hat{ y}})(1+f_{\hat{x}}^{2})-f_{\hat{y}}f_{\hat{x}\hat{x}}(1+f_{\hat{y}}^{2}) \right)\partial_{\hat{y}}\bigg{)}+\] \[g^{11}\partial_{\hat{x}\hat{x}}+2g^{12}\partial_{\hat{x}\hat{y} }+g^{22}\partial_{\hat{y}\hat{y}}\]
Using \(f_{\hat{x}}=f_{\hat{y}}=g^{12}=0\) and \(g^{11}=g^{22}=1\) at \(\hat{\mathbf{x}}_{i}\), this formula reduces to (18).
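A quick symbolic sanity check of this reduction (our own, using SymPy): at \(f_{\hat{x}}=f_{\hat{y}}=0\), the first-order coefficient vanishes and the second-order coefficients reduce to \(1\), \(0\), \(1\).

```
import sympy as sp

fx, fy, fxx, fxy, fyy = sp.symbols('f_x f_y f_xx f_xy f_yy')
g = (1 + fx**2) * (1 + fy**2) - (fx * fy) ** 2
g11, g12, g22 = (1 + fy**2) / g, -fx * fy / g, (1 + fx**2) / g

# Coefficient of the first-order term multiplying the x-derivative above:
coeff_dx = (fy * fxy * (1 + 2 * fx**2 + fy**2)
            - (fx * fxx + fy * fxy) * (1 + fy**2)
            - fx * fyy * (1 + fx**2)) / (fx**2 + fy**2 + 1) ** 2

at_center = {fx: 0, fy: 0}
print(sp.simplify(coeff_dx.subs(at_center)))                          # -> 0
print([sp.simplify(t.subs(at_center)) for t in (g11, 2 * g12, g22)])  # -> [1, 0, 1]
```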
### Approximating the SDOs
Since the tangent plane method does not require computing approximations to any metric terms, we only need to describe the RBF-FD method for approximating the derivatives that appear in (16)-(18). We derive this method from derivatives of interpolants over the projected stencils for each point \(\mathbf{x}_{i}\in\mathbf{X}\) using the same notation as Section 3, and we assume that the tangent space is known. A method for approximating the tangent space, also using RBF-FD, is discussed in Section 4.4.
Let \(\{u_{j}\}_{j\in\sigma^{i}}\) be samples of some function \(u\) over the projected stencil \(\hat{\mathbf{X}}^{i}=\{\hat{\mathbf{x}}_{j}\}_{j\in\sigma^{i}}\). The PHS+Poly interpolant to this data can be written
\[s(\hat{\mathbf{x}})=\sum_{j=1}^{n_{i}}a_{j}\phi(\|\hat{\mathbf{x}}-\hat{\mathbf{ x}}_{\sigma^{i}_{j}}\|)+\sum_{k=1}^{L}b_{k}p_{k}(\hat{\mathbf{x}}), \tag{19}\]
where \(\phi(r)=r^{2\kappa+1}\) is the PHS kernel of order \(2\kappa+1\), \(\kappa\in\mathbb{Z}^{\geq 0}\), \(\sigma^{i}_{j}\) is the \(j\)th index in \(\sigma^{i}\), \(\|\cdot\|\) denotes the Euclidean norm, and \(\{p_{1},\ldots,p_{L}\}\) are a basis for \(\mathbb{P}^{2}_{\ell}\). The expansion coefficients are determined by the \(n_{i}\) interpolation conditions and \(L\) additional moment conditions:
\[s(\mathbf{\hat{x}}_{j})=u_{j},\;j\in\sigma^{i}\quad\text{and}\quad\sum_{j=1}^{ n_{i}}a_{j}p_{k}(\mathbf{\hat{x}}_{\sigma^{i}_{j}})=0,\;k=1,\ldots,L. \tag{20}\]
These conditions can be written as the following \((n_{i}+L)\times(n_{i}+L)\) linear system
\[\begin{bmatrix}A&P\\ P^{T}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\underline{a}\\ \underline{b}\end{bmatrix}=\begin{bmatrix}\underline{u}\\ \underline{0}\end{bmatrix}, \tag{21}\]
where \(A_{jk}=\|\mathbf{\hat{x}}_{\sigma^{i}_{j}}-\mathbf{\hat{x}}_{\sigma^{i}_{k}}\|^{2\kappa+1}\) (\(j,k=1,\ldots,n_{i}\)) and \(P\) is the same Vandermonde-type matrix given in (11). The PHS parameter \(\kappa\) controls the smoothness of the kernel and should be chosen such that \(0\leq\kappa\leq\ell\). With this restriction on \(\kappa\), it can be shown that \(A\) is positive definite on the subspace of vectors in \(\mathbb{R}^{n_{i}}\) satisfying the \(L\) moment conditions in (20) [36]. Hence, if the stencil points \(\mathbf{X}^{i}\) are such that \(\text{rank}(P)=L\) (i.e., they are unisolvent on the space \(\mathbb{P}^{2}_{\ell}\)), then the system (21) is non-singular and the PHS+Poly interpolant is well-posed. Note that this is the same restriction on \(\mathbf{X}^{i}\) for the MLS problem (10) to have a unique solution.
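A sketch (ours) of assembling and solving the saddle-point system (21) for one projected stencil; `P` is the Vandermonde matrix (11) built on the projected points `xy`, assumed assembled elsewhere.

```
import numpy as np

def phs_poly_coeffs(xy, u, P, kappa):
    """Solve (21) for the PHS+Poly coefficients (a, b) of (19)."""
    r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = r ** (2 * kappa + 1)                  # PHS kernel matrix
    n, L = P.shape
    M = np.block([[A, P], [P.T, np.zeros((L, L))]])
    sol = np.linalg.solve(M, np.concatenate([u, np.zeros(L)]))
    return sol[:n], sol[n:]
```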
The stencil weights for approximating any of the derivatives appearing in the SDOs (16)-(18) can be obtained from differentiating the PHS+Poly interpolant (19). Without loss of generality, consider approximating the operator \(\partial_{\hat{x}}\) over the stencil \(\hat{\mathbf{X}}^{i}\). Using vector inner products as in (13), the stencil weights for this operator are determined from the approximation
\[\partial_{\hat{x}}u\big{|}_{\mathbf{\hat{x}}_{i}}\approx\partial_{\hat{x}}s \big{|}_{\mathbf{\hat{x}}_{i}}=\begin{bmatrix}\partial_{\hat{x}}\underline{ \phi}(\mathbf{\hat{x}}_{i})&\partial_{\hat{x}}\underline{p}(\mathbf{\hat{x}}_ {i})\end{bmatrix}^{T}\begin{bmatrix}\underline{a}\\ \underline{b}\end{bmatrix}.\]
where \(\partial_{\hat{x}}\underline{\phi}(\mathbf{\hat{x}}_{i})\) and \(\partial_{\hat{x}}\underline{p}(\mathbf{\hat{x}}_{i})\) are vectors containing the entries \(\partial_{\hat{x}}\|\mathbf{\hat{x}}-\mathbf{\hat{x}}_{\sigma^{i}_{j}}\|^{2 \kappa+1}\big{|}_{\mathbf{\hat{x}}_{i}}\), \(j=1,\ldots,n_{i}\), and \(\partial_{\hat{x}}p_{k}(\mathbf{\hat{x}})\big{|}_{\mathbf{\hat{x}}_{i}}\), \(k=1,\ldots,L\), respectively. Using (21) in the preceding expression gives the stencil weights as the solution to the following linear system
\[\begin{bmatrix}A&P\\ P^{T}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\underline{c}^{i}_{\hat{x}}\\ \underline{\lambda}\end{bmatrix}=\begin{bmatrix}\partial_{\hat{x}}\underline{ \phi}(\mathbf{\hat{x}}_{i})\\ \partial_{\hat{x}}\underline{p}(\mathbf{\hat{x}}_{i})\end{bmatrix}, \tag{22}\]
where the entries in \(\underline{\lambda}\) are not used as part of the weights. Note that this description is equivalent to applying \(\partial_{\hat{x}}\) to the PHS+Poly cardinal basis functions defined over the stencil and evaluating them at \(\mathbf{\hat{x}}_{i}\)[3].
Stencil weights \(\underline{c}^{i}_{\hat{y}}\), \(\underline{c}^{i}_{\hat{x}\hat{x}}\), \(\underline{c}^{i}_{\hat{x}\hat{y}}\), and \(\underline{c}^{i}_{\hat{y}\hat{y}}\) for the other partial derivatives can be computed in an analogous way for each stencil \(\hat{\mathbf{X}}^{i}\), \(i=1,\ldots,N\). These can then be combined together to define the weights \(\{c_{ij}\}\) in (1) for any of the SDOs in (16)-(18).
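A companion sketch (ours) for the weights (22), reusing the saddle-point matrix from the previous sketch; `dphi` and `dp` hold \(\partial_{\hat{x}}\) of the kernels and of the polynomial basis evaluated at the stencil center.

```
import numpy as np

def rbf_fd_weights(A, P, dphi, dp):
    """RBF-FD stencil weights from (22); the multiplier block lambda
    returned alongside the weights is discarded."""
    n, L = P.shape
    M = np.block([[A, P], [P.T, np.zeros((L, L))]])
    sol = np.linalg.solve(M, np.concatenate([dphi, dp]))
    return sol[:n]
```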
### Choosing the stencils and PHS order
Similar to GMLS, we use Algorithm 1 to choose the stencils and also use the same initial stencil size of \(n=L\) for this algorithm. The parameter \(\kappa\) used to determine the PHS order should be chosen with an upper bound of \(\kappa\leq\ell\) (so that (22) is well posed) and a lower bound such that the derivatives of the PHS kernels make sense for whatever operator the RBF-FD stencils are being used to approximate. In this work we use \(\kappa=\ell\) as we have found that this choice works well for approximating various SDOs across a wide range of surfaces. Choosing \(\kappa<\ell\) can be useful for improving the conditioning of the system (22) and for reducing Runge Phenomenon-type edge effects in RBF-FD approximations near boundaries [43].
### Approximating the tangent space
If \(T_{\mathbf{x}_{i}}\mathcal{M}\) is unknown for any \(\mathbf{x}_{i}\in\mathbf{X}\), then we use a similar procedure to the one discussed for GMLS in Section 3.4 (and illustrated in Figure 3) to approximate it. The difference for RBF-FD
is that instead of using an MLS reconstruction of the Monge patch parameterization formed from the coarse tangent plane approximation at each \(\mathbf{x}_{i}\), we use the PHS+Poly interpolant (19) for the reconstruction. The refined approximation to the tangent plane at each \(\mathbf{x}_{i}\) is then obtained from derivatives of the PHS+Poly interpolant of the Monge patch for stencil \(\mathbf{X}^{i}\). We note that this approach is new amongst the different tangent plane methods, as previous approaches assumed the tangent space was computed by some other, possibly unrelated techniques, and not directly from the stencils (e.g., [13; 22; 26]). By combining this technique with the tangent plane method, we arrive at the first comprehensive PHS+Poly RBF-FD framework for approximating SDOs on point cloud surfaces.
## 5 Theoretical comparison of GMLS and RBF-FD
In this section, we make comparisons of the GMLS and RBF-FD methods in terms of some of their theoretical properties, including the different approaches in formulating SDOs, the parameters of the approximations, and the computational cost.
One of the main differences between the GMLS and RBF-FD approaches is that the former uses the local coordinate method to formulate SDOs, while the latter uses the tangent plane method. As shown in Section 4.1, these methods are equivalent if the tangent space for \(\mathcal{M}\) is known for each \(\mathbf{x}_{i}\in\mathbf{X}\) and the SDOs are evaluated at the stencil center \(\mathbf{x}_{i}\). However, the GMLS method does not take advantage of this and instead includes metric terms in the formulation. These metric terms are approximated with the same order of accuracy as the GMLS approximation of the derivatives (see below), so that these errors are asymptotically equivalent as the spacing of the points in the stencil goes to zero. When the tangent space is unknown, both methods again approximate it to the same order of accuracy as their respective approximations of the derivatives.
The GMLS and RBF-FD methods each feature the parameter \(\ell\), which controls the degree of the polynomials used in the approximation. For a given \(\ell\), the formulas for either method are exact for all bivariate polynomials of degree \(\ell\) in the tangent plane formed by the stencil center \(\mathbf{x}_{i}\). Unsurprisingly, \(\ell\) also affects the local accuracy of the formulas in the tangent plane, with increasing \(\ell\) giving higher orders of accuracy for smooth problems; see [11; 40] for a study of the accuracy of GMLS and [44; 45] for RBF-FD. The order of accuracy of both methods depends on the highest order derivative appearing in the SDOs, and is generally \(\ell\) if the derivative order is one and \(\ell-1\) if the derivative order is two. However, for certain quasi-uniform point clouds with symmetries, the order has been shown to be \(\ell\) for GMLS applied to second order operators like the surface Laplacian [11].
The computational cost of the methods can be split between the setup cost and the evaluation cost. The setup cost depends on \(\ell\) and \(n_{i}\) (which depends on \(\tau\)). For each stencil \(\mathbf{X}^{i}\), the dominant setup cost of GMLS comes from solving the \(n_{i}\times L\) system (14), while the dominant cost for RBF-FD comes from solving the \((n_{i}+L)\times(n_{i}+L)\) system (22). We use QR factorization to solve the GMLS system and LU factorization to solve the RBF-FD system, which gives the following (to leading order):
\[\text{Setup cost GMLS}\sim 2\sum_{i=1}^{N}n_{i}L^{2}\quad\text{and}\quad\text{ Setup cost RBF-FD}\sim\frac{2}{3}\sum_{i=1}^{N}(n_{i}+L)^{3}. \tag{23}\]
The stencil sizes depend on \(\ell\) and \(\tau\), and for quasi-uniform point clouds \(\mathbf{X}\), \(n_{i}\) is typically some multiple \(\gamma\) of \(L\). In this case, the setup cost of RBF-FD is higher by approximately \((1+\gamma)^{3}/(3\gamma)\). We note that the setup procedures for both methods are an embarrassingly parallel process, as each set of stencil weights can be computed independently of every other set. The evaluation costs of both methods are the same and can be reduced to doing sparse matrix-vector products. So, for a scalar SDO like the surface Laplacian
\[\text{evaluation cost GMLS \& RBF-FD:}\sim 2\sum_{i=1}^{N}n_{i}. \tag{24}\]
If the \(\ell\) and \(\tau\) parameters remain fixed, so that the stencil sizes remain fixed as \(N\) increases, then both the setup and evaluation costs are linear in \(N\).
## 6 Numerical comparison of GMLS and RBF-FD
We perform a number of numerical experiments comparing GMLS and RBF-FD for approximating the gradient, divergence, and Laplacian on two topologically distinct surfaces: the unit two-sphere \(\mathbb{S}^{2}\) and the torus defined implicitly as
\[\mathbb{T}^{2}=\left\{(x,y,z)\in\mathbb{R}^{3}\,\big{|}\,(1-\sqrt{x^{2}+y^{2}})^ {2}+z^{2}-1/9=0\right\}. \tag{25}\]
For the experiments with the sphere, we consider two different node sets \(\mathbf{X}\), icosahedral and Hammersley; see Figure 4 (a) & (b) for examples. The first are highly structured, quasi-uniform points that are commonly used in numerical weather prediction [2; 4]. They have also been used in other studies on GMLS [12] and RBF-FD [16] methods on the sphere. Hammersley points are low discrepancy point sequences commonly used in Monte-Carlo integration on the sphere [46]. They are highly unstructured with some points that nearly overlap. For the experiments on the torus, we use Poisson disk points generated using the weighted sample elimination (WSE) algorithm [47]. These points are also unstructured, but are quasi-uniform; see Figure 4 (c) for an example. They have also previously been used in studies on GMLS and RBF-FD methods [26]. Convergence results with other point sets can be found in the PhD thesis of the first author [48].
Error estimates for GMLS and RBF-FD typically require the nodes to be quasi-uniform in the sense that the average spacing between the points \(h\) (or more generally the mesh-norm) decreases like \(h\sim N^{-1/2}\)[36; 39]. As mentioned above, the icosahedral and Poisson disk node sets have this property and are thus well-suited for numerically testing convergence rates of GMLS and RBF-FD methods with increasing \(N\) (i.e., convergence as the density of the sampling of the surfaces increases). Specifically, we experimentally examine the algebraic convergence rates \(\beta\) versus \(\sqrt{N}\), assuming the error behaves like \(\mathcal{O}(N^{-\beta/2})\), and include results for polynomial degrees \(\ell=2\), \(4\), and \(6\). The Hammersley node sets are well suited to testing how stable the methods are to stencils with badly placed points. Since these nodes have low discrepancy over the sphere, it also makes sense to test convergence in a similar manner to the other point sets. The exact values of \(N\) used in the experiments for each of the node sets are as follows: for the icosahedral nodes, \(N=10242, 40962, 163842, 655362\); for the Hammersley and Poisson disk nodes, \(N=8153, 32615, 130463, 521855\).
All RBF-FD results that follow were obtained from a Python implementation of the method that only utilizes the scientific computing libraries SciPy and NumPy. For the GMLS results, we use the software package Compadre [35], which is implemented in C++ and uses the portable performance library Kokkos.
Figure 4: Examples from the three node sets considered in the numerical experiments: (a) \(N=2562\), (b) \(N=2048\), (c) \(N=2038\).
### Convergence comparison: Sphere
We base all the convergence comparisons for the sphere on the following function consisting of a random linear combination of translates of 50 Gaussians of different widths on the sphere:
\[u(\mathbf{x})=\sum_{j=1}^{50}d_{j}\exp(-\gamma_{j}\|\mathbf{x}-\mathbf{y}_{j}\|^ {2}),\;\mathbf{x},\mathbf{y}_{j}\in\mathbb{S}^{2}, \tag{26}\]
where \(\mathbf{y}_{j}\) are the centers and are randomly placed on the sphere, and \(d_{j}\) & \(\gamma_{j}\) are sampled from the normal distributions \(\mathcal{N}(0,1)\) & \(\mathcal{N}(15,4)\), respectively. This function has also been used in other studies on RBF-FD methods [15]. We use samples of \(u\) in the surface gradient tests and measure the error against the exact surface gradient, which can be computed using the Cartesian gradient \(\nabla\) in \(\mathbb{R}^{3}\) as \(\nabla_{\mathcal{M}}u=\nabla u-\boldsymbol{\eta}(\boldsymbol{\eta}\cdot \nabla u)\), where \(\boldsymbol{\eta}\) is the unit outward normal to \(\mathbb{S}^{2}\)[49] (which is just \(\mathbf{x}\)). Applying this to (26) gives
\[\nabla_{\mathcal{M}}u=2\sum_{j=1}^{50}d_{j}\gamma_{j}(\mathbf{y}_{j}-\mathbf{ x}(\mathbf{x}\cdot\mathbf{y}_{j}))\exp(-\gamma_{j}\|\mathbf{x}-\mathbf{y}_{j}\|^ {2}). \tag{27}\]
We use samples of this field in the surface divergence tests. Since \(\nabla_{\mathcal{M}}\cdot\nabla_{\mathcal{M}}u=\Delta_{\mathcal{M}}u\), we compare the errors in this test against the exact surface Laplacian of \(u\), which can be computed using the results of [3] as
\[\Delta_{\mathcal{M}}u=-\sum_{j=1}^{50}d_{j}\gamma_{j}(4-\|\mathbf{x}-\mathbf{ y}_{j}\|^{2}(2+\gamma_{j}(4-\|\mathbf{x}-\mathbf{y}_{j}\|^{2})))\exp(-\gamma_{j} \|\mathbf{x}-\mathbf{y}_{j}\|^{2}).\]
We also use this in the tests of the surface Laplacian using samples of \(u\).
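For reproducibility, a sketch (ours) of the test function (26) and its exact surface gradient (27); we interpret \(\mathcal{N}(15,4)\) as having variance 4 (standard deviation 2), which is an assumption on our part, and the seed is arbitrary.

```
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 3))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # random centers on the sphere
d = rng.normal(0.0, 1.0, 50)
gam = rng.normal(15.0, 2.0, 50)                # N(15, 4): std dev 2 (assumed)

def u(x):
    """Test function (26) at a point x on the unit sphere."""
    return np.sum(d * np.exp(-gam * np.sum((x - Y) ** 2, axis=1)))

def grad_u(x):
    """Exact surface gradient (27) at x."""
    e = np.exp(-gam * np.sum((x - Y) ** 2, axis=1))
    proj = Y - np.outer(Y @ x, x)              # rows are y_j - x (x . y_j)
    return 2.0 * np.sum((d * gam * e)[:, None] * proj, axis=0)
```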
For all these tests, we set the radius factor \(\tau\) in the stencil selection Algorithm 1 to \(1.5\), which gave good results for both RBF-FD and GMLS (see the next section for some results on the effects of increasing \(\tau\)). While the exact tangent space for the sphere is trivially determined, we approximate it in all the results using the methods discussed in Section 3.4 for GMLS and Section 4.4 for RBF-FD. These approximations are done with the same parameters as those used for approximating the different SDOs to keep the asymptotic orders of accuracy comparable. Although not included here, we did experiments with the exact tangent space and obtained similar results to those presented here.
Figures 5 and 6 display the convergence results for GMLS and RBF-FD as a function of \(N\). Each figure is for a different point set type and contains the results for approximating the surface gradient, divergence, and Laplacian in both the relative two- and max-norms and for different polynomial degrees \(\ell\). We see from all the results that the measured convergence rates for GMLS and RBF-FD are similar, but that RBF-FD gives lower errors for the same \(N\) and \(\ell\) for approximating the surface gradient and divergence. This is also true for the surface Laplacian when \(\ell=4\) and \(\ell=6\), but not for \(\ell=2\). For this case, GMLS gives lower errors for the same \(N\) on the icosahedral nodes and about the same error for the Hammersley nodes. We also see from Figure 6 that both methods do not appear to be affected by stability issues associated with badly placed points in the stencils for the Hammersley nodes.
The measured convergence rates in the two-norm for the surface gradient and divergence approximations are close to the expected rates of \(\ell\) for both point sets. However, when looking at the convergence rates of the surface Laplacian, we see from Figure 5 that the icosahedral nodes have higher rates than for the Hammersley nodes in Figure 6. These improved convergence rates have been referred to as superconvergence in the GMLS literature and rely on the point set being structured so that the stencils have certain symmetries [11]. When these symmetries do not exist, as is the case for the Hammersley nodes, the convergence rates for the surface Laplacian more closely follow the expected rates of \(\ell-1\).
### Convergence comparison: Torus
The convergence comparisons on the torus are based on the target function
\[u(\mathbf{x})=\frac{x}{8}(x^{4}-10x^{2}y^{2}+5y^{4})(r^{2}-60z^{2}),\;\mathbf{ x}\in\mathbb{T}^{2}, \tag{28}\]
Figure 5: Convergence results for (a) surface gradient, (b) divergence, and (c) Laplacian on the sphere using icosahedral node sets. Errors are given in relative two-norms (first column) and max-norms (second column). Markers correspond to different \(\ell\): filled markers are GMLS and open markers are RBF-FD. Dash-dotted lines without markers correspond to 2nd, 4th, and 6th order convergence with \(1/\sqrt{N}\). \(\beta\) are the measured orders of accuracy computed using the lines of best fit to the last three reported errors.
Figure 6: Same as Figure 5, but for the Hammersley nodes on the sphere.
Figure 7: Same as Figure 5, but for torus using Poisson disk points.
where \(r=\sqrt{x^{2}+y^{2}}\). This function has also been used in other studies of RBF methods for surfaces [49]. As with the sphere example, the surface gradient of \(u\) can be computed as \(\nabla_{\mathcal{M}}u=\nabla u-\boldsymbol{\eta}(\boldsymbol{\eta}\cdot\nabla u)\), where \(\boldsymbol{\eta}\) is the unit outward normal to \(\mathbb{T}^{2}\), which can be computed from the implicit equation (25). The surface Laplacian of (28) is given in [49] as
\[\Delta_{\mathcal{M}}u(\mathbf{x})=-\frac{3x}{8r^{2}}(x^{4}-10x^{2}y^{2}+5y^{4} )(10248r^{4}-34335r^{3}+41359r^{2}-21320r+4000),\;\mathbf{x}\in\mathbb{T}^{2}.\]
Similar to the sphere, we use samples of \(\nabla_{\mathcal{M}}u\) in the tests of the divergence and compare the results with \(\Delta_{\mathcal{M}}u\) above.
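A sketch (ours) of the unit normal to \(\mathbb{T}^{2}\) obtained by normalizing the gradient of the implicit function in (25), which is what the surface-gradient formula above requires:

```
import numpy as np

def torus_normal(p):
    """Unit outward normal to the torus (25) at p = (x, y, z), computed as
    grad(phi)/|grad(phi)| for phi = (1 - sqrt(x^2 + y^2))^2 + z^2 - 1/9."""
    x, y, z = p
    r = np.sqrt(x**2 + y**2)
    g = np.array([-2.0 * (1 - r) * x / r, -2.0 * (1 - r) * y / r, 2.0 * z])
    return g / np.linalg.norm(g)
```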
We first study the convergence rates with the stencil radius scaling \(\tau=1.5\) and approximate the tangent space, as we did with the sphere tests. Figure 7 displays the results for the surface gradient, divergence, and Laplacian. We see that the errors for RBF-FD are again smaller than the errors for GMLS in almost all cases over the range of \(N\) tested. However, GMLS has slightly higher convergence rates in the case of the surface gradient and divergence, but not for the Laplacian. Both methods have convergence rates that are close to the expected rates of \(\ell\) for the surface gradient and divergence and \(\ell-1\) for the Laplacian.
Next we investigate how the approximation properties of the two methods change when \(\tau\) is increased, which results in larger stencil sizes. We focus on approximating the surface Laplacian as similar results were found for the other SDOs. In the left plot of Figure 8, we show the relative two-norm errors of the approximations for a fixed \(N\) as \(\tau\) varies from \(1.5\) to \(2.5\). We see that increasing \(\tau\) has opposite effects on the two methods: the errors decrease for RBF-FD and increase with GMLS. We see similar results in the right plot of Figure 8, where we show the convergence of the methods with increasing \(N\) for different fixed values of \(\tau\) (and \(\ell\) fixed at \(4\)). While the convergence rates do not appear to change with \(\tau\), the overall errors decrease for RBF-FD and increase for GMLS. It should be noted that the errors eventually increase for GMLS as \(\tau\) decreases to \(1\) (which has been observed in other studies) and picking an optimal \(\tau\) in an automated way is challenging (e.g. [41; 42]).
These results make sense when one considers the different types of approximations the methods are based on: RBF-FD is based on interpolation, while GMLS is based on least squares approximation. As the stencil sizes increase, RBF-FD has a larger approximation space consisting of more shifts of PHS kernels, which can reduce the errors [44]. However, GMLS has the same fixed approximation space of polynomials of degree \(\ell\) regardless of the stencil size.
Finally, we compare the errors when the exact and approximate tangent spaces are used in the two methods. We focus only on the surface Laplacian and on \(\ell=4\) since similar results were obtained for the other operators and other \(\ell\). Table 1 shows the results for both methods. The approximate tangent spaces were computed using the methods from Sections 3.4 (GMLS) and 4.4 (RBF-FD), also using the polynomial degree \(\ell=4\). As discussed in Section 5, this choice is made
Figure 8: Relative two-norm errors of the surface Laplacian approximations as the stencil radius parameter \(\tau\) varies. Left figure shows errors for several different values of \(\tau\) and a fixed \(N=130463\). Right figure shows the convergence rates of the methods for different \(\tau\) and a fixed \(\ell=4\).
so that the tangent spaces are approximated with the same asymptotic order of accuracy as the approximation of the metric terms with GMLS. We see from the table that the differences between using the exact or the approximate tangent spaces for approximating the surface Laplacian is minor.
### Efficiency comparison
The results in Section 6 demonstrate that RBF-FD and GMLS have similar asymptotic convergence rates for the same \(\ell\), but that RBF-FD can achieve lower errors for the same \(N\) and stencil sizes. In this section, we consider which of the methods is more computationally efficient in terms of error per unit of computational cost. We examine both the efficiency when the setup costs are included and when just the evaluation costs are included, as measured by (23) and (24), respectively. We limit this comparison to \(\tau=1.5\), but note that it may be possible to tune this parameter to (marginally) optimize the efficiency of either method relative to this choice. Figure 9 displays the results of this examination for the case of computing the surface Laplacian on the torus discretized with Poisson disk sampling. Similar results were obtained for other SDOs and for the sphere, so we omit them. We see from the figure that GMLS is more efficient when the setup costs are included, but that RBF-FD is more efficient when only evaluation costs are included. For problems where the point sets are fixed and approximations to a SDO must be performed many times, as occurs when solving a time-dependent surface PDE, the setup costs are not as important as the evaluation costs, since they are amortized across all time steps. In this scenario RBF-FD is the more efficient method.
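The amortization argument can be seen in a generic sketch: assemble a sparse differentiation matrix once, then apply it at every time step. The matrix below is random and purely illustrative; the sizes and stencil width are assumptions, not the paper's operators.

```python
import time
import numpy as np
import scipy.sparse as sp

# Build a random sparse "differentiation matrix" once (setup), then
# apply it repeatedly (evaluation), as in explicit time stepping.
N, nnz_per_row, steps = 100_000, 31, 100
t0 = time.perf_counter()
rows = np.repeat(np.arange(N), nnz_per_row)
cols = np.random.randint(0, N, N * nnz_per_row)
vals = np.random.randn(N * nnz_per_row)
L = sp.csr_matrix((vals, (rows, cols)), shape=(N, N))   # setup cost
setup = time.perf_counter() - t0

u = np.random.randn(N)
t0 = time.perf_counter()
for _ in range(steps):
    u = u + 1e-4 * (L @ u)   # evaluation cost, amortized per step
evaluate = time.perf_counter() - t0
print(f"setup: {setup:.3f}s, {steps} evaluations: {evaluate:.3f}s")
```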
## 7 Concluding remarks
We presented a thorough comparison of the GMLS and RBF-FD methods for approximating the three most common SDOs: the gradient, divergence, and Laplacian (Laplace-Beltrami). Our analysis of the two different formulations of SDOs used in the methods revealed that if the exact tangent space for the surface is used, these formulations are identical. We further derived a new RBF-FD method for approximating the tangent space of surfaces represented only by point clouds. Our numerical investigation of the methods showed that they appear to converge at similar rates when the same polynomial degree \(\ell\) is used, but that RBF-FD generally gives lower errors for the same \(N\) and \(\ell\). We additionally examined the dependency of the methods on the stencil size (as measured by the \(\tau\) parameter) and found that the errors produced by GMLS grow as the stencil size increases. The errors for RBF-FD, by contrast, appear to keep decreasing as the stencil size increases. However, we do not expect this trend to continue indefinitely, since the tangent plane formulation eventually breaks down when the stencil size becomes too large. Finally, we investigated the computational efficiency of the methods in terms of error versus computational cost and found GMLS to be more efficient when setup costs are included and RBF-FD to be more efficient when only considering evaluation costs.
| \(N\) | GMLS (exact) | GMLS (approx.) | RBF-FD (exact) | RBF-FD (approx.) |
| --- | --- | --- | --- | --- |
| 8153 | 4.7984e-04 | 4.8004e-04 | 1.3311e-04 | 1.3312e-04 |
| 32615 | 6.0457e-05 | 6.0465e-05 | 1.5321e-05 | 1.5322e-05 |
| 130463 | 7.5486e-06 | 7.5488e-06 | 1.8811e-06 | 1.8811e-06 |
| 521855 | 8.0158e-07 | 8.0159e-07 | 2.0177e-07 | 2.0176e-07 |

Table 1: Comparison of the relative \(\ell_{2}\) errors for the surface Laplacian on the torus using the exact tangent space for the torus and approximations to it based on the methods from Sections 3.4 (GMLS) and 4.4 (RBF-FD). In all cases, \(\ell=4\) and the points are based on Poisson disk sampling.
### Acknowledgements
_Funding._ AMJ was partially supported by US NSF grant CCF-1717556. PAB & PAK were supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research (ASCR) Program and Biological and Environmental Research (BER) Program under a Scientific Discovery through Advanced Computing (SciDAC 4) BER partnership pilot project. PAK was additionally supported by the Laboratory Directed Research & Development (LDRD) program at Sandia National Laboratories and ASCR under Award Number DE-SC-0000230927. AMJ was also partially supported by the Climate Model Development and Validation (CMDV) program, funded by BER. Part of this work was conducted while AMJ was employed at the Computer Science Research Institute at Sandia National Laboratories. GBW was partially supported by U.S. NSF grants CCF-1717556 and DMS-1952674.
|
2309.13549 | Towards Robust Robot 3D Perception in Urban Environments: The UT Campus
Object Dataset | We introduce the UT Campus Object Dataset (CODa), a mobile robot egocentric
perception dataset collected on the University of Texas Austin Campus. Our
dataset contains 8.5 hours of multimodal sensor data: synchronized 3D point
clouds and stereo RGB video from a 128-channel 3D LiDAR and two 1.25MP RGB
cameras at 10 fps; RGB-D videos from an additional 0.5MP sensor at 7 fps, and a
9-DOF IMU sensor at 40 Hz. We provide 58 minutes of ground-truth annotations
containing 1.3 million 3D bounding boxes with instance IDs for 53 semantic
classes, 5000 frames of 3D semantic annotations for urban terrain, and
pseudo-ground truth localization. We repeatedly traverse identical geographic
locations for a wide range of indoor and outdoor areas, weather conditions, and
times of the day. Using CODa, we empirically demonstrate that: 1) 3D object
detection performance in urban settings is significantly higher when trained
using CODa compared to existing datasets even when employing state-of-the-art
domain adaptation approaches, 2) sensor-specific fine-tuning improves 3D object
detection accuracy and 3) pretraining on CODa improves cross-dataset 3D object
detection performance in urban settings compared to pretraining on AV datasets.
Using our dataset and annotations, we release benchmarks for 3D object
detection and 3D semantic segmentation using established metrics. In the
future, the CODa benchmark will include additional tasks like unsupervised
object discovery and re-identification. We publicly release CODa on the Texas
Data Repository, pre-trained models, dataset development package, and
interactive dataset viewer on our website at https://amrl.cs.utexas.edu/coda.
We expect CODa to be a valuable dataset for research in egocentric 3D
perception and planning for autonomous navigation in urban environments. | Arthur Zhang, Chaitanya Eranki, Christina Zhang, Ji-Hwan Park, Raymond Hong, Pranav Kalyani, Lochana Kalyanaraman, Arsh Gamare, Arnav Bagad, Maria Esteva, Joydeep Biswas | 2023-09-24T04:43:39Z | http://arxiv.org/abs/2309.13549v2 | # Towards Robust Robot 3D Perception in Urban Environments: The UT Campus Object Dataset
###### Abstract
We introduce the UT Campus Object Dataset (CODa), a mobile robot egocentric perception dataset collected on the University of Texas Austin Campus. Our dataset contains 8.5 hours of multimodal sensor data: synchronized 3D point clouds and stereo RGB video from a 128-channel 3D LiDAR and two 1.25MP RGB cameras at 10 fps; RGB-D videos from an additional 0.5MP sensor at 7 fps, and a 9-DOF IMU sensor at 40 Hz. We provide 58 minutes of ground-truth annotations containing 1.3 million 3D bounding boxes with instance IDs for 53 semantic classes, 5000 frames of 3D semantic annotations for urban terrain, and pseudo-ground truth localization. We repeatedly traverse identical geographic locations for a wide range of indoor and outdoor areas, weather conditions, and times of the day. Using CODa, we empirically demonstrate that: 1) 3D object detection performance in urban settings is significantly higher when trained using CODa compared to existing datasets even when employing state-of-the-art domain adaptation approaches, 2) sensor-specific fine-tuning improves 3D object detection accuracy and 3) pretraining on CODa improves cross-dataset 3D object detection performance in urban settings compared to pretraining on AV datasets. Using our dataset and annotations, we release benchmarks for 3D object detection and 3D semantic segmentation using established metrics. In the future, the CODa benchmark will include additional tasks like unsupervised object discovery and re-identification. We publicly release CODa on the Texas Data Repository [1], pre-trained models, dataset development package, and interactive dataset viewer1. We expect CODa to be a valuable dataset for research in egocentric 3D perception and planning for autonomous navigation in urban environments.
Footnote 1: Interactive dataset viewer available on the CODa website
## I Introduction
Accurate and robust perception of objects and scenes is crucial for autonomous mobile robots performing tasks in urban environments. To this end, the computer vision and robotics communities have proposed datasets and benchmarks [2, 3, 4, 5, 6, 7] to serve as training data for the development and fair evaluation of modern data-driven approaches. However, perception models trained on existing datasets do not perform well in urban environments for the following reasons: 1) they exhibit significant sensor and viewpoint differences from urban robots, 2) they focus exclusively on RGB images, 3) they lack sufficient object or terrain annotation diversity. These characteristics limit egocentric robot capabilities [8, 9, 10], which are important for navigation and planning tasks.
Many egocentric 3D perception datasets are collected from urban robots or autonomous vehicles (AVs). Existing urban robotics datasets [11, 6, 7] in human-centric environments possess similar sensors and viewpoints but lack semantic annotation diversity. In contrast, autonomous vehicle (AV) datasets [12, 5, 13] contain semantic annotations but are collected from cars on streets, roads, or highways. They operate higher fidelity sensor suites, encounter different geometric and semantic entities, and have different sensor viewpoints compared to urban robots. This causes perception models trained on AV datasets to perform poorly on robots in urban settings -- Section VII-B presents quantitative analyses demonstrating this significant performance gap.
To address this gap, we contribute the **UT Campus Object Dataset (CODa)**, a large-scale annotated multimodal dataset for training and benchmarking egocentric 3D perception for robots in urban environments. Our dataset comprises 23 sequences in indoor and outdoor settings on a university campus and contains repeated traversals from different viewpoints, weather conditions (**sunny, rainy, cloudy, low-light**), and scene densities.
Fig. 1: Three of the five modalities available in CODa: **RGB** image with **3D object** labels (bottom left), **3D point cloud** (middle), and **stereo depth** image (bottom right).
The sensor data includes 1) 3D point clouds from a 128-channel 3D LiDAR, 2) RGB images from a stereo camera pair synchronized with the 3D LiDAR, 3) RGB-D images from an active depth camera, 4) RGB-D images from a passive depth camera, and 5) 9 DoF inertial measurements. The dataset includes sensor intrinsic and extrinsic calibrations for each sequence and pseudo ground truth global poses.
CODa contains **1.3 million** ground truth 3D bounding box annotations, instance IDs, and occlusion values for objects in the 3D point cloud. Furthermore, it includes **5000** frames of 3D terrain segmentation annotations for 3D point clouds. All annotations are provided by human annotators, and labeled at 10Hz for 3D bounding boxes, and 2-10Hz for terrain semantic segmentation. Compared to similar 3D perception datasets, CODa has far more class diversity, containing **53** object classes and **23** urban terrain types. This includes classes that are useful to urban navigation, such as doors, railings, stairs, emergency phones, and signs. Using our annotations, we release benchmarks using established metrics [5, 14] for 3D object detection and 3D semantic segmentation with plans for perception and planning tasks relevant to autonomous navigation.
In the rest of the manuscript, we review existing datasets and relate CODa to them (Section II), then describe the sensor setup (Section III), data collection procedure (Section IV), annotation details (Section V), and dataset contents (Section VI). We characterize the semantic composition of our dataset, propose train/validation/test splits, and provide qualitative sensor data visualizations. Finally, in Section VII we empirically analyze how using CODa improves object detection performance for robots in urban settings, how different 3D LiDAR resolutions affect pre-trained object detector performance, and how pre-training on CODa outperforms AV datasets in cross-dataset object detection on JRDB [7].
## II Related Work
In this section, we review existing egocentric 3D LiDAR datasets for urban and AV domains. We limit the discussion to real-world datasets, as there still exists a significant domain gap between simulation and real-world [33, 34].
### _Urban Datasets_
Urban datasets are collected in human-centric environments, such as college campuses, city streets, and shopping malls. Similar to our work, these datasets are used to benchmark robot performance in human-centric environments, often emphasizing long-term SLAM, object detection, and semantic segmentation. While there exist computer vision benchmarks for 3D object detection [3] and semantic segmentation [32], we focus on datasets collected from mobile robots due to differences in perspective shift and sensor suite.
Long-term SLAM datasets like MIT Stata [23], NCLT [11], FusionPortable [29], and OpenLORIS [30] contain globally consistent ground truth poses and multimodal sensor data. They are repeatedly collected over multiple times of day to fairly evaluate long-term SLAM methods that rely on geometric, visual, or proprioceptive sensor information. SCAND [6] is another large-scale dataset with multimodal sensor data collected over multiple times of day in a campus environment. Instead of ground truth poses, it contains socially compliant navigation demonstrations and operator commands to support social navigation research. Similarly, CODa contains multimodal sensor data with repeated trials over multiple times of day, but distinguishes itself by providing object and terrain annotations to support methods that rely on semantic information.
Besides CODa, there does not exist an urban robot dataset that contains 3D object and terrain annotations. RUGD [26] and Rellis-3D [27] are robot datasets with 2D and 3D semantic segmentation annotations respectively, but are collected on off-road terrains. These environments contain distinct semantic entities from those found in urban environments. The closest work to ours is JRDB [7], a mobile robot dataset with 1) 1.8 million 3D bounding box annotations 2) indoor and outdoor sequences 3) egocentric sensor data. However, JRDB [7] is intended for pedestrian understanding research as it only contains pedestrian semantic annotations. In contrast, CODa contains object and terrain level annotations for a wide range of semantic classes to support general-purpose egocentric perception and navigation in urban environments.
### _AV Perception Datasets_
Unlike urban robot datasets, AV datasets are collected from car-mounted, high-fidelity sensor suites and operate exclusively on roads, parking lots, and highways. Despite these differences, their large size and scene diversity may be leveraged to train 3D perception algorithms for urban settings.
Among AV datasets, the Oxford RobotCar dataset [20] contains the most repeated traversals over different weather, object density, and lighting conditions. It provides ground truth poses for evaluating long-term SLAM methods that only rely on visual and geometric information. For 2D multitask learning problems, Berkeley DeepDrive [19] provides semantic annotations at both the object and pixel level for a wide range of semantic classes and weather conditions.
Lyft L5 [22], CityScapes3D [18], and KITTI-360 [35] contain labeled 2D images or 3D bounding boxes with more non-overlapping semantic classes than other AV datasets. They contain vehicle-centric semantic classes to support multi-class object detection research in AV domains. Conversely, large-scale datasets like Waymo Open [12] and nuScenes [13] have fewer unique semantic classes but have more 3D semantic annotations per class and greater scene diversity. These characteristics establish them as de facto benchmarks for 3D object detection and semantic segmentation tasks, while also being valuable for pre-training 3D object detectors to recognize similar objects across domains.
Other works like Argoverse2 [15] and ONCE [16] support self-supervised point cloud learning by providing more unannotated 3D point clouds than any other AV dataset. Additionally, both contain 3D object labels, but Argoverse2's [15] labels are limited to five meters within the drivable area and ONCE [16] is limited to five object classes. For robots operating in urban environments, it is important to identify a diverse set of objects in non-drivable areas, reinforcing the need for a dataset like CODa.
## III Sensor Setup
CODa was collected using a Clearpath Husky robot [36] equipped with a custom sensor suite with the following sensors, illustrated in Fig. 2:
* 1 \(\times\) Ouster 128-channel 3D LiDAR: 128 beams - \(0.35^{\circ}\) vertical angular resolution, 2048 beams - \(0.17^{\circ}\) horizontal angular resolution, up to 2.6 million points/second, field of view: 360\({}^{\circ}\) horizontal, 45\({}^{\circ}\) vertical, range: 128 m. Point clouds captured in 128x1024 channels @ 10 Hz.
* 2 \(\times\) Teledyne FLIR Blackfly S RGB cameras (BFS-U3-51S5C-C) up to 5 Megapixels, 75 Hz, global shutter. Paired with KOWA F2.8/5mm lenses. Field of view (H x W): 70\({}^{\circ}\)x79\({}^{\circ}\). Images captured in 1.25 Megapixels @ 10 Hz, hardware synchronized with 3D LIDAR.
* 1 \(\times\) Microsoft Azure Kinect active RGBD camera up to 12 and 1 MP (RGB and Depth) @ 15 Hz, rolling shutter. 7 microphone circular array. RGB and Depth Images captured in 2.0 MP @ 5Hz
* 1 \(\times\) Stereolabs ZED 2i passive stereo camera up to 4 Megapixels @ 15 Hz, rolling shutter. Images captured in 0.5MP @ 5Hz
* 1 \(\times\) Vectornav VN-310 Dual GNSS/INS, up to 800 Hz IMU Data. Inertial and GPS data captured @ 40Hz
| Dataset | Pose | #Cls | #3D Bbx | #3D/2D Seg Labels | Inst. | #3D Ann. Frames | 3D Frames | 3D pts/Frame | 2D Frames | Setting | Time of Day | Night/Rain |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MIT Stata [23] | S | 0 | 0 | 0 | Y | 0 | ~5.1M | 1.4K | ~5.1M | I | N/A | N/N |
| TUM RGB-D [24] | MC | 0 | 0 | 0 | N | 0 | 47K | 0 | 47K | I | N/A | N/N |
| Newer College [25] | S | 0 | 0 | 0 | N | 0 | 23K | 131K | 23K | I | M, A | N/N |
| JRDB [7] | None | 1 | 1.8M* | 0 | Y | 28K | 28K | 130K | 28K | I+O | M, A | N/N |
| SCAND [6] | None | 0 | 0 | 0 | N | 0 | 313K | 65K | 626K | I+O | M, A | N/N |
| RUGD [26] | None | 24 | 0 | 7.4K (2D) | N | 0 | 0 | 0 | 37K | O | M, A | N/N |
| Rellis-3D [27] | S+G | 20 | 0 | 13K | N | 13K | 1.33K | 1.33M | 6K | O | M, A | N/N |
| NCLT [11] | S+G+R | 0 | 0 | 0 | N | 0 | 1.2M | 69.5K | 628K | I+O | M, A, E | Y/N |
| ALITA [28] | S | 0 | 0 | 0 | N | 0 | 7.2M | 65K | 7.2M | O | M, A | N/N |
| FusionPortable [29] | S+MC | 0 | 0 | 0 | N | 0 | 1.4M | 131K | 2.9M | I+O | M, A | N/N |
| OpenLORIS [30] | MC | 40 | 0 | 0 | N | 0 | 497K | N/A | 497K | I+O | M, A | N/N |
| Pascal VOC3D [41] | None | 12 | 36K | 0 | Y | 30K | 0 | N/A | 22K | I+O | M, A, E | N/N |
| NYU Depth [23] | None | 26 | 0 | 1.45K | Y | 1.45K | 407K | N/A | 407K | I | M, A, E | N/N |
| **CODa (ours)** | **S+G** | **53** | **1.3M** | **6K** | **Y** | **32K** | **324K** | **131K** | **324K** | **I+O** | **M, A, E** | **Y/Y** |

TABLE II: Comparison between CODa (ours) and similar campus-scale robot datasets. CODa provides the largest number of object classes, 3D bounding box annotations, and annotated 3D frames under the widest range of environmental and weather conditions. Pose annotations: G - GPS, R - GPS-RTK, S - SLAM, MC - Motion Capture. Setting: I - Indoor, O - Outdoor. Time of Day: M - Morning, A - Afternoon, E - Evening. * JRDB [7] only provides annotations for pedestrians.
| Dataset | Pose | #Cls | #3D Bbx | #Seg Labels | Inst. | #3D Ann. Frames | 3D Frames | 3D pts/Frame | 2D Frames | Time of Day | Night/Rain |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KITTI [5] | G+R | 3 | 80K | 43K | Y | 15K | 15K | 120K | 13K | M, A | N/N |
| nuScenes [13] | G+R | 23 | 1.4M | 40K | Y | 40K | 400K | 34K | 1.4M | M, A, E | Y/Y |
| Argoverse2 [15] | G+R | 30 | 12M | 0 | Y | 150K | 6M | 107K | 300K | M, A, E | Y/Y |
| Waymo Open [12] | G+R | 4 | 12M | 230K | Y | 192K | 192K | 177K | 1M | M, A, E | Y/Y |
| ONCE [16] | G | 5 | 417K | 0 | Y | 21K | 1M | 720K | 7M | M, A, E | Y/Y |
| KITTI-360 [17] | G+R | 14 | 68K | 156K | Y | 100K | 100K | 200K | 150K | Not Given | Not Given |
| CityScapes3D [18] | G+R | 8 | Not Given | 20K | Y | 20K | 0 | N/A | 25K | M, A, E | Y/Y |
| BDD100K [19] | G+R | 10 | 1.8M (2D) | 10K (2D) | Y | 0 | 0 | N/A | 120M | M, A, E | Y/Y |
| Oxford RobotCar [20] | G+R | 0 | 0 | 0 | N | 0 | 0 | N/A | 20M | M, A, E | Y/Y |
| ApolloCar3D [21] | G | 6 | 60K | 120K (2D) | Y | 0 | Not Given | N/A | 5.27K | M, A, E | Y/Y |
| Lyft L5 [22] | G+R | 9 | 15K | 15K | Y | Not Given | Not Given | Not Given | 323K | M, A | N/N |
| CODa (ours) | S+G | 53 | 1.1M | 6K | Y | 32K | 324K | 131K | 131K | M, A, E | Y/Y |

TABLE I: Comparing dataset statistics between CODa (ours) and existing AV datasets. We use the following abbreviations: M - Morning, A - Afternoon, E - Evening, G - GPS, R - RTK, S - SLAM.
Fig. 2: **Sensor setup** including mounting positions. All heights are relative to the ground plane.
An onboard computer with an Intel i7-8700 3.2 GHz CPU and 32 GB RAM is securely mounted inside the robot and records all sensor streams to a high-speed Intel 760p 512 GB SSD (SSDPEKKW512G8). A GPU-equipped laptop is mounted on the robot to process the Azure Kinect and ZED 2i camera data before transmitting both RGB-D streams to the computer via 10 Gigabit Ethernet. The coordinate system definitions are described in the CODa documentation [1].
The Ouster LIDAR and FLIR cameras are synchronized by hardware using the Ouster LIDAR 10Hz sync pulse to trigger the FLIR cameras. This ensures that the start of the LIDAR scan is synchronized with the start of the exposure of the FLIR cameras. All other sensors have timestamps, but their capture times are not synchronized.
We calibrate the stereo RGB cameras with a checkerboard calibration pattern [37] using multiple images of the checkerboard at different positions. We obtain the LiDAR camera extrinsics using checkerboard images and an approach [38] that optimizes the sensor pose with respect to the checkerboard target and the entire scene. To obtain the LiDAR-IMU extrinsic, we use a target-free extrinsic calibration algorithm [39] that exploits vehicle motion. A calibration half-cube is used to ensure that the LiDAR depth camera extrinsic is accurate. Every sequence in CODa includes a pre or post-run calibration log file containing the raw sensor data and calibration targets in the field of view. Fig. 3 shows a sample frame from the calibration log file.
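As an illustration of the intrinsics step, a standard checkerboard calibration in the spirit of [37] can be run with OpenCV. The board dimensions, square size, and image filenames below are assumptions for the sketch, not CODa's actual calibration parameters.

```python
import cv2
import numpy as np

pattern = (8, 6)    # inner corners per row/column (assumed)
square = 0.05       # checkerboard square edge in meters (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in ["cal_000.png", "cal_001.png"]:   # hypothetical image files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    size = gray.shape[::-1]
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsics K and distortion coefficients from the detected corners.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms)
```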
## IV Data Collection Procedure
In this section, we describe the sensor calibration procedure and data collection routes for CODa.
### _Per Sequence Calibration Procedure_
Robot operators calibrated the 3D LiDAR and RGB cameras using the methods in Section III for each sequence and saved the raw sensor data into a calibration log. Because the 3D LiDAR and IMU are fixed with respect to each other, we performed the LiDAR-IMU calibration once and used this calibration for all sequences. We moved the checkerboard at three different heights across the image to obtain accurate stereo camera calibrations. Fig. 3 shows the calibration process and sensor modalities in the calibration log.
### _Operator Roles and Data Privacy_
After calibration, pairs of operators drove the robot along one of four predetermined routes on UT campus. The primary operator drove the robot along predefined routes, including stopping at waypoints defined in Fig. 4, which are used to ensure global pose consistency between sequences. The second operator addressed questions from the crowd about CODa and handed out research information sheets containing a data privacy disclaimer and contact information. This operator logged all individuals' requests to opt out from participating in CODa. We increase transparency by mounting a sign on the robot to indicate when it is recording data. While no individuals opted out during our experiments, we protect the privacy of those who do with our user data removal procedure. We describe the user data removal procedure and release the research information sheet on TDR [1]. In the next section, we explain the routes in detail.
### _Data Collection Routes_
The four navigation routes along UT campus are: GatesDell, WCPowers, Guad24, and Union. Fig. 5 shows the reconstructed map for the first three routes in red, blue, and green.
Fig. 4: Spatial map of geographic locations contained in CODa. Operators pause the robot at each waypoint denoted on the map to correct global pseudo-ground truth pose estimates. We refer to blue, green, brown, red, and purple locations as SWY, GDC, WCP, Guad, and UNB in Fig. 6
Fig. 3: Sample frame from calibration file. Calibration half-cube and checkerboard are simultaneously visible in all RGB, depth, and 3D LIDAR frames.
| Route | Setting | Locations | Traversals | Dist. (m) | Dur. (hr) |
| --- | --- | --- | --- | --- | --- |
| Gates-Dell | Both | GDC, SWY | 7 | 5139 | 1.95 |
| Guad24 | Out | Guad, SWY | 6 | 8799 | 3.07 |
| WCPowers | Both | WCP, SWY | 7 | 8005 | 2.79 |
| Union | In | UNB, Guad | 3 | 2450 | 0.93 |

TABLE III: Summary of the four routes in CODa. We traverse each route multiple times to capture diverse viewpoints, weather, and lighting conditions. Each route passes through a set of geographic locations defined in Fig. 4. The setting column describes whether the route is indoors, outdoors, or both.
We summarize the characteristics for each route in Table III, including the total distance traversed, total duration, number of traversals, and geographic locations visited for each route. These geographic locations are shown in Fig. 4 as SWY, GDC, WCP, Guad, and UNB. We choose each location for the following attributes: SWY has a large open area shared by vehicles and pedestrians, GDC has large open areas with classrooms, WCP has scenes from a cafeteria, Guad has scenes from sidewalks and vehicle-only roads, and UNB has scenes from a library and study area.
Each location is observed multiple times from various viewpoints, weather, and lighting conditions. We quantify the observation diversity in Fig. 6 by counting the number of observed frames in CODa for each location under four weather/lighting conditions (Cloudy, Dark, Sunny, Rainy) during three times of day (Morning, Afternoon, Evening). While we are unable to deploy the robot when it is actively raining, we collect data immediately after rainfall and label frames that satisfy these conditions as rainy. Across all sequences, CODa contains 3 rainy, 7 cloudy, 4 dark, and 9 sunny sequences. Fig. 17 qualitatively showcases the data diversity in CODa using sampled images from each sequence.
## V Annotation and Labels
We utilized Deepen AI 2, a 3rd party annotation company, to annotate point clouds from our 3D LiDAR with 3D bounding box and semantic segmentation annotations. We instructed Deepen annotators using our annotation guide, which we provide in the data report [1]. The annotation guide contains visual examples for each object and terrain class described in Fig. 15 and Fig. 13, quantitative occlusion level definitions, and operating procedures to determine object instance IDs. Following these instructions, Deepen annotators manually labeled 58 minutes of frames, followed by manual quality assurance checks to ensure that at least 95% of the bounding boxes and 90% of the terrain segmentation annotations were valid on the 3D point clouds. Our internal team then inspected each frame for additional issues. We now describe each annotation type in CODa in detail.
Footnote 2: Company Website (Deepen): [https://www.deepen.ai/](https://www.deepen.ai/)
### _3D Bounding Boxes_
Each 3D bounding box has 9 degrees of freedom, instance ID, object class, and occlusion level attributes. We maintain the same instance ID for each object as long as it is observable from the LiDAR or camera sensor or if it does not leave view for longer than 3 seconds. There are six occlusion types, ranging from None, Light, Medium, Heavy, Full, and Unknown occlusion. The first five occlusion types are used if the object is observable by the cameras or can be identified fully in the 3D point cloud. Objects that never enter the camera view or are geometrically ambiguous are given the unknown occlusion status. This label definition makes CODa useful for evaluating the 3D object tracking task under occlusion. Fig. 15 defines the object ontology for CODa. Because the full list of object classes is large, we refer the reader to the data report [1] for visual examples of each class.
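For concreteness, a single annotation can be represented as a small record like the sketch below. The field names are assumptions chosen for illustration, not CODa's exact on-disk schema.

```python
from dataclasses import dataclass

OCCLUSION = ("None", "Light", "Medium", "Heavy", "Full", "Unknown")

@dataclass
class Box3D:
    # 9 degrees of freedom: center, extents, orientation.
    cx: float; cy: float; cz: float
    length: float; width: float; height: float
    roll: float; pitch: float; yaw: float
    instance_id: int      # persists while the object stays in view
    object_class: str     # one of the 53 object classes
    occlusion: str        # one of the six occlusion levels above

box = Box3D(4.2, -1.0, 0.9, 0.6, 0.7, 1.7, 0.0, 0.0, 1.2,
            instance_id=17, object_class="Pedestrian", occlusion="Light")
print(box)
```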
### _3D Semantic Segmentation_
We annotate each point on the surrounding terrain with a semantic class label. We differentiate terrain classes by their visual appearance and geometric shape. For instance, red and yellow bricks are geometrically similar but are treated as different terrains because they are visually distinct. This makes 3D semantic segmentation challenging with just a single 3D LiDAR and encourages multi-modal methods that fuse 2D images and 3D LiDAR to infer terrain-level semantic labels. We label ambiguous points as unknown and points not associated with terrain as unlabeled. The full terrain ontology and examples for each class can be found in Fig. 13 and Fig. 14 respectively.
Fig. 5: Satellite image of UT campus with the transformed point clouds and waypoints overlaid. The blue, red, and green points correspond to the WCP, GDC, and Guad routes respectively. Operators pause the robot at each waypoint to establish global correspondences for pseudo-ground truth pose estimates. Most sequences exhibit poor GPS reception, thus requiring poses to be estimated from LiDAR, inertial, and waypoint data.
Fig. 6: Number of frames in CODa by geographic location and weather condition. Locations with temporally diverse observations contain frames during multiple times of the day. The coverage areas for locations SWY, GDC, WCP, Guad, and UNB are marked by blue, green, brown, red, and purple lines respectively in Fig. 4.
### _Pseudo Ground Truth Poses_
Due to the unreliability of GPS in urban environments, we use LeGO-LOAM [40] to obtain initial robot poses and HitL-SLAM [41] to refine these pose estimates globally between runs using manual annotations with known map correspondences. In Fig. 4, we qualitatively assess our method's accuracy by visualizing the global pose estimates on a satellite image of UT campus and the 3D map reconstruction.
## VI Analysis of CODa Annotations and Statistics
In this section, we analyze the distribution of data in CODa by geographic location, weather, and lighting conditions.
Fig. 6 shows that all geographic locations in CODa (besides WCP) contain data in the morning, afternoon, and evening. All routes with outdoor observations contain at least one sequence captured under rainy conditions. While the full dataset is biased toward sunny and cloudy weather, Fig. 9 shows that the annotated dataset contains 20 object classes with at least 100 labels under all conditions, and this number rises to 36 classes if we only consider three of the four conditions. Aside from ATM, most classes contain 100 to 1000 labels each, with Fig. 9 showing the top five classes: pedestrian, tree, pole, railing, and chair. This class and weather imbalance is common in real-world datasets [13, 16] and is a challenging aspect that perception algorithms deployed in urban environments need to be resilient to.
Fig. 10 shows the proportion of each terrain class among the annotated points in CODa, organized by the parent class defined in the terrain ontology in Fig. 13. Among the 23 terrain classes, 21 have more than 200,000 annotated points each, with outdoor classes dominating the majority of the annotations. The two classes that do not satisfy this are dome mat and metal floor. This is because these terrains are small in size and uncommon in environments where they are found. This class imbalance is present in other real-world semantic segmentation datasets [14, 18] as well.
We propose train, validation, and test splits for our 3D object detection and 3D terrain segmentation benchmarks, with each split containing 70%, 15%, and 15% of each annotated sequence respectively. We visualize the spatial distribution of objects around the robot in Fig. 7 for static, dynamic, and all objects for each proposed split in a Kernel Density Estimate (KDE) plot. This demonstrates that both the density and relative position of objects around the robot are similar between our proposed splits.
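As a concrete reading of the split, the sketch below carves each annotated sequence into 70/15/15 portions; treating the portions as contiguous chunks is an assumption of the sketch.

```python
def split_sequence(frames, train=0.70, val=0.15):
    # Carve one annotated sequence into contiguous 70/15/15 chunks.
    n = len(frames)
    i, j = int(n * train), int(n * (train + val))
    return frames[:i], frames[i:j], frames[j:]

frames = list(range(1000))            # hypothetical frame indices
tr, va, te = split_sequence(frames)
print(len(tr), len(va), len(te))      # 700 150 150
```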
## VII Experiments and Analysis
We leveraged the unique characteristics of CODa to conduct experiments that answer the following questions:
* Question 1: How well do 3D object detectors trained on large-scale AV datasets perform on CODa?
* Question 2: How well does unsupervised domain adaptation from AV datasets perform on CODa?
* Question 3: Can we improve object detection performance for low-resolution single 3D LiDAR setups on robots by fine-tuning on downsampled LiDAR point clouds?
* Question 4: Does pre-training on CODa improve cross-dataset object detection on existing urban robotics datasets?
### _Experimental Setup -- Selecting a 3D Object Detection Algorithm_
We choose a 3D object detector by evaluating the performance of three 3D object detectors: PointPillars [42], CenterPoint [43], and PVRCNN [44] on KITTI, Waymo, nuScenes, and CODa. These datasets are among the most widely used in 3D object detection benchmarks and for cross-dataset domain adaptation analysis [16, 45]. We evaluate the preceding models because they are LiDAR-only approaches, easy to reproduce, and achieve state-of-the-art detection performance on AV datasets. Both CenterPoint and PointPillars are top-performing open-source methods on the Waymo and nuScenes leaderboards, and the OpenPCDet [46] implementation of PVRCNN unofficially outperforms the former models on Waymo. We use the OpenPCDet implementation of each model because it
Fig. 7: Spatial distribution of static (top), dynamic (middle), and all (bottom) objects around the robot for the train (left), validation(center), and test (right) splits. Angles (in degrees) are with respect to the forward heading of the robot, range values in meters.
provides the model configuration files, making results more reproducible.
We use the default model configurations provided in OpenPCDet and train each model for 30 epochs or until the performance saturates. For models that OpenPCDet does not provide configurations for, we benchmark various model architectures in Table IV and select the most favorable one.
All experiments involving CODa in Tables IV, V, and VI are conducted using the medium train, validation, and test split for computational reasons. We use the full CODa split for Table VII experiments to better match the scene diversity in AV datasets. For all metrics, we use the 3D object detection and bird's eye view evaluation metric proposed in the KITTI Vision Benchmark Suite [5] with an IOU of 0.7, 0.5, and 0.5 for the car, pedestrian, and cyclist classes respectively. This class list is consistent across CODa and AV datasets. For completeness, we report model performance on the full list of object classes for multiple LiDAR resolutions in the appendix.
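To make the matching criterion concrete, the sketch below applies the per-class IoU thresholds to a detection/ground-truth pair. The axis-aligned BEV IoU is a simplification of the rotated-box IoU used by the KITTI metric.

```python
IOU_THRESH = {"Car": 0.7, "Pedestrian": 0.5, "Cyclist": 0.5}

def bev_iou(a, b):
    # a, b = (x1, y1, x2, y2) axis-aligned footprints in the ground plane.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def is_true_positive(det_box, det_cls, gt_box, gt_cls):
    return (det_cls == gt_cls and
            bev_iou(det_box, gt_box) >= IOU_THRESH[det_cls])

print(is_true_positive((0, 0, 4, 2), "Car", (0.2, 0.1, 4.1, 2.0), "Car"))
```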
We observe in Table IV that PVRCNN generally performs the best for 3D bounding box detection on large-scale AV datasets and CODa. As such, we select this model architecture to use in all of our later experiments. For a full summary of all models evaluated for this experiment, please refer to Table X in Appendix Section XI-C.
Fig. 8: Number of object labels per class organized by topological category. Objects in each topological category are sorted in order of most to least common.
Fig. 9: Histogram of the number of annotations per object class under four weather conditions (sunny, rainy, cloudy, dark). Object classes are organized by most to least frequent from left to right. Bars with stars are cloudy, diagonal lines are dark, circles are sunny, and horizontal lines are rainy.
### _AV Dataset to CODa Adaptation_
We apply several domain adaptation strategies to evaluate 3D object detector performance on CODa with and without domain-specific labels. In our experiment setup, we choose the object class list to be car, pedestrian, and cyclist so that it is consistent with the standard class list evaluated for nuScenes and Waymo. We perform the standard 3D data augmentation techniques (scaling, rotation, flipping) and align the point cloud ground plane heights for our experiments.
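A minimal sketch of these augmentations follows; the parameter ranges are typical defaults (assumed, not taken from the paper), and in a real pipeline the same transform is applied to the ground truth boxes.

```python
import numpy as np

def augment(points, rng, ground_offset=0.0):
    # points: (N, 3+) array of x, y, z (plus extra features).
    pts = points.copy()
    pts[:, 2] += ground_offset              # align ground plane heights
    pts[:, :3] *= rng.uniform(0.95, 1.05)   # global scaling
    a = rng.uniform(-np.pi / 4, np.pi / 4)  # yaw rotation about z
    c, s = np.cos(a), np.sin(a)
    pts[:, :2] = pts[:, :2] @ np.array([[c, -s], [s, c]]).T
    if rng.random() < 0.5:                  # random flip across the x-axis
        pts[:, 1] = -pts[:, 1]
    return pts

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 4))          # hypothetical x, y, z, intensity
print(augment(cloud, rng, ground_offset=-0.3).shape)
```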
**Direct Transfer (Direct)**. In this experiment, the campus dataset is not accessible and the pre-trained model is evaluated directly on the test split. This is our baseline for the expected performance when deploying on a campus scale without CODa.
**ST3D++ (ST)**. In this scenario, the campus dataset is accessible but the ground truth labels are not available. This is typical for robot deployments as domain-specific raw sensor data is readily available. We used ST3D++ [47] to adapt to the campus domain as the authors demonstrated state-of-the-art unsupervised domain adaptation performance improvement between different AV datasets when using their method.
We perform a coarse hyperparameter tuning sweep across the positive and negative thresholds for each object class and use the same model weights for each class. After performing the self-training process for 25 epochs, we evaluate the highest-performing epoch directly on CODa. We present the highest performing models in Table X and include the full experiment list in Appendix Section XI-C, Table XI. This is the best performance we can achieve without labels when deploying on a campus scale with raw sensor data available.
**Domain Specific Finetuning (FT)**. We assume that domain-specific ground truth labels are available. We pre-train the model backbone on nuScenes or Waymo before fine-tuning on the train split of CODa, hypothesizing that learning features on other datasets benefits domain-specific performance.
We pre-train PVRCNN from scratch on each AV dataset for 30 epochs and evaluate the model on the CODa test split. After pre-training, we freeze the encoder and backbone weights and randomly initialize the detection, classification, and dense heads. We finetune the heads for 25 epochs, unfreeze the encoder and backbone weights, and train the entire model for another 25 epochs. Our experiments in Table X show that a learning rate of 0.01 and the Adam 1cycle optimizer [48] provide the best empirical performance.
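The freeze-then-unfreeze schedule can be sketched as below. The tiny model and dummy data are placeholders for PV-RCNN and CODa, and plain Adam stands in for the Adam 1cycle schedule.

```python
import torch
import torch.nn as nn

class Detector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(64, 32)  # stands in for the 3D backbone
        self.head = nn.Linear(32, 4)       # freshly initialized head
    def forward(self, x):
        return self.head(torch.relu(self.backbone(x)))

def run_epochs(model, epochs):
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=0.01)
    x, y = torch.randn(256, 64), torch.randn(256, 4)  # dummy batch
    loss = None
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return float(loss)

model = Detector()
for p in model.backbone.parameters():      # stage 1: train heads only
    p.requires_grad = False
run_epochs(model, epochs=25)
for p in model.backbone.parameters():      # stage 2: unfreeze everything
    p.requires_grad = True
print(run_epochs(model, epochs=25))
```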
**ST3D++ with Domain Specific Finetuning (ST + FT)**. This experiment combines the ST and FT methods described earlier. We follow the same procedure for self-training using the ST method with the same hyperparameters. After self-training, we apply the training procedure in FT. For this approach, we find that a learning rate of 0.01 and the Adam 1cycle optimizer provides the best empirical performance.
**Domain Adaptation Discussion**. Table V demonstrates a significant performance gap between unsupervised domain adaptation and fully supervised methods. The highest-performing pre-trained model scores about 40 percent lower than the same model trained from scratch on CODa. This is expected due to the large domain and sensor-specific differences described earlier. For ST, the model pre-trained on AV datasets decreases in performance after self-training on CODa. These results are consistent with findings from the ONCE [16]
| PT | Direct | ST | FT | ST + FT |
| --- | --- | --- | --- | --- |
| nuScenes [13] | 21.30 / 15.53 | 14.07 / 10.76 | 91.39 / 90.16 | 92.38 / 91.02 |
| Waymo [12] | 46.20 / 43.11 | 38.27 / 34.36 | 93.12 / 92.07 | 92.36 / 91.18 |
| CODa | **92.08 / 91.11** | - | - | - |

TABLE V: Evaluation of PV-RCNN pretrained (**PT**) on AV datasets and evaluated on the CODa test split after undergoing different domain adaptation (**DA**) methods (entries are AP\({}_{BEV}\) / AP\({}_{3D}\)). DA methods include: 1) **Direct** - train on the source dataset and evaluate directly on CODa; 2) ST3D++ (**ST**) [47] for unsupervised adaptation; 3) fine-tuning (**FT**) with CODa after pre-training on the source dataset; and 4) both ST3D++ and fine-tuning (**ST + FT**). The results demonstrate that even state-of-the-art unsupervised domain adaptation methods for 3D object detectors are not competitive with approaches that use domain-specific training labels. All models are evaluated on the medium test split of CODa.
| Model | nuScenes | Waymo | KITTI | CODa |
| --- | --- | --- | --- | --- |
| PointPillars [42] | 28.42 / 17.94 | 55.11 / 47.55 | **70.27** / 63.32 | 49.78 / 48.86 |
| CenterPoint [43] | **36.91** / 23.86 | 62.66 / 54.86 | 69.34 / 63.87 | 82.08 / 76.92 |
| PVRCNN [44] | 33.85 / 25.41 | **62.73 / 56.40** | 70.22 / **65.28** | **92.08 / 91.11** |

TABLE IV: Evaluation of several 3D object detectors on AV datasets and CODa. We report mean average precision for the car, pedestrian, and cyclist categories in bird's eye view (AP\({}_{BEV}\)) and 3D (AP\({}_{3D}\)) with IoU 0.7, 0.5, and 0.5 respectively (entries are AP\({}_{BEV}\) / AP\({}_{3D}\)). We average the results over the easy, medium, and hard difficulties (following the KITTI Vision Benchmarks). Bold marks the highest-performing model for BEV and 3D detection on each dataset.
Fig. 10: Histogram of 3D semantic segmentation annotations labels for outdoor, indoor, and both environments in CODa. Vertical numbers above each bar indicate the total number of points annotated for that semantic class. The semantic classes in the legend map to the bars from left to right.
AV dataset. They show that performing unsupervised domain adaptation with ST3D from nuScenes to ONCE decreases performance and hypothesize that this is due to differences in LiDAR beam resolution. We believe that differences in sensor viewpoint and resolution cause ST3D to produce poor quality pseudo labels on CODa and support this hypothesis in Section VII-C.
Our experiments show that pre-training on Waymo improves performance on 3D bounding box and BEV tasks by about 1-2 percent compared to training from scratch. However, performance does not consistently improve between FT and ST+FT adaptation techniques. We speculate that when trained to performance saturation, FT improvements dominate the effects of ST pre-training. We conclude from these studies that downstream tasks like 3D object detection benefit from better initial 3D representations. In addition, we hope that our empirical analysis of methods like ST3D spurs future work on how to continue improving self-training methods between domains with significant changes in sensor resolution, viewpoint, and geometric features.
### _Impact of Sensor Resolution Differences on Object Detection Performance_
While most robots benefit from having high-quality object detections, their wide range of sensor setups presents a challenge for object detectors. Therefore, it is important to understand how object detection performance is affected by sensor resolution differences between the train and test domains.
For our experiments, we train PV-RCNN from scratch on 20% of the Waymo train dataset and fine-tune the model on the CODa medium train split at four LiDAR resolutions (16, 32, 64, and 128 channels) on the car, pedestrian, and cyclist classes for 30 epochs or until performance saturates. Our LiDAR originally has 128 channels, so we subsample the original point cloud to obtain the lower resolutions. For fine-tuning, we follow the same two-stage process used in the prior experiments: train a randomly initialized detection head for 15 epochs while keeping the model backbone frozen and then train the full model for another 30 epochs. After training, we evaluate the model directly on the CODa medium test split at all four LiDAR resolutions.
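One simple way to emulate the lower resolutions is to keep every k-th ring of the (128, 1024) range-image layout from Section III; the array layout below is an assumption of the sketch.

```python
import numpy as np

def downsample_rings(scan, target_channels):
    # Keep every k-th ring of the (128, 1024, C) range-image layout.
    k = scan.shape[0] // target_channels    # 128 -> 64/32/16: k = 2/4/8
    return scan[::k]

scan = np.random.randn(128, 1024, 4)        # hypothetical x, y, z, intensity
for ch in (64, 32, 16):
    print(ch, downsample_rings(scan, ch).shape)
```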
**Sensor Resolution Discussion**. Table VI shows that 3D object detectors trained on a specific sensor resolution perform best on the same sensor resolution at test time. Furthermore, the larger the resolution difference between the train and test domains, the more performance is affected. This supports our hypothesis that large differences in LiDAR resolution negatively affect object detection performance. Thus, we release pre-trained models on all classes in CODa for the 16, 32, 64, and 128 channel LiDAR resolutions and encourage users to select the pre-trained model that is most similar to the target dataset's resolution. Table IX in Appendix Section XI-A reproduces this experiment for all classes in CODa.
### _JRDB Adaptation_
Aside from sensor variations, viewpoint and scene differences between train and test domains also present a challenge for LiDAR-based object detectors. To understand the impact of these differences, we evaluate the performance of 3D object detectors trained on CODa and AV datasets on JRDB, a large-scale urban robot dataset with LiDAR point clouds and 3D bounding box annotations.
For our experiments, we train three PV-RCNN models from scratch on 20% of the Waymo train split, full CODa train split, and full JRDB train split for 30 epochs or until performance saturates. For consistency, all models are only trained on pedestrians and evaluated on the proposed JRDB validation split using their 3D detection benchmark metrics (average precision, recall, and F1 score). We repeat our evaluation on two variations of the validation split: one containing ground truth annotations exclusively within 15 meters of the ego vehicle and the other within 25 meters of the ego vehicle.
**JRDB Performance Discussion.** Table VII shows that CODa models consistently outperform Waymo models in all metrics at both the 15 m and 25 m range. Furthermore, pretraining on CODa offers similar performance to training with labels on JRDB, corroborating our claim that pre-trained CODa models generalize to other urban settings. We believe this can be explained by CODa's similarity to JRDB in terms of sensor resolution, viewpoint, and scene diversity. By utilizing prior knowledge of similar environments in JRDB, CODa models are more robust to point cloud sparsity than Waymo models. Fig. 16 supports this claim with several examples where CODa models detect sparse pedestrians that Waymo models miss.
| Train \ Test | CODa-16 | CODa-32 | CODa-64 | CODa-128 |
| --- | --- | --- | --- | --- |
| CODa-16 | **75.15 / 73.29** | 64.99 / 63.24 | 49.17 / 47.36 | 21.93 / 18.94 |
| CODa-32 | 50.79 / 47.95 | **78.30 / 76.90** | 70.49 / 69.37 | 59.95 / 56.59 |
| CODa-64 | 21.10 / 22.05 | 67.27 / 64.77 | **86.20 / 84.48** | 77.63 / 77.53 |
| CODa-128 | 12.58 / 12.16 | 48.05 / 45.76 | 76.51 / 75.38 | **92.61 / 91.34** |

TABLE VI: Evaluating the impact of point cloud resolution differences between the source and target domain on 3D object detector performance (entries are AP\({}_{BEV}\) / AP\({}_{3D}\)). All experiments are conducted with a PV-RCNN detector first pre-trained on Waymo using the pedestrian, car, and cyclist classes. We fine-tune the pre-trained model on CODa downsampled to 16, 32, 64, and 128 vertical channels (CODa-#channels) for 50 epochs. We then evaluate the model performance on different point cloud resolutions for the pedestrian, car, and cyclist classes using the same evaluation metric as Table IV. All models are evaluated on the medium test split of CODa.
| Train | Prec. (15m) | Rec. (15m) | F1 (15m) | Prec. (25m) | Rec. (25m) | F1 (25m) |
| --- | --- | --- | --- | --- | --- | --- |
| Waymo | 55.39 | 18.70 | 27.96 | 52.76 | 17.19 | 25.94 |
| CODa | 60.29 (+4.90) | 25.32 (+6.62) | 35.66 (+7.70) | 57.38 (+4.62) | 25.31 (+8.12) | 35.13 (+9.19) |
| JRDB | 65.64 | 27.14 | 38.39 | 64.15 | 27.15 | 38.15 |

TABLE VII: Cross-dataset 3D object detection performance comparison on JRDB [7] after training on CODa and Waymo [12]. We train a PV-RCNN detector on only pedestrians for all datasets. We evaluate the average precision, recall, and F1 score for objects within 15 meters (15m) and 25 meters (25m) of the ego vehicle. The performance difference between the CODa and Waymo models is given in parentheses.
To assess how variations in sensor resolution affect model performance across datasets, we evaluated models trained on different resolutions of CODa on JRDB in Table XII in the appendix. Our findings indicate that detection performance decreases as the sensor resolution difference increases between the train and test datasets. This aligns with the insights we presented in Section VII-C, demonstrating that cross-dataset performance is maximized when the train and test resolutions closely match. Thus, we recommend that users select the pre-trained model that is most similar to their target dataset's resolution for optimal performance. Our findings should motivate future work to leverage scene context and develop density-invariant models to improve 3D object detection performance.
## VIII Benchmarks
In this section, we define the 3D object detection and 3D semantic segmentation benchmarks for this dataset. We plan on adding additional tasks in the future for robot perception and planning, such as long-term SLAM, cross-domain information retrieval, and preference-aware navigation.
### _3D Object Detection_
The 3D object detection task involves predicting 7 degrees of freedom boxes for all object classes. We use the 3D object detection metric proposed in the KITTI Vision Benchmark Suite. For the car, pedestrian, and cyclist classes, we require a minimum bounding box overlap of 70%, 50%, and 50% respectively to determine whether a detection is correct. For all other object classes, we use a minimum overlap of 50% with the ground truth bounding box. All methods are limited to using up to 10 prior LiDAR frames for predictions. All sensor modalities and pseudo-ground truth poses can be used, and we will evaluate all predictions on the 3D point cloud annotations.
### _3D Semantic Segmentation_
For the 3D semantic segmentation benchmark, we use the same evaluation metric proposed in SemanticKITTI [14]. This is the mean intersection-over-union (mIoU) metric [3] over all classes. All sensor modalities can be used, but we will evaluate all predictions using the 3D point cloud annotations. Table VIII benchmarks Cylinder3D [49] and 2DPass [50], two state-of-the-art LiDAR only and LiDAR camera approaches respectively. For our benchmarks, we train both models from scratch for 30 epochs or until performance saturates and take the highest-performing model.
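For reference, the mIoU computation reduces to per-class intersection-over-union from a confusion matrix; a minimal sketch (with synthetic labels standing in for real predictions) follows.

```python
import numpy as np

def miou(pred, gt, num_classes):
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)          # confusion matrix
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - np.diag(conf)
    valid = union > 0                       # skip classes that never occur
    return (inter[valid] / union[valid]).mean()

gt = np.random.randint(0, 23, 100_000)      # 23 terrain classes
pred = gt.copy()
pred[:20_000] = np.random.randint(0, 23, 20_000)  # corrupt 20% of points
print(miou(pred, gt, 23))
```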
## IX Conclusion and Future Work
In this work, we presented the UT Campus Object Dataset (CODa), a multi-modal dataset that contains greater object and scene-level annotation diversity than any other similar existing dataset. CODa contains 1.3 million human-annotated 3D bounding boxes and 5000 frames of 3D semantic segmentation annotations over 8.5 hours of data collected from the perspective of a mobile robot across UT campus. We publicly release CODa on the Texas Data Repository [1], pre-trained models for various LiDAR resolutions (16, 32, 64, 128 channels), and a dataset development package.
We conducted extensive experiments to select a high-performing model architecture for urban environments. We demonstrated a performance gap for 3D object detectors in urban environments by comparing the performance on CODa's test split after training on CODa versus AV datasets. We empirically demonstrated that 3D object detection performance is significantly affected by differences in LiDAR sensor resolution during test time. Finally, we conducted various ablation studies to show that pre-training on CODa instead of AV datasets improves cross-dataset object detection performance on existing urban robotics datasets. This constitutes motivation for future work to improve 3D object detector invariance to point cloud density and highlights the importance of selecting a pre-trained model that closely resembles the target domain during robot deployments. We expect that this work will spur future research toward learning sensor-invariant 3D feature representations, object-centric localization approaches, and terrain-aware navigation planners. In the future, we plan on releasing additional benchmarks on CODa to facilitate fair comparison for methods in these research areas.
## X Acknowledgement
This work was conducted with the Autonomous Mobile Robotics Laboratory (AMRL) at UT Austin. This project is partially supported by NSF Awards CAREER-2046955 and IIS-1954778. We would like to thank Roberto Martin-Martin, Zhangyang "Atlas" Wang, and Philipp Krahenbuhl for their fruitful comments, suggestions, and inspiration. The authors acknowledge the Texas Advanced Computing Center
(TACC)3 at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper. URL: [http://www.tacc.utexas.edu](http://www.tacc.utexas.edu). The CODa study is covered under the University of Texas at Austin IRB Number STUDY00003493.
Footnote 3: TACC website: [http://www.tacc.utexas.edu](http://www.tacc.utexas.edu)
|
2309.04753 | On the Spectrum of Exterior Algebra, and Generalized Exponents of Small
Representations | We present some results about the irreducible representations appearing in
the exterior algebra $\Lambda \mathfrak{g}$, where $ \mathfrak{g}$ is a simple
Lie algebra over $\mathbb{C}$. For Lie algebras of type $B$, $C$ or $D$ we
prove that certain irreducible representations, associated to weights
characterized in a combinatorial way, appear as irreducible components of
$\Lambda \mathfrak{g}$. Moreover, we propose an analogue of a conjecture of
Kostant, about irreducibles appearing in the exterior algebra of the little
adjoint representation. Finally, we give some closed expressions, in type $B$,
$C$ and $D$, for generalized exponents of small representations that are
fundamental representations and we propose a generalization of some results of
De Concini, M\"oseneder Frajria, Procesi and Papi about the module of special
covariants of adjoint and little adjoint type. | Sabino Di Trani | 2023-09-09T10:52:19Z | http://arxiv.org/abs/2309.04753v1 | # On the spectrum of exterior algebra, and generalized exponents of small representations
# On the spectrum of exterior algebra, and generalized exponents of small representations
Sabino di Trani
Dipartimento di Matematica "Guido Castelnuovo", Sapienza - Universita di Roma.
_The author has been partially supported by GNSAGA - INDAM group._
ORCID id: [https://orcid.org/0000-0002-6651-558X](https://orcid.org/0000-0002-6651-558X)
_E-mail address:_ [email protected]
**Abstract:** We present some results about the irreducible representations appearing in the exterior algebra \(\Lambda\mathfrak{g}\), where \(\mathfrak{g}\) is a simple Lie algebra over \(\mathbb{C}\). For Lie algebras of type \(B\), \(C\) or \(D\) we prove that certain irreducible representations, associated to weights characterized in a combinatorial way, appear as irreducible components of \(\Lambda\mathfrak{g}\). Moreover, we propose an analogue of a conjecture of Kostant, about irreducibles appearing in the exterior algebra of the little adjoint representation. Finally, we give some closed expressions, in type \(B\), \(C\) and \(D\), for generalized exponents of small representations that are fundamental representations and we propose a generalization of some results of De Concini, Möseneder Frajria, Procesi and Papi about the module of special covariants of adjoint and little adjoint type.
**Keywords:** Simple Lie Algebras, Kostant Conjecture, Exterior Algebra, Small Representations, Generalized Exponents.
## 1. Introduction
Let \(\mathfrak{g}\) be a simple Lie algebra over \(\mathbb{C}\) of rank \(\operatorname{rk}\mathfrak{g}\). Fix a Cartan subalgebra \(\mathfrak{h}\) and let \(\Phi\) be the associated root system with Weyl group \(W\). We choose a set of positive roots \(\Phi^{+}\) and let \(\Delta\) be the associated simple system. Let \(\rho\) be the corresponding Weyl vector and \(\theta\) the highest root with respect to the standard partial order \(\leq\) on \(\Phi^{+}\). If \(\mathfrak{g}\) is not simply laced, \(\theta_{s}\) is the short dominant root. We denote by \(\Pi\) and \(\Pi^{+}\) the set of weights and the set of dominant weights respectively, moreover we denote by \(\omega_{i}\) the \(i\)-th fundamental weight. Throughout the paper, \(V_{\lambda}\) will be the irreducible finite dimensional representation of \(\mathfrak{g}\) of highest weight \(\lambda\in\Pi^{+}\) and we denote by \(V_{\lambda}^{0}\) the corresponding \(W\)-representation on the zero weight space of \(V_{\lambda}\). Finally, \(e_{1}\leq\cdots\leq e_{n}\) will be the _exponents_ of \(\mathfrak{g}\).
The adjoint action of \(\mathfrak{g}\) on itself induces an action of \(\mathfrak{g}\) on \(S(\mathfrak{g})\) and \(\Lambda\mathfrak{g}\), the symmetric and exterior algebras over \(\mathfrak{g}\) respectively, preserving the natural gradings. Two celebrated results give an explicit description of the ring of invariants in \(S(\mathfrak{g})\) and \(\Lambda\mathfrak{g}\) with respect to this action.
**Theorem** (Chevalley, Shephard and Todd).: _Let \(\mathfrak{g}\) be a complex semisimple Lie algebra of rank \(n\) and \(\mathfrak{h}\) a fixed Cartan subalgebra. Identifying \(\mathfrak{g}\) with \(\mathfrak{g}^{*}\) and \(\mathfrak{h}\) with \(\mathfrak{h}^{*}\) via the Killing form, the restriction of polynomial functions induces an algebra isomorphism between the rings of invariants_
\[S(\mathfrak{g})^{\mathfrak{g}}\simeq S(\mathfrak{h})^{W}.\]
_In particular, \(S(\mathfrak{g})^{\mathfrak{g}}\) is a polynomial algebra with generators of degrees \(e_{1}+1,\ldots,e_{n}+1\)._
**Theorem** (Hopf, Koszul and Samelson).: _Let \(\mathfrak{g}\) be a complex semisimple Lie algebra of rank \(n\). Then_
\[(\Lambda\mathfrak{g})^{\mathfrak{g}}=\Lambda(P_{1},\ldots,P_{n}),\]
_where the degree of a generator \(P_{i}\) of the algebra of the invariants is equal to \(2e_{i}+1\)._
If \(M=\oplus M_{i}\) is a graded \(\mathfrak{g}\)-module, we denote by
\[P(V_{\lambda},M,t)=\sum_{i}\dim\operatorname{Hom}_{\mathfrak{g}}(V_{\lambda},M _{i})t^{i}\]
the generating function for graded multiplicities of the irreducible representation \(V_{\lambda}\) in \(M\). As an immediate consequence of the above theorems, it is possible to obtain the following formulae that encode the graded structure of rings of invariants:
\[P(V_{0},\Lambda\mathfrak{g},t)=\prod_{i=1}^{n}(1+t^{2e_{i}+1}),\qquad P(V_{0}, S(\mathfrak{g}),t)=\prod_{i=1}^{n}(1-t^{e_{i}+1})^{-1}.\]
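Since the exponents \(e_{i}\) of the classical algebras are well known (their values are also recalled in Remark 4.5 below), both products are easy to evaluate in practice. The following minimal Python sketch (the helper names are our own; it only assumes the standard tables of exponents) computes \(P(V_{0},\Lambda\mathfrak{g},t)\) and checks that \(\dim(\Lambda\mathfrak{g})^{\mathfrak{g}}=2^{\operatorname{rk}\mathfrak{g}}\), as predicted by the Hopf-Koszul-Samelson theorem.

```python
from sympy import symbols, expand

t = symbols('t')

def exponents(series, n):
    """Classical exponents: A_n: 1,...,n; B_n, C_n: 1,3,...,2n-1;
    D_n: 1,3,...,2n-3 together with n-1."""
    if series == 'A':
        return list(range(1, n + 1))
    if series in ('B', 'C'):
        return [2 * i - 1 for i in range(1, n + 1)]
    if series == 'D':
        return [2 * i - 1 for i in range(1, n)] + [n - 1]
    raise ValueError("unknown series: " + series)

def poincare_exterior_invariants(series, n):
    """P(V_0, Lambda g, t) = prod_i (1 + t^(2 e_i + 1))."""
    p = 1
    for e in exponents(series, n):
        p *= 1 + t ** (2 * e + 1)
    return expand(p)

# The invariants form an exterior algebra on rk(g) generators, so the
# total dimension is 2^(rk g); e.g. for so_7 (type B_3) it is 2^3 = 8.
assert poincare_exterior_invariants('B', 3).subs(t, 1) == 2 ** 3
```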
Aiming to generalize the above results, some questions about irreducible representations in \(S(\mathfrak{g})\) and \(\Lambda\mathfrak{g}\) naturally arise:
1. Is it possible to determine the irreducible representations appearing in \(S(\mathfrak{g})\) and in \(\Lambda\mathfrak{g}\)?
2. If \(V_{\lambda}\) is a subrepresentation of \(S(\mathfrak{g})\) or of \(\Lambda\mathfrak{g}\), is it possible to determine the degrees in which \(V_{\lambda}\) appears?
3. Denoting by \(\Lambda^{i}\mathfrak{g}\) (resp. \(S^{i}(\mathfrak{g})\)) the submodule of homogeneous elements of degree \(i\) in \(\Lambda\mathfrak{g}\) (resp. \(S(\mathfrak{g})\)), is it possible to determine the multiplicity of \(V_{\lambda}\) in \(\Lambda^{i}\mathfrak{g}\) (resp. \(S^{i}(\mathfrak{g})\))?
These questions inspired a great number of claims and conjectures; many of them are still open or have only implicit answers.
For what concerns the irreducibles appearing in the symmetric algebra, the problem was extensively studied by Kostant in [27]. More precisely, Kostant proved the isomorphism
\[S(\mathfrak{g})\simeq S(\mathfrak{g})^{\mathfrak{g}}\otimes\mathcal{H},\]
where \(\mathcal{H}\) is the ring of \(\mathfrak{g}\)-harmonic polynomials, i.e. the ring of polynomials over \(\mathfrak{g}\) annihilated by \(\mathfrak{g}\)-invariant differential operators of positive degree with constant coefficients. Studying the graded multiplicities of \(V_{\lambda}\) in \(S(\mathfrak{g})\) can then be reduced to determining the multiplicity of \(V_{\lambda}\) in each homogeneous component \(\mathcal{H}^{i}\) of \(\mathcal{H}\). Kostant proved that the multiplicity of \(V_{\lambda}\) in \(\mathcal{H}\) equals the dimension of \(V_{\lambda}^{0}\) and that the degrees \(i\) such that \(V_{\lambda}\) appears in \(\mathcal{H}^{i}\) are related to the eigenvalues of the action of the Coxeter-Killing transformation on the \(W\)-representation \(V_{\lambda}^{0}\).
These integers are called the _Generalized Exponents_ associated to \(V_{\lambda}\) and are extensively studied in the literature because of their nice combinatorial properties. We summarize some remarkable results about generalized exponents in Section 4.
On the other hand, despite its finite dimensionality, determining the irreducible components appearing in \(\Lambda\mathfrak{g}\) seems to be quite difficult. A complete description of irreducible representations in the exterior algebra is known only in type \(A\), by some general arguments due to Berenstein and Zelevinsky, and for exceptional algebras of type \(F_{4}\) and \(G_{2}\), by direct computations. For other cases an open conjecture has been formulated by Kostant, describing the \(V_{\lambda}\) appearing in \(\Lambda\mathfrak{g}\) as the irreducibles indexed by \(\lambda\) smaller or equal to \(2\rho\) in the dominance order on weights, i.e. the ordering defined by the relation \(\mu\leq\lambda\) if and only if \(\lambda-\mu\) is a sum of positive roots. To introduce the reader to this fascinating subject and to provide a framework for the new results contained in this article, we present in Section 2 a brief survey of some known results on this topic.
The remaining part of the paper is devoted to presenting our results.
In Section 3 we recall some results of Berenstein and Zelevinsky about multiplicities in tensor product decompositions. These techniques are used in [4] to prove the Kostant Conjecture in type \(A\). We use these tools to prove that large families of irreducible representations appear as irreducible components of \(\Lambda\mathfrak{g}\), for \(\mathfrak{g}\) of type \(B\), \(C\) and \(D\). More precisely we introduce
the Coordinatewise Ordering (Definition 3.3) on the set of dominant weights, prescribing that \(\mu\) is _coordinatewise smaller_ than \(\lambda\) (for short \(\mu\lesssim\lambda\)) if certain combinatorial conditions are satisfied. We use this ordering to describe a suitable subset of the set of dominant weights smaller than \(2\rho\) in the dominance order. We prove that irreducible representations associated to weights in this subset appear in \(\Lambda\mathfrak{g}\). The main result of the section is the following theorem:
**Theorem**.: _Let \(\mathfrak{g}\) be a simple Lie algebra over \(\mathbb{C}\) of type \(B\), \(C\) or \(D\) and let \(\lambda\) be a dominant weight for \(\mathfrak{g}\). If \(\lambda\leq 2\rho\) and \(\lambda\lesssim 2\rho\), then \(V_{\lambda}\) appears as irreducible component in \(\Lambda\mathfrak{g}\)._
Section 5 is devoted to compute explicit formulae for polynomials of generalized exponents, using the techniques summarized in Section 4. In particular, denoting by \(E_{\lambda}(t)\) the generating polynomial of generalized exponents associated to \(V_{\lambda}\), i.e. the Poincare polynomial of graded multiplicities of \(V_{\lambda}\) into \(\mathcal{H}\), in Section 5 we observe that the following formula can be obtained in type \(C_{n}\) as a consequence of results contained in [16]
\[E_{\omega_{2k}}(t)=\frac{t^{2k}(n-2k+1)_{t^{2}}}{(n-k+1)_{t^{2}}}{n\choose k}_ {t^{2}},\]
where \((n)_{t}\) denotes the \(t\)-analogue of \(n\) and \({n\choose k}_{t}\) is the \(t\)-binomial. Moreover, denoting by \(\lfloor a\rfloor\) the integer part of \(a\), we prove that the following formulae hold in type \(B_{n}\)
\[E_{\omega_{2k}}(t)=t^{k}{n\choose k}_{t^{2}},\]
\[E_{\omega_{2k+1}}(t)=t^{n-k}{n\choose k}_{t^{2}},\]
\[E_{2\omega_{n}}(t)=t^{n-\lfloor\frac{n}{2}\rfloor}{n\choose\lfloor\frac{n}{2} \rfloor}_{t^{2}},\]
and in type \(D_{n}\)
\[E_{\omega_{2k}}(t)=t^{k}\frac{(t^{n-2k}+1)}{(t^{n}+1)}{n\choose k}_{t^{2}},\]
\[E_{\omega_{n-1}+\omega_{n}}(t)=\frac{t^{\lfloor\frac{n}{2}\rfloor}(t+1)}{(t^{ n}+1)}{n\choose\lfloor\frac{n}{2}\rfloor}_{t^{2}},\]
\[E_{2\omega_{n-1}}(t)=E_{2\omega_{n}}(t)=\frac{t^{\frac{n}{2}}}{(t^{n}+1)}{n \choose\frac{n}{2}}_{t^{2}},\]
where the formula for \(E_{\omega_{n-1}+\omega_{n}}(t)\) holds for \(n\) odd and the formulae for \(E_{2\omega_{n-1}}(t)\) and \(E_{2\omega_{n}}(t)\) must be considered only if \(n\) is even. Finally, some open questions and conjectures are proposed at the end of Sections 3 and 5.
**Acknowledgements.** The main original contributions of this paper are some results that I obtained during my doctoral studies, so I would like to thank my advisor, Professor Paolo Papi, for his mentoring and for his supervision. Moreover, I am grateful to Professor Andrea Maffei for many useful discussions about the Kostant Conjecture. I would like to extend my special thanks to the anonymous referee for their really careful reading and for their precious comments to a previous version of the paper. I am also grateful to Rosario Mennuni and Viola Siconolfi for their advice on the organization of a first draft of the paper. Finally, this article was partially written during my frequent stays in Pisa: I express my gratitude to P.F., to M.A.P. and to the little R.F. for their great hospitality and to all my friends at the Mathematics Department for their support.
## 2. Irreducible Representations in the Exterior Algebra
As mentioned in the introduction, a uniform description of the irreducible representations appearing in the exterior algebra \(\Lambda\mathfrak{g}\), with \(\mathfrak{g}\) a simple Lie algebra over \(\mathbb{C}\), has been proposed by Kostant:
**Conjecture 2.1** (Kostant, c.f.r. [5], Introduction).: _The representation \(V_{\lambda}\) appears in the decomposition of \(\Lambda\mathfrak{g}\) if and only if \(\lambda\leq 2\rho\) in the dominance order._
Currently a proof of this conjecture is known only in type \(A\), by the combinatorial construction given in [5], and in the exceptional cases \(G_{2}\) and \(F_{4}\), by explicit computations, as reported in [12]. Moreover, we mention that in [12] the authors exhibit a possible uniform proof of the Kostant Conjecture for algebras of types \(ADE\), assuming that \(1\) is a saturation factor for any simply laced algebra. It is not clear if similar techniques could be used to prove the Conjecture in the remaining cases. Moreover, _a priori_ it should be possible to verify the Kostant Conjecture in type \(E\) by direct computations, but this seems to be an unfruitful approach. A uniform proof of Conjecture 2.1 is nevertheless desirable, but a concrete strategy is far from clear. In addition, even when \(V_{\lambda}\) is known to appear in \(\Lambda\mathfrak{g}\), studying its graded multiplicities seems to be very complex. We collect here some partial related results. Firstly, a uniform bound for the multiplicity of \(V_{\lambda}\) is known.
**Theorem 2.1** (Reeder, [36], Section 4).: (2.1) \[\dim\,\operatorname{Hom}_{\mathfrak{g}}(V_{\lambda},\Lambda\mathfrak{g})\leq 2 ^{\operatorname{rk\mathfrak{g}}}\dim V_{\lambda}^{0}.\]
Moreover, Reeder investigated when the equality holds.
**Definition 2.2** (c.f.r [36], Definition 2.2).: _An irreducible representation \(V_{\lambda}\) is small if \(\lambda\) is in the root lattice and if \(2\alpha\nleq\lambda\) for every dominant root \(\alpha\)._
**Theorem 2.2** (Reeder, [36], Section 4).: _Equality in Formula (2.1) holds if and only if \(V_{\lambda}\) is small._
Observe in particular that the adjoint and the little adjoint representations are special cases of small representations. Some explicit formulae for polynomials of graded multiplicities are proved by Bazlov.
**Theorem 2.3** (Bazlov, [3], Section 5.2).: _The following formula for graded multiplicities of adjoint representation in \(\Lambda\mathfrak{g}\) holds:_
\[P(\mathfrak{g},\Lambda\mathfrak{g},q)=(1+q^{-1})\prod_{i=1}^{n-1}(q^{2e_{i}+1 }+1)\sum_{i=1}^{n}q^{2e_{i}}.\]
Moreover, for certain weights close to \(2\rho\), an explicit formula can be found in [36].
**Theorem 2.4** (Reeder, [36], Proposition 6.3).: _Let \(I\subseteq\Delta\). Consider \(\delta_{I}=\sum_{\alpha\in I}\alpha\) and denote by \(c(I)\) the number of connected component of the Dynkin subdiagram generated by \(I\). Then_
\[P(V_{2\rho-\delta_{I}},\Lambda\mathfrak{g},t)=t^{|\Phi^{+}|-|I|}(t+1)^{n-c(I) }(t^{2}+1)^{|I|-c(I)}(t^{3}+1)^{c(I)}.\]
Similarly, closed formulae can be obtained for small representations as a consequence of a conjecture formulated by Reeder in [36] and proved in [16] and [17]. This conjecture was inspired by two remarkable results:
**Theorem 2.3** (Broer [10], Theorem 1).: _The homomorphism induced by the Chevalley restriction theorem_
\[\operatorname{Hom}_{\mathfrak{g}}(V_{\lambda},S(\mathfrak{g}))\to \operatorname{Hom}_{W}(V_{\lambda}^{0},S(\mathfrak{h}))\]
_is a graded isomorphism of \(S(\mathfrak{g})^{\mathfrak{g}}\simeq S(\mathfrak{h})^{W}\)-modules if and only if \(V_{\lambda}\) is small._
**Theorem 2.4** (Chevalley, Eilenberg [11], Reeder [35]).: _Let \(G\) be a compact Lie group, \(T\subset G\) a maximal torus and \(W\) its Weyl group. Let \(\mathfrak{g}\) be the complexified Lie algebra of \(G\) and \(\mathfrak{h}\) the Cartan subalgebra of \(\mathfrak{g}\) associated to \(T\). The Weyl map \(\psi:G/T\times T\to G\) induces in cohomology the following graded isomorphism:_
\[(\Lambda\mathfrak{g})^{\mathfrak{g}}\simeq H^{*}(G)\simeq(H^{*}(G/T)\otimes H ^{*}(T))^{W}\simeq\left(\mathcal{H}_{(2)}\otimes\Lambda\mathfrak{h}\right)^{W}.\]
_where \(\mathcal{H}_{(2)}\) denotes the graded ring of \(W\)-harmonic polynomials over \(\mathfrak{h}\), with a grading obtained by doubling the natural one._
Theorem 2.3 and Theorem 2.4 suggest that the graded multiplicities of a small representation \(V_{\lambda}\) in \(\Lambda\mathfrak{g}\) are linked to the multiplicities of the \(W\)-representation \(V_{\lambda}^{0}\) in the bigraded ring \(\Lambda\mathfrak{h}\otimes\mathcal{H}_{(2)}\). Reeder conjectured that, if \(V_{\lambda}\) is a small representation, the following equality holds:
\[\dim\operatorname{Hom}_{\mathfrak{g}}(V_{\lambda},\Lambda^{i}\mathfrak{g})= \sum_{k+h=i}\dim\operatorname{Hom}_{W}(V_{\lambda}^{0},\mathcal{H}_{(2)}^{h} \otimes\Lambda^{k}\mathfrak{h}) \tag{2.2}\]
Small representations for algebras of type \(A_{n-1}\) are of the form \(V_{\lambda}\) where \(\lambda\) is a partition of \(n\). Reeder's conjecture is implicitly proved in the literature for algebras of type \(A\) by comparing the results contained in [25] and [32] with the following formula proved by Stembridge:
**Theorem 2.5** (Stembridge, [40], Corollary 6.2).: _Let \(\lambda\) be a partition of \(n\) and \(\Gamma\) the associated Young tableau, displayed in the English way._
\[P(V_{\lambda},\Lambda\mathfrak{g},q)=\frac{\prod_{i=1}^{n}(1-q^{2i})}{(1+q)} \prod_{(ij)\in\Gamma}\frac{\left(q^{2j-2}+q^{2i-1}\right)}{\left(1-q^{2h(ij)} \right)}\]
_where \(h(ij)\) denotes the hook length of the box \((ij)\in\Gamma\)._
For the other simple Lie algebras the conjecture is proved in [16] and [17] using a case-by-case strategy. The problem of finding a uniform approach to prove Equation (2.2) for small representations is still open and very interesting. In this spirit, an enhanced version of Reeder's conjecture has been recently proposed in [14], Section 7.
Finally, we remark that the module of special coinvariants \(\operatorname{Hom}_{\mathfrak{g}}(\mathfrak{g},\Lambda\mathfrak{g})\) has a richer geometric structure, as proved in [15]:
**Theorem 2.6** (De Concini, Papi, Procesi, [15], Theorem 1.1).: _The module \(\operatorname{Hom}_{\mathfrak{g}}(\mathfrak{g},\Lambda\mathfrak{g})\) is a finitely generated free module over \(\Lambda(P_{1},\dots,P_{n-1})\) with generators in degrees \(2e_{i}\) and \(2e_{i}-1\)._
An analogous result is proved in [13], when \(\mathfrak{g}\) is not simply laced, for the module \(\operatorname{Hom}_{\mathfrak{g}}(V_{\theta_{s}},\Lambda\mathfrak{g})\). An extension of these theorems to certain small representations is proposed in Section 5.6.
## 3. Berenstein and Zelevinsky Polytopes
The most efficient way to approach the Kostant Conjecture seems to be to attack the problem using tensor product decomposition techniques. In fact, using the Weyl Character Formula, in [26] Kostant proved the following isomorphism:
\[\Lambda\mathfrak{g}\simeq(V_{\rho}\otimes V_{\rho})^{\oplus 2^{\operatorname{rk}\mathfrak{g}}}\]
Kostant's Conjecture can be consequently reformulated in the following terms (c.f.r. [12], Remark 4):
**Conjecture 3.1** (Kostant).: _The representation \(V_{\lambda}\) appears in the decomposition of \(V_{\rho}\otimes V_{\rho}\) if and only if \(\lambda\leq 2\rho\) in the dominance order._
The conjecture in type \(A\) is proved by Berenstein and Zelevinsky in [5] as a consequence of a more general combinatorial construction, used to find the tensor product decomposition of two irreducible finite dimensional representations of \(\mathfrak{gl}_{n}(\mathbb{C})\). In more detail, they prove that for any triple of dominant weights \((\lambda,\mu,\nu)\), the irreducible representation \(V_{\nu}\) is a component of \(V_{\lambda}\otimes V_{\mu}\) if and only if there exists an integral point in a suitable polytope \(P(\lambda,\mu,\nu)\) depending on the expansion of \(\lambda\) and \(\mu\) in terms of the fundamental weights. As an application of their results, Berenstein and Zelevinsky prove that for every \(\mu\leq 2\rho\) the polytopes of the form \(P(\rho,\rho,\mu)\) have _at least_ one integral point. Moreover, in [4] it is conjectured that a similar description of tensor multiplicities in terms of integral points of certain polytopes holds for every classical Lie algebra. The statement of the conjecture is recalled in Subsection 3.2; it was proved by Berenstein and Zelevinsky as a consequence of results contained in [6].
### Orderings on Dominant Weights
We recall now how root systems of type \(B_{n}\), \(C_{n}\) and \(D_{n}\) can be realized in an \(n\)-dimensional euclidean vector space \(\mathbb{E}\) with basis \(\{\varepsilon_{1},\ldots,\varepsilon_{n}\}\). We follow the constructions exposed in [9] and [18].
_Root System of Type \(B_{n}\):_
\[\Phi=\{\pm\varepsilon_{i}\pm\varepsilon_{j}\}_{i<j}\cup\{\pm \varepsilon_{1},\,\ldots\,,\pm\varepsilon_{n}\},\] \[\Delta=\{\varepsilon_{1}-\varepsilon_{2},\,\ldots,\,\varepsilon_ {n-1}-\varepsilon_{n},\,\varepsilon_{n}\},\] \[\Phi^{+}=\{\varepsilon_{i}\pm\varepsilon_{j}\}_{i<j}\cup\{ \varepsilon_{1},\,\ldots,\varepsilon_{n}\}\quad W=S_{n}\ltimes(\mathbb{Z}/2 \mathbb{Z})^{n}\,,\] \[\omega_{i}=\varepsilon_{1}+\cdots+\varepsilon_{i}\quad\omega_{n} =\frac{\varepsilon_{1}+\cdots+\varepsilon_{n}}{2},\] \[\rho=\frac{(2n-1)\varepsilon_{1}+(2n-3)\varepsilon_{2}+\cdots+3 \varepsilon_{n-1}+\varepsilon_{n}}{2}.\]
_Root System of Type \(C_{n}\):_
\[\Phi=\{\pm\varepsilon_{i}\pm\varepsilon_{j}\}_{i<j}\cup\{\pm 2 \varepsilon_{1},\,\ldots\,,\pm 2\varepsilon_{n}\},\] \[\Delta=\{\varepsilon_{1}-\varepsilon_{2},\,\ldots,\,\varepsilon_ {n-1}-\varepsilon_{n},\,2\varepsilon_{n}\},\] \[\Phi^{+}=\{\varepsilon_{i}\pm\varepsilon_{j}\}_{i<j}\cup\{2 \varepsilon_{1},\,\ldots\,,2\varepsilon_{n}\}\quad W=S_{n}\ltimes(\mathbb{Z}/ 2\mathbb{Z})^{n}\,,\] \[\omega_{i}=\varepsilon_{1}+\cdots+\varepsilon_{i},\] \[\rho=n\varepsilon_{1}+(n-1)\varepsilon_{2}+\cdots+2\varepsilon_ {n-1}+\varepsilon_{n}.\]
_Root System of Type \(D_{n}\):_
\[\Phi=\{\pm\varepsilon_{i}\pm\varepsilon_{j}\}_{i<j}\quad\Delta=\{\varepsilon _{1}-\varepsilon_{2},\,\ldots,\,\varepsilon_{n-1}-\varepsilon_{n},\,\varepsilon _{n-1}+\varepsilon_{n}\},\]
\[\Phi^{+}=\{\varepsilon_{i}\pm\varepsilon_{j}\}_{i<j}\quad W=S_{n}\ltimes( \mathbb{Z}/2\mathbb{Z})^{n-1},\]
\[\omega_{i}=\varepsilon_{1}+\cdots+\varepsilon_{i}\quad\omega_{n-1}=\frac{ \varepsilon_{1}+\cdots-\varepsilon_{n}}{2}\quad\omega_{n}=\frac{\varepsilon_{ 1}+\cdots+\varepsilon_{n}}{2},\]
\[\rho=(n-1)\varepsilon_{1}+(n-2)\varepsilon_{2}+\cdots+\varepsilon_{n-1}.\]
The set of dominant weights is partially ordered by the dominance order, i.e. \(\lambda\geq\mu\) if \(\lambda-\mu\) is a sum of positive roots. Moreover, every dominant weight \(\lambda\) can be written as a sum \(\lambda_{1}\varepsilon_{1}+\cdots+\lambda_{n}\varepsilon_{n}\) where \(\lambda_{i}\in\frac{1}{2}\mathbb{Z}\) for all \(i\). The condition \(\lambda\geq\mu\) in the dominance order can be restated as follows:
_Remark 3.2_.: Let \(\lambda=\lambda_{1}\varepsilon_{1}+\cdots+\lambda_{n}\varepsilon_{n}\) and \(\mu=\mu_{1}\varepsilon_{1}+\cdots+\mu_{n}\varepsilon_{n}\) be two dominant weights for a simple Lie algebra of type \(B_{n}\), \(C_{n}\) or \(D_{n}\). Then \(\lambda\geq\mu\) if and only if the following conditions hold:
1. \(\sum_{i=1}^{k}(\lambda_{i}-\mu_{i})\geq 0\) for all \(1\leq k\leq n\), in type \(B\);
2. \(\sum_{i=1}^{k}(\lambda_{i}-\mu_{i})\geq 0\) for all \(1\leq k\leq n\) and \(\sum_{i=1}^{n}(\lambda_{i}-\mu_{i})\) is an even integer, in types \(C\) and \(D\).
We introduce now a different ordering on the set of weights.
**Definition 3.3** (Coordinatewise order on weights).: _Let \(\lambda=\lambda_{1}\varepsilon_{1}+\cdots+\lambda_{n}\varepsilon_{n}\) and \(\mu=\mu_{1}\varepsilon_{1}+\cdots+\mu_{n}\varepsilon_{n}\) be two dominant weights for a simple Lie algebra \(\mathfrak{g}\) of type \(B_{n}\),\(C_{n}\) or \(D_{n}\). We say that \(\mu\) is smaller than \(\lambda\) with respect to the relation \(\lesssim\) if and only if \(\lambda_{i}-\mu_{i}\geq 0\) and \(|\lambda_{i}|\geq|\mu_{i}|\) for all \(i\). In this case we write \(\mu\lesssim\lambda\) and we say that \(\mu\) is smaller than \(\lambda\) with respect to the coordinatewise order._
_Remark 3.4_.: Observe that the coordinatewise ordering is different from the dominance ordering. As an example, in type \(C\) the weight \(\omega_{2}\) is the only non zero dominant weight smaller than \(2\omega_{1}\) with respect to the dominance order, but \(\omega_{2}\not\lesssim 2\omega_{1}\). On the other hand, in type \(C\) we have that \(\omega_{1}\lesssim\omega_{2}\), although \(\omega_{2}\) is a minimal element among non zero dominant weights with respect to the dominance order.
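To make the comparison between the two orderings concrete, here is a small Python sketch (our own illustration; weights are represented by their coordinate vectors in the \(\{\varepsilon_{i}\}\) basis, and the parity test of Remark 3.2 is switched on for types \(C\) and \(D\)). It reproduces the two examples of Remark 3.4 in type \(C_{3}\).

```python
def coordinatewise_leq(mu, la):
    """mu <~ la in the coordinatewise order (Definition 3.3):
    la_i - mu_i >= 0 and |la_i| >= |mu_i| for every coordinate i."""
    return all(l - m >= 0 and abs(l) >= abs(m) for m, l in zip(mu, la))

def dominance_leq(mu, la, even_total=True):
    """mu <= la in the dominance order (Remark 3.2): every partial sum of
    la - mu is >= 0; in types C and D the total sum must also be even."""
    partial = 0
    for m, l in zip(mu, la):
        partial += l - m
        if partial < 0:
            return False
    total = sum(l - m for m, l in zip(mu, la))
    return total % 2 == 0 if even_total else True

# Type C_3: omega_1 = (1,0,0), omega_2 = (1,1,0), 2*omega_1 = (2,0,0).
assert dominance_leq((1, 1, 0), (2, 0, 0))           # omega_2 <= 2*omega_1 ...
assert not coordinatewise_leq((1, 1, 0), (2, 0, 0))  # ... but omega_2 is not <~ 2*omega_1
assert coordinatewise_leq((1, 0, 0), (1, 1, 0))      # omega_1 <~ omega_2 ...
assert not dominance_leq((1, 0, 0), (1, 1, 0))       # ... but omega_1 is not <= omega_2
```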
The next two sections are devoted to proving the following theorem:
**Theorem 3.1**.: _Let \(\mathfrak{g}\) be a simple Lie algebra over \(\mathbb{C}\) of type \(B\), \(C\) or \(D\) and let \(\lambda\) be a dominant weight for \(\mathfrak{g}\). If \(\lambda\leq 2\rho\) and \(\lambda\lesssim 2\rho\), then \(V_{\lambda}\) appears as irreducible component in \(\Lambda\mathfrak{g}\)._
_Example 3.5_.: In this example we compare the set of weights considered in the statement of Theorem 3.1 with the ones appearing in Theorem 2.1 and in Theorem 2.4. In particular we focus on the case of the simple Lie algebra \(C_{3}\). In type \(C_{3}\) there are 35 dominant weights smaller than or equal to \(2\rho\) with respect to the dominance order. Among these weights, there are 30 dominant weights \(\mu\) such that \(\mu\lesssim 2\rho\). All small weights appear in this set, but they are considerably fewer (more precisely, in type \(C_{3}\) there are 4 small weights, c.f.r. Table 4). Moreover, in type \(C_{3}\) there are 7 dominant weights of the form \(2\rho-\delta_{I}\) with \(I\subset\Delta\). Among them, only 4 weights are not smaller than \(2\rho\) with respect to the coordinatewise order.
### \(\mathfrak{g}\)-partitions and Berenstein-Zelevinsky polytopes
Let \(m\) be a weight in the root lattice of the Lie algebra \(\mathfrak{so}_{2n+1}\mathbb{C}\); it can be described by a vector of non negative integers
\[(m_{12},m_{12}^{+},\ldots,m_{n-1n},m_{n-1n}^{+},m_{1},\ldots,m_{n})\]
such that
\[m=\sum_{i<j}m_{ij}(\varepsilon_{i}-\varepsilon_{j})+\sum_{i<j}m_{ij}^{+}( \varepsilon_{i}+\varepsilon_{j})+\sum_{i}m_{i}\varepsilon_{i}.\]
We say that the sequence of integers \((m_{12},m_{12}^{+},\ldots,m_{n-1n},m_{n-1n}^{+},m_{1},\ldots,m_{n})\) is an \(\mathfrak{so}_{2n+1}\)-_partition for \(m\)_. We say that an \(\mathfrak{so}_{2n+1}\) partition is an \(\mathfrak{so}_{2n}\)-partition (resp. \(\mathfrak{sp}_{2n}\)-partition) if \(m_{i}=0\) (resp. \(m_{i}\) is even) for every \(i\). The inequalities that determine the Berenstein-Zelevinsky polytope for a general tensor product \(V_{\lambda}\otimes V_{\mu}\) are described in [4] in terms of the variables \(m_{12},m_{12}^{+},\ldots,m_{n-1n},m_{n-1n}^{+}\) and \(m_{1},\ldots,m_{n}\). We recall here their description as presented in [4].
Consider the set \(I=\{\bar{0},1,\ldots,n,\bar{1},\ldots,\bar{n}\}\), ordered by \(\bar{0}<1<\bar{1}<\cdots<n<\bar{n}\), and set
\[\Delta_{ij}=m_{ij}-m_{ij}^{+},\quad\Delta_{\bar{i}\bar{j}}=\Delta_{i+1\,j+1}, \quad\Delta_{i\bar{j}}=\Delta_{\bar{i}j}=\left\{\begin{array}{ll}m_{i,j+1}^{+}-m_{i+1\,j+1}&\mbox{if $j<n$},\\ m_{i}-m_{i+1}&\mbox{if $j=n$}.\end{array}\right.\]
where \(m_{i,j},m_{i,j}^{+}\) must be considered only if \(i<j\). Now, if \(j<n\) and \(t\in I\), we consider the linear forms (c.f.r. [4], Formulae (2.4)):
\[\left\{\begin{array}{l}\mathscr{L}_{j}^{t}(m)=-\sum_{\bar{0}\leq s\leq t} \Delta_{sj},\\ \mathscr{N}_{j}^{t,0}(m)=\Delta_{\bar{j}\bar{j}}+\sum_{j+1\leq s\leq t}\Delta_{ \bar{j},s},\\ \mathscr{N}_{j}^{t,1}(m)=\mathscr{N}_{j}^{n,0}+\sum_{t\leq s\leq n}\Delta_{j,s}.\end{array}\right. \tag{3.1}\]
Otherwise, if \(j=n\), consider
\[\mathscr{L}_{n}^{t}(m)=-\left[2\left(\sum_{1\leq p\leq t}\Delta_{p\,n}\right)+ \sum_{0\leq p\leq t}\Delta_{\overline{p}\,n}\right]\qquad\mathcal{N}_{n}^{n,1}( m)=m_{n}\qquad\text{(Type B)}, \tag{3.2}\]
\[\mathscr{L}_{n}^{t}(m)=-\left[\left(\sum_{1\leq p\leq t}\Delta_{p\,n}\right)+ \left(\frac{1}{2}\sum_{0\leq p\leq t}\Delta_{\overline{p}\,n}\right)\right] \qquad\mathcal{N}_{n}^{n,1}(m)=m_{n}/2\qquad\text{(Type C)}, \tag{3.3}\]
\[\mathscr{L}_{n}^{t}(m)=\widehat{\mathscr{L}}_{n-1}^{t}(m)\qquad\mathcal{N}_{ n}^{n,1}(m)=m_{n-1,n}^{+}\qquad\text{(Type D)}, \tag{3.4}\]
where \(\widehat{\mathscr{L}}_{n-1}^{t}(m)\) is the image of \(\mathscr{L}_{n-1}^{t}(m)\) under the involution
\[\widehat{m}_{i,j}=\left\{\begin{array}{ll}m_{i,j}&\text{if }j<n\\ m_{i,j}^{+}&\text{if }j=n\end{array}\right.\qquad\widehat{m}_{i,j}^{+}=\left\{ \begin{array}{ll}m_{i,j}^{+}&\text{if }j<n\\ m_{i,j}&\text{if }j=n\end{array}\right.\]
Let us denote by \(c_{\lambda\mu}^{\nu}\) the generalized Littlewood-Richardson coefficient associated to the triple of dominant weights \((\lambda,\mu,\nu)\), i.e. the multiplicity of \(V_{\nu}\) in \(V_{\lambda}\otimes V_{\mu}\). The following theorem, crucial for our results, was conjectured in [4] and proved in [6].
**Theorem 3.6** (Berenstein, Zelevinsky, [4], Section 2).: _Let \(\lambda=a_{1}\omega_{1}+\cdots+a_{n}\omega_{n}\) and \(\mu=b_{1}\omega_{1}+\cdots+b_{n}\omega_{n}\) be dominant weights. The irreducible components of \(V_{\lambda}\otimes V_{\mu}\) are in bijection with the integral points of the polytope defined by the inequalities_
\[\mathscr{L}_{j}^{t}\leq a_{j}\qquad\mathscr{N}_{j}^{t,0}\leq b_{j}\qquad \mathscr{N}_{j}^{t,1}\leq b_{j},\]
_where the indices considered are displayed in the Table 1._
Each integral point in the polytope corresponds to a \(\mathfrak{g}\)-partition. We are going to call these \(\mathfrak{g}\)-partitions _admissible for the pair \((\lambda,\mu)\)_. We say that a \(\mathfrak{g}\)-partition \((m_{12},m_{12}^{+},\ldots,m_{n})\) is associated to a weight \(\nu\) if
\[\nu=\sum_{i<j}m_{ij}(\varepsilon_{i}-\varepsilon_{j})+\sum_{i<j}m_{ij}^{+}( \varepsilon_{i}+\varepsilon_{j})+\sum_{i}m_{i}\varepsilon_{i}\]
As a corollary of the Theorem 3.6, Berenstein and Zelevinsky prove that:
**Theorem 3.7** (Berenstein, Zelevinsky, [4], Section 2).: _The coefficient \(c_{\lambda\mu}^{\nu}\) is equal to the number of \(\mathfrak{g}\)-partitions admissible for the pair \((\lambda,\mu)\) and associated to \(\lambda+\mu-\nu\)._
\begin{table}
\begin{tabular}{|c|c|c|} \hline & **Type B and C** & **Type D** \\ \hline \(\mathscr{L}_{j}^{t}\) & \(1\leq j\leq n,\ \bar{0}\leq t<j\) & \(\begin{array}{c}1\leq j\leq n-1,\ \bar{0}\leq t<j\\ j=n,\bar{0}\leq t<n-1\end{array}\) \\ \hline \(\mathscr{N}_{j}^{t,0}\) & \(1\leq j\leq n-1,\,\bar{j}\leq t\leq n\) & \(1\leq j\leq n-2,\,\bar{j}\leq t\leq n-1\) \\ \hline \(\mathscr{N}_{j}^{t,1}\) & \(\begin{array}{c}1\leq j\leq n-1,\,\bar{j}<t\leq n\\ j=t=n\end{array}\) & \(\begin{array}{c}1\leq j\leq n-2,\,\bar{j}<t\leq n,\\ j=t=n,\ \ j=n-1,\,t=n\end{array}\) \\ \hline \end{tabular}
\end{table}
Table 1. Ranges of the indices \(j\) and \(t\)
We want to use the previous results to obtain information about the decomposition into irreducibles of \(V_{\rho}\otimes V_{\rho}\). In particular, studying the irreducible components which appear in \(V_{\rho}\otimes V_{\rho}\) is consequently equivalent to describing the integral points in the polytope defined by
\[\mathscr{L}_{j}^{t}(m)\leq 1,\qquad\mathscr{N}_{j}^{t,0}(m),\,\mathscr{N}_{j}^{t,1 }(m)\leq 1,\]
for \(t,j\) that range as in Table 1. From now on in this section, by abuse of notation, we say that a \(\mathfrak{g}\)-partition is _admissible_ if it is admissible for the pair \((\rho,\rho)\). Our aim is to construct explicitly an admissible \(\mathfrak{g}\)-partition associated to each weight \(\lambda\leq 2\rho\) such that \(\lambda\lesssim 2\rho\).
Firstly, we rearrange the equations defining the Berenstein and Zelevinsky polytopes in a more explicit form. Set \(M(i,j)=m_{ij}-m_{ij}^{+}\), \(N(i)=m_{i}-m_{i+1}\), \(R(i,j)=m_{i,j}^{+}-m_{i+1,j}^{+}\) and \(S(i,j)=m_{ij}-m_{i+1\,j}+m_{ij}^{+}-m_{i+1\,j}^{+}\) for \(j\in\{1,\ldots,n\}\) and \(1\leq i<j\); then the linear forms in Formula (3.1) can be expressed as:
\[\mathscr{L}_{j}^{t}(m)=\sum_{i=1}^{t-1}\left(M(i,j+1)-M(i,\,j)\right)-M(t,j) +m_{t\,j+1},\]
\[\mathscr{L}_{j}^{\overline{t}}(m)=\sum_{i=1}^{t}\left(M(i,j+1)-M(i,\,j)\right) +m_{t+1\,j+1},\]
\[\mathscr{N}_{i}^{t\,0}(m)=m_{i\,i+1}^{+}+\sum_{j=i+1}^{t-1}R(i,j+1)+(m_{i\,t+ 1}^{+}-m_{i+1\,t+1}),\]
\[\mathscr{N}_{i}^{\overline{t}\,0}(m)=m_{i\,i+1}^{+}+\sum_{j=i+1}^{t}R(i,j+1),\]
\[\mathscr{N}_{i}^{n\,0}(m)=m_{i\,i+1}^{+}+\sum_{j=i+1}^{n-1}R(i,j+1)+N(i),\]
\[\mathscr{N}_{i}^{t\,1}(m)=m_{i\,i+1}^{+}+N(i)+M(i,t)+\sum_{j=i+1}^{t-1}R(i,j+ 1)+\sum_{j=t}^{n-1}S(i,j+1),\]
\[\mathscr{N}_{i}^{\overline{t}\,1}(m)=m_{i\,i+1}^{+}+N(i)+\sum_{j=i+1}^{t-1}R( i,j+1)+\sum_{j=t}^{n-1}S(i,j+1).\]
If \(j=n\), the equations (3.2), (3.3) and (3.4) can be rearranged in the following way:
\[\mathscr{L}_{n}^{t}(m)=-2\sum_{i=1}^{t}M(i,\,n)+m_{t}\qquad\mathscr{L}_{n}^{ \overline{t}}(m)=-2\sum_{i=1}^{t}M(i,\,n)+m_{t+1}\qquad\text{(Type B)},\]
\[\mathscr{L}_{n}^{\overline{t}}(m)=-\sum_{i=1}^{t}M(i,\,n)+m_{t+1}/2\qquad \mathscr{L}_{n}^{t}(m)=-\sum_{i=1}^{t}M(i,\,n)+m_{t}/2\qquad\text{(Type C)},\]
\[\mathscr{L}_{n}^{t}(m)=-\sum_{i=1}^{t-1}M(i,\,n)-\sum_{i=1}^{t}M(i,\,n-1)+m_{ t\,n}^{+}\qquad\mathscr{L}_{n}^{\overline{t}}(m)=-\sum_{i=1}^{t}\left(M(i,\,n)+M(i, \,n-1)\right)+m_{t+1\,n}^{+}\quad\text{(Type D)}.\]
Here we adopted the convention that, if the set of indices is empty, the sum is equal to \(0\).
### The construction
For each \(\lambda\leq 2\rho\) set \(c_{i}=2|\rho_{i}|-|\lambda_{i}|\), where by \(\lambda_{i}\) and \(\rho_{i}\) we denote the \(i\)-th coordinate of \(\lambda\) and \(\rho\), with respect to the basis \(\{\varepsilon_{1},\ldots\varepsilon_{n}\}\). If \(0\leq c_{i}\) for all \(i\leq n\), we give an explicit construction of an admissible \(\mathfrak{g}\)-partition associated to \(2\rho-\lambda\), appearing as an integral point in the Berenstein-Zelevinsky polytope associated to the tensor product \(V_{\rho}\otimes V_{\rho}\). The conditions on the \(c_{i}\) in particular are equivalent to requiring that \(\lambda\lesssim 2\rho\).
We have three main cases, depending on the parity of the \(\{c_{i}\}_{i\leq n}\). We will construct an admissible \(\mathfrak{g}\)-partition \(m=(m_{12},\ldots,m_{n})\) associated to \(2\rho-\lambda\) in an iterative way. We start by setting \(m\) to be the zero vector.
**Case A: the \(c_{i}\) are all even.**
1. If \(c_{n}=0\) set \(m_{n}=0\), otherwise \(m_{n}=2\) (observe that the case \(c_{n}\) even and greater than \(0\) cannot happen in type \(B\) and \(D\) because in these cases \(2\rho_{n}<2\));
2. Suppose \(h+1=n-(i-1)+1\) and let \((m_{i\,i+1},m_{i\,i+1}^{+},\ldots,m_{i\,n},m_{i\,n}^{+},m_{i})\) be the integers constructed at the \(h\)-th step. Let \(J_{i}=\{j_{k}<\cdots<j_{1}\}\) be the set of indices such that \(m_{ij_{s}}\neq 0\). By convention, we set \(j_{0}=n+1\). We have the following cases:
   1. if \(c_{i-1}=0\), set \(m_{i-1,j}=m_{i-1,j}^{+}=m_{i-1}=0\) for all \(j\);
   2. if \(c_{i}\geq c_{i-1}>0\), set \(m_{i-1}=m_{i}\), \(m_{i-1\,j}=m_{i\,j}\) and \(m_{i-1\,j}^{+}=m_{ij}^{+}\) for all \(j\) such that \(n\geq j\geq j_{s}\), where \(s\) is chosen to be equal to \(c_{i-1}/2\) if \(m_{i}=0\), and to \(c_{i-1}/2-1\) otherwise; finally set \(m_{i-1j}=m_{i-1j}^{+}=0\) for the remaining indices;
   3. if \(c_{i-1}=c_{i}+2\), set \(m_{i-1}=m_{i}\), \(m_{i-1\,j}=m_{i\,j}\) and \(m_{i-1\,j}^{+}=m_{ij}^{+}\) for all \(j>i\); finally set \(m_{i-1\,i}=m_{i-1\,i}^{+}=1\).
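The following Python sketch (our own illustration; the representation of a \(\mathfrak{g}\)-partition by three dictionaries is an arbitrary choice) implements the iteration of Case A. Running it on the data of Example 3.9 below reproduces the \(\mathfrak{sp}_{6}\mathbb{C}\)-partitions computed there by hand.

```python
def case_a_partition(c):
    """Case A construction: c = [c_1, ..., c_n], all even and >= 0, with
    c_i = 2|rho_i| - |lambda_i|.  Returns (m, m_plus, m_single) encoding
    the candidate admissible g-partition associated to 2*rho - lambda."""
    n = len(c)
    m = {(i, j): 0 for i in range(1, n) for j in range(i + 1, n + 1)}
    m_plus = dict(m)
    m_single = {i: 0 for i in range(1, n + 1)}
    # Step 1 (c_n even and positive only happens in type C).
    m_single[n] = 0 if c[n - 1] == 0 else 2
    # Step 2: build the entries with first index i-1 from those with index i.
    for i in range(n, 1, -1):
        ci, ci1 = c[i - 1], c[i - 2]          # c_i and c_{i-1}
        if ci1 == 0:
            continue                          # case (1): row i-1 stays zero
        if 0 < ci1 <= ci:                     # case (2)
            m_single[i - 1] = m_single[i]
            J = sorted((j for j in range(i + 1, n + 1) if m[(i, j)] != 0),
                       reverse=True)          # J_i = {j_1 > j_2 > ... > j_k}
            s = ci1 // 2 - (0 if m_single[i] == 0 else 1)
            for j in J[:s]:                   # copy the columns with j >= j_s
                m[(i - 1, j)] = m[(i, j)]
                m_plus[(i - 1, j)] = m_plus[(i, j)]
        elif ci1 == ci + 2:                   # case (3)
            m_single[i - 1] = m_single[i]
            for j in range(i + 1, n + 1):
                m[(i - 1, j)] = m[(i, j)]
                m_plus[(i - 1, j)] = m_plus[(i, j)]
            m[(i - 1, i)] = m_plus[(i - 1, i)] = 1
    return m, m_plus, m_single
```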
**Proposition 3.8**.: _The construction exposed in Case A produces an admissible \(\mathfrak{g}\)-partition associated to \(2\rho-\lambda\)._
Proof.: By Theorem 3.6, we need to prove that \(\mathscr{L}_{j}^{t}(m)\), \(\mathscr{N}_{j}^{t,0}(m)\) and \(\mathscr{N}_{j}^{t,1}(m)\) are smaller than or equal to \(1\). Observe that in our construction \(m_{i}\neq 0\) only if \(m_{i+1}\neq 0\), and then \(N(i)\leq 0\) for all \(i\). Moreover, a non zero \(m_{ij}\) is constructed (i.e. in case (2) or in case (3)) if and only if \(m_{ij}^{+}\neq 0\), and in that case we always have \(m_{ij}=m_{ij}^{+}\). Consequently \(M(i,j)=0\) for every pair \(i,j\). Finally, if \(i+1<j\), we always have that \(m_{i,j}=m_{ij}^{+}\neq 0\) only if \(m_{i+1,j}=m_{i+1j}^{+}\neq 0\), and then \(R(i,j),\,S(i,j)\leq 0\). Verifying that the constructed \(\mathfrak{g}\)-partition is admissible is now just a straightforward computation, recalling that by the construction described in (2) and (3) we have \(m_{ij},m_{ij}^{+}\leq 1\) for every pair \(i,j\) and \(m_{ij}^{+}-m_{i+1j}\leq 0\) for every \(j\) such that \(i+1<j\).
_Example 3.9_.: In this example we construct admissible \(\mathfrak{sp}_{6}\mathbb{C}\)-partitions associated to the weights \(2\rho-\lambda\) and \(2\rho-\lambda^{\prime}\), where \(\lambda=2\omega_{3}\) and \(\lambda^{\prime}=4\omega_{1}\). We remark that, because in type \(C_{3}\) we have nine positive roots, an \(\mathfrak{sp}_{6}\mathbb{C}\)-partition can be identified with a vector of the form
\[(m_{12},m_{12}^{+},m_{13},m_{13}^{+},m_{23},m_{23}^{+},m_{1},m_{2},m_{3}).\]
Firstly we deal with the case of \(\lambda=2\omega_{3}\). We have \(c_{3}=0\), so we set \(m_{3}=0\) and Step 1 returns the null vector. For Step 2, we have \(c_{2}=2=c_{3}+2\) and we are in case (3). We set \(m_{2}=m_{3}=0\) and \(m_{23}=m_{23}^{+}=1\), obtaining the vector \((0,0,0,0,1,1,0,0,0)\). Finally we have \(c_{1}=c_{2}+2\) and to perform Step 3 we are again in case (3), so we set \(m_{1}=m_{2}=0\), \(m_{13}=m_{13}^{+}=1\) and \(m_{12}=m_{12}^{+}=1\), and the iteration produces the vector \((1,1,1,1,1,1,0,0,0)\). We now want to obtain an \(\mathfrak{sp}_{6}\mathbb{C}\)-partition associated to \(2\rho-4\omega_{1}\). We have that \(c_{3}=c_{1}=2\) and \(c_{2}=4\). Because \(c_{3}=2\), Step 1 of our construction produces the vector \((0,0,0,0,0,0,0,0,2)\). We have \(c_{2}=c_{3}+2\) and then, to perform Step 2, we are in case (3). We set \(m_{2}=2\) and \(m_{23}=m_{23}^{+}=1\) and we obtain the vector \((0,0,0,0,1,1,0,2,2)\). Finally, because \(c_{1}=c_{2}-2>0\), at Step 3 we are in case (2). Observe that \(J_{2}=\{3\}\) and \(s=0\), so we set \(m_{1}=m_{2}=2\) and
\(m_{12}=m_{12}^{+}=m_{13}=m_{13}^{+}=0\), and \((0,0,0,0,1,1,2,2,2)\) is an \(\mathfrak{sp}_{6}\mathbb{C}\)-partition associated to \(2\rho-4\omega_{1}\).
_Example 3.10_.: We construct now an admissible \(\mathfrak{so}_{7}\mathbb{C}\)-partition associated to the weight \(2\rho-\lambda\) where \(\lambda=4\omega_{1}+2\omega_{3}\). We identify an \(\mathfrak{so}_{7}\mathbb{C}\)-partition \(m\) with a vector of the form
\[(m_{12},m_{12}^{+},m_{13},m_{13}^{+},m_{23},m_{23}^{+},m_{1},m_{2},m_{3}).\]
In \(B_{3}\) the weight \(2\rho\) has coordinates \((5,3,1)\) with respect to the \(\{\varepsilon_{i}\}\) basis, and then \(c_{3}=0\), \(c_{2}=2\) and \(c_{1}=0\). Consequently we have that \(m_{23}=m_{23}^{+}=1\) are the only non zero coordinates of \(m\) and then the algorithm produces the vector \((0,0,0,0,1,1,0,0,0)\).
**Case B: there is an even number of odd \(c_{i}\), and either \(c_{n}\) is even, or \(c_{n}\) is odd and \(\lambda_{n}\neq 0\).**
1. Let \(\{\gamma_{1}<\dots<\gamma_{2k}\}\) be the set of indices such that \(c_{i}\) is odd. We pair together the \(j\)-th and the \(k+j\)-th index obtaining the set \(P=\{(\gamma_{1},\,\gamma_{k+1}),\,\dots,\,(\gamma_{k},\,\gamma_{2k})\}\).
2. Construct the weight \(\lambda^{\prime}\) starting from \(\lambda\) using the pairs in \(P\): if \((\gamma_{j},\,\gamma_{j+k})\in P\), set \(\lambda^{\prime}_{\gamma_{j}}=\lambda_{\gamma_{j}}+1\) and \(\lambda^{\prime}_{\gamma_{j+k}}=\lambda_{\gamma_{j+k}}-1\); for every index \(i\) not appearing in \(P\), set \(\lambda^{\prime}_{i}=\lambda_{i}\).
3. Observe that \(\lambda^{\prime}\) is again a dominant weight smaller than \(2\rho\) and the set \(\{c^{\prime}_{i}=2|\rho_{i}|-|\lambda^{\prime}_{i}|\}\) is composed only by non negative even integers. Using Case A, construct an admissible \(\mathfrak{g}\)-partition \(m^{\prime}=(m^{\prime}_{ij},\,m^{\prime+}_{ij},\,m^{\prime}_{i})\) associated to \(2\rho-\lambda^{\prime}\).
4. If \((\gamma_{j},\,\gamma_{j+k})\) is a pair in \(P\), we set \(m_{\gamma_{j}\gamma_{j+k}}=m^{\prime}_{\gamma_{j}\gamma_{j+k}}+1\); all the other coordinates of \(m\) are set equal to the corresponding coordinates of \(m^{\prime}\).
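Continuing in the same spirit, Case B reduces to Case A: make every \(c_{i}\) even using the pairs in \(P\), run the Case A construction, and then perform Step 4. A Python sketch (again our own, reusing the hypothetical `case_a_partition` helper from above) could read:

```python
def case_b_partition(c):
    """Case B: the number of odd c_i is even.  Pair the odd indices as
    P = {(gamma_1, gamma_{k+1}), ..., (gamma_k, gamma_{2k})} (Step 1)."""
    n = len(c)
    odd = [i for i in range(1, n + 1) if c[i - 1] % 2 == 1]
    k = len(odd) // 2
    pairs = list(zip(odd[:k], odd[k:]))
    # Step 2: lambda' differs from lambda by +1 / -1 on each pair, hence
    # c'_{gamma_j} = c_{gamma_j} - 1 and c'_{gamma_{j+k}} = c_{gamma_{j+k}} + 1.
    c_prime = list(c)
    for lo, hi in pairs:
        c_prime[lo - 1] -= 1
        c_prime[hi - 1] += 1
    # Step 3: run Case A on the now even vector c'.
    m, m_plus, m_single = case_a_partition(c_prime)
    # Step 4: increase m_{gamma_j, gamma_{j+k}} by one for each pair in P.
    for lo, hi in pairs:
        m[(lo, hi)] += 1
    return m, m_plus, m_single

# Example 3.13 below (type C_4, lambda = omega_4): c = [7, 5, 3, 1].
m, m_plus, m_single = case_b_partition([7, 5, 3, 1])
assert m[(2, 4)] == 2 and m_plus[(2, 4)] == 1 and m_single[1] == 2
```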
_Remark 3.11_.: A \(\mathfrak{g}\)-partition constructed in Case B has the following properties:
1. \(m_{ij}>1\) only if \((i,j)\) is in \(P\);
2. \(m_{ij}^{+}\) is different from \(0\) only if \(m_{ij}\neq 0\). Moreover we have \(m_{ij}\leq 2\) and \(m_{ij}^{+}\leq 1\). In particular \(m_{ij}>m_{ij}^{+}\) if and only if \((i,j)=(\gamma_{h},\gamma_{h+k})\in P\). Analogously, \(M(i,j)\neq 0\) if and only if \(i=\gamma_{h}\) and \(j=\gamma_{h+k}\), in that case we have \(M(i,j)=1\);
3. \(m_{ij}^{+}\neq 0\) only if \(m_{i+1j}^{+}\neq 0\) or if \(j=i+1\). Consequently the quantities \(R(i,j)=m_{ij}^{+}-m_{i+1j}^{+}\) and \(m_{ij}^{+}-m_{i+1j}\) are smaller or equal to zero if \(j>i+1\);
4. \(m_{i}\neq 0\) only if \(m_{i+1}\neq 0\). This implies \(m_{i}-m_{i+1}\leq 0\) for all \(i\). Moreover observe that for every \(i\) we have \[m_{i}=\begin{cases}\leq 1\text{ in type }B,\\ 0\text{ in type }D,\\ \leq 2\text{ in type }C.\end{cases}\]
5. Because of (2), we have that \(S(i,j)=m_{ij}+m_{ij}^{+}-(m_{i+1j}+m_{i+1j}^{+})\) is always smaller or equal to zero, except if \((i,j)=(\gamma_{h},\gamma_{k+h})\in P\). In this case we have \(m_{ij}+m_{ij}^{+}-(m_{i+1j}+m_{i+1j}^{+})=1\).
**Proposition 3.12**.: _The construction exposed in Case B produces an admissible \(\mathfrak{g}\)-partition associated to \(2\rho-\lambda\)._
Proof.: First of all observe that (3) and (4) in Remark 3.11 immediately imply that \(\mathscr{N}_{i}^{t\;0}(m)\), \(\mathscr{N}_{i}^{\overline{t}\;0}(m)\) and \(\mathscr{N}_{i}^{n\;0}(m)\) are all smaller than or equal to \(1\). We now want to find an upper bound for \(\mathscr{N}_{i}^{t\;1}(m)\) and \(\mathscr{N}_{i}^{\overline{t}\;1}(m)\). We have to discuss some cases, depending on the parity of \(c_{i}\) and \(c_{i+1}\). Set \(P_{-}:=\{\gamma_{1},\dots,\gamma_{k}\}\) and \(P_{+}:=\{\gamma_{k+1},\dots,\gamma_{2k}\}\).
_If \(c_{i}\) is even._ By construction in Case A we have that \(m_{ij+1}+m_{ij+1}^{+}=m^{\prime}_{ij+1}+m^{\prime+}_{ij+1}\leq m^{\prime}_{i+1j+1}+m^{\prime+}_{i+1j+1}\) for \(j\neq i\), and then \(S(i,j+1)=m_{ij+1}+m^{+}_{ij+1}-(m_{i+1j+1}+m^{+}_{i+1j+1})\) is non positive for every \(j>i+1\). Moreover \(M(i,j)=0\) for all \(j\) and \(N(i)\leq 0\). It is immediate to check that \(\mathscr{N}_{i}^{t\,1}(m)\leq 1\) and \(\mathscr{N}_{i}^{\overline{t}\,1}(m)\leq 1\);
_If \(c_{i}\) is odd and \(i\in P_{+}\)_, by (5) of Remark 3.11 we have that \(S(i,j)\leq 0\) and \(M(i,j)=0\) for every \(j\). The inequalities for \(\mathscr{N}_{i}^{\overline{t}\,1}(m)\) and \(\mathscr{N}_{i}^{t\,1}(m)\) are then easily verified;
_If \(c_{i}\) and \(c_{i+1}\) are both odd and \(i,i+1\in P_{-}\)_, suppose \(i=\gamma_{h}\) (and then \(i+1=\gamma_{h+1}\)). We have \(S(i,\gamma_{k+h})=1\) and \(S(i,\gamma_{k+h+1})<0\). It follows that for every \(s>i+1\)
\[\sum_{j=s}^{n-1}\left[m_{ij+1}+m_{ij+1}^{+}-(m_{i+1j+1}+m_{i+1j+1}^{+})\right] \leq 0. \tag{3.5}\]
An immediate consequence of the above inequality and of (3) and (4) of Remark 3.11 is that \(\mathscr{N}_{i}^{\overline{t}\,1}(m)-m_{ii+1}^{+}\leq 0\). Because of (2) of Remark 3.11 we have \(m_{ii+1}^{+}\leq 1\) and then \(\mathscr{N}_{i}^{\overline{t}\,1}(m)\leq 1\). Observe now that if \(s\geq\gamma_{k+h}\) the inequality in (3.5) is strict. Moreover \(M(i,j)=1\) only if \(j=\gamma_{k+h}\), and we consequently obtain that \(\mathscr{N}_{i}^{t\,1}(m)\leq 1\) for every \(t\) in Table 1.
_If \(c_{i}\) and \(c_{i+1}\) are both odd, \(i\in P_{-}\) and \(i+1\in P_{+}\)_, observe that \(c_{i}^{\prime}\leq c_{i+1}^{\prime}\) by Step 2 of the construction in Case B, and this implies that \(m_{ii+1}=m_{ii+1}^{+}=0\). Because \(i\in P_{-}\), we can suppose \(i=\gamma_{h}\) and we recall that \(M(i,j)>0\) if and only if \(j=\gamma_{k+h}\). Moreover we have
\[\sum_{j=s}^{n-1}S(i,j+1)=\begin{cases}\leq 1&\text{ if }s<\gamma_{k+h}\\ \leq 0&\text{ otherwise.}\end{cases}\]
Observe now that \(M(i,j)>0\) (in particular it is equal to \(1\)) only if \(\sum_{j=s}^{n-1}S(i,j+1)\leq 0\) and the inequalities \(\mathscr{N}_{i}^{\overline{t}\,1}(m)\leq 1\) and \(\mathscr{N}_{i}^{t\,1}(m)\leq 1\) are verified;
Finally, _if \(c_{i}\) is odd, \(i=\gamma_{h}\in P_{-}\) and \(c_{i+1}\) is even_, we observe again that because of Step 2 of our construction in Case B, we have \(c_{i}^{\prime}\leq c_{i+1}^{\prime}\) and then \(m_{ii+1}=m_{ii+1}^{+}=0\). As in the previous case we have
\[\sum_{j=s}^{n-1}S(i,j+1)=\begin{cases}\leq 1&\text{ if }s<\gamma_{k+h}\\ \leq 0&\text{ otherwise.}\end{cases}\]
and \(M(i,j)=1\) only if \(\sum_{j=s}^{n-1}S(i,j+1)\leq 0\). Checking that \(\mathscr{N}_{i}^{\overline{t}\,1}(m)\leq 1\) and \(\mathscr{N}_{i}^{t\,1}(m)\leq 1\) is now completely straightforward.
It remains to prove that the conditions of Theorem 3.6 hold for the operators \(\mathscr{L}_{j}^{s}(m)\). Some of these inequalities are trivial by the construction of \(m\); in particular \(\mathscr{L}_{n}^{t}(m),\mathscr{L}_{n}^{\overline{t}}(m)\leq 1\): in fact \(m_{i}\leq 1\) in types \(B\) and \(D\), \(m_{i}/2\leq 1\) in type \(C\), and in our construction we have \(M(i,j)\geq 0\) and \(m_{ij}^{+}\leq 1\) for every \(i,j\). Furthermore, observe that \(\mathscr{L}_{j}^{s}(m)=\mathscr{L}_{j}^{\overline{s-1}}(m)-M(s,j)\) and, again because the \(M(i,j)\) are always non negative, we reduce to proving that \(\mathscr{L}_{j}^{\overline{s-1}}(m)\leq 1\). We recall that
\[\mathscr{L}_{j}^{\overline{t}}(m)=\sum_{i=1}^{t}\left(M(i,j+1)-M(i,j)\right)+m _{t+1\,j+1}.\]
We have four cases:
_If both \(j\) and \(j+1\) are not in \(P_{+}\)_, by (2) of Remark 3.11 we have \(M(i,j)=M(i,j+1)=0\) for all \(i\). Moreover \(m_{t+1j+1}\) is smaller than or equal to \(1\) because \(j+1\notin P_{+}\), and the inequality \(\mathscr{L}_{j}^{\overline{t}}(m)\leq 1\) is verified.
_If \(j=\gamma_{k+h}\in P_{+}\) and \(j+1\notin P_{+}\)_, we have \(M(i,j+1)=0\) for all \(i\) and \(M(i,j)=0\) if and only if \(i\neq\gamma_{h}\); otherwise we have \(M(\gamma_{h},\gamma_{k+h})=1\). This implies that \(\sum_{i=1}^{t}\left(M(i,j+1)-M(i,j)\right)=-1\) if \(t\geq\gamma_{h}\); otherwise \(\sum_{i=1}^{t}\left(M(i,j+1)-M(i,j)\right)=0\). Moreover we have \(m_{t+1j+1}\leq 1\) because \(j+1\notin P_{+}\). These conditions immediately imply \(\mathscr{L}_{j}^{\overline{t}}(m)\leq 1\).
_If \(j\notin P_{+}\) and \(j+1=\gamma_{k+h}\in P_{+}\)_, we first remark that by construction we have \(c_{j}^{\prime}<c_{j+1}^{\prime}+2\) and then \(m_{jj+1}^{\prime}=0\) by the construction in Case A. Thus \(m_{jj+1}=1\) if \(j=\gamma_{h}\) and zero otherwise.
In general, \(m^{\prime}_{jj+1}=0\) implies \(m^{\prime}_{ij+1}=0\) for every \(i\leq j\) and then \(m_{t+1j+1}=1\) if \(t=\gamma_{h}-1\) and zero otherwise. Moreover observe that \(M(i,j+1)>0\) (and in particular, it is equal to \(1\)) only if \(i=\gamma_{h}\). Now we can evaluate the expression \(\sum_{i=1}^{t}\left(M(i,j+1)-M(i,j)\right)\). By our previous observations about the \(M(i,j+1)\) and by (2) of Remark 3.11, it is equal to \(0\) if \(t<\gamma_{h}\) and equal to \(1\) if \(t\geq\gamma_{h}\). As observed before, in the last case we have \(m_{t+1j+1}=0\) and it follows easily that \(\mathscr{L}_{j}^{\overline{t}}(m)\leq 1\) holds.
_If \(j\) and \(j+1\) are both in \(P_{+}\),_ we can suppose \(j=\gamma_{k+h}\) and then \(j+1=\gamma_{k+h+1}\). We consequently have \(M(i,j)\neq 0\) if and only if \(i=\gamma_{h}\) and that \(M(i,j+1)\neq 0\) if and only if \(i=\gamma_{h+1}\). We then obtain that \(\sum_{i=1}^{t}\left(M(i,j+1)-M(i,j)\right)\) is equal to \(-1\) if \(\gamma_{h}\leq t<\gamma_{h+1}\) and \(0\) otherwise. If \(m_{t+1j+1}\leq 1\) the inequality \(\mathscr{L}_{j}^{\overline{t}}(m)\leq 1\) is verified. Otherwise, we remark that \(m_{t+1j+1}=2\) only if \(t+1=\gamma_{h+1}\), i.e if \(\gamma_{h}\leq t<\gamma_{h+1}\), but this is exactly the case of \(\sum_{i=1}^{t}\left(M(i,j+1)-M(i,j)\right)=-1\), and again the inequality is checked.
_Example 3.13_.: In this example we want to construct an admissible \(\mathfrak{sp}_{8}\mathbb{C}\)-partition associated to the weight \(2\rho-\lambda\), where \(\lambda=\omega_{4}\). We recall that \(\omega_{4}\) has coordinates \((1,1,1,1)\) in the \(\{\varepsilon_{i}\}\) basis, so we have \(c_{4}=1,c_{3}=3,c_{2}=5,c_{1}=7\). The set of odd indices is \(\{1,2,3,4\}\) and \(P=\{(1,3),(2,4)\}\). The weight \(\lambda^{\prime}\) is then \((2,2,0,0)\) (i.e. \(2\omega_{2}\)) and by the construction in Case A we have that the non zero coordinates of \(m^{\prime}\) are \(m^{\prime}_{4}=m^{\prime}_{3}=m^{\prime}_{2}=m^{\prime}_{1}=2\), \(m^{\prime}_{34}=m^{\prime+}_{34}=1\), \(m^{\prime}_{24}=m^{\prime+}_{24}=1\) and \(m^{\prime}_{12}=m^{\prime+}_{12}=m^{\prime}_{14}=m^{\prime+}_{14}=1\). By our construction in Case B, we have that the \(\mathfrak{sp}_{8}\mathbb{C}\)-partition \(m\) associated to the weight \(2\rho-\omega_{4}\) has the following non zero coordinates: \(m_{4}=m_{3}=m_{2}=m_{1}=2\), \(m_{34}=m^{+}_{34}=1\), \(m_{24}=2,m^{+}_{24}=1\) and \(m_{12}=m^{+}_{12}=m_{13}=m_{14}=m^{+}_{14}=1\).
_Remark 3.14_.: Because of the parity constraint in the dominance order relations in type \(C\) and \(D\) (c.f.r. Remark 3.2), Case A and B cover all the weights appearing in the statement of Theorem 3.1 for symplectic and even orthogonal algebras.
Because of the previous Remark, in the remaining cases we deal only with algebras of type \(B\). In particular, observe that in type \(B\) the condition \(c_{n}\neq 0\) is equivalent to assuming that \(c_{n}\) is odd, and in particular \(c_{n}=1\). Moreover \(c_{n}\neq 0\) if and only if \(\lambda_{n}=0\).
**Case C: \(\lambda_{n}=0\) and \(c_{n}\) is odd, or there exists an odd number of odd \(c_{i}\) and \(\lambda_{n}\neq 0\).** Let \(I=\{\gamma_{1}<\cdots<\gamma_{k}\}\) be the set of indices such that \(c_{i}\) is odd.
1. Construct the weight \(\lambda^{\prime}\) setting \(\lambda^{\prime}_{i}=\lambda_{i}+1\) if \(i\in I\) and \(\lambda^{\prime}_{i}=\lambda_{i}\) otherwise. Observe that \(\lambda^{\prime}\) is a dominant weight and it is again smaller than \(2\rho\) and that the set \(\{c^{\prime}_{i}=2|\rho_{i}|-|\lambda^{\prime}_{i}|\}\) is composed only by non negative even integers.
2. Using Case A, construct an admissible \(\mathfrak{g}\)-partition \(m^{\prime}=(m^{\prime}_{ij},\,m^{\prime+}_{ij},\,m^{\prime}_{i})\) associated to \(2\rho-\lambda^{\prime}\). Observe that \(\lambda^{\prime}_{n}\neq 0\), then \(c^{\prime}_{n}=0\) and by construction in Case A we have \(m^{\prime}_{j}=0\) for every \(j\).
3. Set \(m_{ij}=m^{\prime}_{ij}\) and \(m^{+}_{ij}=m^{\prime+}_{ij}\) for every pair of indices \(i,j\). Moreover set \(m_{i}=1\) if \(i\in I\) and \(m_{i}=m^{\prime}_{i}=0\) otherwise.
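A sketch of Case C in the same style (type \(B\) only; it again assumes the hypothetical `case_a_partition` helper introduced after Case A):

```python
def case_c_partition(c):
    """Case C (type B): let I be the set of indices with c_i odd."""
    n = len(c)
    I = [i for i in range(1, n + 1) if c[i - 1] % 2 == 1]
    # Step 1: lambda'_i = lambda_i + 1 for i in I, hence c'_i = c_i - 1.
    c_prime = [ci - 1 if (idx + 1) in I else ci for idx, ci in enumerate(c)]
    # Step 2: run Case A on the even vector c' (here m'_j = 0 for all j).
    m, m_plus, m_single = case_a_partition(c_prime)
    # Step 3: set m_i = 1 exactly for the indices i in I.
    for i in I:
        m_single[i] = 1
    return m, m_plus, m_single

# Example 3.17 below (type B_3, lambda = 4*omega_1): c = [1, 3, 1].
m, m_plus, m_single = case_c_partition([1, 3, 1])
assert m[(2, 3)] == m_plus[(2, 3)] == 1
assert m_single == {1: 1, 2: 1, 3: 1}
```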
**Proposition 3.15**.: _The construction exposed in Case C produces an admissible \(\mathfrak{g}\)-partition associated to \(2\rho-\lambda\)._

Proof.: Let \(m\) be the \(\mathfrak{g}\)-partition associated to \(2\rho-\lambda\) constructed using the iterative process exposed in Case C. By our construction, \(\mathscr{L}_{j}^{\overline{t}}(m)=\mathscr{L}_{j}^{\overline{t}}(m^{\prime})\) and \(\mathscr{L}_{j}^{t}(m)=\mathscr{L}_{j}^{t}(m^{\prime})\) for every \(t\) and for every \(j\neq n\). Moreover \(\mathscr{N}_{i}^{t\;0}(m)=\mathscr{N}_{i}^{t\;0}(m^{\prime})\) for every \(t\neq n\). Observe that \(m_{i}\leq 1\) for every \(i\) and \(M(i,j)=m_{ij}-m^{+}_{ij}=m^{\prime}_{ij}-m^{\prime+}_{ij}=0\) for every pair of indices \(i,j\), because of the construction in Case A. This implies that \(\mathscr{L}_{n}^{t}(m),\mathscr{L}_{n}^{\overline{t}}(m)\leq 1\). Observe now that,
by Step 3 of construction in case C we have
\[\mathscr{N}_{i}^{\,n\,0}(m)-m_{ii+1}^{+}-m_{i}+m_{i+1} =\mathscr{N}_{i}^{\,n\,0}(m^{\prime})-m_{ii+1}^{\prime+}-m_{i}^{ \prime}+m_{i+1}^{\prime}\] \[\mathscr{N}_{i}^{\,t\,1}(m)-m_{ii+1}^{+}-m_{i}+m_{i+1} =\mathscr{N}_{i}^{\,t\,1}(m^{\prime})-m_{ii+1}^{\prime+}-m_{i}^{ \prime}+m_{i+1}^{\prime}\]
In particular, by construction of \(m^{\prime}\) both expressions \(\mathscr{N}_{i}^{\,n\,0}(m^{\prime})-m_{ii+1}^{\prime+}-m_{i}^{\prime}+m_{i+1}^{\prime}\) and \(\mathscr{N}_{i}^{\,t\,1}(m^{\prime})-m_{ii+1}^{\prime+}-m_{i}^{\prime}+m_{i+1}^{\prime}\) are smaller than or equal to \(0\). To prove that \(\mathscr{N}_{i}^{\,n\,0}(m),\mathscr{N}_{i}^{\,t\,1}(m)\leq 1\) it is enough to show that \(m_{ii+1}^{+}+m_{i}-m_{i+1}\leq 1\) for every \(i\). We remark that in our construction \(m_{ii+1}^{+}\) is always smaller than or equal to \(1\) and \(m_{i}\neq 0\) if and only if \(i\in I\). Now, if \(c_{i}\) is even, the inequality \(m_{ii+1}^{+}+m_{i}-m_{i+1}\leq 1\) comes directly from the fact that \(m_{i}\leq m_{i+1}\). If \(c_{i}\) and \(c_{i+1}\) are both odd, then \(m_{i}=m_{i+1}=1\) and \(m_{ii+1}^{+}+m_{i}-m_{i+1}\leq 1\) is satisfied. Finally, if \(c_{i}\) is odd and \(c_{i+1}\) is even, observe that \(c_{i}\leq c_{i+1}+1\) by the parity constraint and then \(c_{i}^{\prime}\leq c_{i+1}^{\prime}\) by Step 1 of the construction in Case C. This implies, by construction of \(m^{\prime}\) and by Step 3 in Case C, that \(m_{ii+1}^{+}=m_{ii+1}^{\prime+}=0\) and again we obtain \(m_{ii+1}^{+}+m_{i}-m_{i+1}\leq 1\).
_Remark 3.16_.: In type \(B_{n}\), the construction of admissible \(\mathfrak{g}\) partitions exposed in Case C works also in Case B. We privileged the procedure exposed in Case B to underline the uniform construction in all the classical cases.
_Example 3.17_.: In this example we construct an admissible \(\mathfrak{so}_{7}\mathbb{C}\)-partition associated to the weight \(2\rho-\lambda\) where \(\lambda=4\omega_{1}\). We have \(c_{3}=1\), \(c_{2}=3\) and \(c_{1}=1\). The weight \(\lambda^{\prime}\) constructed as in Step 1 of Case C has coordinates \((5,1,1)\) (i.e. \(\lambda^{\prime}=4\omega_{1}+2\omega_{3}\)). We have just constructed in Example 3.10 an \(\mathfrak{so}_{7}\mathbb{C}\)-partition \(m^{\prime}\) associated to \(2\rho-\lambda^{\prime}\). In particular we obtained \(m^{\prime}=(0,0,0,0,1,1,0,0,0)\). By Step 3 of the construction in Case C, we then obtain that \(m=(0,0,0,0,1,1,1,1,1)\) is an \(\mathfrak{so}_{7}\mathbb{C}\)-partition associated to \(2\rho-4\omega_{1}\).
### A Conjecture about the Exterior Algebra \(\Lambda V_{\theta_{s}}\)
If \(\mathfrak{g}\) is not simply laced, we propose here an analogue of the Kostant Conjecture describing the irreducible representations appearing in the exterior algebra over the little adjoint representation. We are motivated by two recent works that highlight some interesting aspects of the structure of \(\Lambda V_{\theta_{s}}\) as a \(\mathfrak{g}\)-representation. The first one is an article of I. Ademehin [1], dealing with the graded multiplicities of the trivial and little adjoint representations in \(\Lambda V_{\theta_{s}}\). The results contained in [1] are in some sense very similar to the classical ones about the exterior algebra over \(\mathfrak{g}\), and we think that a further investigation of the structure of \(\Lambda V_{\theta_{s}}\) could lead to some very interesting results. Our second motivating paper is an article of Panyushev [34], where the following theorem is proved in the more general context of orthogonal isotropy representations.
**Theorem 3.18** (Panyushev [34], Theorem 2.9).: _Let \(\mathfrak{g}\) be a non simply laced algebra of type \(B\), \(C\) or \(F_{4}\). Let \(\theta_{s}\) be the short dominant root of \(\mathfrak{g}\), then_
\[\Lambda V_{\theta_{s}}\simeq 2^{|\Delta_{s}|}\left(V_{\rho_{s}}\otimes V_{\rho_{s}}\right)\]
_where \(\Delta_{s}\) is the set of short simple roots and \(\rho_{s}\) is half the sum of positive short roots._
Analogously to the case of the exterior algebra over the adjoint representation, we formulate the following conjecture:
**Conjecture 3.19**.: _Let \(\mathfrak{g}\) be a non simply laced simple Lie algebra. \(V_{\lambda}\) is an irreducible component of \(\Lambda V_{\theta_{s}}\) if and only if \(\lambda\leq 2\rho_{s}\)._
By Theorem 3.18, Conjecture 3.19 can be restated as
**Conjecture 3.20**.: _Let \(\mathfrak{g}\) be a non simply laced simple Lie algebra. \(V_{\lambda}\) is an irreducible component of \(V_{\rho_{s}}\otimes V_{\rho_{s}}\) if and only if \(\lambda\leq 2\rho_{s}\)._
Conjecture 3.20 can be easily proved in the case \(B_{n}\) using elementary representation theory. We also checked the conjecture using the Berenstein-Zelevinsky polytope associated to \(V_{\rho_{s}}\otimes V_{\rho_{s}}\). Moreover, we proved it for the exceptional cases \(F_{4}\) and \(G_{2}\) by direct computations. The conjecture remains open only in type \(C\), where it seems that the combinatorics of short roots and weights is linked to the Kostant conjecture in type \(D\).
## 4. Generalized Exponents and Macdonald Kernels
We give here an overview of the theory of generalized exponents for representations of Lie algebras, following the results exposed in [27].
**Theorem 4.1** (Kostant [27], Theorem 0.11).: _The module \(\operatorname{Hom}_{\mathfrak{g}}\left(V_{\lambda},S(\mathfrak{g})\right)\) is a free \(S(\mathfrak{g})^{\mathfrak{g}}\)-module of rank \(\dim V_{\lambda}^{0}\)._
Let \(n\) be the dimension of \(V_{\lambda}^{0}\) and let \(f_{1},\ldots,f_{n}\) be any set of homogeneous generators of \(\operatorname{Hom}_{\mathfrak{g}}\left(V_{\lambda},S(\mathfrak{g})\right)\) as \(S(\mathfrak{g})^{\mathfrak{g}}\)-module. Up to relabeling the polynomials \(f_{i}\), it is possible to suppose that \(\deg\!f_{i}\leq\deg\!f_{i+1}\) for every \(i\). Set \(m_{i}(\lambda)=\deg\!f_{i}\).
**Definition 4.2** (c.f.r [10], Introduction).: _The integers \(m_{1}(\lambda),\ldots,m_{n}(\lambda)\) are the generalized exponents of the representation \(V_{\lambda}\)._
Generalized exponents also have an interpretation in terms of the \(W\)-representation on the zero weight space \(V_{\lambda}^{0}\). Let \(c\in W\) be a Coxeter-Killing transformation, i.e. \(c=s_{\alpha_{1}}\ldots s_{\alpha_{n}}\) where \(s_{\alpha_{i}}\) is the simple reflection associated to the \(i\)-th simple root. The action of \(\mathfrak{g}\) on \(V_{\lambda}\) induces a representation \(\rho_{\lambda}:W\to\operatorname{End}(V_{\lambda}^{0})\). The element \(\rho_{\lambda}(c)\) acts diagonally on \(V_{\lambda}^{0}\) with eigenvalues \(\gamma_{j}=\exp\frac{2i\pi m_{j}(\lambda)}{h}\), where \(h\) is the Coxeter number.
_Example 4.3_.: Consider \(\mathfrak{g}\) acting on itself by the adjoint action. Such an action induces the reflection representation of \(W\) on \(\mathfrak{h}\). The generalized exponents for this representation coincide with the classical exponents of \(\mathfrak{g}\).
_Example 4.4_.: Let \(\mathfrak{g}\) be a non simply laced simple Lie algebra and consider its little adjoint representation \(V_{\theta_{s}}\). The generalized exponents associated to \(V_{\theta_{s}}\) are the _short exponents_ of \(\mathfrak{g}\).
Consider now the generating polynomial of generalized exponents defined by the formula
\[E_{\lambda}(t)=\sum_{i=1}^{\dim V_{\lambda}^{0}}t^{m_{i}(\lambda)}.\]
Theorem 4.1 translates naturally into the following remarkable factorization of generating series of graded multiplicities:
\[P(V_{\lambda},S(\mathfrak{g}),t)=E_{\lambda}(t)\prod_{i=1}^{n}(1-t^{e_{i}+1})^ {-1}.\]
Determining the graded multiplicities in the symmetric algebra is then deeply linked to determining the generalized exponents of \(V_{\lambda}\). In particular the problem of finding explicit formulae for the polynomials \(E_{\lambda}(t)\) turns out to be very interesting both from a combinatorial and from a representation theoretic point of view. For Lie algebras of type \(A\) a combinatorial description of generalized exponents is given in [29], [30]. For other classical algebras, the combinatorics of generalized exponents is less explicit, and closed formulae are available only in special cases (see [21], [22], [23], [24], [31], [38]).
_Remark 4.5_.: Values of classical exponents, and of short exponents in the case of non simply laced algebras, are well known (see [42], Table 4.1). Explicit formulae for \(E_{\theta}(t)\) and \(E_{\theta_{s}}(t)\) can be consequently computed in the classical cases:
\[E_{\theta}(t)=\left(n+1\right)_{t}\qquad\text{ Type }A_{n} \tag{4.1}\]
\[E_{\theta}(t)=t\left(n\right)_{t^{2}}\qquad E_{\theta_{s}}(t)=t^{n}\qquad \text{ Type }B_{n} \tag{4.2}\]
\[E_{\theta}(t)=t\left(n\right)_{t^{2}}\qquad E_{\theta_{s}}(t)=t^{2}\left(n-1 \right)_{t^{2}}\qquad\text{ Type }C_{n} \tag{4.3}\]
\[E_{\theta}(t)=\left(n\right)_{t^{2}}\frac{t(t^{n-2}+1)}{(t^{n}+1)}\qquad\text { Type }D_{n} \tag{4.4}\]
### Macdonald Kernels
We now recall some tools, introduced by Stembridge in [42], that are useful for effective computations.
**Definition 4.6** (c.f.r. [42], Section 1.1).: _Let \(\mathbb{Z}\langle\Pi\rangle:=\mathbb{Z}\{e^{\lambda},\,\lambda\in\Pi\}\) denote the group ring generated by \(\Pi\). The Macdonald Kernel of \(\mathfrak{g}\) is the formal series \(\Delta(q,t)\in\mathbb{Z}\langle\Pi\rangle\left[[q,t]\right]\) defined by the formula_
\[\Delta(q,t):=\prod_{i\geq 0}\left(\frac{1-q^{i+1}}{1-tq^{i}}\right)^{ \operatorname{rk}\mathfrak{g}}\cdot\prod_{i\geq 0}\prod_{\alpha\in\Phi} \frac{1-q^{i+1}e^{\alpha}}{1-tq^{i}e^{\alpha}}.\]
The Macdonald kernels specialize to the graded character of the exterior algebra of the adjoint representation, when evaluated at \((q,t)=(-q,q^{2})\), and to the graded character of the symmetric algebra over \(\mathfrak{g}\) when evaluated at \((q,t)=(0,t)\) (c.f.r. [42], Section 1.2). Observe now that \(\Delta(q,t)\) is \(W\)-invariant; this implies that it can be expanded in terms of characters of irreducible representations, obtaining an expression of the form
\[\Delta(q,t)=\sum_{\mu\in\Pi^{+}}C_{\mu}(q,t)\chi(\mu),\]
for certain formal series \(C_{\mu}(q,t)\), indexed by dominant weights of \(\mathfrak{g}\). In particular, when specialized at \((q,t)=(-q,q^{2})\) and at \((q,t)=(0,t)\), the formal series \(C_{\mu}(q,t)\) gives the graded multiplicities of the representation \(V_{\mu}\) in the exterior algebra and in the symmetric algebra respectively.
_Remark 4.7_.: By Theorem 4.1, the polynomial \(E_{\lambda}(t)\) can be computed by determining the ratio \(C_{\lambda}(q,t)/C_{0}(q,t)\) and evaluating it at \((0,t)\).
In [42] Stembridge proves that the formal series \(C_{\mu}(q,t)\) satisfy some recurrences, reducing the problem of their explicit computation to solving a linear system of equations with coefficients in \(\mathbb{C}[q^{\pm 1},t^{\pm 1}]\).
We recall that it is possible to extend the definition of \(C_{\mu}(q,t)\) to any weight \(\mu\) by setting
\[C_{\mu}(q,t)=\begin{cases}0&\text{if $\mu+\rho$ is not regular,}\\ (-1)^{l(\sigma)}C_{\lambda}(q,t)&\text{if $\sigma(\mu+\rho)=\lambda+\rho$, $\lambda\in\Pi^{+}$, $\sigma\in W$.}\end{cases}\]
For short, if there exists \(\sigma\) such that \(\sigma(\mu+\rho)=\lambda+\rho\), with \(\lambda\in\Pi^{+}\), we say that the weight \(\mu\) is _conjugated to \(\lambda\) by \(\sigma\)_. Moreover, if \(\mu\) is conjugated to \(\lambda\) by \(\sigma\), we say that \((-1)^{l(\sigma)}C_{\lambda}(q,t)\) is the _reduced form_ of \(C_{\mu}(q,t)\). Sometimes a precise information about the sign of \(\sigma\) is not needed in our reasoning; in this case we shortly say that \(\mu\)_is conjugated to \(\lambda\)_.
**Theorem 4.8** ( Minuscule Recurrence, [42], Formula (5.14)).: _Fix a dominant weight \(\lambda\) and let \(\omega\) be a minuscule coweight (i.e. \((\omega,\alpha)\in\{0,\pm 1\}\) for every positive root \(\alpha\)), then the following relation holds:_
\[\sum_{i=1}^{k}C_{w_{i}\lambda}(q,t)\left(\sum_{\psi\in O_{\omega}}\Big{(}t^{-( \rho,w_{i}\psi)}-q^{(\lambda,\omega)}t^{(\rho,w_{i}\psi)}\Big{)}\right)=0. \tag{4.5}\]
_where, denoting by \(W_{\lambda}\) the stabilizer of \(\lambda\) in \(W\), the \(w_{1},\ldots,w_{k}\) are minimal coset representatives of \(W/W_{\lambda}\) and \(O_{\omega}\) is the orbit \(W_{\lambda}\cdot\omega\)._
_Remark 4.9_.: Observe that if \(\lambda\) and \(\mu\) are dominant weights and \(w\lambda\) is conjugated to \(\mu\), then \(\mu<\lambda\). As a consequence, if we write the \(C_{w_{i}\lambda}(q,t)\) appearing in Formula (4.5) in their reduced form, in the minuscule recurrence there appear only \(C_{\mu}(q,t)\) with \(\mu\) dominant and smaller than \(\lambda\).
## 5. Small Representations
The aim of this Section is to present closed formulae for generalized exponents of certain small representations in type \(B\), \(C\) and \(D\). In Table 2 we list the weights of non trivial small representations in these three cases. More precisely, in Theorem 5.1, Theorem 5.2 and Theorem 5.3 we provide closed expressions for the polynomials of generalized exponents for small representations that are indexed by fundamental weights.
**Theorem 5.1**.: _Let \(\lambda\) be a small weight of the form \(\lambda=\omega_{2k}\) for the simple Lie algebra of type \(C_{n}\). Then:_
\[E_{\lambda}(t)=\frac{t^{2k}(n-2k+1)_{t^{2}}}{(n-k+1)_{t^{2}}}{n\choose k}_{t^{2 }}. \tag{5.1}\]
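As a quick consistency check (not needed in what follows), for \(k=1\) Formula (5.1) reduces to the short-exponents formula of Remark 4.5: since \(\binom{n}{1}_{t^{2}}=(n)_{t^{2}}\) and \(\omega_{2}=\theta_{s}\) in type \(C_{n}\),

\[E_{\omega_{2}}(t)=\frac{t^{2}(n-1)_{t^{2}}}{(n)_{t^{2}}}\binom{n}{1}_{t^{2}}=t^{2}\left(n-1\right)_{t^{2}}=E_{\theta_{s}}(t),\]

in agreement with Formula (4.3).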
**Theorem 5.2**.: _The polynomials of generalized exponents for small weight for the simple Lie algebra of type \(B_{n}\) have the following closed expressions:_
\[E_{\omega_{2k}}(t)=t^{k}{n\choose k}_{t^{2}},\quad E_{\omega_{2k+1}}(t)=t^{n-k }{n\choose k}_{t^{2}},\quad E_{2\omega_{n}}(t)=t^{n-\lfloor\frac{n}{2}\rfloor }{n\choose\lfloor\frac{n}{2}\rfloor}_{t^{2}}, \tag{5.2}\]
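Again as a sanity check, the case \(k=1\) of the first formula gives \(E_{\omega_{2}}(t)=t\left(n\right)_{t^{2}}=E_{\theta}(t)\), consistent with Formula (4.2), since \(\omega_{2}=\theta\) in type \(B_{n}\); similarly, the case \(k=0\) of the second formula gives \(E_{\omega_{1}}(t)=t^{n}=E_{\theta_{s}}(t)\), since \(\omega_{1}=\theta_{s}\).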
**Theorem 5.3**.: _Let \(\lambda\) be a small weight of the form \(\lambda=\omega_{2k}\) for the simple Lie algebra of type \(D_{n}\), then:_
\[E_{\omega_{2k}}(t)=t^{k}\frac{(t^{n-2k}+1)}{(t^{n}+1)}{n\choose k}_{t^{2}}, \tag{5.3}\]
_Moreover, if \(n\) is odd, the weight \(\omega_{n-1}+\omega_{n}\) is small and we have:_
\[E_{\omega_{n-1}+\omega_{n}}(t)=\frac{t^{\lfloor\frac{n}{2}\rfloor}(t+1)}{(t^{n }+1)}{n\choose\lfloor\frac{n}{2}\rfloor}_{t^{2}}, \tag{5.4}\]
| **Type B** | **Type C** | **Type D, \(n\) even** | **Type D, \(n\) odd** |
| --- | --- | --- | --- |
| \(\omega_{i}\), \(i<n\) | \(\omega_{2k}\) | \(\omega_{2k}\) | \(\omega_{2k}\) |
| \(2\omega_{n}\) | \(\omega_{1}+\omega_{2k+1}\) | \(2\omega_{n-1}\), \(2\omega_{n}\) | \(\omega_{n-1}+\omega_{n}\) |
|  |  | \(\omega_{1}+\omega_{2i+1}\) | \(\omega_{1}+\omega_{2i+1}\) |
|  |  | \(\omega_{1}+\omega_{n-1}+\omega_{n}\) | \(\omega_{1}+2\omega_{n-1}\), \(\omega_{1}+2\omega_{n}\) |

Table 2. Weights of small representations for types \(B\), \(C\) and \(D\).
_Finally, if \(n\) is even, \(2\omega_{n-1}\) and \(2\omega_{n}\) are small and the following formulae hold:_
\[E_{2\omega_{n-1}}(t)=E_{2\omega_{n}}(t)=\frac{t^{\frac{n}{2}}}{(t^{n}+1)}{n\choose \frac{n}{2}}_{t^{2}}, \tag{5.5}\]
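As for the previous types, a consistency check is immediate: for \(k=1\) Formula (5.3) gives

\[E_{\omega_{2}}(t)=t\frac{(t^{n-2}+1)}{(t^{n}+1)}\left(n\right)_{t^{2}}=E_{\theta}(t),\]

in agreement with Formula (4.4), since \(\omega_{2}=\theta\) in type \(D_{n}\).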
The proofs of the above theorems use an iterative reasoning, based on the fact that if \(\lambda\) is small and \(\lambda^{\prime}<\lambda\), then \(\lambda^{\prime}\) is small. In particular, a minimal non zero small weight is a dominant root. The following remark provides the base step for our computations.
_Remark 5.4_.: The following formulae are proved in [42], Theorem 4.1:
\[C_{\theta}(q,t)=\frac{t-q}{t-qt^{h}}E_{\theta}(t)C_{0}(q,t),\qquad C_{\theta_{ s}}(q,t)=\frac{t-q}{t-qt^{h}}E_{\theta_{s}}(t)C_{0}(q,t) \tag{5.6}\]
### Proof of Formulae in Type C
In this section we give a proof of Theorem 5.1. Observe that the theorem holds true if \(k=1\) because of Remark 4.5. In [16], Theorem 5.5, the following iterative formula for \(C_{\lambda}(q,t)\) is obtained for weights of the form \(\lambda=\omega_{2k}\):
\[C_{\omega_{2(k+1)}}(q,t)=\frac{(t^{2(n-2k-1)}-1)(t^{2(n-k+1)}-1)(1-qt^{2k-1}) t^{2}}{(t^{2(n-2k+1)}-1)(t^{2(k+1)}-1)(1-qt^{2(n-k)-1})}C_{\omega_{2k}}(q,t). \tag{5.7}\]
By Remark 4.7, evaluating Equation (5.7) at \(q=0\) we obtain a recursive relation between \(E_{\omega_{2(k+1)}}(t)\) and \(E_{\omega_{2k}}(t)\). Using Remark 5.4 for the base step, we obtain by induction that
\[E_{\omega_{2(k+1)}}(t) =\frac{t^{2}(t^{2(n-2k-1)}-1)(t^{2(n-k+1)}-1)}{(t^{2(n-2k+1)}-1)( t^{2(k+1)}-1)}E_{\omega_{2(k)}}(t)\] \[=\frac{t^{2}(t^{2(n-2k-1)}-1)(t^{2(n-k+1)}-1)}{(t^{2(n-2k+1)}-1)( t^{2(k+1)}-1)}\frac{t^{2k}(n-2k+1)_{t^{2}}}{(n-k+1)_{t^{2}}}{n\choose k}_{t^{2}}\] \[=\frac{t^{2(k+1)}(n-2k-1)_{t^{2}}}{(n-k)_{t^{2}}}{n\choose k+1}_ {t^{2}}\]
and Equation (5.1) is proved. We remark that similar but more complicated formulae can be obtained for the other small weights in type \(C\) by making explicit the coefficients of the equations in [16], Section 5.3.
### Proof of Formulae in Type B
In type \(B_{n}\) the unique minuscule coweight is \(\varepsilon_{1}\). In Formula (4.5) we choose \(\omega=\varepsilon_{1}\) and \(\lambda=\omega_{k}\) with \(k<n\) or \(\lambda=2\omega_{n}\). The stabilizer \(W_{\omega_{i}}\) is isomorphic to \(S_{i}\times B_{n-i}\) and \(W_{\omega_{i}}(\varepsilon_{1})=\{\varepsilon_{1},\ldots,\varepsilon_{i}\}\). Analogously, \(W_{2\omega_{n}}\) is isomorphic to \(S_{n}\) and \(W_{2\omega_{n}}(\varepsilon_{1})=\{\varepsilon_{1},\ldots,\varepsilon_{n}\}\). Writing all the \(C_{\mu}(q,t)\) in their reduced form, the recurrence can be rewritten as
\[\sum_{\mu\leq\lambda}\Lambda_{\mu}^{\lambda,n}(q,t)C_{\mu}(q,t)=0, \tag{5.8}\]
for certain coefficients \(\Lambda_{\mu}^{\lambda,n}(q,t)\). We will refer to this form of Formula (4.5) as the _reduced recurrence for \(C_{\lambda}(q,t)\)_. We recall that if \(\lambda\) is small, then a dominant weight \(\mu\) smaller than \(\lambda\) in the dominance order is again small. In particular the weights smaller than \(\omega_{k}\) are of the form \(\omega_{h}\) with \(h<k\). From now on, we denote by \(C_{h}\) the formal series \(C_{\omega_{h}}(q,t)\) and by \(\Lambda_{h}^{k,n}\) its coefficient \(\Lambda_{\mu}^{\lambda,n}(q,t)\) in the recurrence for \(\lambda=\omega_{k}\). Moreover, if \(\lambda=2\omega_{n}\), we use the notation \(C_{n}\) for \(C_{2\omega_{n}}(q,t)\) and the coefficient of \(C_{h}(q,t)\) in the corresponding recurrence will be \(\Lambda_{h}^{n,n}\). Reasoning as in [16] and aiming to simplify recurrence (5.8), we now want to expand the coefficients \(\Lambda_{h}^{k,n}\) recursively.
_Remark 5.5_.: Using the explicit realization of the fundamental weights and of \(\rho\) presented in Section 3.1, it is possible to check that a weight of the form \(w\omega_{k}\) in \(B_{n}\), with \(w\in W\), is conjugated to \(\omega_{h}\) only if \(w\omega_{k}=\varepsilon_{1}+\dots+\varepsilon_{h}+\nu\), where \(\nu\) has the first \(h\) coordinates equal to \(0\) when written in the \(\{\varepsilon_{i}\}\) basis. In our realization, the root system \(B_{n-h}\) can be identified with the root subsystem of \(B_{n}\) given by vectors of the form \(\{\pm\varepsilon_{i}\pm\varepsilon_{j}\}_{i<j\leq n-h}\cup\{\pm\varepsilon_{1},\,\dots,\pm\varepsilon_{n-h}\}\). This identification corresponds to the immersion of \(B_{n-h}\) into \(B_{n}\) induced by the immersion of the associated Dynkin diagrams. Under this identification \(\nu\) can be thought of as a weight of the form \(w^{\prime}\omega_{k-h}\) conjugated to \(0\) in \(B_{n-h}\).
_Example 5.6_.: Consider the weight \(\omega_{4}\) in \(B_{6}\) and let \(\epsilon_{3}\) be the element of \(W\) that acts as the sign change on the \(3\)-rd coordinate. Then \(\epsilon_{3}\omega_{4}=\omega_{2}+\nu\) where \(\nu=-\varepsilon_{3}+\varepsilon_{4}\). Observe that \(\nu+\rho\) has coordinates \((\frac{11}{2},\frac{9}{2},\frac{5}{2},\frac{7}{2},\frac{3}{2},\frac{1}{2})\) in terms of the \(\{\varepsilon_{i}\}\) basis. In particular \(s_{3}(\nu+\rho)=\rho\) and \(\nu\) is conjugated to zero.
_Remark 5.7_.: It is immediate to check that \(w\lambda\) is conjugated to \(\lambda\) if and only if \(w\in W_{\lambda}\). In particular this implies that, if \(\lambda=\varepsilon_{1}+\dots+\varepsilon_{k}\), then
\[\Lambda_{k}^{k,n}=\sum_{j=1}^{k}\frac{1-qt^{2n-2j+1}}{t^{\frac{2n-2j+1}{2}}}= \frac{(1-qt^{2n-k})(t^{k}-1)}{t^{\frac{2n-1}{2}}(t-1)}\]
The following Lemma, proved in [16], characterizes the set \(\Omega_{0}^{k,n}\) of weights of the form \(w\omega_{k}\) conjugated to \(0\) in \(B_{n}\). In particular, it is possible to describe explicitly the coordinates of a weight in \(\Omega_{0}^{k,n}\) with respect to the basis \(\{\varepsilon_{1},\dots,\varepsilon_{n}\}\).
**Lemma 5.8** ([16], Lemma 4.1).: _Let \(w\in W\) be such that \(w\omega_{k}\in\Omega_{0}^{k,n}\), then:_
* _if_ \(k\) _is even,_ \(w\omega_{k}\) _has all the coordinates equal to zero except for_ \(k/2\) _pairs of consecutive coordinates of the form_ \((-1,1)\) _and it is conjugated to_ \(0\) _by_ \(\sigma\in W\) _of length_ \(k/2\)_._
* _if_ \(k\) _is odd, then_ \(w\omega_{k}\) _has all the coordinates equal to zero, except for a choice of_ \((k-1)/2\) _pairs of coordinates equal to_ \((-1,1)\) _and for the last one, which must be equal to_ \(-1\)_. In this case_ \(w\omega_{k}\) _is conjugated to_ \(0\) _by_ \(\sigma\in W\) _of length_ \((k-1)/2+1\)_._
The cardinality of \(\Omega_{0}^{k,n}\) can be explicitly computed as a consequence of Lemma 5.8:
\[|\Omega_{0}^{k,n}|=\begin{cases}\binom{n-\frac{k}{2}}{\frac{k}{2}}&\text{if $k$ is even,}\\ \binom{n-\frac{k-1}{2}-1}{\frac{k-1}{2}}&\text{if $k$ is odd.}\end{cases}\]
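For example, for \(k=2\) Lemma 5.8 says that the weights in \(\Omega_{0}^{2,n}\) are exactly the \(-\varepsilon_{j}+\varepsilon_{j+1}\) with \(1\leq j\leq n-1\), in agreement with \(\binom{n-1}{1}=n-1\).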
Set
\[p(n,q,t)=t^{\frac{2n-1}{2}}-qt^{-\frac{2n-1}{2}}+t^{-\frac{2n-3}{2}}-qt^{\frac {2n-3}{2}}=\frac{(t-q)(1+t^{2n-2})}{t^{\frac{2n-1}{2}}}.\]
**Lemma 5.9**.: _The following relations between the coefficients \(\Lambda_{h}^{k,n}\) hold for \(h<k\):_
\[\Lambda_{h}^{k,n}=(-1)^{s}\Lambda_{h}^{h,n}\binom{n-k+s}{s}+\Lambda_{0}^{k-h,n- h}\qquad\text{ if }\;k-h=2s, \tag{5.9}\]
\[\Lambda_{h}^{k,n}=(-1)^{s+1}\Lambda_{h}^{h,n}\binom{n-k+s}{s}+\Lambda_{0}^{k-h,n- h}\qquad\text{ if }\;k-h=2s+1 \tag{5.10}\]
\[\Lambda_{0}^{2s,n}=(-1)^{s}p(n,q,t)\binom{n-s-1}{s-1}-\Lambda_{0}^{2s-2,n-2}+ \Lambda_{0}^{2s,n-1}, \tag{5.11}\]
\[\Lambda_{0}^{2s+1,n}=(-1)^{s+1}p(n,q,t)\binom{n-s-2}{s-1}-\Lambda_{0}^{2s-1,n- 2}+\Lambda_{0}^{2s+1,n-1}, \tag{5.12}\]
Proof.: Equations (5.9) and (5.10) are direct consequences of Remark 5.5, where it is observed that a weight \(w\omega_{k}\) gives a contribution to \(\Lambda_{h}^{k,n}\) if it is of the form \(\varepsilon_{1}+\cdots+\varepsilon_{h}+\nu\), where \(\nu\) can be thought as weight \(w^{\prime}\omega_{k-h}\) conjugated to \(0\) in \(B_{n-h}\). Moreover, Equations (5.11) and (5.12) can be obtained observing that a weight \(\nu=w\omega_{k}\) contributing to \(\Lambda_{0}^{k,n}\) is of the form \(-\varepsilon_{1}+\varepsilon_{2}+\nu^{\prime}\) with \(\nu^{\prime}\) conjugated to \(0\) in \(B_{n-2}\) or of the form \(\nu=(0,\nu_{2},\ldots,\nu_{n})\) in \(\{\varepsilon_{i}\}\)-expansion, where \(\nu^{\prime}=(\nu_{2},\ldots,\nu_{n})\) is a weight conjugated to \(0\) in \(B_{n-1}\).
The above relations enable us to simplify considerably the computations needed to prove our formulae. In particular they are crucial for the proof, contained in Section 5.3, of the following theorem. We denote by \(R_{i}\) and by \(R_{n}\) the reduced recurrences for \(C_{\lambda}(q,t)\), with \(\lambda=\omega_{i}\) and \(\lambda=2\omega_{n}\) respectively.
**Theorem 5.10**.: _There exists a family of integers \(\{A_{i}^{k,n}\}_{i\leq k}\) such that_
\[\sum_{i=1}^{k}A_{i}^{k,n}R_{i}=\Lambda_{k}^{k,n}C_{k}+\Gamma_{0}^{1,n-k+1}C_{k-1}+\sum_{i=1}^{\lfloor\frac{k}{2}\rfloor}\Gamma_{0}^{2,n-k+i+1}C_{k-2i}+\sum_{i=2}^{\lfloor\frac{k+1}{2}\rfloor}\Gamma_{0}^{2,i}C_{k-2i+1}=0 \tag{5.13}\]
_where the coefficients \(\Lambda_{k}^{k,n}\) and \(\Gamma_{0}^{i,n}\) are defined by the formulae_
\[\Gamma_{0}^{1,n}=\Lambda_{0}^{1,n}=-\frac{(t-q)t^{n-1}}{t^{\frac{2n-1}{2}}} \qquad\Gamma_{0}^{2,n}=\Gamma_{0}^{2,n-1}-p(n,q,t)=-\frac{(t-q)(t^{2n-1}-1)}{ t^{\frac{2n-1}{2}}(t-1)}\]
Observe that specializing the Equation (5.13) at \((q,t)\to(-q,q^{2})\) one obtains the equation of [16], Proposition 4.3 used to prove Reeder's Conjecture in type \(B\).
_Remark 5.11_.: Dividing Equation (5.13) by \(C_{0}(q,t)\) we obtain a recursive relation between the formal series \(\overline{C}_{\mu}(q,t)=C_{\mu}(q,t)/C_{0}(q,t)\). We recall that \(\overline{C}_{\mu}(0,t)=E_{\mu}(t)\) by Remark 4.7, and consequently the specialization at \(q\to 0\) of Equation (5.13) leads to a recursive relation between polynomials of generalized exponents of small representations.
Proof.: (of Theorem 5.2) We denote by \(E_{h}\) and \(E_{n}\) the polynomials \(E_{\omega_{h}}(t)\) and \(E_{2\omega_{n}}(t)\) respectively. Set
\[b_{i}=-t^{n-i+1}(t^{2i-1}-1),\qquad c_{k}=t^{k}-1\]
Because of Remark 5.11, evaluating Equation (5.13) at \(q=0\) and multiplying it by \(t^{\frac{2n-1}{2}}(t-1)\) it is possible to obtain the relation
\[c_{k}E_{k}+\sum_{i=1}^{\lfloor\frac{k}{2}\rfloor}b_{n-k+i+1}E_{k-2i}+\sum_{i=1 }^{\lfloor\frac{k+1}{2}\rfloor}b_{i}E_{k-2i+1}=0.\]
We now want to prove our formulae by induction. The base step comes from Remark 5.4 and from the formulae contained in Remark 4.5. Consequently, for the inductive step we have to prove the following two identities:
\[(t^{2s}-1)t^{s}\binom{n}{s}_{t^{2}} =(t^{2s}-1)E_{2s}\] \[=-\sum_{j=0}^{s-1}b_{n-s-j+1}E_{2j}-\sum_{j=0}^{s-1}b_{s-j}E_{2j+1}\] \[=\sum_{j=0}^{s-1}t^{s+j}(t^{2(n-s-j)+1}-1)E_{2j}+\sum_{j=0}^{s-1} t^{n-s+j+1}(t^{2(s-j)+1}-1)E_{2j+1}\]
\[(t^{s+1}-1)\binom{n}{s+1}_{t} =\sum_{j=0}^{s}t^{j}\left(t^{n-2j}-1\right)\binom{n}{j}_{t}\] \[=t^{s}(t^{n-2s}-1)\binom{n}{s}_{t}+\sum_{j=0}^{s-1}t^{j}\left(t^{n-2 j}-1\right)\binom{n}{j}_{t}\] \[=(t^{n-s}-1)\binom{n}{s}_{t}. \tag{5.14}\]
The equality now holds because
\[\binom{n}{s+1}_{t}=\frac{(n-s)_{t}}{(s+1)_{t}}\binom{n}{s}_{t}.\]
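The identities above are also easy to verify symbolically. The following is a minimal sketch of such a check (our own verification aid, assuming SymPy; it is not part of the proof), testing the specialized recurrence above against the closed formulae (5.2) for small ranks:

```python
# A symbolic check of the type-B recurrence used in the proof of Theorem 5.2,
# assuming SymPy; the ranges of k and n below are illustrative.
import sympy as sp

t = sp.symbols('t')

def gauss_binom(n, k, q):
    """Gaussian binomial coefficient binom(n, k)_q."""
    num = sp.prod([1 - q**(n - i) for i in range(k)])
    den = sp.prod([1 - q**(i + 1) for i in range(k)])
    return sp.cancel(num / den)

def E(k, n):
    """Closed formulae of Theorem 5.2 for E_{omega_k}(t) in type B_n
    (for k = n this is E_{2 omega_n})."""
    if k % 2 == 0:
        s = k // 2
        return t**s * gauss_binom(n, s, t**2)
    s = (k - 1) // 2
    return t**(n - s) * gauss_binom(n, s, t**2)

def b(i, n):
    return -t**(n - i + 1) * (t**(2 * i - 1) - 1)

def recurrence_holds(k, n):
    """c_k E_k + sum_i b_{n-k+i+1} E_{k-2i} + sum_i b_i E_{k-2i+1} == 0."""
    expr = (t**k - 1) * E(k, n)
    expr += sum(b(n - k + i + 1, n) * E(k - 2 * i, n)
                for i in range(1, k // 2 + 1))
    expr += sum(b(i, n) * E(k - 2 * i + 1, n)
                for i in range(1, (k + 1) // 2 + 1))
    return sp.simplify(expr) == 0

assert all(recurrence_holds(k, n) for n in range(2, 6)
           for k in range(1, n + 1))
```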
### Proof of Theorem 5.10
Firstly, we define iteratively the family of integers \(\{A_{j}^{k,n}\}\). Set \(A_{k}^{k,n}=1\); for \(h\in\{1,\ldots,k-1\}\) we define
\[A_{h}^{k,n}=-\sum_{j=h+1}^{k}(-1)^{\lfloor\frac{j-h+1}{2}\rfloor}\binom{n-j+ \lfloor\frac{j-h}{2}\rfloor}{\lfloor\frac{j-h}{2}\rfloor}A_{j}^{k,n} \tag{5.15}\]
Moreover, by convention we set \(A_{h}^{k,n}=0\) if \(h>k\) or if \(h\leq 0\). Using properties of binomials and Equation (5.15) it is possible to prove that the integers \(A_{h}^{k,n}\) satisfy nice iterative properties:
**Lemma 5.12**.:
1. \(A_{h+1}^{k,n}=A_{h}^{k-1,n-1}\)_,_
2. \(A_{h}^{k,k}=A_{h}^{k-1,k-1}+A_{h}^{k-2,k-1}\)_,_
3. \(A_{h}^{k,n}=A_{h}^{k,n-1}+A_{h}^{k-2,n-1}\)_, if_ \(k<n\)_._
We consider now the expression \(\sum_{i=1}^{k}A_{i}^{k,n}R_{i}\). It can be written in the form
\[\Lambda_{k}^{k,n}C_{k}+\sum_{h=0}^{k-1}\Gamma_{h}^{k,n}C_{h}=0, \tag{5.16}\]
for some coefficients \(\Gamma_{h}^{k,n}\) that we are going to determine explicitly.
**Proposition 5.13**.: _For every \(h\) such that \(0<h<k\) the equality \(\Gamma_{h}^{k,n}=\Gamma_{0}^{k-h,n-h}\) holds._
Proof.: By definition we have that \(\Gamma_{h}^{k,n}=\sum_{j=h}^{k}A_{j}^{k,n}\Lambda_{h}^{j,n}\). Now we use Lemma 5.9 to expand \(\Lambda_{h}^{j,n}\):
\[\Gamma_{h}^{k,n} =\sum_{j=h+1}^{k}A_{j}^{k,n}\left[(-1)^{\lfloor\frac{j-h+1}{2} \rfloor}\binom{n-j+\lfloor\frac{j-h}{2}\rfloor}{\lfloor\frac{j-h}{2}\rfloor }\Lambda_{h}^{h,n}+\Lambda_{0}^{j-h,n-h}\right]+A_{h}^{k,n}\Lambda_{h}^{h,n}\] \[=\left[\sum_{j=h}^{k}(-1)^{\lfloor\frac{j-h+1}{2}\rfloor}\binom{ n-j+\lfloor\frac{j-h}{2}\rfloor}{\lfloor\frac{j-h}{2}\rfloor}A_{j}^{k,n} \right]\Lambda_{h}^{h,n}+\sum_{j=h+1}^{k}A_{j}^{k,n}\Lambda_{0}^{j-h,n-h}\]
using Equation (5.15) and setting \(m=j-h\) we have
\[=\sum_{m=1}^{k-h}A_{m+h}^{k,n}\Lambda_{0}^{m,n-h}\]
and now by Lemma 5.12
\[=\sum_{m=1}^{k-h}A_{m}^{k-h,n-h}\Lambda_{0}^{m,n-h}=\Gamma_{0}^{k-h,n-h}.\]
_Remark 5.14_.: By Equation (5.11) we know that \(\Lambda_{0}^{2,n}=-p(n,q,t)+\Lambda_{0}^{2,n-1}\). Moreover, observe that \(\Lambda_{0}^{1,n}=\Lambda_{0}^{1,n-1}\) and \(A_{1}^{2,n}=A_{1}^{2,n-1}=1\). We consequently obtain
\[\Gamma_{0}^{2,n}=\Lambda_{0}^{2,n}+\Lambda_{0}^{1,n}=-p(n,q,t)+\Lambda_{0}^{2, n-1}+\Lambda_{0}^{1,n-1}=-p(n,q,t)+\Gamma_{0}^{2,n-1}\]
**Proposition 5.15**.: _If \(k>2\), the following relations between the coefficients \(\Gamma_{0}^{k,n}\) hold:_
\[\Gamma_{0}^{k,k}=\Gamma_{0}^{k-1,k-1}+\Gamma_{0}^{k-2,k-1}-\Gamma_{0}^{k-2,k-2}\]
\[\Gamma_{0}^{k,n}=\Gamma_{0}^{k,n-1}-\Gamma_{0}^{k-2,n-2}+\Gamma_{0}^{k-2,n-1} \qquad\text{ for }k<n\]
Proof.: We consider \(\Gamma_{0}^{k,n}=\sum_{j=1}^{k}A_{j}^{k,n}\Lambda_{0}^{j,n}\) and expand \(\Lambda_{0}^{j,n}\) according to Lemma 5.9. We obtain
\[\Gamma_{0}^{k,n} =\sum_{j=2}^{k}A_{j}^{k,n}\left[(-1)^{\lfloor\frac{j+1}{2}\rfloor} \binom{n-\lfloor\frac{j+1}{2}\rfloor-1}{\lfloor\frac{j}{2}\rfloor-1}p(n,q,t)- \Lambda_{0}^{j-2,n-2}+\Lambda_{0}^{j,n-1}\right]+A_{1}^{k,n}\Lambda_{0}^{1,n}\] \[=\left[\sum_{j=2}^{k}(-1)^{\lfloor\frac{j+1}{2}\rfloor}\binom{n- \lfloor\frac{j+1}{2}\rfloor-1}{\lfloor\frac{j}{2}\rfloor-1}A_{j}^{k,n} \right]p(n,q,t)-\sum_{j=3}^{k}A_{j}^{k,n}\Lambda_{0}^{j-2,n-2}+\sum_{j=1}^{k}A _{j}^{k,n}\Lambda_{0}^{j,n-1}\]
Observe now that Equation (5.15) implies
\[\sum_{j=2}^{k}(-1)^{\lfloor\frac{j+1}{2}\rfloor}\binom{n-\lfloor\frac{j+1}{2} \rfloor-1}{\lfloor\frac{j}{2}\rfloor-1}A_{j}^{k,n}=0.\]
Furthermore, using Lemma 5.9 and setting \(m=j-2\) we have
\[\Gamma_{0}^{k,n} =\sum_{j=1}^{k}A_{j}^{k,n}\Lambda_{0}^{j,n-1}-\sum_{j=3}^{k}A_{j} ^{k,n}\Lambda_{0}^{j-2,n-2}\] \[=\sum_{j=1}^{k}\left[A_{j}^{k,n-1}+A_{j}^{k-2,n-1}\right]\Lambda_ {0}^{j,n-1}-\sum_{m=1}^{k-2}A_{m+2}^{k,n}\Lambda_{0}^{m,n-2}\] \[=\sum_{j=1}^{k}A_{j}^{k,n-1}\Lambda_{0}^{j,n-1}+\sum_{j=1}^{k-2}A _{j}^{k-2,n-1}\Lambda_{0}^{j,n-1}-\sum_{m=1}^{k-2}A_{m}^{k-2,n-2}\Lambda_{0}^{ m,n-2}\] \[=\Gamma_{0}^{k,n-1}+\Gamma_{0}^{k-2,n-1}-\Gamma_{0}^{k-2,n-2}.\]
and analogously
\[\Gamma_{0}^{k,k} =\sum_{j=2}^{k}A_{j}^{k,k}\left[(-1)^{\lfloor\frac{j+1}{2}\rfloor }\binom{k-\lfloor\frac{j+1}{2}\rfloor-1}{\lfloor\frac{j}{2}\rfloor-1}p(k,q,t )-\Lambda_{0}^{j-2,k-2}+\Lambda_{0}^{j,k-1}\right]+A_{1}^{k,k}\Lambda_{0}^{1,k}\] \[=\left[\sum_{j=2}^{k}(-1)^{\lfloor\frac{j+1}{2}\rfloor}\binom{k- \lfloor\frac{j+1}{2}\rfloor-1}{\lfloor\frac{j}{2}\rfloor-1}A_{j}^{k,k}\right]p (k,q,t)-\sum_{j=3}^{k}A_{j}^{k,k}\Lambda_{0}^{j-2,k-2}+\sum_{j=1}^{k-1}A_{j}^{ k,k}\Lambda_{0}^{j,k-1}\] \[=\sum_{j=1}^{k-1}\left[A_{j}^{k-1,k-1}+A_{j}^{k-2,k-1}\right] \Lambda_{0}^{j,k-1}-\sum_{m=1}^{k-2}A_{m}^{k-2,k-2}\Lambda_{0}^{m,k-2}\] \[=\Gamma_{0}^{k-1,k-1}+\Gamma_{0}^{k-2,k-1}-\Gamma_{0}^{k-2,k-2}.\]
Making explicit computations for \(n=2,3\), it is possible to prove that \(\Gamma_{0}^{2,2}=\Gamma_{0}^{3,3}\). Moreover observe that \(\Gamma_{0}^{1,n}=\Gamma_{0}^{1,n+1}\) for every \(n>1\). As a consequence of Proposition 5.15 we obtain:
**Corollary 5.16**.: _The following relations hold:_
\[\Gamma_{0}^{k,k}=\begin{cases}\Gamma_{0}^{k-2,k-1}&\text{ if $k$ is even,}\\ \Gamma_{0}^{k-1,k-1}&\text{ if $k$ is odd.}\end{cases}\qquad\Gamma_{0}^{k,n}= \begin{cases}\Gamma_{0}^{k,n-1}&\text{ if $n>k>2$ and $k$ is odd,}\\ \Gamma_{0}^{k-2,n-1}&\text{ if $n>k>2$ and $k$ is even.}\end{cases}\]
Theorem 5.10 follows directly from Remark 5.14 by iterating the relations of Corollary 5.16.
### Proof of Formulae in Type D
We denote by \(C_{h}(q,t)\) the formal series \(C_{\omega_{2h}}(q,t)\). Moreover, if \(n=2k+1\) (resp. \(n=2k\)) we denote by \(C_{k}(q,t)\) the formal series \(C_{\omega_{n-1}+\omega_{n}}(q,t)\) (resp. \(C_{2\omega_{n}}(q,t)\)). Our formulae can be obtained by dealing with the non specialized version of Equation 4.4 of [17]. The reduced recurrence \(R_{k}\) for \(C_{k}(q,t)\) can be written in the form
\[R_{k}=\sum_{h\leq k}\Lambda_{h}^{k,n}(q,t)C_{h}(q,t)=0. \tag{5.17}\]
for certain coefficients \(\Lambda_{h}^{k,n}(q,t)\). Reasoning as in Remark 5.7, a non specialized analogue of Formula 4.5 in [17] can be achieved:
\[\Lambda_{k}^{k,\,n}(q,t)=\begin{cases}\frac{2(t^{2k}-1)(1-qt^{2k-1})}{t^{2k-1} (t-1)}&\text{if $n=2k$,}\\ \frac{(t^{2k}-1)(1-qt^{2(n-k)-1})}{t^{n-1}(t-1)}&\text{otherwise.}\end{cases}\]
Set now
\[b_{k,n}=\begin{cases}\ \frac{(t-q)(t^{2k}-1)}{t^{k}(t-1)}&\text{if $n=2k$,}\\ \ \frac{(t-q)(t^{n}-1)(t^{n-2k}+1)}{t^{n-k}(t-1)}&\text{otherwise.}\end{cases}\]
The next Proposition is a non specialized version of Proposition 4.6 of [17].
**Proposition 5.17**.: _The following recursive relation hold:_
\[\Lambda_{k}^{k,n}(q,t)C_{k}(q,t)-\sum_{i=1}^{k}b_{i,n-2(k-i)}C_{k-i}(q,t)=0 \tag{5.18}\]
Using Equation (5.18) and Remark 5.4 as base step, it is possible to prove inductively Theorem 5.3.
Proof.: (of Theorem 5.3) As observed in Remark 5.11, the formal series \(\overline{C}_{k}(q,t)=C_{k}(q,t)/C_{0}(q,t)\) satisfies the Recurrence (5.18). Specializing Equation (5.18) at \(q=0\) and recalling that \(E_{\omega_{2k}}(t)=\overline{C}_{k}(0,t)\), Formulae (5.3), (5.4) and (5.5) can be obtained recursively proving that
\[\frac{(t^{2k}-1)}{t^{n-1}(t-1)}\frac{t^{k}(t^{n-2k}+1)}{(t^{n}+1 )}\binom{n}{k}_{t^{2}} =\frac{(t^{2k}-1)}{t^{n-1}(t-1)}\overline{C}_{k}(0,t)\] \[=\sum_{i=1}^{k-1}\frac{t^{k+i}(t^{n-2i}-1)(t^{n-2k}+1)}{t^{n-1}( t-1)}\overline{C}_{i}(0,t)\] \[=\sum_{i=1}^{k-1}\frac{t^{k+i}(t^{n-2i}-1)(t^{n-2k}+1)}{t^{n-1}(t -1)}\frac{t^{i}(t^{n-2i}+1)}{(t^{n}+1)}\binom{n}{i}_{t^{2}}\] \[=\frac{t^{k}(t^{n-2k}+1)}{t^{n-1}(t-1)(t^{n}+1)}\sum_{i=1}^{k-1}t ^{2i}(t^{2(n-2i)}-1)\binom{n}{i}_{t^{2}}.\]
Again we are reduced to Identity (5.14), which we proved in Section 5.2.
### Proof of Proposition 5.17
The proof is analogous to the proof of Proposition 4.6 contained in Section 5 of [17]. Set
\[r(n,q,t)=t^{(n-1)}-qt^{-(n-1)}+t^{-(n-2)}-qt^{(n-2)}=\frac{(t-q)\left(t^{2n-3} +1\right)}{t^{n-1}}\]
We denote by \(\Omega_{0}^{\lambda,\,n}\) the set of weights of the form \(w\lambda\) conjugated to \(0\). If \(\lambda=\omega_{2k}\) we will use the notation \(\Omega_{0}^{k,\,n}\). Coherently with our previous notations, if \(n=2k+1\) (resp. \(n=2k\)) we denote by \(\Omega_{0}^{k,\,n}\) the set of weights of the form \(w(\omega_{n-1}+\omega_{n})\) (resp. \(w(2\omega_{n})\)) conjugated to \(0\). We recall the following results from [17], Section 4.1:
_Remark 5.18_ ([17], Remark 4.3).: The weights giving non zero contribution to \(\Lambda_{h}^{k,n}\), \(k>h>0\), are of the form \(e_{1}+\cdots+e_{2h}+\nu\), where \(\nu\) has the first \(2h\) coordinates equal to \(0\). Considering the immersion \(D_{n-2h}\to D_{n}\) induced by the Dynkin diagrams, \(\nu\) can then be identified with a weight in \(\Omega_{0}^{k-h,n-2h}\).
**Lemma 5.19** ([17], Lemma 4.5).: _Set \(\lambda=\omega_{2k}\), \(2k<n\) or \(\lambda=\omega_{n-1}+\omega_{n}\), \(n=2k+1\) and let \(w\in W\) be such that \(w\lambda\) is conjugated to \(0\). Then \(w\lambda\) is of one of the following form:_
1. _The_ \(2k\) _non zero coordinates of_ \(w\lambda\) _are pair of consecutive coordinates_ \(((w\lambda)_{(j)},(w\lambda)_{(j)+1})\) _of the form_ \((-1,1)\)_._
2. _There are_ \(2(k-1)\) _non zero coordinates that are pair of consecutive coordinates_ \(((w\lambda)_{(j)},(w\lambda)_{(j)+1})\) _of the form_ \((-1,1)\) _and the latter two are equal to_ \(-1\)_._
_In both cases there exists an element \(\sigma\in W\) of length \(l(\sigma)=k\) such that \(\sigma(w\lambda+\rho)=\rho\)._
_Remark 5.20_.: Consider \(\mu\in\Omega_{0}^{k,\,n}\) and denote by \(\epsilon_{n}\) the sign change on the \(n\)-th coordinate.
* _If_ \(n=2k\) then \(\mu\) must be of the form \(-e_{1}+e_{2}+\nu\) with \(\nu\in\Omega_{0}^{k-1,2k-2}\),
* _If_ \(n=2k+1\) then \(\mu\) must be of the form \(-e_{1}+e_{2}+\nu\) with \(\nu\in\Omega_{0}^{k-1,2k-1}\) or \(\mu=(0,\mu_{2},\ldots,\mu_{n})\) where \(\mu^{\prime}=(\mu_{2},\ldots,\mu_{n})\in\Omega_{0}^{k,2k}\) or \(\epsilon_{n}\mu^{\prime}\in\Omega_{0}^{k,2k}\),
* _If_ \(n\neq 2k,2k+1\) then \(\mu\) must be of the form \(-e_{1}+e_{2}+\nu\) with \(\nu\in\Omega_{0}^{k-1,n-2}\) or \(\mu=(0,\mu_{2},\ldots,\mu_{n})\) where \(\mu^{\prime}=(\mu_{2},\ldots,\mu_{n})\in\Omega_{0}^{k,n-1}\).
The above considerations lead to non specialized versions of recursive relations (4.6), (4.7), (4.8) and (4.9) in [17]:
\[\Lambda_{h}^{k,\,n}(q,t)=(-1)^{k-h}\Lambda_{h}^{h,n}(q,t)|\Omega_{0}^{k-h,n-2h }|+\Lambda_{0}^{k-h,\,n-2h}(q,t). \tag{5.19}\]
\[\Lambda_{0}^{k,2k}(q,t)=(-1)^{k}\sum_{i=1}^{k}r(2i,q,t)=(-1)^{k}r(2k,q,t)- \Lambda_{0}^{k-1,2k-2}(q,t). \tag{5.20}\]
\[\Lambda_{0}^{k,\,2k+1}(q,t)=(-1)^{k}r(2k+1,q,t)|\Omega_{0}^{k-1,2k-1}|-\Lambda_{0 }^{k-1,2k-1}(q,t)+2\Lambda_{0}^{k,2k}(q,t), \tag{5.21}\]
\[\Lambda_{0}^{k,\,n}(q,t)=(-1)^{k}r(n,q,t)|\Omega_{0}^{k-1,n-2}|-\Lambda_{0}^{k -1,n-2}(q,t)+\Lambda_{0}^{k,n-1}(q,t). \tag{5.22}\]
where
\[|\Omega_{0}^{k,n}|=\left\{\begin{array}{ll}1&\mbox{if $\lambda=2\omega_{n}$ or if $\lambda=0$}\\ \frac{n}{k}\binom{n-k-1}{k-1}&\mbox{if $\lambda=\omega_{2k}$ and $2k<n$ or $\lambda=\omega_{n-1}+\omega_{n}$ and $n=2k+1$.}\end{array}\right.\]
As in Section 5 of [17], define a family of integers \(A_{h}^{k,n}\) in the following way:
\[A_{h}^{k,n}=\left\{\begin{array}{ll}0&\mbox{if $h>k$ or $h\leq 0$,}\\ 1&\mbox{if $h=k$,}\\ -\sum_{i=h+1}^{k}(-1)^{i-h}|\Omega_{0}^{i-h,n-2h}|A_{i}^{k,n}&\mbox{otherwise.} \end{array}\right. \tag{5.23}\]
and consider
\[\sum_{i=1}^{k}A_{i}^{k,n}R_{i}=\Lambda_{k}^{k,n}(q,t)C_{k}(q,t)-\sum_{i=0}^{k -1}\Gamma_{i}^{k,n}(q,t)C_{i}(q,t). \tag{5.24}\]
Performing the same computation as in Proposition 5.2 of [17] it is possible to prove that \(\Gamma_{h}^{k,n}(q,t)=\Gamma_{0}^{k-h,n-2h}(q,t)\) if \(k>h>0\). Analogously the following relations hold:
\[\Gamma_{0}^{k,2k}(q,t)=-\sum_{j=2}^{k+1}r(j,q,t),\qquad\Gamma_{0}^{k,2k+1}(q,t) =2\Gamma_{0}^{k,2k}(q,t)-r(k+2,q,t),\]
\[\Gamma_{0}^{k,n}(q,t)=\Gamma_{0}^{k,n-1}(q,t)-r(n-k+1,q,t).\]
Now it is straightforward to show that \(\Gamma_{0}^{k,n}(q,t)=b_{k,n}\) and then \(\Gamma_{0}^{k-h,n-2h}(q,t)=b_{k-h,n-2h}\).
### Open Questions about Generalized Exponents and Small Representations
Some natural questions arise as consequences of our results. Firstly, as a consequence of Theorem 2.3, generalized exponents of small representations are related to the so called _fake degrees_, i.e. degrees of generators (as \(S(\mathfrak{h})^{W}\)-module) of isotypic components of \(W\)-representations in \(S(\mathfrak{h})\). There exists ample literature about fake degrees and many formulae to obtain them in terms of suitable combinatorial statistics (see [33] for a complete survey about the topic and [2], [7], [8], [19], [28], [37], [41] for more specific results). It could be interesting to find a purely combinatorial proof of Formulae (5.1), (5.2), (5.3), (5.4) and (5.5).
Moreover, a closer analysis of the formulae proved in [16] and [17] for graded multiplicities of small representations in the exterior algebra underlines some similarities with the results contained in [13] and [15]. In fact, Theorem 2.6 and its analogous version for the little adjoint representation are suggested by the following factorizations of \(P(\mathfrak{g},\Lambda\mathfrak{g},q)\) and \(P(V_{\theta_{s}},\Lambda\mathfrak{g},q)\):
\[P(\mathfrak{g},\Lambda\mathfrak{g},q)=(1+q^{-1})\prod_{i=1}^{n-1}(q^{2e_{i}+1 }+1)E_{\theta}(q^{2}) \tag{5.25}\]
\[P(V_{\theta_{s}},\Lambda\mathfrak{g},q)=(1+q^{-1})\prod_{i=1}^{n-1}(q^{2e_{i} +1}+1)E_{\theta_{s}}(q^{2}) \tag{5.26}\]
In particular, the authors of [13] and [15] noticed that the factor \(\prod_{i=1}^{n-1}(q^{2e_{i}+1}+1)\) is the Poincaré polynomial of the exterior algebra over the first \(n-1\) generators \(P_{1},\ldots,P_{n-1}\) of the algebra of invariants in \(\Lambda\mathfrak{g}\). A direct computation shows that similar factorizations can be achieved for polynomials of graded multiplicities of certain small representations. As an example, comparing the results exposed in Theorem 5.2 with the formulae proved in [16], in type \(B_{n}\) the polynomials for graded multiplicities can be rearranged as
\[P(V_{\omega_{2s}},\Lambda\mathfrak{g},q)=(1+q^{-1})\prod_{i=1}^{n-s}(1+q^{2e_ {i}+1})\prod_{i=1}^{s-1}(1+q^{2e_{i}+1})E_{\omega_{2s}}(q^{2}),\]
\[P(V_{\omega_{2s+1}},\Lambda\mathfrak{g},q)=(1+q^{-1})\prod_{i=1}^{s}(1+q^{2e_{ i}+1})\prod_{i=1}^{n-s-1}(1+q^{2e_{i}+1})E_{\omega_{2s+1}}(q^{2}).\]
Analogously, using Theorem 5.1, in type \(C_{n}\) it is possible to obtain the factorization:
\[P(V_{\omega_{2k}},\Lambda\mathfrak{g},q)=(1+q^{-1})\prod_{i=1}^{n-k}(q^{2e_{i} +1}+1)\prod_{i=1}^{k-1}(q^{2e_{i}+1}+1)E_{\omega_{2k}}(q^{2}).\]
Consequently, it is natural to ask if there exist examples of small representations \(V_{\lambda}\), different from \(V_{\theta}\) and \(V_{\theta_{s}}\), such that the module \(\operatorname{Hom}_{\mathfrak{g}}(V_{\lambda},\Lambda\mathfrak{g})\) has a structure of free module over a suitable exterior algebra of invariants, with degrees prescribed by factorizations of \(P(V_{\lambda},\Lambda\mathfrak{g},q)\) similar to the ones in Formulae (5.25) and (5.26). |
2310.00456 | Quantum Materials Group Annual Report 2022 | The Quantum Materials group at Indian Institute of Technology Patna is
working on a range of topics relating to nanoelectronics, spintronics, clean
energy and memory design etc. The PI has past experiences of working
extensively with superconducting systems like cuprates [1, 2], ruthenate [3],
pnictide [4, 5], thin film heterostructures [6, 7] etc and magnetic recording
media [8, 9] etc. In this report, we have summarised the ongoing works in our
group. We explored a range of functional materials like two-dimensional
materials, oxides, topological insulators, organic materials etc. using a
combination of experimental and computational tools. Some of the useful
highlights are as follows: (a) tuning and control of the magnetic and
electronic state of 2D magnetic materials with rapid enhancement in the Curie
temperature, (b) Design and detection of single electron transistor based
nanosensors for the detection of biological species with single molecular
resolution, (c) Observation of non-volatile memory behaviour in the hybrid
structures made of perovskite materials and 2D hybrids. The results offer
useful insight in the design of nanoelectronic architectures for diverse
applications. | P. Kumari, S. Rani, S. Kar, T. Mukherjee, S. Majumder, K. Kumari, S. J. Ray | 2023-09-30T18:30:47Z | http://arxiv.org/abs/2310.00456v2 | # Quantum Materials Group Annual Report 2022
###### Abstract
The Quantum Materials group at Indian Institute of Technology Patna is working on a range of topics relating to nanoelectronics, spintronics, clean energy and memory design etc. The PI has past experiences of working extensively with superconducting systems like cuprates [1; 2], ruthenate [3], pnictide [4; 5], thin film heterostructures [6; 7] etc. and magnetic recording media [8; 9]. In this report, we have summarised the ongoing works in our group. We explored a range of functional materials like two-dimensional materials, oxides, topological insulators, organic materials etc. using a combination of experimental and computational tools. Some of the useful highlights are as follows: (a) tuning and control of the magnetic and electronic state of 2D magnetic materials with rapid enhancement in the Curie temperature, (b) design of single electron transistor based nanosensors for the detection of biological species with single molecular resolution, (c) observation of non-volatile memory behaviour in hybrid structures made of perovskite materials and 2D hybrids. The results offer useful insight in the design of nanoelectronic architectures for diverse applications.
## I Nanosensing
The discovery of 2D materials has opened up various areas for nanoelectronic applications. A single electron transistor (SET) is a source-quantum dot-drain structure where the quantised conduction through the dot can be controlled by a capacitively coupled gate. The quantised energy levels in the quantum dot (QD) give rise to well defined conductance plateaus, with their positions changing at specific source-drain bias, leading to a Coulomb blockade state.
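To make the blockade condition concrete, the following is a minimal sketch of the constant-interaction (orthodox) model for a SET island; the capacitance values are illustrative assumptions rather than parameters of any device studied in our work.

```python
# A toy evaluation of the constant-interaction (orthodox) model for a SET;
# the capacitances below are illustrative assumptions, not device parameters.
E_CHARGE = 1.602e-19  # elementary charge in coulombs

def island_potential(N, Vg, C_s, C_d, C_g):
    """Electrochemical potential (J) for adding the N-th electron to the island."""
    C_sigma = C_s + C_d + C_g
    return (N - 0.5) * E_CHARGE**2 / C_sigma - E_CHARGE * C_g * Vg / C_sigma

def in_blockade(N, Vg, Vsd, C_s=1e-18, C_d=1e-18, C_g=2e-18):
    """True when mu(N) lies outside the source-drain bias window, i.e. the
    dot level cannot align with the leads and sequential tunneling is blocked."""
    mu = island_potential(N, Vg, C_s, C_d, C_g)
    mu_source, mu_drain = 0.0, -E_CHARGE * Vsd
    return not (min(mu_source, mu_drain) <= mu <= max(mu_source, mu_drain))
```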
Due to their larger surface-to-volume ratio, 2D materials have attracted significant attention for sensing applications, which is reflected in the drastic change of their resistance on the adsorption of an alien molecule. In the close proximity of an external molecule, the charge carrier concentration in a 2D material changes significantly. This property has been exploited to use 2D layered materials like graphene [10], borophene [11] and MoS\({}_{2}\) for sensing applications. Depending upon the sign of the charge transfer between the molecule and the 2D material, the molecule near the layered material can work as a donor or acceptor. Owing to the 2D nature of such materials, the effect of surface dopants is extremely pronounced, which has been demonstrated in achieving extreme detection efficiency up to 1 ppb.
In our recent work, we have developed various single electron transistor (SET) devices with 2D materials as the island and studied their usefulness as nanosensors. DNA and RNA detection were studied for the case of graphene and hexagonal boron nitride based SETs [12; 13]. It was observed that the addition of a secondary gate electrode offers better control of the detection efficiency [14]. Earlier, we designed various SET devices for chemical detection using a variety of layers like graphene, MoS\({}_{2}\)[15], phosphorene [16], C\({}_{3}\)N etc [17] and in various engineered nanostructures [18; 19; 20; 21; 22; 23; 24; 25]. The biosensing behaviour of a C\({}_{3}\)N nanoribbon can be found in [26; 27], and proximity-induced colossal conductivity modulation was observed in phosphorene towards the detection of organic molecules [28]. Silicon based single electron devices have been used for electron pumping operation, which is useful for quantum metrology [29; 30; 31; 32; 33].
## II Spintronics
The discovery of atomically thin two dimensional materials has brought a paradigm shift in the development of 2D nanoelectronic and spintronic devices. Today, a wide variety of such materials ranging from graphene, which is a semi-metal, to semiconductors and insulators have emerged with unique electronic properties. Specifically, graphene and other 2D crystals have already shown their utility in spintronic applications [34; 35; 36; 37; 38]. A recent entry into this area has been the class of 2D magnets, which defied the predictions of the Mermin-Wagner theorem, stating that finite temperature thermal excitations are enough to destroy magnetic order in two dimensions. Despite this, the feasibility of 2D magnets was shown by first-principles calculations in several 2D materials, which can be considered as ground states at absolute zero temperature. Several recent experiments showed the existence of 2D magnetism in predicted materials such as CrI\({}_{3}\), VSe\({}_{2}\) and Cr\({}_{2}\)Ge\({}_{2}\)Te\({}_{6}\). This validity has encouraged exploring the feasibility of magnetic phases in more 2D crystals. However, the majority of the predicted materials exhibit ordering temperatures much below room temperature, and a key challenge today is to find ways to increase the ordering temperature above room temperature and engineer phases locally. In particular, such control is necessary for device realisation and applicability of the magnetic phases in spintronic and magneto-electronic switches. The conventional strategy involves chemical doping and defect generation in 2D crystals for creating local magnetic moments [39; 40; 41]. However, to dynamically establish and control magnetism in materials, it is essential to use physical stimuli such as electric field or strain that can be locally applied at a nanoscopic device scale [42].
In this direction, we have recently uncovered the family of 2D transition metal oxychlorides with the general formula MOCl (M = Ti, V, Fe, Cr). This type of magnetic material, while being a semiconductor, is promising for integration into electronic circuits. However, like other 2D semiconductor magnets, ferromagnetism in CrOCl only exists at low temperatures, much below 200 K. This makes it highly interesting to devise new ways of enhancing the interatomic exchange interaction for designing and tuning magnetic phases. While electric field control of the carrier concentration in graphene and other 2D crystals is one way to tune the interatomic exchange, the exceptional resilience of such crystals to strain also provides novel ways to engineer desired electronic structures and enable flexible
2D spintronic devices. For CrOCl, we probe the combined effect of two stimuli, through extensive ab-initio calculations and Monte-Carlo simulations of the 2D Ising model, to uncover the magnetic response of CrOCl. First, we establish how the application of uniaxial and biaxial strain can lead to room temperature ferromagnetism and the occurrence of a well-defined phase transition. Next, we show how coupling strain with electric field leads to high temperature magnetic ordering, increasing the T\({}_{c}\) to almost three times the intrinsic T\({}_{c}\) of CrOCl [43; 44; 81]. Further, we have observed spin-selective transport behaviour offering significant spin injection efficiency tunable with applied strain [46]. While extending this work to another member of this family, CrOBr, we observed similar tunability in the electronic and magnetic phases as well as in the spin-transport behaviour [47]. We reported intrinsic magnetism in several new members of the TMXYZ family, where TM = transition metal and X, Y, Z = Cl, Br, I. It includes robust half-metallicity in two-dimensional VClI\({}_{2}\)[48], the ferromagnetic semiconductor VClBr\({}_{2}\)[49], Janus monolayers [50; 51], the two-dimensional magnetic semiconductor VIBr\({}_{2}\)[52] etc. The stimuli-assisted tuning and control of magnetic and electronic properties were also studied for Cr\({}_{2}\)Ge\({}_{2}\)Se\({}_{6}\)[53; 54; 55] and Cr\({}_{2}\)Ge\({}_{2}\)Te\({}_{6}\)[56].
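As an illustration of the simulation step mentioned above, a minimal sketch of a Metropolis Monte-Carlo run for the 2D Ising model is given below; the lattice size, sweep counts and J = 1 units are illustrative assumptions and do not reproduce the DFT-parametrized exchange couplings used in our studies.

```python
# A minimal sketch of a Metropolis Monte-Carlo run for the 2D Ising model;
# lattice size, sweeps and J = 1 units are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta, J=1.0):
    """One sweep of single-spin-flip Metropolis updates, periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def magnetisation_vs_T(L=16, temps=np.linspace(1.0, 4.0, 13),
                       n_equil=400, n_meas=400):
    """|M| per site versus T; the drop locates T_c (~2.269 J/k_B for this model)."""
    curve = []
    for T in temps:
        spins = np.ones((L, L))
        for _ in range(n_equil):
            metropolis_sweep(spins, 1.0 / T)
        samples = []
        for _ in range(n_meas):
            metropolis_sweep(spins, 1.0 / T)
            samples.append(abs(spins.mean()))
        curve.append((float(T), float(np.mean(samples))))
    return curve
```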
## III Twistronics
The recent discovery of superconductivity in twisted bilayer graphene has fuelled interest in understanding the role of an interlayer twist in controlling the material properties. When two neighbouring layers of a 2D system are rotated with respect to each other, strong electron-correlation effects can arise, enabling the demonstration of novel quantum effects. In this direction, we have studied several interesting systems to understand the role of interlayer twist on the electronic, magnetic and optical properties of various van der Waals heterostructures. Twist-assisted tunability and enhanced ferromagnetism were reported in a 2D van der Waals heterostructure made up of CrI\({}_{3}\) and graphene [57], while proximity-induced exchange coupling in a phosphorene heterojunction was reported in the presence of a CrI\({}_{3}\) substrate [58]. Interesting effects were observed when a 2D magnet interface was formed with a magnetic topological insulator like MnBi\({}_{2}\)Te\({}_{4}\), where topological features and Dirac-to-Weyl band conversion were observed [59]. Twist-assisted optoelectronic phase control was reported in two-dimensional (2D) Janus heterostructures that exhibit a direct bandgap with type-II band alignment at specific twist angles, which shows potential for
future photovoltaic devices [60; 61]. Similar studies were also performed on different TMD heterostructures like the MoS\({}_{2}\)/WS\({}_{2}\) heterostructure [63; 64] and the MoS\({}_{2}\)/MoSe\({}_{2}\) heterostructure [65].
## IV Energy storage
The effect of global warming is very prominent and the focus is towards green energy. In this direction, several technologies are in focus, like photovoltaics, thermoelectrics, battery storage etc. The use of 2D materials and their heterostructures can have useful advantages due to their larger areal coverage in a smaller volume. We have studied the MnO\({}_{2}\)/CoO\({}_{2}\) heterostructure as a promising cathode material for Li- and Na-ion batteries [66]. Ultralow lattice thermal conductivity and thermoelectric performance were reported for the twisted Graphene/Boron Nitride heterostructure through strain engineering [67; 68] and for two-dimensional KCuX (X = S, Se) [69].
## V Resistive switching and non-volatile memory
In the present era, the demand for non-volatile memory (NVM) is continuously increasing because of its use in portable gadgets and consumer electronics like laptops, digital cameras, USB storage devices, and smart phones. A variety of NVM candidates have emerged, such as Flash memory, phase change memory (PCM), ferroelectric random access memory (FeRAM), magnetic random access memory (MRAM), and resistive random access memory (RRAM). Among them, RRAM is the most promising candidate for future memory design due to its simple structure, high endurance, good retention, low operating voltage, low power consumption and multi-functionalities like complementary (CRS), unipolar (URS) and bipolar resistive switching (BRS).
The choice of material in such structures is crucial, as it determines the operational limitations in terms of durability, retention, reliability and switching power. Till now, a variety of materials like binary transition metal oxides, organic materials, graphene derivatives, chalcogenides, complex compositional materials and polymers have been used for the resistive switching (RS) purpose. Out of these, transition metal oxide-based devices have attracted significant attention due to their easy fabrication technique and wide range of electrical properties. Among metal oxides, ZnO has an advantage in terms of
cost and stability, offering a wide and direct band gap, controllable electrical behaviour, different morphologies and an environmentally friendly nature, which we have studied in the presence of an oxide electrode [70]. Another approach to improving the switching behaviour is through the formation of hybrid structures made of various 2D materials and perovskite oxides, where we have studied a range of perovskite oxides like LSMO, LBMO, LCMO etc. in the presence of 2D materials like rGO, CuI, Graphene, ZnO etc [71; 72; 73; 74; 75; 76; 77]. To understand the effect of temperature on switching, dedicated studies were performed [78; 79; 80]. The addition of biological materials as a switching medium also offered useful insight [81; 82]. Recently, we have started looking at supramolecular gel materials for non-volatile memory applications, which are found to offer interesting switching properties [83; 84].
|
2303.07201 | An evaluation of Google Translate for Sanskrit to English translation
via sentiment and semantic analysis | Google Translate has been prominent for language translation; however,
limited work has been done in evaluating the quality of translation when
compared to human experts. Sanskrit is one of the oldest written languages in the
world. In 2022, the Sanskrit language was added to the Google Translate engine.
Sanskrit is known as the mother of languages such as Hindi and an ancient
source of the Indo-European group of languages. Sanskrit is the original
language for sacred Hindu texts such as the Bhagavad Gita. In this study, we
present a framework that evaluates the Google Translate for Sanskrit using the
Bhagavad Gita. We first publish a translation of the Bhagavad Gita in Sanskrit
using Google Translate. Our framework then compares Google Translate version of
Bhagavad Gita with expert translations using sentiment and semantic analysis
via BERT-based language models. Our results indicate that in terms of sentiment
and semantic analysis, there is low level of similarity in selected verses of
Google Translate when compared to expert translations. In the qualitative
evaluation, we find that Google Translate is unsuitable for translation of
certain Sanskrit words and phrases due to its poetic nature, contextual
significance, metaphor and imagery. The mistranslations are not surprising
since the Bhagavad Gita is known as a difficult text not only to translate, but
also to interpret since it relies on contextual, philosophical and historical
information. Our framework lays the foundation for automatic evaluation of
other languages by Google Translate. | Akshat Shukla, Chaarvi Bansal, Sushrut Badhe, Mukul Ranjan, Rohitash Chandra | 2023-02-28T04:24:55Z | http://arxiv.org/abs/2303.07201v1 | An evaluation of Google Translate for Sanskrit to English translation via sentiment and semantic analysis
###### Abstract
Google Translate has been prominent for language translation; however, limited work has been done in evaluating the quality of translation when compared to human experts. Sanskrit is one of the oldest written languages in the world. In 2022, the Sanskrit language was added to the Google Translate engine. Sanskrit is known as the mother of languages such as Hindi and an ancient source of the Indo-European group of languages. Sanskrit is the original language for sacred Hindu texts such as the Bhagavad Gita. In this study, we present a framework that evaluates Google Translate for Sanskrit using the Bhagavad Gita. We first publish a translation of the Bhagavad Gita in Sanskrit using Google Translate. Our framework then compares the Google Translate version of the Bhagavad Gita with expert translations using sentiment and semantic analysis via BERT-based language models. Our results indicate that in terms of sentiment and semantic analysis, there is a low level of similarity in selected verses of Google Translate when compared to expert translations. In the qualitative evaluation, we find that Google Translate is unsuitable for translation of certain Sanskrit words and phrases due to their poetic nature, contextual significance, metaphor and imagery. The mistranslations are not surprising since the Bhagavad Gita is known as a difficult text not only to translate, but also to interpret since it relies on contextual, philosophical and historical information. Our framework lays the foundation for automatic evaluation of other languages by Google Translate.
keywords: Natural Language Processing, Language Translator Models, Sanskrit Translations, Google Translate, Semantic Analysis, Sentiment Analysis, Hindu Texts
## 1 Introduction
Deep learning methods have proven to be powerful in handling data in different formats such as numerical, textual, video, audio, and image in large volumes [1]. Natural Language Processing (NLP) [2] is a field of artificial intelligence that empowers machines to process, interpret and understand text and language just as humans do. In the past, NLP found numerous applications in the field of text processing such as sentiment analysis [3; 4], topic modeling [5; 6], speech translation [7; 8], named entity recognition [9; 10], etc. NLP combines the field of computational linguistics with deep learning, statistics, and machine learning [11]. In the last decade, a variety of deep learning models have been applied for NLP that have boosted the field with a number of innovations [12]. Semantic and sentiment analysis are two of the most prominent text processing applications given their applications in social media and marketing. It has been shown that sentiment analysis can also be used for predictive modelling of election outcomes, as demonstrated for the US 2020 general elections [13].
Language translation models use computer systems to translate text in a source language to an equivalent text in the target language [14]. An efficient translation model is a key to many trans-lingual applications [15], cross-language information retrieval [16], computer-assisted language learning [17], etc. In the past, numerous systems have been proposed that either improve the quality of the generated translations [18] or study the robustness of these systems by evaluating their performance for different target languages [19]. Neural machine translation (NMT) [20; 21] uses a recurrent neural network (RNN) model to predict the likelihood of a sequence of words. It typically models an entire sentence in a single integrated model [22; 23]. On the other hand, the Transformer [24] is an attention-based model that remains a dominant architecture for several language pairs [25]. The self-attention layers of the Transformer model learn the dependencies between words in a sequence by examining links between all the words in the paired sequences and by directly modeling those relationships [26]. Language translation is perhaps one of the most difficult modelling tasks considering the fluidity of human language [27]. Nowadays, deep neural network models such as the attention-based _bidirectional encoder representations from Transformers_ (BERT) [28] have achieved state-of-the-art results in
language modelling tasks with special properties [29; 30].
The _Bhagavad Gita_ (translates as the _song of God_) is a sacred Hindu text [31; 32] that captures the essence of Hindu philosophy [33]. The Mahabharata, one of the earliest and largest epics written in the Sanskrit language using the style of narrative poetry, features the Bhagavad Gita as a chapter that captures a philosophical conversation between Lord Krishna and Arjuna about duty and ethics (Karma and Dharma) in the context of the Kurushetra war [34]. The Bhagavad Gita shares the themes in a style similar to the Upanishads [35; 36], a collection of Hindu philosophy and sacred texts that predates and also influenced Greek philosophy [37; 38]. In the past, NLP has been utilized to decipher and evaluate translations of major Hindu texts, including the Bhagavad Gita and Upanishads. Chandra et al. [39] used NLP to map the themes (topics) between the Bhagavad Gita and the Upanishads. Since the translation of a poem can break not only the rhythm but also modify the essence of the text, semantic and sentiment analysis can provide a means to evaluate the quality of translations. Hence, Chandra et al. [3] implemented a semantic and sentiment analysis on different translations of the Bhagavad Gita.
In May 2022, Google added support for the Sanskrit language in its addition of 24 languages [40] to _Google Translate_, making a total of 133 languages worldwide. The team developed a new monolingual language model learning approach for zero-resource translation [41]; i.e., translation for languages with no in-language parallel text and no language-specific translation examples [42; 43]. The model was trained to learn representations of under-resourced languages directly from monolingual text using the _masked sequence-to-sequence_ (MASS) task. MASS adopts the encoder-decoder framework for reconstructing a sentence fragment given the remaining part of the sentence. The encoder takes a sentence with a randomly masked fragment (several consecutive tokens) as input, and its decoder tries to predict this masked fragment [44].
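To make the masking objective concrete, the following is a minimal sketch of how a MASS-style training example can be constructed; the whitespace tokenizer, mask symbol and masking fraction are our own simplifying assumptions and do not reflect the actual subword pipeline behind Google Translate.

```python
# A minimal sketch of MASS-style masking; the tokenizer, mask symbol and the
# 50% masking fraction are simplifying assumptions for illustration only.
import random

MASK = "<mask>"

def mass_example(tokens, frac=0.5):
    """Mask one random contiguous fragment of the sentence: the encoder
    receives the masked sentence and the decoder must predict the fragment."""
    k = max(1, int(len(tokens) * frac))
    start = random.randrange(0, len(tokens) - k + 1)
    encoder_input = tokens[:start] + [MASK] * k + tokens[start + k:]
    decoder_target = tokens[start:start + k]
    return encoder_input, decoder_target

enc_in, dec_out = mass_example("translation for languages with no parallel text".split())
```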
In the past, some studies have analyzed the translation quality of Google Translate using computational models. Xiaoning et al. [45] used Google Translate in cross-lingual information retrieval in order to translate the queries from English to Chinese, where a Kullback-Leibler (KL) divergence model was used for information retrieval. The authors indicate that Google Translate was chosen because of superior performance for named entity translation. Li et al. [46] compared the Google Translate with human (expert) translation, focusing on Chinese to English translation. The study reported that translation by Google Translate was highly correlated with the original text and the human expert. Rahimi et al. [47] studied the English-Persian translation of Google Translate. Kalchbrenner et al. [20] compared the accuracy of machine translation and reported that NMT improved the semantic aspects of the translation, despite some limitations. Abdur et al. [48] compared the English translations of Baidu and Google Translate and reported that there is a scope for improvement for both search engines, and one is not necessarily superior to the other. Patil et al. [49] evaluated the accuracy of Google Translate in medical communication and found that Google Translate was not accurate when it comes to medical phrases and hence should not be blindly trusted. The authors also found that European languages performed better than other languages and thus confirmed the presence of a translation bias. It is important to note that not many studies have evaluated the quality of translations of Google Translate for low-resource languages [50], i.e. languages with data scarcity such as Sanskrit.
In this paper, we present a framework that evaluates the quality of Google Translate, focusing on the Sanskrit language. We first publish a Sanskrit-to-English translation of the Bhagavad Gita obtained using Google Translate. Our proposed framework extends the methodology of Chandra et al. [3], which compared three different translations of the Bhagavad Gita using semantic and sentiment analysis. This study performs sentiment analysis via BERT and semantic analysis via a sentence embedding model to compare the Bhagavad Gita translation by Google Translate with translations by known experts. It further extracts keywords using KeyBERT to analyze the central themes in the translations. Although the study's main aim is to evaluate the quality of Sanskrit translations by Google Translate, our framework is designed to be easily extended to other languages. Finally, we qualitatively evaluate selected Google Translate verses of the Bhagavad Gita with the help of a Sanskrit translator.
The rest of the paper is organized as follows. Section 2 provides an overview of the framework used for analysis. Section 3 presents the analysis of the results, Section 4 presents an evaluation of selected verses by a Sanskrit expert, and Section 5 gives a detailed discussion and concludes the study.
## 2 Methodology
### Data extraction and processing
The Bhagavad Gita is divided into 18 chapters, each containing a sequence of questions and answers between Lord Krishna and Arjuna on various subjects, including the Karma philosophy. This organization is symbolic, as the Mahabharata war lasted 18 days [34]. In this study, we use three different Bhagavad Gita translations (Mahatma Gandhi [51], Eknath Easwaran [52], Sri Purohit Swami [53]) to compare with the translation by Google Translate. We selected significant and prominent translations from different historical periods. In order to prevent translation biases, we picked translations whose translators were from a Hindu background. We processed the raw data from the three sets of translations using the methodology described by Chandra et al. [3], where semantic and sentiment analysis was implemented to compare selected translations of the Bhagavad Gita.
### Google Translate
Google Translate is a free-to-use web-based translation tool released by Google in April 2006 [54]. Google Translate is a multilingual NMT system that translates texts, websites, and documents from a given language to a target language specified by the user [55]. Even though Google Translate [54; 56] has made significant advances in recent years, as of December 2022 it only covers 133 written languages [57] out of the thousands of written and spoken languages worldwide. Note that Google Translate does not cater to automatic speech recognition, i.e., spoken languages; it is a text-based translation tool. Google Translate faces challenges due to data scarcity, the absence of digitized data for certain languages (low-resource languages), and the absence of translated texts. Hence, a roadblock exists in the development of functional translation models for low-resource languages such as Sanskrit [58]. Note that Sanskrit is an ancient language used in Hindu texts; however, there are only about 24,821 Sanskrit speakers [59] (based on the 2011 census), mostly in remote and rural communities of India. The lack of data is a problem for language models since it forces them to learn to translate from a limited monolingual text. To overcome these challenges, Google made several modifications to the basic architecture of Google Translate, which included _back translation_ to overcome the lack of parallel (translated) data [58]. Back translation is a localization quality control method where content is translated back to its original language and then compared to the source [60].
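As a rough illustration of back translation used as a quality check, the sketch below round-trips a Sanskrit text through English and compares the result to the source. The `translate` and `similarity` callables are hypothetical placeholders for a translation client and a semantic comparison function; they are not part of any specific Google API.

```python
def back_translation_check(source_text, translate, similarity):
    """Translate Sanskrit -> English -> Sanskrit and score the round trip.

    `translate(text, src, dest)` and `similarity(a, b)` are assumed,
    user-supplied callables; a high score suggests a stable translation.
    """
    forward = translate(source_text, src="sa", dest="en")   # Sanskrit -> English
    backward = translate(forward, src="en", dest="sa")      # English -> Sanskrit
    return similarity(source_text, backward)
```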
### Google Translate - Bhagavad Gita
We translated all 18 chapters of the Bhagavad Gita from Sanskrit to English using Google Translate's application programming interface (API). We extracted all the verses from the Sanskrit Bhagavad Gita 2 available on the _Bhaktivedanta Vedabase_ by Swami Prabhupada, who originally translated the Bhagavad Gita in 1968 [61]. Note that the Sanskrit language is written using the Devanagari script [62], which can be directly used as an input to the Google Translate API. We pre-processed the data with the following steps:
Footnote 2: [https://vedabase.io/en/library/bg/1/1/](https://vedabase.io/en/library/bg/1/1/)
1. Arranged the verses chapter-wise in different files
2. Removed verse numbering in the Bhagavad Gita
3. Converted verses to a single line
4. Added the original Sanskrit version to the file
Figure 1 shows an example of the above pre-processing steps.
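A minimal sketch of these pre-processing steps is given below; the verse-numbering pattern and the file naming are our own assumptions for illustration.

```python
import re

def preprocess_chapter(raw_verses, chapter_id):
    """Sketch of the pre-processing steps: strip verse numbering, collapse
    each verse to a single line, and write one chapter per file.
    The numbering pattern and file layout are illustrative assumptions."""
    processed = []
    for verse in raw_verses:
        verse = re.sub(r"[0-9.\-]+\s*$", "", verse)  # drop trailing verse numbers (assumed pattern)
        verse = " ".join(verse.split())              # collapse to a single line
        processed.append(verse)
    with open(f"chapter_{chapter_id}.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(processed))
```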
Finally, we published the translation by Google Translate online via GitHub 3.
Footnote 3: [https://github.com/sydney-machine-learning/Google-Sanskrit-translate-evaluation/tree/main/BG-Google-Translated](https://github.com/sydney-machine-learning/Google-Sanskrit-translate-evaluation/tree/main/BG-Google-Translated)
### Sentiment and Semantic Analysis
A word embedding represents words from a text as real-valued vectors so that they can be processed by statistical and deep learning models [63]. The real-valued vectors used for word embedding are selected to preserve the semantic and syntactic qualities of the words appearing in a text corpus [64]. A number of word embedding models exist, each with certain strengths and weaknesses [65]. Mikolov et al. [66] introduced the _Word2Vec_ model in 2013, a widely used word embedding learned using a shallow neural network. A simple cosine function can then be used to test the level of similarity between two words. Cosine similarity is a metric that measures the text similarity between two documents irrespective of their size. A word is represented in vector form and text documents are represented in an n-dimensional vector space; the cosine similarity metric measures the cosine of the angle between two n-dimensional vectors projected in a multi-dimensional space.
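As a minimal illustration, the cosine similarity between two embedding vectors can be computed as follows (the vectors here are toy values, not actual Word2Vec embeddings):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors standing in for word embeddings
u = np.array([0.2, 0.7, 0.1])
v = np.array([0.25, 0.6, 0.2])
print(cosine_similarity(u, v))  # close to 1.0 for semantically similar words
```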
BERT is a transformer-based model introduced by Devlin et al. [67] in 2018; it comprises numerous bidirectional transformers that enable it to capture contextual information before and after a word. Note that BERT is a pre-trained model that has been trained on unlabeled data extracted from the _BooksCorpus_ featuring 800 million words and the _English Wikipedia_ featuring 2,500 million words. Since BERT gives context-enriched embeddings, it outperformed traditional NLP models such as Word2Vec on text processing tasks such as semantic and sentiment analysis [68]. The word embeddings generated by Word2Vec are context-independent and cannot address the problem of polysemous words [68]. The embeddings generated by BERT, on the other hand, are context-dependent, i.e., the same word can have multiple vector representations depending on the context in which it is used [68].
Sentiment analysis, also referred to as opinion mining and emotion analysis, identifies the emotional tone behind a body of text [69]. Recent innovations involve machine learning and deep learning to mine text for sentiment and subjective information [70]. Sentiment analysis systems help in gathering insights from unorganized and unstructured text. They can be applied at varying scopes such as document, paragraph, sentence, and sub-sentence levels [69]. There are primarily three different systems currently in use for performing sentiment analysis. Rule-based systems perform sentiment analysis based on predefined lexicon-based rules [71], whereas automatic systems learn from data with machine learning techniques [72]. A hybrid sentiment analysis, on the other hand, combines both approaches [73]. In addition to identifying sentiment, these systems can also extract the polarity (the amount of positivity and negativity), the subject, and the opinion holder within the text [74].
Semantic analysis, on the other hand, is the process of drawing meaning from text. Semantic analysis is key to contextualization, which helps disambiguate language data so that text-based NLP applications can be more accurate [75]. It allows computers to understand and interpret sentences, paragraphs, or whole documents by analyzing their grammatical structure and identifying relationships between individual words in a particular context [76]. It is the driving force behind machine learning tools such as chatbots, search engines, and text analysis applications [69]. By feeding semantically enhanced algorithms with samples of text, NLP methods can make accurate predictions based on past observations [77].
### Framework
We present a framework that compares translations and implements sentiment and semantic analysis, adopted from Chandra and Kulkarni [3] (Figure 2). We utilize this framework to compare the Bhagavad Gita by Google Translate with three expert-based translations. Our framework provides further insights into the various themes discussed by these different translations. We extracted the Bhagavad Gita Sanskrit slokas (verses) from the Bhaktivedanta Vedabase 4 using a web data scraping process. We provide this text as input to the Google Translate API, which gives the corresponding English translated text as output. We then store the output in portable document format (PDF). Afterwards, we convert the PDF files to text files for pre-processing and cleaning of the text, where we remove verse numbers, symbols, etc. Our framework implements the BERT-base model for sentiment analysis by predicting the sentiments of the different verses of the four translations. We use multi-label sentiment classification in our framework, where a verse can be both empathetic and optimistic simultaneously. We train the sentiment analysis component of the framework using the expert-labeled SenWave dataset [78], which features 10 different sentiments labeled by a group of 50 experts for 10,000 tweets posted worldwide during the COVID-19 pandemic. We fine-tuned (trained) the BERT-base sentiment analysis model using the SenWave dataset so that it can recognize the respective sentiments in a multi-label setting, as originally used for COVID-19 sentiment analysis [79] and for Bhagavad Gita sentiment analysis [3]. The conventional sentiment polarity score is ambiguous due to varied expressions that feature metaphor, humor, and phrasing that is hard for machines to understand; hence, multi-label sentiment classification provides further insights. We compare verse-by-verse and chapter-by-chapter sentiments of the chosen translations as shown in Figure 2. A minimal sketch of the multi-label set-up is given below.
Footnote 4: [https://vedabase.io/en/library/bg/1/1/](https://vedabase.io/en/library/bg/1/1/)
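The sketch below shows an assumed multi-label set-up using the Hugging Face `transformers` library; the label order and the decision threshold are illustrative assumptions, and the classification head must first be fine-tuned on SenWave before the predictions are meaningful.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Assumed label order; SenWave defines the 10 sentiments
SENTIMENTS = ["optimistic", "thankful", "empathetic", "pessimistic", "anxious",
              "sad", "annoyed", "denial", "surprise", "joking"]

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(SENTIMENTS),
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss
)
model.eval()  # the head is untrained here; fine-tune on SenWave before use

inputs = tokenizer("Be mindful of Me, be devoted to Me", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# A verse may receive several labels at once (the 0.5 threshold is an assumption)
predicted = [s for s, p in zip(SENTIMENTS, probs) if p > 0.5]
```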
Furthermore, we perform semantic analysis to reveal the variations in the translations, giving an indication of how similar or different the expert-based translations are when compared to the Google Translate version of the Bhagavad Gita. We perform semantic analysis through a sentence embedding model (MPNet [80]), which is based on the BERT model, as shown in the framework (Figure 2). The MPNet sentence embedding model generates high-quality embeddings for the encoded verses of the Bhagavad Gita. We use the uniform manifold approximation and projection (UMAP) [81] dimensionality reduction technique to visualize the high-dimensional vectors. We investigate the similarity of the chapters through a plot of the first two dimensions obtained from UMAP.
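A minimal sketch of this pipeline, assuming the `all-mpnet-base-v2` checkpoint from the `sentence-transformers` library, the `umap-learn` package, and a chapter file produced by the pre-processing step above, is shown below:

```python
import umap
from sentence_transformers import SentenceTransformer

# Load the MPNet sentence embedding model (assumed checkpoint name)
model = SentenceTransformer("all-mpnet-base-v2")

# One verse per line, from a hypothetical pre-processed chapter file
with open("chapter_12.txt", encoding="utf-8") as f:
    verses = [line.strip() for line in f if line.strip()]

embeddings = model.encode(verses)  # shape: (n_verses, 768)

# Project the high-dimensional embeddings to 2D for visualisation
reducer = umap.UMAP(n_components=2, random_state=42)
points = reducer.fit_transform(embeddings)  # plot points[:, 0] vs points[:, 1]
```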
Furthermore, we extract keywords from the text to examine the major topics using KeyBERT, which provides keywords that describe significant themes (Figure 2). We note that various other techniques could be used, such as _rapid automatic keyword extraction_ (RAKE) [82], _yet another keyword extractor_ (YAKE) [83], and term frequency-inverse document frequency (TF-IDF) [84]. However, these are based on statistical characteristics, unlike KeyBERT, which is based on the semantic similarity of the text. Hence, we use KeyBERT as it considers the text's semantic aspects.
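A minimal KeyBERT sketch is shown below; the n-gram range, the `top_n` value, and the input file are illustrative assumptions rather than the exact settings used in our experiments:

```python
from keybert import KeyBERT

# Hypothetical pre-processed chapter file from the earlier steps
with open("chapter_12.txt", encoding="utf-8") as f:
    chapter_text = f.read()

kw_model = KeyBERT()  # uses a sentence-transformers backbone by default
keywords = kw_model.extract_keywords(
    chapter_text,
    keyphrase_ngram_range=(1, 2),  # single words and bi-grams
    stop_words="english",
    top_n=10,
)
print(keywords)  # list of (keyword, relevance) pairs
```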
### Experimental setup
We train the BERT-base model on the SenWave dataset by pre-processing the tweets as done by Chandra and Kulkarni [3]. We utilised the trained models from the previous study on sentiment analysis of the Bhagavad Gita [3], available via the GitHub repository 5. The SenWave dataset consists of 10,000 tweets that were labeled according to 10 different sentiments by human experts. There is an additional label, "official report", related to COVID-19, which we removed during data processing.
Footnote 5: [https://github.com/sydney-machine-learning/sentimentanalysis.bhagavadgit](https://github.com/sydney-machine-learning/sentimentanalysis.bhagavadgit)
## 3 Results
### Data Analysis
An n-gram [85] in NLP provides a statistical overview of a text through a continuous sequence of words and elements. We first present the top-ten bi-grams and tri-grams, along with the top-twenty bi-grams and tri-grams for the optimistic and pessimistic sentiments, for the different translations, as shown in Figure 3. A minimal n-gram counting sketch is given below.
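The sketch below counts the most frequent n-grams with scikit-learn; the English stop-word handling is an assumption, as our exact pre-processing may differ:

```python
from sklearn.feature_extraction.text import CountVectorizer

def top_ngrams(verses, n=2, top_k=10):
    """Return the top_k most frequent n-grams across a list of verses."""
    vec = CountVectorizer(ngram_range=(n, n), stop_words="english")
    counts = vec.fit_transform(verses).sum(axis=0).A1
    ranked = sorted(zip(vec.get_feature_names_out(), counts), key=lambda kv: -kv[1])
    return ranked[:top_k]

verses = ["the supreme goal of selfless service",
          "attain the supreme goal through selfless service"]
print(top_ngrams(verses, n=2, top_k=3))  # e.g. [('selfless service', 2), ...]
```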
We now analyze the bi-grams and tri-grams of the Google Translate version and Eknath Easwaran's version. We only compare with Eknath Easwaran for simplicity; the comparison with Mahatma Gandhi's and Sri Purohit Swami's translations can be done in a similar manner. We observe that the concept of a "supreme spirit", or the "Atman", is mentioned in both translations, but the path taken to achieve this realization varies between them. The Google Translate version in Figure 3 (a) features the tri-gram [absolute, truth, supreme], thus reflecting a path of absolute truth. Eknath Easwaran's translation in Figure 3 (c) features the bi-grams [supreme, goal], [selfless, service], and [selfish, attachment], thus stressing the importance of selfless service devoid of selfish attachments and desires. It is interesting to note
Figure 1: Original Sanskrit script (Devanagari) of the Bhagavad Gita with translation and further processing.
that Chapter 3 is titled 'Selfless Service' in Eknath Easwaran's translation.
Furthermore, we observe from Figure 3 that the top three bi-grams and tri-grams are different for the two translations. Both translations have used different words to describe similar themes. Google Translate features [supreme, personality], [personality, godhead], and [living, entities] as the top three bi-grams and different permutations of [supreme, personality, godhead] as the top three tri-grams. Mahatma Gandhi's translation features [fruit, action], [pleasure, pain], and [without, attachment] as the top three bi-grams and [sacrifice, charity, austerity], [vedas, declare, nothing], and [else, carnality, minded] as the top three tri-grams. Eknath Easwaran's translation features [every, creation], [supreme, goal], and [selfish, desire] as the top three bi-grams and [attain, supreme, goal], [senses, mind, intellect], and [dwells, every, creation] as the top three tri-grams. Shri Purohit Swami's translation features [supreme, spirit], [right, action], and [pleasure, pain] as the top three bi-grams and [thing, movable, immovable], [purity, passion, ignorance], and [sanjaya, continued, thus] as the top three tri-grams. Hence, a mere word-to-word comparison through bi-grams and tri-grams reflects differences in the translations.
### Sentiment Analysis
Next, we use the BERT model for verse-by-verse sentiment analysis of the respective Bhagavad Gita translations.
We visualize chapter-wise sentiment analysis for all four translations in Figure 5 and Figure 6, along with cumulative sentiment analysis for all chapters in Figure 4. In the cumulative sentiment analysis (Figure 4), we observe that _thankful_, _anxious_, _sad_, and _denial_ are the least expressed sentiments across all four translations, whereas _optimistic_ is the most expressed. We also observe that the sentiments _surprise_ and _annoyed_ are under-expressed. In contrast, the sentiment _empathetic_ is over-expressed by Google Translate when compared to the other three translations. The sentiments _optimistic_, _pessimistic_, _joking_, and _anxious_ are equally expressed in all four translations. We further note that _optimistic_ and _empathetic_ are the leading sentiments for Google Translate, while _annoyed_, _pessimistic_, and _surprise_ are the leading sentiments for Mahatma Gandhi's version, indicating that Google Translate leads in optimistic sentiments and Mahatma Gandhi's version leads in pessimistic sentiments.
Figure 7 displays a heat map showing the frequency of a specific sentiment in each translation across all the verses, compared to the other sentiments. We observe that in the case of Google Translate in Figure 7 (a), _empathetic_ is a key sentiment in addition to the sentiments _optimistic_, _annoyed_, and _joking_, which are key sentiments for the other three translations, as shown by Figure 7 (b), Figure 7 (c), and Figure 7 (d). We further observe that [_optimistic_, _empathetic_] is the leading combination of sentiments for Google Translate. In the other three versions, the leading combinations of sentiments are [_annoyed_, _surprise_], followed by [_surprise_, _optimistic_] and [_annoyed_, _optimistic_]. It is also important to note that for Google Translate, the sentiments _thankful_ and _denial_ are the least expressed, whereas _denial_ and _anxious_ are the least expressed sentiments in the
Figure 2: Framework showing major components that include using Google Translate for translating the original Sanskrit version of the Bhagavad Gita to English. We use semantic and sentiment analysis to compare the Google Translate version to expert translations from the literature that includes translations by Mahatma Gandhi and Eknath Easwaran.
Figure 3: Visualisations of top 10 bi-grams and tri-grams for different Bhagavad Gita translations.
other three versions.
Finally, we measure the diversity and similarity of the sentiments expressed with a verse-by-verse comparison of all four translations. Table 1 shows the Jaccard similarity score computed on the predicted sentiments for three pairs of texts for the selected chapters, computed as sketched below. The score is highest for Eknath Easwaran's version and Google Translate (GT-Easwaran), indicating that they have the highest overlap of predicted sentiments. The comparison of Gandhi-Easwaran shows the baseline from the previous study [3]; we find that GT-Easwaran has a much lower score, hence a much lower similarity. This indicates that Google Translate has not been as effective as human experts in translating the Bhagavad Gita.
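A minimal sketch of the Jaccard computation is given below; aggregating the verse-level scores per chapter by a simple mean is stated here as an assumption:

```python
def jaccard(labels_a, labels_b):
    """Jaccard similarity between two sets of predicted sentiment labels."""
    a, b = set(labels_a), set(labels_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def chapter_jaccard(verse_labels_x, verse_labels_y):
    """Average the verse-by-verse Jaccard scores over a chapter
    (aggregation by simple mean is an assumption)."""
    scores = [jaccard(x, y) for x, y in zip(verse_labels_x, verse_labels_y)]
    return sum(scores) / len(scores)

# Example: two translations of a two-verse chapter
gt = [["optimistic", "empathetic"], ["joking"]]
easwaran = [["optimistic"], ["joking", "surprise"]]
print(chapter_jaccard(gt, easwaran))  # (0.5 + 0.5) / 2 = 0.5
```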
### Semantic Analysis
Next, we provide the semantic analysis of the texts and compare the four translations. Using the MPNet-base model, we encode all the verses and present the verse-by-verse cosine similarity, grouped by chapter, for the three translations paired with Google Translate. We report both the mean and the standard deviation of the score. In Table 2, we observe that Chapter 3 is semantically the most similar, whereas Chapter 17 is semantically the least similar. Further, in the pair-wise comparison, the Google Translate and Shri Purohit Swami translations are semantically the most similar, while GT-Easwaran has the highest Jaccard similarity score for the predicted sentiments (Table 1). We finally compare Gandhi-Easwaran as a baseline from the previous study [3] and find that the GT pairs have a much lower similarity, which shows that Google Translate has not been as effective as human experts. The sketch below illustrates how the chapter-wise scores are aggregated.
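A minimal sketch of the chapter-wise aggregation, assuming two aligned matrices of verse embeddings, is given below:

```python
import numpy as np

def chapter_similarity(emb_x, emb_y):
    """Verse-by-verse cosine similarity for two aligned translations of a
    chapter; returns the mean and standard deviation of the scores."""
    emb_x = emb_x / np.linalg.norm(emb_x, axis=1, keepdims=True)
    emb_y = emb_y / np.linalg.norm(emb_y, axis=1, keepdims=True)
    scores = (emb_x * emb_y).sum(axis=1)  # cosine of each aligned verse pair
    return scores.mean(), scores.std()

# Toy example with random stand-in "embeddings" for a 20-verse chapter
rng = np.random.default_rng(0)
mean, std = chapter_similarity(rng.normal(size=(20, 768)), rng.normal(size=(20, 768)))
```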
| Chapters | GT-Gandhi | GT-Purohit | GT-Easwaran | Gandhi-Easwaran |
|---|---|---|---|---|
| Chapter 3 | 0.42 | 0.388 | 0.412 | 0.604 |
| Chapter 5 | 0.374 | 0.373 | 0.401 | 0.568 |
| Chapter 7 | 0.353 | 0.363 | 0.393 | 0.559 |
| Chapter 8 | 0.341 | 0.362 | 0.377 | 0.547 |
| Chapter 9 | 0.331 | 0.353 | 0.348 | 0.501 |
| Chapter 10 | 0.324 | 0.351 | 0.357 | 0.523 |
| Chapter 11 | 0.309 | 0.324 | 0.350 | 0.507 |
| Chapter 12 | 0.315 | 0.323 | 0.357 | 0.500 |
| Chapter 15 | 0.309 | 0.319 | 0.354 | 0.494 |
| Chapter 16 | 0.316 | 0.328 | 0.359 | 0.500 |
| Chapter 17 | 0.323 | 0.332 | 0.355 | 0.510 |
| **Average** | **0.338** | **0.347** | **0.369** | **0.526** |

Table 1: Sentiment analysis of selected pairs of translations with Google Translate (GT), showing the Jaccard similarity score of the predicted sentiments for selected chapters. We provide the mean of the scores at the bottom; a lower score indicates lower similarity. The comparison of Gandhi-Easwaran shows the baseline from the previous study [3].
| Chapters | GT-Gandhi | GT-Purohit | GT-Easwaran | Gandhi-Easwaran |
|---|---|---|---|---|
| Chapter 3 | 0.52 (0.156) | 0.58 (0.148) | 0.59 (0.120) | 0.63 (0.133) |
| Chapter 5 | 0.34 (0.082) | 0.61 (0.133) | 0.51 (0.187) | 0.63 (0.129) |
| Chapter 7 | 0.35 (0.194) | 0.56 (0.232) | 0.35 (0.100) | 0.70 (0.144) |
| Chapter 8 | 0.36 (0.086) | 0.34 (0.104) | 0.38 (0.098) | 0.66 (0.123) |
| Chapter 9 | 0.33 (0.108) | 0.36 (0.113) | 0.35 (0.103) | 0.68 (0.126) |
| Chapter 10 | 0.33 (0.121) | 0.37 (0.118) | 0.38 (0.093) | 0.76 (0.096) |
| Chapter 11 | 0.36 (0.118) | 0.38 (0.108) | 0.38 (0.105) | 0.71 (0.109) |
| Chapter 12 | 0.35 (0.122) | 0.40 (0.159) | 0.35 (0.118) | 0.61 (0.120) |
| Chapter 15 | 0.40 (0.135) | 0.39 (0.129) | 0.37 (0.142) | 0.69 (0.116) |
| Chapter 16 | 0.38 (0.126) | 0.37 (0.128) | 0.41 (0.089) | 0.66 (0.096) |
| Chapter 17 | 0.30 (0.077) | 0.35 (0.128) | 0.33 (0.115) | 0.65 (0.111) |
| **Average** | **0.34 (0.111)** | **0.43 (0.142)** | **0.40 (0.110)** | **0.67 (0.119)** |

Table 2: Semantic analysis using the cosine similarity score for comparing selected chapter pairs of the translations. The mean score is given with the standard deviation (in brackets) over all the verses in the respective chapters; a lower score indicates less similarity. The comparison of Gandhi-Easwaran shows the benchmark from the previous study [3].
Figure 4: Cumulative sentiments of the chapters.
Figure 5: Chapter-wise Sentiment Analysis of Chapter 1 - Chapter 9.
Figure 6: Chapter-wise Sentiment Analysis of Chapter 10 - Chapter 18.
Next, we present some of the semantically most similar verses in Table 3. In Chapter 3, Verse 13, we observe that all translations convey a similar meaning; however, the choice of words is different for all four. Our framework assigns a high similarity score to all three pairs. In Chapter 11, Verse 21 and Chapter 12, Verse 12, we observe that Google Translate and Eknath Easwaran have used somewhat similar words and thus obtain a higher similarity score (the GT-Easwaran score). We present some of the semantically least similar verses in Table 4. We observe that, for the verses in Table 4, Google Translate and the expert translations convey very different themes and are thus given very low similarity scores.
In addition, we examine the semantic score by showing actual verses from the translated versions of a chosen chapter. We select Chapter 12 because it has the fewest verses, making it easier to include in the paper. Table 5 presents arbitrarily selected verses from Chapter 12 with the cosine similarity score. We also present the mean and standard deviation of the scores to give a sense of the general semantic similarity of the verses in the chapter for the chosen translations.
## 4 Evaluation by Sanskrit Expert
We further evaluate selected verses from Google Translate in comparison with the expert translations, with the help of a Sanskrit researcher, Sushrut Badhe 6, who published a translation of the Bhagavad Gita in 2015 [86]. The unique part of this translation was that the rhythm and rhyme of the original Sanskrit version were maintained in the English translation. We note that rhythm and rhyme are key attributes of the Bhagavad Gita in Sanskrit, since it was written to be sung and remembered through oral traditions for thousands of years. In consultation with Sushrut Badhe, we provide the following analyses of selected chapters and verses included in the paper.
Footnote 6: [https://en.wikipedia.org/wiki/Sushrut_Badhe](https://en.wikipedia.org/wiki/Sushrut_Badhe)
### Semantically most similar verses
In Table 3, we show selected semantically most similar verses using the three expert translations (Gandhi, Easwaran, and Swami) and Google Translate (GT), with the accompanying original Sanskrit verses shown in Figure 8.
Figure 7: Heat Map of different Bhagavad Gita translations.
| Chapter | Verse | GT-Gandhi | GT-Easwaran | GT-Purohit |
|---|---|---|---|---|
| 3 | 13 | 0.919 | 0.705 | 0.836 |
| 7 | 9 | 0.862 | 0.873 | 0.855 |
| 12 | 12 | 0.681 | 0.739 | 0.813 |
| 17 | 16 | | | |

Table 3: Semantically most similar verses, with cosine similarity scores comparing Google Translate (GT) with the translations of Gandhi, Easwaran, and Swami. The full verse texts for each translation are discussed in the text below.
In Chapter 3: Verse 13, we find that both GT-Gandhi (i.e., GT vs. Gandhi) and GT-Swami are more semantically similar than GT-Easwaran. However, the GT version merges both lines of the verse, gives a confusing rendering, and loses the contextual significance entirely. The original Sanskrit verse (Chapter 3: Verse 13) of the Bhagavad Gita implies that those who consume the food that remains after performing a sacrifice are freed of all their sins, whereas those who cook and consume only for themselves end up consuming only sin. The Google Translate version conveys a wrong meaning. Also, the word _santo_, which is significant and refers to the saints and spiritually minded people, has been omitted arbitrarily in the translation. The translations of Easwaran, Gandhi, and Swami, though semantically dissimilar, do not lose contextual significance.
In Chapter 7: Verse 9 of Table 3, the values of cosine similarity for all three combinations are nearly equal, with GT-Easwaran showing the maximum semantic similarity. In this case, all four translations are contextually significant. The Google Translate version has accurately translated the word _punyo_ (Figure 8) as pious. Easwaran and Gandhi have translated it as sweet, whereas Swami has omitted its translation. In this verse, Google Translate appears to be the more accurate version. In Chapter 11: Verse 21, both GT-Swami and GT-Easwaran are more semantically similar than GT-Gandhi. The translations of Easwaran, Gandhi, and Swami are contextually significant. The GT version is incorrect and bereft of logical sense or contextual significance.
In Chapter 12: Verse 12 of Table 3, GT-Swami and GT-Easwaran are more semantically similar than GT-Gandhi. The translations of Gandhi, Easwaran, and Swami are contextually significant. The GT version, _Knowledge is the best way to practice knowledge and meditation is superior to meditation. From meditation, renunciation of the fruits of action is attained by renunciation._, is bereft of logic and contextual significance.
In Chapter 17: Verse 16 of Table 3, GT-Easwaran is the most semantically similar. The translations of Easwaran, Gandhi, and Swami are contextually significant. The GT version is only a literal word-to-word translation, which does not convey a clear meaning and lacks contextual significance. The word _manaprasda_ (Figure 8) is wrongly translated as "the mind, grace", and this affects the logical meaning of the translation.
### Semantically less similar verses
Table 4 presents selected semantically less similar verses, having low cosine scores of semantic similarity, with the original Sanskrit shown in Figure 9. In both verses, we find that GT only gives a literal translation that carries no contextual significance or meaning.
In Chapter 11: Verse 41 of Table 4, GT-Swami is the most similar in terms of its cosine value of semantic similarity. The GT version, _O Krishna, I thought that I was a friend, O Krishna, O friend of the demigods_, does not convey a logical sense and is incorrect.
In Chapter 17: Verse 26, GT-Easwaran is the most similar in terms of its cosine value of semantic similarity. The Google Translate version, _This is used in the same way as the truth, O son of Pritha, and in the praiseworthy action, which is used in the same way as the words of the Lord._, lacks both logic and contextual significance.
### Chapter 12: Arbitrarily selected verses
Chapter 12 of the Gita is considered to be one of the important chapters, as it contains the verses that are relevant to the crux of the teaching of the Gita: the way of Bhakti (devotion).
Table 4: Semantically least similar verses (Chapter 11, Verse 41 and Chapter 17, Verse 26), comparing the Google Translate (GT) version with the translations of Gandhi, Easwaran, and Swami; the GT versions are quoted in the text above.
| Chapter | Verse | GT-Gandhi | GT-Easwaran | GT-Swami |
|---|---|---|---|---|
| 12 | 1 | 0.52 | 0.68 | 0.70 |
| 12 | 8 | 0.61 | 0.67 | 0.60 |
| 12 | 13 | 0.34 | 0.21 | 0.39 |
| 12 | 15 | 0.70 | 0.50 | 0.70 |
| 12 | 20 | 0.70 | 0.50 | 0.70 |

Table 5: Semantic similarity of verses selected from Chapter 12, with cosine similarity scores comparing the translations (Gandhi, Easwaran, Swami) with Google Translate (GT); the corresponding verse texts are quoted and discussed in the text below.
We select five arbitrarily chosen verses from Chapter 12 (Table 5). In general, we find that the translations of Easwaran, Gandhi, and Swami do not lose contextual significance. On the contrary, we find that GT conveys no contextual meaning in all five verses.
In Chapter 12: Verse 1, GT-Swami is the most similar in terms of its cosine value of semantic similarity. The GT version, _Arjuna said: Those devotees who are thus constantly engaged in worshiping You, who are also the most unmanifest of the unmanifest, who are the best in yoga?_, does not make sense and is also incorrect.
In Chapter 12: Verse 13, GT-Swami is the most similar in terms of its cosine value of semantic similarity. The translations of Easwaran, Gandhi, and Swami are contextually significant. This verse originally indicates the temperament of a devotee who harbours no hate or ill will for any human being. The GT version, _He is not hated by all living beings, friendly and compassionate_, sounds logical but does not hold contextual significance.
In Chapter 12: Verse 15, GT-Swami is the most similar in terms of its cosine value of semantic similarity. The GT version, _He who is freed from all joy, anger, fear and anxiety, who is not afraid of the world and who is not afraid of this world_, is improper as it misses the meaning of the original verse, which implies that the altruistic soul who is free from all the bonds of pleasure, fear, anger, and anxiety neither disturbs the world nor is disturbed by it.
In Chapter 12: Verse 20, GT-Gandhi is the most similar in terms of its cosine value of semantic similarity. The GT version, _Those who worship this nectar of religious principles as described above are very dear to Me and are very dear to Me_, though sounding logical, loses contextual significance in the last part and features repetition.
## 5 Discussion
Among the verses selected for qualitative assessment with the assistance of a Sanskrit researcher, we found that only one verse (Table 3, Chapter 7: Verse 9) was translated correctly, capturing the context and the foundations of Hindu philosophy. The rest of the verses, which had contextual references or poetic elements, were mistranslated. If we look closely at the singular verse translated accurately by Google Translate, we can see that the original Sanskrit of Chapter 7: Verse 9 (Table 3) contains nine distinct words that are bereft of any wordplay or poetic inference. However, in the Sanskrit of Chapter 12: Verse 8 (Table 5), due to the presence of words having the same roots, Google Translate is unable to identify the significance, and _mayy eva mana dhatsva mayi buddhi niveAya_ is wrongly translated as _Concentrate on Me in Me, fix your mind in me_.
The discrepancies in translation can thus be attributed to the inability of Google Translate to understand the context of the root words. The same Sanskrit word can have multiple meanings, which have to be understood depending on the context of the statement. Most of the ancient Sanskrit epics, such as the Ramayana and the Mahabharata, are written in the form of the _shloka_ (stanza), and they are embedded with references and allegories. Also, the verses of the various chapters are inextricably linked. This proves to be a major challenge in translation and can lead to erroneous results if the verses are translated independently, without understanding the references. For instance, in the Bhagavad Gita, in the verses Chapter 9: Verse 34 and Chapter 18: Verse 65, the original Sanskrit words are exactly the same in the first three parts of the shloka; only the fourth part is different, as shown below in bold:
* Chapter 9: Verse 34: "man-man bhava mad-bhatko mad-yj m namaskuru mm evaifhyasi **yuktvaivam t tuna mat-parayaa**"
* Chapter 9: Verse 34: "Be mindful of Me, be devoted to Me, live in Me, and bow down to Me You will come to Me alone, thus uniting yourself and being devoted to Me"
* Chapter 18: Verse 65: "man-man bhava mad-bhatko mad-yj m namaskuru mm evaifhyasi **sarya te pratijine priyo si me**"
* Chapter 18: Verse 65: "Be mindful of Me, be devoted to Me, live in Me, and bow down to Me You will come to me I promise you truly you are dear to me"
If we analyse the Google Translate versions of both verses, they sound fairly similar and do not convey much about the contextual significance of the verses. In Chapter 9, Arjuna continues to be in a state of confusion as he listens intently to Krishna, whereas in Chapter 18, Arjuna's doubts are completely resolved, and these words hold a complete difference in both spiritual and psychological terms. The text of the Gita has been understood to hold significant psychotherapeutic potential, and it has been recommended that its pragmatic use can improve both trust and communication [87].
In Chapter 17: Verse 26 (Table 4), Lord Krishna explains to Arjuna the meaning of _sat_ (literally translated as truth), which is part of a triple formula _Om Tat Sat_ introduced in a previous verse of the same chapter (Chapter 17: Verse 25). This reference is lost in the Google translation altogether. This is a significant concept from the Bhagavad Gita which has had a number of interpretations by prominent scholars since ancient times and has been prominent in the Vedanta school of Hindu philosophy [88; 89].
In Hindu philosophy, Om is the most sacred term - it has its own alphabet symbol in the Sanskrit and Hindi script known as Devanagari [62], as shown e.g. in Figure 8. Om is not really part of the Devanagari script, i.e., it is left alone and not used to form other words. Hence, the Devanagari script views Om as sacred, since it is beyond philosophy and descriptions in Hinduism. Om is a symbolic representation of the impersonal aspect of God, the Supreme one, an idea regarded as too pure for description. Om represents all that was there before the birth of the universe; more precisely, before the birth of the multiverse, since Hinduism introduced the idea of the multiverse through its philosophy and mythology [90]. Om refers to the formless Brahman and is the primordial sound that pervades creation [91]. Note that Brahman is defined as the ultimate reality in the universe (multiverse) [90] and is also one of the terms that cannot be translated to English easily, as it changes meaning in different contexts [92], similar to Dharma and Karma. Brahman is the pervasive, eternal truth and consciousness which does not change; however, it is the cause of all changes. Hence, Brahman can be seen as a philosophical paradox [93]. It can be argued that Brahman is the closest word to the concept of God in Abrahamic religions; however, it is also different, since God is known to be creator, protector and observer, whereas Brahman has all these properties but also remains part of the universe. In the Isha Upanishad [94; 95], this verse further defines the property of Brahman:
Om
All this is full. All that is full
From fullness, fullness comes
When fullness is taken from fullness,
Fullness still remains.
Om Shanti, Shanti, Shanti
Note that _full_ has been translated as infinite, wholeness, complete, absolute, perfect, and reality by different translators of the Upanishads [94; 96; 97]. Hence, the translation of the Upanishads poses challenges similar to those of the Bhagavad Gita. Om is the term that cannot be translated and remains as it is in most translations of Hindu texts.
A major limitation is that the text we have studied is a philosophical song summarising major schools of Hindu philosophy, which has had a number of interpretations, and hence distinct schools were formed: for instance, _Advaita Vedanta_ (non-dualism) [89; 98] and _Dvaita Vedanta_ (dualism) [88; 99]. Vedanta schools developed out of philosophical differences in interpretations of the Bhagavad Gita. We note that Advaita Vedanta became prominent from Adi Shankara's interpretation of the Bhagavad Gita [100; 101] in the 8th century, known as the Sankara Bhashya [102]. These schools formed when Sanskrit was a prominent language in studying Hindu philosophy, and they were formed not due to mistranslation but due to interpretation. Due to the different schools of philosophy, there can be translation bias; i.e., a translator aligned with Advaita Vedanta will translate with biases towards that school of philosophy, and one aligned with Dvaita Vedanta will do the same. In the case of Google Translate, we note that such bias is not present, but there are limitations that also create a bias. Advaita Vedanta has been the most prominent school of Hindu philosophy in the last thousand years, with various scholarly texts interpreting the Bhagavad Gita and the Vedas; hence, if these are used in the model training data, then the model will have philosophical biases.
Figure 10: Semantic similarity of verses selected from Chapter 12 across translations given in Table 5, showing original Sanskrit verses from the Bhagavad Gita [61] in Devanagari and English transliteration.
## 6 Conclusion
We presented a framework for evaluation of Google Translate using Sanskrit as an example language. In our framework, we used a combination of semantic and sentiment analysis for comparing expert translations of the Bhagavad Gita with Google Translate.
In terms of sentiment analysis, a major observation was that the sentiments _optimistic_, _pessimistic_, _joking_, and _anxious_ were equally expressed in all four translations. We found that Google Translate led in terms of optimistic sentiments and Mahatma Gandhi led in pessimistic sentiments. In semantic analysis, we found that Chapter 3 is semantically most similar, whereas Chapter 17 is semantically least similar, when comparing the translations with Google Translate. Generally, we found that Google Translate provided a low level of semantic and sentiment similarity when compared to translations by human experts. This indicates that a lot has to be done to improve Google Translate in this domain, since we are dealing with philosophical and metaphorical concepts in the Bhagavad Gita and a low-resource language (Sanskrit) with a small number of native speakers. Furthermore, although Sanskrit is a low-resource language, we note that it is an official language in India. Sanskrit is the main language for various ancient Hindu texts, and hence there has been a lot of focus on Sanskrit in academia. Therefore, the current study has a wide range of implications. Automatic translation of ancient texts could further help ease the burden of translating a text from scratch.
We further compared selected translations using a qualitative approach with the help of a Sanskrit translator. In the qualitative evaluation, we find that Google Translate is unsuitable for the translation of poetic Sanskrit words and phrases due to its inability to recognize contextual significance and imagery. The mistranslations are not surprising, as the Bhagavad Gita is known as a difficult text to translate and interpret since it relies on contextual, philosophical and historical information.
There is good scope for using our proposed framework for the evaluation of Google Translate for other languages. As noted earlier, our current study used Sanskrit, which is not much used as a conversational language, and we evaluated Google Translate using the Bhagavad Gita, which is a poem. Hence, in future work we can evaluate other languages from India, particularly Hindi, which has the third-highest number of speakers in the world as a first and second language, after English and Mandarin. Apart from Hindi, our framework is essentially useful for any language which has already been translated by experts, whose translations can be used for comparison with the Google Translate version.
## Code and Data
GitHub repository: [https://github.com/sydney-machine-learning/Google-Sanskrit-translate-evaluation](https://github.com/sydney-machine-learning/Google-Sanskrit-translate-evaluation)
|
2309.09277 | Fairness for All: Investigating Harms to Within-Group Individuals in
Producer Fairness Re-ranking Optimization -- A Reproducibility Study | Recommender systems are widely used to provide personalized recommendations
to users. Recent research has shown that recommender systems may be subject to
different types of biases, such as popularity bias, leading to an uneven
distribution of recommendation exposure among producer groups. To mitigate
this, producer-centered fairness re-ranking (PFR) approaches have been proposed
to ensure equitable recommendation utility across groups. However, these
approaches overlook the harm they may cause to within-group individuals
associated with colder items, which are items with few or no interactions.
This study reproduces previous PFR approaches and shows that they
significantly harm colder items, leading to a fairness gap for these items in
both advantaged and disadvantaged groups. Surprisingly, the unfair base
recommendation models were providing greater exposure opportunities to these
individual cold items, even though at the group level, they appeared to be
unfair. To address this issue, the study proposes an amendment to the PFR
approach that regulates the number of colder items recommended by the system.
This modification achieves a balance between accuracy and producer fairness
while optimizing the selection of colder items within each group, thereby
preventing or reducing harm to within-group individuals and augmenting the
novelty of all recommended items. The proposed method is able to register an
increase in sub-group fairness (SGF) from 0.3104 to 0.3782, 0.6156, and 0.9442
while also improving group-level fairness (GF) (112% and 37% with respect to
base models and traditional PFR). Moreover, the proposed method achieves these
improvements with minimal or no reduction in accuracy (or even an increase
sometimes). We evaluate the proposed method on various recommendation datasets
and demonstrate promising results independent of the underlying model or
datasets. | Giovanni Pellegrini, Vittorio Maria Faraco, Yashar Deldjoo | 2023-09-17T13:51:25Z | http://arxiv.org/abs/2309.09277v2 | Fairness for All: Investigating Harms to Within-Group Individuals in Producer Fairness Re-ranking Optimization - A Reproducibility Study
###### Abstract.
Recommender systems are widely used to provide personalized recommendations to users. Recent research has shown that recommender systems may be subject to different types of biases, such as popularity bias, leading to an uneven distribution of recommendation exposure among producer groups. To mitigate this, producer-centered fairness re-ranking (PFR) approaches have been proposed to ensure equitable recommendation utility across groups. However, these approaches overlook the harm they may cause to within-group individuals associated with colder items, which are items with few or no interactions.
This study reproduces previous PFR approaches and shows that they significantly harm colder items, leading to a fairness gap for these items in both advantaged and disadvantaged groups. Surprisingly, the unfair base recommendation models were providing greater exposure opportunities to these individual cold items, even though at the group level, they appeared to be unfair. To address this issue, the study proposes an amendment to the PFR approach that regulates the number of colder items recommended by the system. This modification achieves a balance between accuracy and producer fairness while optimizing the selection of colder items within each group, thereby preventing or reducing harm to within-group individuals and augmenting the novelty of all recommended items. The proposed method is able to register an increase in sub-group fairness (SGF) from 0.3104 to 0.3782, 0.6156, and 0.9442 while also improving group-level fairness (GF) (112% and 37% with respect to base models and traditional PFR). Moreover, the proposed method achieves these improvements with minimal or no reduction in accuracy (or even an increase sometimes).
We evaluate the proposed method on various recommendation datasets and demonstrate promising results independent of the underlying model or datasets. Our reproducibility study highlights the importance of considering within-group individuals in fairness-improving approaches and proposes a potential solution to address the issue of harm to disadvantaged individuals. We believe that our proposed method can contribute to ongoing efforts to make recommender systems more inclusive and fair to all users.
## 1. Introduction and Context
Recommender systems are widely used in task-sensitive and business-competitive domains, such as employment, health, e-commerce, and social media. As a result, fairness has gained increasing importance in recent years. The field of recommender systems typically examines fairness from two perspectives: consumer-provider (stakeholder) and group-individual (granularity of groups) (Gararay et al., 2018; Krizhevsky et al., 2017). A recent survey (Garay et al., 2018), which reviews fairness studies in recommender systems up to 2023, states that group fairness has emerged as the predominant focus of research in RecSys, accounting for 67% of all studies. Group fairness refers to ensuring fairness across a specific demographic, while individual fairness aims to treat each person as an individual, regardless of group membership. The group fairness approach is further
categorized into Consumer Fairness (CF) and Producer Fairness (PF), with almost 50% of all studies in fairness research dedicated to these two subcategories.
In group fairness setting, a key assumption is that providing equitable recommendation utility at the **group level** is sufficient to deem the method fair. For PF hence, this implies that both privileged and underprivileged provider groups should receive comparable exposure according to a predetermined target distribution (e.g., equal or proportional to catalog size). In this scenario, it can be even acceptable to compromise some recommendation accuracy to maintain the equity of recommendation exposure at the group level.
Various fairness enhancement techniques have been developed to achieve this goal, including pre-processing, in-processing, and post-processing methods [15; 17; 21]. Our study focuses on post-processing fairness ranking techniques that are adaptable to different recommendation algorithms and contexts without being tied to the core recommendation algorithm, also referred to as the "base ranking model". These techniques have gained significant attention due to their ability to transform black-box ranking methods into fair rankings and since they do not require re-training if fairness or protected groups change. This feature is particularly useful when group and fairness definitions are dynamic and re-training a model is costly. We review here several applications of fairness ranking approaches in recommender systems research. Ferraro et al. [9] propose a re-ranking algorithm that prioritizes user-oriented fairness and explicitly addresses gender and music bias. They evaluated the algorithm's effectiveness on a limited range of base ranking models, which included collaborative filtering approaches and well-known baselines such as MostPop. Yalcin and Bilge [24] address popularity bias in _group recommendations_ by adapting a re-ranking approach and proposing two strategies that incorporate popularity and group ratings. Similarly, Li et al. [13] present a user-oriented fairness re-ranking (UFR) method to address the unfairness problem in recommendations by adding constraints to evaluation objectives in the optimization algorithm. Rahmani et al. [19] extend these studies by examining UFR across different group fairness definitions and settings based on attributes and domains. In a similar work, Naghiaei et al. [16] also explore the applicability of fairness re-ranking in various scenarios, such as provider and joint consumer-provider fairness re-ranking, using a large number of datasets. These research studies highlight the versatility of fairness re-ranking optimization in addressing algorithmic biases and promoting fairness in various recommendation and fairness settings. Our study aims to investigate the **harns and consequences** of _producer-centered fairness re-ranking (PFR)_ on individuals within a group, with a particular emphasis on the exposure of less popular items in sub-groups. The main objective is to determine if the tPFR method successfully achieves the intended producer's goal of enhancing the visibility of these items.
We suggest a modification to the tPFR algorithm for improving both group-level and sub-group-level fairness, which we evaluate using the mean novelty of less popular items in sub-groups as a metric to maximize. The revised approach is referred to as **aPFR** and is introduced in various versions in this work.1
Footnote 1: While the focus of this work is on provider fairness, the findings and insights obtained may have wider implications for other fairness scenarios, including consumer fairness and two-sided markets.
_The Fairness Harms Resulting from tPFR_. It is crucial to re-evaluate the underlying premise of group fairness and analyze the **implications** of the group fairness provided by PFR. While it is natural to assume that all these fairness interventions are aimed at promoting fairness in recommendations to providers, we must scrutinize this assumption more closely and determine if we have been successful in promoting fairness in society. To accomplish this, we will examine a specific example, as demonstrated in Figure 1 and Table 1.
Figure 1 demonstrates the distribution of item novelty in the recommendation lists produced by the MultiVAE model on the Amazon Luxury Beauty dataset used in our experiments. The figure comprises five plots, each representing the novelty distribution across users. Table 1 presents the numerical values for the evaluation metrics, which are calculated for two different CF models: MultiVAE and NeuMF. It is worth noting that the traditional PFR, as explored in [16] and originally derived from the work of Li et al. [13], aims to enhance Group Fairness (Goal) while minimizing the impact on Accuracy (Cost 1). The current work at hand introduces the concept of "sub-group cold-item" fairness as an additional cost of the PFR intervention (Cost 2) to be measured.
Table 1 shows that tPFR successfully achieves a significant reduction in the group-level fairness metric from 46.18 to 22.56, which is more than a 51% reduction, at the cost of a moderate sacrifice in accuracy from 0.00934 to 0.00869 (or 7%). However, the results also reveal that tPFR causes a substantial reduction in the fairness of items within each sub-group, from 0.74 to 0.29, which is a 61% reduction.\({}^{2}\) In simpler terms, the outcome of tPFR is as follows: it achieves group-level fairness (Goal 1, +51%), with a trade-off in accuracy (Cost 1, -7%). However, it harms the exposure of cold items within each sub-group, resulting in a reduction of sub-group fairness (Cost 2, -61%). Therefore, even though tPFR appears fair
Table 1. Cost-benefit analysis in Fig. 1's optimization scenarios for Group Fairness (PFR goal), Accuracy and Sub-group Fairness (PFR costs), on the Amazon Luxury Beauty dataset, with MultiVAE and NeuMF as baseline models. Note that novelty is directly related to fairness, which we analyze at the sub-group level, referred to in this work as sub-group cold-item fairness.

**MultiVAE** (_plotted in Figure 1_):

| Metric | Role | Base | tPFR | LaPFR | MaPFR | HaPFR |
| --- | --- | --- | --- | --- | --- | --- |
| Group Fairness ↓ | Goal | 46.18 | 22.56 | **6.42** | **1.1** | **2.4** |
| Accuracy ↑ | Cost 1 | 0.00934 | 0.00869 | **0.00931** | 0.00708 | 0.00545 |
| Novelty Adv. Group ↑ | – | 0.72 | 0.16 | **0.3** | **0.52** | **0.91** |
| Novelty Disadv. Group ↑ | – | 0.76 | 0.43 | **0.53** | **0.63** | **0.85** |
| Avg. Sub-Group Nov. ↑ | Cost 2 | 0.74 | 0.29 | **0.42** | **0.58** | **0.88** |

**NeuMF:**

| Metric | Role | Base | tPFR | LaPFR | MaPFR | HaPFR |
| --- | --- | --- | --- | --- | --- | --- |
| Group Fairness ↓ | Goal | 45.51 | 2.26 | 2.26 | 2.26 | – |
| Accuracy ↑ | Cost 1 | 0.01152 | 0.01017 | **0.00951** | **0.00916** | 0.00789 |
| Novelty Adv. Group ↑ | – | 0.74 | 0.15 | **0.24** | **0.32** | **0.45** |
| Novelty Disadv. Group ↑ | – | 0.79 | 0.55 | **0.55** | **0.57** | **0.61** |
| Avg. Sub-Group Nov. ↑ | Cost 2 | 0.77 | 0.35 | **0.4** | **0.45** | **0.53** |

Note: Our objective in this work is to maximize the visibility of cold items within sub-groups, and we use the metric **Avg. Sub-Group Nov.** to measure this. This aligns with the provider's goal of increasing exposure to less popular items while maintaining accuracy (cf. Section 2.2). We use the terms "sub-group cold-item fairness" and "sub-group fairness" interchangeably in this work to refer to the same notion of fairness.

Figure 1. Comparison of concentration bias in the novelty distribution of user recommendation lists between traditional provider fairness re-ranking (tPFR) and amended PFR: analysis on Amazon Luxury Beauty using the MultiVAE model. (**Left**) Within the advantaged group, (**Middle**) within the disadvantaged group, (**Right**) overall in recommendation lists. \(\Delta_{n}\) measures the improvement in the novelty of the recommended items, while \(\Delta_{a}\) quantifies the cost/gains in terms of the recommendation accuracy.
at the group level and increases the exposure of the disadvantaged item group (less popular items), it harms the exposure of cold items within each sub-group, which contradicts the goal of provider fairness (cf. Section 2.2).
To address this issue, we propose an amendment to PFR, called aPFR, which introduces a multiplicative novelty term into the re-ranker regularization function to control the novelty of recommended items. These plots are arranged from left to right in the following order:
* **Blue curve.** The baseline ranking model (before fairness);
* **Orange curve.** The tPFR algorithm, which was originally derived from a study by Li et al. [13], and explored in [16];
* **Green curve.** It represents the proposed amendment with light emphasis (LaPFR) on long-tail items within each sub-group (\(\gamma=0.1\));
* **Red curve.** It represents the proposed amendment with medium emphasis (MaPFR) on long-tail items within each sub-group (\(\gamma=0.33\));
* **Violet curve.** It represents the proposed amendment with heavy emphasis (HaPFR) on long-tail items within each sub-group (\(\gamma=1\)).
The parameter \(\gamma\) is used in the re-ranker to regulate cold-item exposure, and our evaluation shows that our method achieves higher group-level fairness and more exposure for less popular items in sub-groups compared to tPFR, by simultaneously regularizing group-level fairness and cold-item exposure.
The Light-, Medium-, and Heavy-Amended PFR (LaPFR, MaPFR, and HaPFR) - shown in green, red, and violet curves, respectively - promote sub-group fairness, increasing it from 0.29 to 0.42, 0.58, and 0.88, respectively, compared to tPFR. Additionally, they improve or maintain both group-level fairness (from 22.56 to 6.42, 1.1, and 2.4) and accuracy (from 0.00869 to 0.00931, 0.00708, and 0.00545), as shown in Table 1. For example, the LaPFR approach not only reduces the group-fairness deviation with respect to tPFR, from 22.56 to 6.42 (a 71.61% reduction), but also increases accuracy, from 0.00869 to 0.00931 (7.14%). This increase in accuracy can be attributed to the larger search space in LaPFR (or in general aPFR), which is able to retrieve good items from long-tail distributions in each sub-group. As we increase the power of novelty, we are able to enhance sub-group fairness and group fairness; however, this comes at the cost of overall system accuracy. By exploring the aPFR approaches, we are able to identify the "sweet spots" where we can maintain or improve accuracy while enhancing fairness at different levels.
_Summary._ According to the studies and analyses conducted, the pursuit of group-level fairness through tPFR can result in reduced exposure of less popular items (i.e., less fairness), which contradicts the intended purpose of these methods, even if the algorithm considers the outcome fair. We offer an explanation for this unfairness at the sub-group level, which we believe is caused by the fixed threshold of binary grouping, resulting in greater harm to cold items within sub-groups, possibly due to user preference for short-head items in sub-groups. Our proposed solution effectively tackles this issue by incorporating a multiplicative novelty term into the re-ranker cost function. Through extensive empirical evaluation, it has been demonstrated that the combination of the novelty term and the group-level fairness term yields enhancements in **accuracy**, **group fairness**, and the proposed **sub-group fairness**. Moreover, depending on the task and the interest of the stakeholder (e.g., the provider), the proposed solution can further enhance fairness at both levels by prioritizing cold items in MaPFR and HaPFR, at the expense of a mild reduction in accuracy.
_Contributions._ In this context, this research aims to reproduce previous tPFR models, paying specific attention to the harm they do to within-group individuals associated with colder items, in particular
* We conduct a thorough analysis of the commonly used provider fairness re-ranking (PFR) approach in prior research and draw attention to its negative impacts on exposure of cold items when scrutinized at a more granular group level;
* We propose an amendment to the classic tPFR approach by introducing a _novelty multiplier_ term to the cost function to control the degree of novelty in the recommended items. Our approach balances group-level fairness and sub-group fairness by fine-tuning the novelty multiplier and its interaction with fairness regularization. This enables us to promote the visibility of cold items in sub-groups while maintaining high accuracy levels;
* We conducted an ablation study where we tested various combinations of regularization parameters for both the fairness parameters (capturing group-level fairness) and the cold-start term (capturing sub-group fairness). The objective was to improve the generalizability of our research and investigate the interaction between the amended tPFR method and the ablation parameters.
* We conduct extensive experiments by applying re-rankers on top of various competitive baseline collaborative filtering (CF) recommendation approaches. Specifically, we explore five domains, namely movie (MovieLens100K), luxury-wellness (Amazon Luxury Beauty), e-commerce (Amazon Prime Pantry), POI (Foursquare), and music (LastFM), which have different feedback types, including explicit and implicit. We evaluate our proposed aPFR amendments on five diverse baseline recommendation models: BPR, NeuMF, MultiVAE, LightGCN, and NGCF, giving us a total combination of 5 datasets \(\times\) 5 models = 25 CF simulations.
In the following, we investigate the reproducibility technique (cf. Section 2), then we replicate prior experiments with the proposed amendment (cf. Section 3), we highlight the results and findings of our research (cf. Section 4), and we draw consequent conclusions that pave the way for more fair future work (cf. Section 5).
## 2. Reproducibility Technique
In this section, we define the PFR implementation we use and reproduce, we present the datasets we run our experiments on, the base ranking models we employ, the fairness definitions our work is based upon, and its evaluation methods and metrics.
### Producer Fairness Re-ranking (PFR)
#### 2.1.1. Background
The optimization-based re-ranking approach used in this study, known as tPFR, has been previously investigated in recommendation system research, as discussed in (Li et al., 2018; Li et al., 2019; Li et al., 2019). The primary differences between these strategies stem from their focus on different stakeholders and their use of constrained or unconstrained optimization strategies. Initially, Li et al. (Li et al., 2018) considered using an unconstrained optimization-based re-ranking approach for consumer fairness in CF settings. However, subsequent research (Li et al., 2019; Li et al., 2019) focused on consumer or CP-fairness by utilizing a constrained approach. In our study, we built upon the constrained version of the approach and focused on _provider fairness_. We chose this focus for two main reasons: _firstly_, it allowed for a deeper understanding of the method and what PFR achieves, particularly given that PFR operates at a user level rather than a CP level, and _secondly_, encouraging the promotion of cold-items has notable commercial advantages, which is in line with the motivation behind provider fairness. (cf. Section 2.2).
#### 2.1.2. Formal description of the proposed amendment to PFR
To provide context for our proposed amendment, we present a brief overview of the PFR method. PFR uses the top-N recommendation list and relevance scores from the unfair base ranker and applies an optimization-based re-ranking algorithm. The objective is to maximize the total
relevance scores while minimizing the deviation from producer fairness (GF). This approach has been explored in previous studies on recommendation systems, such as those discussed in (Han et al., 2016; Wang et al., 2017; Wang et al., 2018).
The re-ranking optimization objective can be formalized as follows, with the decision vector \(X\) selecting the items to be included in the re-ranked list:
\[\max_{X}\;\sum_{i=1}^{N}S_{i}\cdot N_{i}^{\gamma}\cdot X_{i}\;-\;\lambda\cdot\mathrm{GF}(X,\mathcal{I})\qquad\text{s.t.}\;\sum_{i=1}^{N}X_{i}=K,\quad X_{i}\in\{0,1\} \tag{1}\]
The optimization problem aims to maximize total preference scores \(\mathbf{S_{i}}\) and minimize deviations from fairness by recommending a specific number \(K\) from the top-\(N\) recommendation list of items to each user that minimizes GF. Similar to (Wang et al., 2017), GF here computes the difference between the expected recommendation utility of items in the advantaged group and that of items in the disadvantaged group. In addition, we extend the use of GF by introducing a weighted version that compares the deviation from parity exposure to a target distribution. This enables us to simultaneously consider the fairness concerns of multiple producer groups and evaluate the performance of the recommender system against a pre-defined fairness target (cf. Section 2.2).
The term \(\mathbf{N}\) introduces the novelty dimension in the re-ranker objective by influencing the selection of items that have a higher novelty score, enabling the method to tackle sub-group fairness jointly with the other goals. The hyperparameters \(\lambda\) and \(\gamma\) determine the emphasis placed on group fairness deviation and sub-group cold-item fairness respectively, with a value of 0 aligning with the baseline recommendation list. The optimization problem mentioned is a mixed-integer linear programming (MILP) problem and is known to be NP-hard. However, commercial optimization solvers such as Gurobi can be used to solve it. The MILP problem can be converted to an instance of the Knapsack problem, where the objective is to select items for each user to maximize the overall score, with the assumption that each item has a unit weight and the total weight is limited by the fixed list size.
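To make the optimization concrete, the sketch below poses Eq. (1) with the python-mip package mentioned in Section 2.1.3. The parity-target form of GF and its linearization through an auxiliary variable \(t\) are our assumptions for illustration; the released repository contains the authors' exact formulation.

```python
# Sketch of the per-user re-ranking MILP in Eq. (1) using python-mip.
# Assumption: GF is modeled as |share of advantaged items - target p_A|,
# linearized with an auxiliary continuous variable t.
from mip import Model, xsum, maximize, BINARY

def rerank(S, nov, is_adv, K, lam=0.1, gamma=0.33, p_A=0.5):
    """S: relevance scores of the top-N candidates, nov: item novelty scores,
    is_adv: 1 if the candidate is a short-head (advantaged) item."""
    n = len(S)
    m = Model()  # CBC by default; Gurobi is used when available
    x = [m.add_var(var_type=BINARY) for _ in range(n)]
    t = m.add_var(lb=0.0)  # t >= |deviation from the target share|

    dev = xsum(is_adv[i] * x[i] for i in range(n)) - p_A * K
    m += t - dev >= 0   # t >=  dev
    m += t + dev >= 0   # t >= -dev
    m += xsum(x) == K   # fixed list size, as in the Knapsack reduction

    # relevance weighted by novelty**gamma, penalized by the group-fairness deviation
    m.objective = maximize(
        xsum(S[i] * nov[i] ** gamma * x[i] for i in range(n)) - lam * t
    )
    m.optimize()
    return [i for i in range(n) if x[i].x >= 0.99]
```

Setting \(\gamma=0\) makes the novelty multiplier equal to 1 for every item, recovering the tPFR objective, while \(\lambda=\gamma=0\) returns the top-\(K\) items of the base ranker.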
#### 2.1.3. Implementation
The implementation of tPFR(Wang et al., 2017) is publicly available on Google Colab via their GitHub repository.3 It utilizes Cornac,4 a recommendation framework, and the optimization framework MIP5, which is based on the commercial optimization solver Gurobi.6 The code includes implementation steps for training baseline models, testing pipelines, and fairness-aware re-ranking and evaluation modules. However, due to Colab's recent discontinuation of support for TF1, their code can no longer be executed, which seriously hinders its reproducibility. For instance, running neural models like NeuMF is no longer possible. To address this issue, we employ RecBole,7 a highly flexible recommendation framework built on PyTorch, in our study. We are making our implementation publicly accessible through a dedicated anonymized repository, which is discussed in more detail in Section 2.3.
Footnote 3: [https://github.com/rahmanidashti/CPFairRecSys](https://github.com/rahmanidashti/CPFairRecSys)
Footnote 4: [https://cornac.preferred.ai](https://cornac.preferred.ai)
Footnote 5: [https://www.python-mip.com](https://www.python-mip.com)
Footnote 6: [https://www.gurobi.com](https://www.gurobi.com)
### Fairness Definitions and Metrics
The focus of this work is on provider fairness and explores fairness within this context. Before delving into the analysis, we revisit the primary fairness goal in provider fairness and then examine the level of fairness within this framework.
**Goal. "Provider's Objective of Fairness in Recommender Systems"**
From a provider's perspective, the ideal goal is to enhance the exposure of cold or less popular items offered by the RS while simultaneously ensuring an acceptable level of recommendation quality, which means maintaining recommendation accuracy with little or no reduction.
We investigate fairness at two _hierarchical grouping levels_ that have received limited attention in prior works to achieve the above objective. Specifically, we aim to examine the goal of provider fairness for the exposure of cold items at different levels of granularity, namely, the group and sub-group levels. These levels are elaborated below:
Definition 1 (Group-level provider fairness).: _In the context of group-level fairness, a recommender system is deemed fair towards producers if it offers equitable recommendation utility or exposure to both privileged and underprivileged groups, as determined by a "target representation"._
Note that target representation refers to the ideal proportion or distribution of exposure of different groups in a recommendation system, as discussed in (Han et al., 2017). This paper considers two target representations for group fairness in recommender systems: (i) parity, and (ii) proportionality to corpus presence. Parity aims for equal resource allocation for each group, while proportionality targets allocation proportional to the number of items in the corpus belonging to a given group. We use a popularity-based segmentation approach, categorizing items as either short-head or popular items (top 20%) or long-tail or unpopular items (bottom 80%). The re-ranking objective integrates group-level fairness by enhancing the exposure of items from both groups based on a target representation (fair distribution) specified by the system designer. The target representations used in this work are denoted as \(GF_{eq}\) and \(GF_{prop}\), which correspond to \(p_{f}\) values of \([0.5,0.5]\) and \([0.2,0.8]\), respectively.
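As a sketch, the deviation of group exposure from a target representation can be computed as below; the exact weighting inside the paper's DPF term may differ, so this is an illustrative reading rather than the released implementation.

```python
# Sketch: deviation of group exposure from a target representation p_f,
# with p_f = (0.5, 0.5) for GF_eq or (0.2, 0.8) for GF_prop.
import numpy as np

def gf_deviation(rec_lists, head_set, p_f=(0.5, 0.5)):
    """rec_lists: recommended item lists (one per user); head_set: short-head ids."""
    n_head = sum(1 for recs in rec_lists for i in recs if i in head_set)
    n_total = sum(len(recs) for recs in rec_lists)
    exposure = np.array([n_head / n_total, 1.0 - n_head / n_total])
    return np.abs(exposure - np.array(p_f)).sum()  # 0 means exposure is on target
```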
Definition 2 (Sub-group cold item fairness).: _Sub-group fairness, as defined in the context of provider fairness, refers to the level of fairness at an individual level within each sub-group. In our approach, we specifically focus on "cold items", where we assume that increased exposure to these items corresponds to greater sub-group fairness. This definition aligns with the objective of provider fairness, which aims to enhance the visibility of less popular items._
We group each recommendation list into two sub-groups, \(G=[G_{A},G_{B}]\), where \(G_{A}\) and \(G_{B}\) represent the popular and non-popular item groups. We then compute a novelty score for each item as \(N(i)=-\log_{2}(p_{i})\), where \(p_{i}\) is the popularity score of item \(i\) in the original catalog \(C\). We denote the novelty scores of the recommended items in sub-groups \(G_{A}\) and \(G_{B}\) as \(n_{A}\) and \(n_{B}\), respectively. We then calculate the novelty scores for both sub-groups and compute their average to obtain the SGF score (short for Sub-group cold-item fairness), given by \(SGF=(n_{A}+n_{B})/2\). A higher SGF value indicates that both sub-groups provide more exposure to colder items within their respective groups, thereby achieving better SGF.
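A sketch of the SGF computation for one recommendation list is shown below, with per-item novelty taken as \(-\log_2\) of the item's catalog popularity share (our reading of \(p_{i}\)):

```python
# Sketch of the SGF score from Definition 2 for a single recommendation list.
import numpy as np

def sgf(rec_items, pop_share, head_set):
    """rec_items: recommended item ids; pop_share: catalog popularity p_i per item;
    head_set: ids of short-head (popular) items."""
    nov = {i: -np.log2(pop_share[i]) for i in rec_items}
    n_A = np.mean([nov[i] for i in rec_items if i in head_set])      # popular sub-group
    n_B = np.mean([nov[i] for i in rec_items if i not in head_set])  # long-tail sub-group
    return (n_A + n_B) / 2  # higher = colder items are more visible in both sub-groups
```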
### Code and Datasets
The entire pipeline code, which includes training and inference of the base ranking models and the fairness-aware re-ranking stage, along with pre-processed data used to generate the results in this paper, is released. Furthermore, we offer ready-to-run Jupyter Notebooks that have been tested on Google Colab to obtain the results and plots mentioned in the paper. All the relevant materials can be found on the anonymized GitHub repository.8
## 3. Replicating Prior Experiments with the Proposed Amendment.
This section outlines experiments conducted to assess various reproducibility aspects, including the obtained results and our observations.
### Setting.
Below are all the details concerning the experimental setup.
#### 3.1.1. Datasets.
We employ five publicly available datasets coming from different domains - movie, luxury-wellness, e-commerce, POI, and music - which contain different feedback types (i.e., explicit and implicit). The datasets are MovieLens-100K, Amazon Luxury Beauty, Amazon Prime Pantry, Foursquare, and LastFM. Before using them, we apply \(k\)-core data filtering with \(k=10\) to ensure they contain a sufficient number of ratings per user (\(\frac{R}{U}\)) and ratings per item (\(\frac{R}{I}\)), leading to a higher density (\(\frac{R}{U\times I}\)) and ultimately a more manageable overall size to run our experiments in a dynamic setting with enough feedback. The dataset statistics are described in detail in Table 2.
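As an illustration, the 10-core filtering described above can be sketched as an iterative pandas filter; the column names used here are hypothetical.

```python
# Sketch of k-core filtering: repeatedly drop users and items with fewer
# than k interactions until the interaction table stops shrinking.
import pandas as pd

def k_core(df: pd.DataFrame, k: int = 10) -> pd.DataFrame:
    """df: one interaction per row, with (hypothetical) columns 'user' and 'item'."""
    while True:
        before = len(df)
        df = df[df.groupby("user")["item"].transform("count") >= k]
        df = df[df.groupby("item")["user"].transform("count") >= k]
        if len(df) == before:  # fixed point: every user and item has >= k ratings
            return df
```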
#### 3.1.2. Evaluation Method.
Our settings involve a train-validation-test split for the data with ratios 80%, 10%, and 10%, respectively. In contrast to (Kumar et al., 2018), we evaluate our approach under the assumption that the relevance scores are not accessible to the post-processing algorithm, and instead are estimated based on the training data. For the purpose of re-ranking, we divide the items into two different groups based on their popularity, selecting the top 20% in terms of interactions as the short-head or popular items, and the bottom 80% as the long-tail or unpopular items. Furthermore, we compute a novelty score \(N_{i}\) for each item, characterizing it as colder or warmer. In addition, we define the producer group fairness evaluation metric (\(GF\)) to capture the performance of models _w.r.t._ group-level fairness, and the sub-group cold-item fairness metric (\(SGF\)) to evaluate their performance on the sub-group level. \(GF\) uses the deviation from producer group fairness \(DPF\) presented in Equation 1 to compute the performance on the first level, while \(SGF\) is a direct indicator of the average of both sub-groups' novelty scores (cf. Section 2.2).
We calculate the overall performance of our models by averaging the accuracy, sub-group fairness (SGF), and group fairness (GF) metrics into the 'All' metric, as shown in Table 4, where \(All=w_{1}\cdot Acc+w_{2}\cdot GF+w_{3}\cdot SGF\) with \(w_{1}+w_{2}+w_{3}=1\). In this work, we set \(w_{i}=\frac{1}{3}\); however, the weights \(w_{i}\) can be adjusted to give more importance to specific evaluation metrics.
Furthermore, we employ the symbol \(\Delta_{B}\) to indicate the percentage improvement between the 'All' metric of our modified aPFR models and the **Base** model, while \(\Delta_{t}\) measures the percentage improvement between the 'All' metric of our modified aPFR and the **tPFR**.
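As a sketch of how these aggregate scores are obtained (assuming, as the definition above implies, that accuracy, GF, and SGF are pre-normalized to comparable scales), the 'All' score and the two deltas reduce to a few lines:

```python
# Sketch of the aggregate 'All' score and the improvement deltas of Table 4.
def all_score(acc, gf, sgf, w=(1/3, 1/3, 1/3)):
    return w[0] * acc + w[1] * gf + w[2] * sgf

def delta(all_new, all_ref):
    """Relative improvement over a reference: Base for delta_B, tPFR for delta_t."""
    return (all_new - all_ref) / all_ref

# e.g., BPR+LaPFR vs. Base on Amazon Luxury Beauty (values from Table 4):
print(round(delta(0.6187, 0.5329), 3))  # 0.161, matching the reported delta_B
```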
#### 3.1.3. Core CF Recommendation Models.
We use a suite of competitive, CF recommendation models as baseline ranking models in our post-processing approach, as summarized below:
Table 2. Statistics of the final datasets used in this work after \(k\)-core pre-processing.

| Dataset | Users | Items | Ratings | R/U | R/I | Density | Item Gini |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MovieLens100K | 944 | 1,683 | 100,000 | 106.04 | 59.45 | 0.063% | 0.629 |
| Amazon Prime Pantry | 6,049 | 4,367 | 78,186 | 12.91 | 17.9 | 0.003% | 0.279 |
| Amazon Luxury Beauty | 2,719 | 1,028 | 18,466 | 6.8 | 18 | 0.006% | 0.299 |
| Foursquare | 1,083 | 5,135 | 147,938 | 136 | 29 | 0.027% | 0.422 |
| LastFM | 1,867 | 1,529 | 62,795 | 33.73 | 41.2 | 0.022% | 0.529 |
* **BPR**[20]: A conventional recommendation model that employs matrix factorization to learn user and item embeddings of low dimensionality, and optimize the model based on the pairwise ranking of items for each user to predict whether a user prefers a given item over another;
* **NeuMF**[11]: A hybrid recommendation model that combines matrix factorization with a neural network architecture (MLP). It learns user and item embeddings, capturing both linear and non-linear patterns in user-item interactions. The model uses a pairwise ranking objective to optimize the model;
* **MultiVAE**[14]: A non-linear probabilistic deep learning model that extends a variational autoencoder (VAE) structure to collaborative filtering for implicit feedback, and acquires the underlying representations of users and items from their interactions to create recommendations in an unsupervised way.
* **LightGCN**[10]: A pure collaborative filtering method that utilizes a simplified version of graph convolutional networks (GCNs) without nonlinear activation functions and additional weight matrices. It learns user and item embeddings through graph propagation rules and user-item interactions, making it scalable and efficient.
* **NGCF**[22]: A graph-based recommendation model that employs a neural network architecture and learns high-order connectivity and user-item signals based on the exploitation of the user-item graph structure, by propagating embeddings on it.
#### 3.1.4. Hyperparameter Tuning
The RecBole public library is utilized to implement and apply the baseline algorithms. Hyperparameter tuning is performed for both classical and deep recommendation models using a greedy search strategy. The best configurations are chosen based on the performance on the validation set. For BPR, we adjust the embedding size and learning rate hyperparameters, with 20 different trials, in the ranges [8, 16, 32, 64, 128] and [0.01, 0.005, 0.001, 0.0001], respectively. For NeuMF, we vary the learning rate, dropout probability, MLP hidden size, MF embedding size, and MLP embedding size hyperparameters, with 108 different trials, in the ranges [0.01, 0.005, 0.001], [0.1, 0.3], ['[64, 32, 16]', '[32, 16, 8]'], [64, 32, 16], and [64, 32, 16], respectively. For MultiVAE, we adjust the learning rate, latent dimension, MLP hidden size, and dropout probability hyperparameters, with 90 different trials, in the ranges [0.01, 0.005, 0.001], [8, 16, 32, 64, 128], [300, 600, 800], and [0.3, 0.5], respectively. For LightGCN, we vary the embedding size, learning rate, number of layers, and regularization weight hyperparameters, with 270 different trials, in the ranges [8, 16, 32, 64, 128], [0.01, 0.005, 0.001], [1, 2, 3, 4], and [1e-04, 1e-03, 1e-02], respectively. Lastly, for NGCF, we tune the learning rate, hidden size list, regularization weight, node dropout, message dropout, and decay hyperparameters, with 108 different trials, in the ranges [0.01, 0.005, 0.001], ['[64, 64, 64]', '[128, 128, 128]', '[256, 256, 256]'], [1e-5, 1e-4], [0.0, 0.1, 0.2], [0.0, 0.1, 0.2], and [1e-4, 1e-2, 1e-1], respectively.
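For concreteness, a single point from the BPR grid above could be trained through RecBole's quick-start interface as sketched below; the dataset name and the config keys follow RecBole's conventions and are given as an illustrative assumption rather than the authors' exact scripts.

```python
# Sketch: training one BPR configuration from the search grid with RecBole.
from recbole.quick_start import run_recbole

run_recbole(
    model="BPR",
    dataset="ml-100k",  # RecBole's bundled MovieLens-100K atomic files
    config_dict={
        "embedding_size": 64,    # from [8, 16, 32, 64, 128]
        "learning_rate": 0.001,  # from [0.01, 0.005, 0.001, 0.0001]
        "eval_args": {"split": {"RS": [0.8, 0.1, 0.1]}},  # 80/10/10 split
    },
)
```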
Table 3. Harm values for different models and datasets at \(\lambda=0.1\), where \(H\downarrow=1-N\uparrow\).

| Type | BPR (Luxury Beauty) | NeuMF (Luxury Beauty) | LightGCN (Luxury Beauty) | BPR (Foursquare) | NeuMF (Foursquare) | NGCF (Foursquare) |
| --- | --- | --- | --- | --- | --- | --- |
| Base | 0.23 | 0.24 | 0.22 | 0.27 | 0.28 | 0.3 |
| tPFR | 0.63 | 0.65 | 0.63 | 0.71 | 0.69 | 0.69 |
| LaPFR | 0.55 | 0.61 | 0.54 | 0.65 | 0.58 | 0.62 |
| MaPFR | 0.46 | 0.56 | 0.38 | 0.56 | 0.51 | 0.38 |
| HaPFR | 0.14 | 0.48 | 0.16 | 0.06 | 0.41 | 0.06 |
## 4. Findings
This section details our experiments on several reproducibility aspects and presents our observations and results.
### Sub-Group Fairness Harm Resulting from PFR
Table 4 presents a summary of the primary outcomes from reproducing earlier research using tPFR, as well as the proposed amendments (LaPFR, MaPFR, and HaPFR), across two datasets, namely Amazon Luxury Beauty and Foursquare. From Table 4, we have derived Table 3 in this section, which summarizes the negative impact of each model (named Harm). To calculate the harm, we normalize sub-group novelty to the range [0, 1] across each dataset and compute it as \(H=1-N\). Essentially, the concept is that the lower the sub-group fairness, the greater the harm to cold items.
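A sketch of this computation is given below; reading the text as a per-dataset min-max normalization over all model variants is our assumption.

```python
# Sketch of the harm score of Table 3: normalize sub-group novelty (SGF) to
# [0, 1] across the variants of one dataset, then H = 1 - N.
import numpy as np

def harm(sgf_values):
    """sgf_values: SGF of Base, tPFR, LaPFR, MaPFR, HaPFR on one dataset."""
    v = np.asarray(sgf_values, dtype=float)
    n = (v - v.min()) / (v.max() - v.min())  # min-max normalization
    return 1.0 - n  # lower harm = more exposure for cold items
```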
Based on Table 3, it can be observed that applying tPFR has a strong negative impact on sub-group novelty across all models, as indicated by its higher H values compared to the base models, while the proposed amendments progressively reduce this harm. For example, in the case of the Amazon Luxury Beauty dataset, the H value changes from 0.23 for the base BPR model to 0.63, 0.55, 0.46, and 0.14 for tPFR, LaPFR, MaPFR, and HaPFR, respectively. This means that the proposed HaPFR model has the lowest harm value and introduces the least harm to sub-group novelty among the PFR variations. Similarly, for the Foursquare dataset, the H value for the base BPR model is 0.27, and this increases to 0.71, 0.65, and 0.56 for tPFR, LaPFR, and MaPFR, respectively, indicating a negative impact on sub-group novelty. However, the HaPFR model has an H value of 0.06, much lower than the other PFR models, suggesting that it causes the least harm to sub-group novelty among the others.
Fig. 2 presents a graphical representation of the harm inflicted by tPFR on sub-group fairness across all five datasets. With the exception of the LastFM dataset, tPFR is shown to have the highest harm H in nearly all cases. For the LastFM dataset, the lightest version of the amendment results in slightly higher harm to sub-group novelty, but this
Table 4. Summary of the primary outcomes of reproducing tPFR and the proposed amendments (LaPFR, MaPFR, and HaPFR) on the Amazon Luxury Beauty and Foursquare datasets.

**Amazon Luxury Beauty:**

| Model | Type | NDCG ↑ | \(GF_{eq}\) | \(GF_{prop}\) | SGF ↑ | All ↑ | \(\Delta_{B}\) ↑ | \(\Delta_{t}\) ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BPR | Base | 0.0092 | 0.5109 | 0.073 | 0.7679 | 0.5329 | - | - |
| BPR | tPFR | 0.008 | 0.8885 | 0.6878 | 0.369 | 0.5912 | 0.1094 | - |
| BPR | LaPFR | 0.0073 | 0.9701 | 0.8705 | 0.4495 | 0.6187 | 0.161 | 0.0465 |
| BPR | MaPFR | 0.0066 | 0.9599 | 0.9618 | 0.5404 | 0.6314 | 0.1848 | 0.068 |
| BPR | HaPFR | 0.0067 | 0.9967 | 0.9647 | 0.8627 | 0.7442 | **0.3965** | **0.2588** |
| NeuMF | Base | 0.0115 | 0.4791 | 0.024 | 0.7679 | 0.7425 | - | - |
| NeuMF | tPFR | 0.0108 | 0.9857 | 0.9444 | 0.3551 | 0.7407 | – | - |
| NeuMF | LaPFR | 0.0098 | 0.9857 | 0.9444 | 0.392 | 0.7129 | -0.0399 | -0.0375 |
| NeuMF | MaPFR | 0.01 | 0.9857 | 0.9444 | **0.4389** | 0.7379 | -0.0062 | -0.0038 |
| NeuMF | HaPFR | 0.0032 | 0.9857 | 0.9444 | 0.5202 | 0.6898 | -0.071 | -0.0687 |
| MultiVAE | Base | 0.0093 | 0.481 | 0.0 | 0.7426 | 0.5301 | - | - |
| MultiVAE | tPFR | 0.0087 | 0.8203 | 0.44 | 0.2985 | 0.572 | 0.079 | - |
| MultiVAE | LaPFR | 0.0103 | 0.9731 | 0.8198 | 0.4070 | 0.7329 | **0.3826** | **0.2813** |
| MultiVAE | MaPFR | 0.0074 | 0.9992 | 0.9993 | 0.5968 | 0.6761 | 0.2754 | 0.182 |
| MultiVAE | HaPFR | 0.0052 | 0.9938 | 0.9316 | 0.8821 | 0.6878 | 0.2975 | 0.2024 |
| LightGCN | Base | 0.0092 | 0.545 | 0.0909 | 0.7761 | 0.5472 | - | - |
| LightGCN | tPFR | 0.0095 | 0.8352 | 0.5232 | 0.3719 | 0.6334 | 0.1575 | - |
| LightGCN | LaPFR | 0.0087 | 0.9686 | 0.8529 | 0.4601 | 0.6827 | 0.2476 | 0.0778 |
| LightGCN | MaPFR | 0.0084 | 0.995 | 0.9448 | 0.6280 | 0.7342 | **0.3417** | **0.1591** |
| LightGCN | HaPFR | 0.0058 | 0.9754 | 0.8777 | 0.8401 | 0.6935 | 0.2674 | 0.0949 |
| NGCF | Base | 0.008 | 0.5117 | 0.0633 | 0.7694 | 0.4212 | - | - |
| NGCF | tPFR | 0.0067 | 0.9187 | 0.7534 | 0.37 | 0.5478 | 0.3006 | - |

**Foursquare:**

| Model | Type | NDCG ↑ | \(GF_{eq}\) | \(GF_{prop}\) | SGF ↑ | All ↑ | \(\Delta_{B}\) ↑ | \(\Delta_{t}\) ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BPR | Base | 0.0071 | 0.5092 | 0.045 | 0.7283 | 0.5155 | - | - |
| BPR | tPFR | 0.0063 | 0.7308 | 0.2945 | 0.2952 | 0.4904 | -0.0487 | - |
| BPR | LaPFR | 0.0061 | 0.851 | 0.4938 | 0.3466 | 0.5463 | 0.0597 | 0.114 |
| BPR | MaPFR | 0.0055 | 0.9099 | 0.8534 | 0.4360 | 0.6001 | 0.1641 | 0.2237 |
| BPR | HaPFR | 0.0018 | 0.998 | 0.9903 | 0.9359 | 0.6678 | **0.2954** | **0.3617** |
| NeuMF | Base | 0.0088 | 0.4965 | 0.0323 | 0.7216 | 0.5745 | - | - |
| NeuMF | tPFR | 0.0064 | 1.0 | 1.0 | 0.3131 | 0.5977 | 0.0404 | - |
| NeuMF | LaPFR | 0.006 | 1.0 | 1.0 | 0.422 | 0.6201 | 0.0794 | 0.0375 |
| NeuMF | MaPFR | 0.0056 | 1.0 | 1.0 | 0.4855 | 0.6311 | **0.0985** | **0.0559** |
| NeuMF | HaPFR | 0.0042 | 1.0 | 1.0 | 0.5944 | 0.6257 | 0.0891 | 0.0468 |
| MultiVAE | Base | 0.0111 | 0.4709 | 0.0061 | 0.7142 | 0.7226 | - | - |
| MultiVAE | tPFR | 0.0097 | 0.7274 | 0.2923 | 0.2690 | 0.5812 | -0.1957 | - |
| MultiVAE | LaPFR | 0.0095 | 0.8838 | 0.9303 | 0.3653 | 0.6566 | -0.0922 | 0.1287 |
| MultiVAE | MaPFR | 0.0056 | 0.9865 | 0.9478 | 0.5558 | 0.6599 | -0.088 | 0.1339 |
| MultiVAE | HaPFR | 0.0027 | 0.9994 | 0.9863 | 0.9657 | 0.704 | -0.0257 | **0.2113** |
| LightGCN | Base | 0.0074 | 0.5212 | 0.0592 | 0.7265 | 0.5339 | - | - |
| LightGCN | tPFR | 0.0086 | 0.7165 | 0.2796 | 0.2817 | 0.4965 | -0.0735 | - |
| LightGCN | LaPFR | 0.0067 | 0.8164 | 0.4938 | 0.3422 | 0.5551 | 0.0414 | 0.1241 |
| LightGCN | MaPFR | 0.0057 | 0.9617 | 0.8517 | 0.4925 | 0.6227 | 0.162 | 0.2542 |
| LightGCN | HaPFR | 0.002 | 0.9961 | 0.9699 | 0.9433 | 0.6762 | **0.2618** | **0.3619** |
| NGCF | Base | 0.0052 | 0.5308 | 0.0871 | 0.7064 | 0.4075 | - | - |
is remedied by MaPFR and HaPFR. This highlights the importance of balancing and regulating sub-group novelty and group-level fairness in producer fairness research.
### Proposed Amendment: aPFR
We will first present the improvement in Table 4 with respect to the base ranker and the tPFR, shown by \(\Delta_{B}\) and \(\Delta_{t}\), respectively.
* **Improvements with respect to base ranker (\(\Delta_{B}\)).** Positive changes can be noted in the \(\Delta_{B}\)-values, which represent the enhancement in the cumulative All metric compared to the Base (base ranker), for at least one of the versions of the proposed aPFR. For instance, on the Amazon dataset, BPR, MultiVAE, LightGCN and NGCF have the best values of 0.3965, 0.3826, 0.3417, and 0.614, respectively. Similar patterns can be seen in the Foursquare dataset, except for NeuMF (in Amazon) and MultiVAE (in Foursquare), which did not show an overall improvement across all three metrics with respect to the Base. Nevertheless, even in these cases, significant enhancements in recommendation fairness (\(GF_{eq}\) and \(GF_{prop}\)) and \(SGF\) were observed. For example, \(GF_{eq}\) increased from 0.4791 to 0.9857 in Amazon with NeuMF, or \(GF_{prop}\) increased from 0.0061 to 0.9863, while SGF increased from 0.7142 to 0.9657 in Foursquare with MultiVAE. These findings are intriguing because they indicate an average increase of \(\Delta_{B}\) by 20 to 40% compared to the base ranker, suggesting the effectiveness of our proposed amendment and its ability to achieve a better trade-off in comparison to the currently used fairness-unaware CF models.
* **Improvements with respect to the traditional PFR (\(\Delta_{t}\)).** The changes in \(\Delta_{t}\)-values, which measure the improvement of the cumulative All metric compared to tPFR, are even more noteworthy. It can be seen that in several cases, at least one of the variants of the proposed aPFR outperforms the others in terms of \(\Delta_{t}\). Here are a few examples. On the Amazon dataset, BPR, MultiVAE, and NGCF have the best \(\Delta_{t}\) values of 0.2588, 0.2813, and 0.241, respectively. Similarly, in the Foursquare dataset, the values of \(\Delta_{t}\) for BPR, LightGCN, and NGCF are 0.3617, 0.3619, and 0.3211, respectively. For the Amazon dataset, NeuMF does not show an overall improvement in this metric, while it offers only a slight improvement for the Foursquare dataset. Consistent with the trends observed in \(\Delta_{B}\), in these cases, the GF and SGF metrics either maintain their values or show an improvement compared to
Figure 2. Sub-Group Harm resulting from traditional provider fairness re-ranking approach tPFR and corrected via the proposed variations LaPFR, MaPFR, and HaPFR respectively. The figure shows the harm over five different datasets/domains and averages the results of five different baseline models.
tPFR. For example, SGF increases from 0.3551 to 0.5202 in Amazon and from 0.3131 to 0.5944 in Foursquare, while \(GF_{eq}\) and \(GF_{prop}\) remain at 1 in Foursquare. In summary here again, the results show an improvement of \(\Delta_{t}=20-40\%\) compared to tPFR in all experimental cases, which not only confirms the effectiveness of our proposed amendment to the classical tPFR in solving the sub-group fairness problem but also demonstrates its ability to maintain or improve system accuracy and group fairness.
To gain an overall understanding of the relationship between the three evaluation objectives, we have created radar plots in Figures 3, 4, and 5, which illustrate the interplay between metrics as we transition from one dataset to another. In the radar plots, a larger triangle indicates a better model in terms of the underlying metrics. Figure 3 displays an overview of the amendment, which is calculated by taking the **average of the five base ranking models**. Figures 4, and 5, on the other hand, focus on the methods MultiVAE and NGCF, respectively. It is evident that all versions of the proposed method are capable of improving SGF with respect to tPFR, and even the base ranker for HaPFR or MaPFR. Moreover, being fair towards sub-groups increases group-level fairness as well. LaPFR, MaPFR, and HaPFR are observed to maximize GF almost in every case, surpassing the performance of tPFR, which is a strategy specifically designed to increase GF. In Figure 4, a more detailed analysis of the model MultiVAE is presented and it can be seen that SGF is better than the base ranker more frequently with MaPFR, while accuracy levels are maintained or even increased in some cases, such as Amazon Luxury Beauty with LaPFR. Similarly, Figure 5 highlights the performance of NGCF, where it maintains high levels of accuracy with Foursquare, and MaPFR outperforms tPFR in terms of SGF and GF. However, with LastFM, NGCF fails to raise SGF and GF compared to tPFR, even though the traditional approach has the lowest accuracy. These results demonstrate the potential impact of datasets and specific data characteristics on the performance of a re-ranking method.
### Interplay between Re-Ranking Hyperparameters \(\lambda\) and \(\gamma\): an Ablation Study
Figure 6 displays the relationship between the re-ranking hyperparameters \(\lambda\) and \(\gamma\) and their impact on accuracy, sub-group fairness, and group fairness. The first subplot indicates that reducing the values of both parameters increases system accuracy, as expected, due to the trade-off between optimization strategy and other dimensions. The second subplot reveals a linear relationship between sub-group fairness and novelty, while \(\lambda\) has a negligible contribution. The third subplot demonstrates that higher values of \(\lambda\) result in higher group fairness, but some higher \(\gamma\) values can also increase both group and sub-group fairness. These results are consistent with those presented in Table 4. Finally, the fourth subplot shows the average of the three metrics and dimensions.
## 5. Conclusions and Future Work
Our study focused on reproducing previous Producer Fairness Re-ranking (PFR) approaches with a spotlight on producer fairness. Delving deep into the works of (Han et al., 2017) and (Han et al., 2018), we identified how such methods could inadvertently harm colder items. This results in a fairness gap that spans across both advantaged and disadvantaged groups. Of relevance, recent works have taken (Han et al., 2018) as a foundational reference, pushing the envelope further in areas such as consumer-producer fairness.
To counteract the challenges presented in sub-group fairness, we introduced a novel iteration of the conventional PFR method. This refined approach carefully balances accuracy and producer fairness, while prudently optimizing the selection of colder items within every group. Our experiments shed light on pivotal aspects, paving the way for future scholarly pursuits. Notably, the method we proposed accentuates sub-group fairness, enhances group-level fairness, and does so without compromising on accuracy. This underscores the pressing need to factor in individuals within groups when strategizing fairness-enhancing methods.
As we strive for more equitable recommender systems, it is crucial to look beyond the context we investigated. For instance, studies such as (Amigo et al., 2018; Amigo et al., 2018) have assessed the impact of attacks on certain classes of users and items in recommender systems. Furthermore, each domain, e.g., the music industry (Beng et al., 2015; Chen et al., 2018), e-commerce, Point-of-Interest (POI), fashion (Beng et al., 2015), multimedia, or even generative AI (Amigo et al., 2018), might delineate harms to stakeholders in unique ways. Recent surveys and investigations, such as (Amigo et al., 2018; Chen et al., 2018; Chen et al., 2018), offer invaluable insights into these nuances. There is also increasing recognition of the need for a more unified methodology for fairness measurement in recommendation systems, as proposed by works such as Amigo et al. (Amigo et al., 2018).
Our research aligns with and advances the collective endeavor to render recommender systems that are both inclusive and equitable for all users. In the emerging landscape where fairness is paramount, our findings emphasize the importance of recalibrating approaches to ensure equity across the board.
Figure 5. The Analysis of the Examined Accuracy, Group-Fairness, and Sub-group Fairness across the five datasets on the base **NGCF** model in the five tested scenarios: base, tPFR, LaPFR, MaPFR, and HaPFR.
2306.17814 | On Higher Order Drift and Diffusion Estimates for Stochastic SINDy | The Sparse Identification of Nonlinear Dynamics (SINDy) algorithm can be
applied to stochastic differential equations to estimate the drift and the
diffusion function using data from a realization of the SDE. The SINDy
algorithm requires sample data from each of these functions, which is typically
estimated numerically from the data of the state. We analyze the performance of
the previously proposed estimates for the drift and diffusion function to give
bounds on the error for finite data. However, since this algorithm only
converges as both the sampling frequency and the length of trajectory go to
infinity, obtaining approximations within a certain tolerance may be
infeasible. To combat this, we develop estimates with higher orders of accuracy
for use in the SINDy framework. For a given sampling frequency, these estimates
give more accurate approximations of the drift and diffusion functions, making
SINDy a far more feasible system identification method. | Mathias Wanner, Igor Mezić | 2023-06-30T17:24:00Z | http://arxiv.org/abs/2306.17814v2 | # On Numerical Methods for Stochastic SINDy
###### Abstract
The Sparse Identification of Nonlinear Dynamics (SINDy) algorithm can be applied to stochastic differential equations to estimate the drift and the diffusion function using data from a realization of the SDE. The SINDy algorithm requires sample data from each of these functions, which is typically estimated numerically from the data of the state. We analyze the performance of the previously proposed estimates for the drift and diffusion function to give bounds on the error for finite data. However, since this algorithm only converges as both the sampling frequency and the length of trajectory go to infinity, obtaining approximations within a certain tolerance may be infeasible. To combat this, we develop estimates with higher orders of accuracy for use in the SINDy framework. For a given sampling frequency, these estimates give more accurate approximations of the drift and diffusion functions, making SINDy a far more feasible system identification method.
**AMS subject classifications.** 37H99, 37M15, 60H35, 65C40, 93E12
## 1 Introduction
For many dynamical systems, data may be abundant while no analytic model exists to describe the system. These systems may be too complex, may have too large a dimension, or may be too poorly understood to model from first principles. For these reasons, data-driven modeling has become important for applications in science and engineering. There is a wide variety of system identification methods, ranging from classical methods [15], to Dynamic Mode Decomposition and Koopman operator methods [20, 23, 19, 22], to neural networks [14, 13] and many others. These methods vary in their complexity, training methods, model sizes, and interpretability. Sparse Identification of Nonlinear Dynamics (SINDy) is a method which allows for some complexity (allowing nonlinear models over only linear ones) while the sparse solution promotes simple, interpretable models.
The SINDy algorithm, developed by Brunton et al. [2], estimates the parameters of an ordinary differential equation from data. It does this by using a dictionary of functions and finding a sparse representation of the derivative in this dictionary. The data for the derivative can be obtained using finite differences of data from the state. For ODEs, the performance of this algorithm has been analyzed in [24].
SINDy has several extensions and adaptations; it has been extended to identify control systems [3, 9], adapted to systems with implicit solutions [16, 8], and formulated in ways that improve its robustness to noise [7, 18, 17], to name a few. Additionally, different methods for computing the sparse solution have been proposed, including LASSO [21] and the sequential thresholding presented in the original paper [2].
SINDy has also been extended to estimate the parameters of stochastic differential equations. In [1], it was demonstrated that we can use the SINDy algorithm to estimate both the drift and diffusion functions in an SDE. The drift and diffusion are estimated from the data of the state using the Kramer-Moyal formulas. This method was expanded on in [6]; solution methods based on binning and cross validation were introduced to reduce the effects of noise. Callaham et al. [4] expand upon this method by adapting it to applications for which the random forcing cannot be considered white noise.
In this paper, we conduct a numerical analysis of SINDy for stochastic systems and introduce improved methods which give higher order convergence. As mentioned, in [1] the drift and diffusion are approximated using the Kramer-Moyal formulas. We demonstrate the convergence rates of the algorithm with respect to the sampling period and the length of the trajectory. The approximations given in [1] only give first order convergence with respect to the sampling frequency. A similar analysis of the Kramer-Moyal estimates based on binning can be found in [5]. Additionally, since they only converge in expectation, we may require a long trajectory for the variance of the estimate to be tolerable. Combined, these can make the data requirements for applying SINDy to an SDE very demanding. To help remedy this, we demonstrate how to develop higher order approximations of the drift and diffusion functions for use in SINDy.
The paper is organized as follows: First, we will review the SINDy algorithm and some concepts from SDEs which we will be using in this paper. We will then conduct a numerical analysis of the algorithms presented in [1], including bounds on the error of the estimates. Next, we will present new, higher order methods and show the convergence rates of these methods. Finally, we will test all of these methods on several numerical examples to demonstrate how the new methods allow us to compute far more accurate approximations of the system for a given sampling frequency and trajectory length.
## 2 Sparse Identification of Nonlinear Dynamics (SINDy)
### Overview
Consider a system governed by the ordinary differential equation
\[\dot{x}=f(x),\ \ \ \ x\in\mathbb{R}^{d}. \tag{2.1}\]
If the dynamics of the system, \(f\), are unknown, we would like to be able to estimate the function \(f\) using only data from the system. The SINDy algorithm [2] estimates \(f\) by choosing a dictionary of functions, \(\theta=[\theta_{1},\theta_{2},...,\theta_{k}]\) and assuming \(f\) can be expressed (or approximated) as a linear combination of these functions. The \(i^{th}\) component of \(f\), \(f_{i}\), can then be expressed as
\[f_{i}(x)=\sum_{j=1}^{k}\theta_{j}(x)\alpha_{j}^{i}=\theta(x)\alpha^{i},\]
where \(\theta=\begin{bmatrix}\theta_{1}&\ldots&\theta_{k}\end{bmatrix}\) is a row vector containing the dictionary functions and \(\alpha^{i}=\begin{bmatrix}\alpha_{1}^{i}&\ldots&\alpha_{k}^{i}\end{bmatrix}^{T}\) is the column vector of coefficients. Given data for \(f(x_{j})\) and \(\theta(x_{j})\) for \(j=1,...,n\), we can find the coefficients \(\alpha^{i}\) by solving the minimization
\[\alpha^{i}=\underset{v}{argmin}\sum_{j=1}^{n}|f_{i}(x_{j})-\theta(x_{j})v|^{2}. \tag{2.2}\]
This optimization can be solved by letting
\[\Theta=\begin{bmatrix}\theta(x_{1})\\ \theta(x_{2})\\ \vdots\\ \theta(x_{n})\end{bmatrix},\ \ \ F=\begin{bmatrix}f(x_{1})\\ f(x_{2})\\ \vdots\\ f(x_{n})\end{bmatrix},\ \ \ \text{and}\ \ \ \alpha=\begin{bmatrix}\alpha^{1}&\alpha^{2}&\ldots&\alpha^{d}\end{bmatrix},\]
and computing \(\alpha=\Theta^{+}F.\)
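As a concrete illustration, this least squares step takes only a few lines of NumPy. The following sketch is not the reference implementation; the dictionary size, sample count, and test function \(f(x)=-x^{3}+x\) are illustrative assumptions:

```python
import numpy as np

# Hypothetical one-dimensional example: dx/dt = f(x) = -x^3 + x.
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=200)       # state samples x_j
f = -x**3 + x                              # samples of f(x_j)

# Dictionary theta(x) = [1, x, x^2, ..., x^5], one row per sample.
Theta = np.column_stack([x**p for p in range(6)])

# Least squares solution alpha = Theta^+ F of (2.2).
alpha = np.linalg.pinv(Theta) @ f
print(np.round(alpha, 3))                  # approximately [0, 1, 0, -1, 0, 0]
```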
### Approximating \(f(x)\)
Typically, data for \(f(x)\) cannot be measured directly. Instead, it is usually approximated using finite differences. The forward difference gives us a simple, first order approximation to \(f\):
\[f(x(t))=\frac{x(t+\Delta t)-x(t)}{\Delta t}+O(\Delta t). \tag{2.3}\]
The approximation (2.3) is derived from the Taylor expansion of \(x\),
\[x(t+\Delta t)=x(t)+\dot{x}(t)\Delta t+\ddot{x}(t)\frac{\Delta t^{2}}{2}+...=x(t)+f(x(t))\Delta t+\frac{\partial f}{\partial x}\Big{|}_{x(t)}f(x(t))\frac{\Delta t^{2}}{2}+..., \tag{2.4}\]
for \(f\) sufficiently smooth. The Taylor expansion (2.4) is also used to derive higher order methods, such as the central difference,
\[f(x)=\frac{x(t+\Delta t)-x(t-\Delta t)}{2\Delta t}+O(\Delta t^{2}). \tag{2.5}\]
We can use these finite differences to populate the matrix \(F\) used in the optimization (2.2), knowing that we can control the error with a small enough step size.
### Sparse Solutions
Since we are choosing an arbitrary dictionary of functions, \(\{\theta_{1},\ldots,\theta_{k}\}\), the conditioning of the minimization (2.2) can become very poor. Additionally, if the dictionary is large and contains many redundant functions, having a solution which contains only a few nonzero entries helps to provide a simple, interpretable result. The SINDy algorithm addresses these issues by using a sparse solution to (2.2). There are multiple methods for obtaining a sparse solution, such as the least absolute shrinkage and selection operator (LASSO) or the sequentially thresholded least squares algorithm [2]. Using a sparse solution gives us a simpler identified system and improves the performance over the least squares solution.
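For reference, a minimal sketch of the sequentially thresholded least squares iteration is given below. This is one common variant of the procedure in [2], not a verbatim reproduction: coefficients below a threshold \(\lambda\) are zeroed, and the remaining active dictionary columns are re-fit.

```python
import numpy as np

def stlsq(Theta, F, lam=0.05, n_iters=10):
    """Sequentially thresholded least squares (a sketch in the spirit of [2]).

    Theta : (n, k) dictionary matrix; F : (n, d) target data.
    Coefficients with magnitude below lam are pruned, then each column
    of F is re-fit on the surviving dictionary functions.
    """
    alpha = np.linalg.lstsq(Theta, F, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(alpha) < lam          # coefficients to prune
        alpha[small] = 0.0
        for j in range(F.shape[1]):          # re-fit each component
            big = ~small[:, j]
            if big.any():
                alpha[big, j] = np.linalg.lstsq(
                    Theta[:, big], F[:, j], rcond=None)[0]
    return alpha
```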
## 3 Review of SDEs
Consider the Ito stochastic differential equation
\[dX_{t}=\mu(X_{t})dt+\sigma(X_{t})dW_{t} \tag{3.1}\]
where \(X_{t}\in\mathbb{R}^{d}\) and \(W_{t}\) is \(d\)-dimensional Brownian motion. The function \(\mu:\mathbb{R}^{d}\to\mathbb{R}^{d}\) is the drift, a vector field which determines the average motion of the system, while \(\sigma:\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\) is the diffusion function, which governs the stochastic forcing. The diffusion, \(\sigma\), is also assumed to be positive definite. Motivated by SINDy, we wish to estimate \(\mu\) and \(\sigma^{2}\) from data. We note that we are estimating \(\Sigma=\frac{1}{2}\sigma^{2}\) and not \(\sigma\) directly. However, if \(\sigma\) is positive definite, which is assumed, \(\sigma^{2}\) uniquely determines \(\sigma\).
### Ergodicity
Since SINDy represents functions using the data vectors evaluated along the trajectory, we will need to relate the data vectors to the functions represented in some function space. To do this, we will assume that the process \(X_{t}\) has an ergodic measure \(\rho\), so that both
\[\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}f(X_{t})dt=\int_{\mathbb{R}^{d}}f(x)d\rho(x)\quad\text{ and }\quad\lim_{N\to\infty}\frac{1}{N}\sum_{i=0}^{N-1}f(X_{t_{i}})=\int_{\mathbb{R}^{d}}f(x)d\rho(x) \tag{3.2}\]
hold almost surely. Some sufficient conditions that ensure that the SDE (3.1) generates a process with a stationary or an ergodic measure are given in, e.g., [11].
With this ergodic measure, the natural function space to consider is the Hilbert space \(L^{2}(\rho)\). For any two functions \(f,g\in L^{2}(\rho)\), we can use time averages to evaluate inner products.
\[\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}g^{*}(X_{t})f(X_{t})dt=\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}g^{*}(X_{t_{n}})f(X_{t_{n}})=\int_{\mathbb{R}^{d}}g^{*}f\,d\rho=\langle f,g\rangle. \tag{3.3}\]
For notational simplicity, we will also use the brackets \(\langle\cdot,\cdot\rangle\) to denote the matrix of inner products for two row vector-valued functions: if \(f=\begin{bmatrix}f_{1}&\ldots&f_{k}\end{bmatrix}\) and \(g=\begin{bmatrix}g_{1}&\ldots&g_{l}\end{bmatrix}\),
\[\langle f,g\rangle^{i,j}=\langle f^{j},g^{i}\rangle,\qquad\text{ or equivalently,}\qquad\langle f,g\rangle=\int_{\mathbb{R}^{d}}g^{*}f\,d\rho.\]
### Ito-Taylor Expansion
In order to evaluate the performance of different SINDy methods on SDEs, we will need to use the Ito-Taylor expansion of the solution. Let \(\Sigma=\frac{1}{2}\sigma^{2}\). Following the notation of [12], let
\[L^{0}=\sum_{j=1}^{d}\mu^{j}\frac{\partial}{\partial x^{j}}+\sum_{j,l}^{d}( \Sigma)^{j,l}\frac{\partial^{2}}{\partial x^{j}\partial x^{l}}\]
be the operator for the Ito equation (3.1) and define the operators
\[L^{j}=\sum_{i=1}^{d}\sigma^{i,j}\frac{\partial}{\partial x^{i}}.\]
These operators will give us the coefficients for the Ito-Taylor expansion of a function \(f\). Denoting \(\Delta W^{i}_{t}=W^{i}_{t+\Delta t}-W^{i}_{t}\), the first couple of terms are
\[f(X_{t+\Delta t})=f(X_{t})+L^{0}f(X_{t})\Delta t+\sum_{i=1}^{d}L^{i}f(X_{t})\Delta W^{i}_{t}+(L^{0})^{2}f(X_{t})\frac{\Delta t^{2}}{2}+\sum_{i=1}^{d}L^{i}L^{0}f(X_{t})\int_{t}^{t+\Delta t}\int_{t}^{s_{1}}dW^{i}_{s_{2}}ds_{1}+\sum_{i=1}^{d}L^{0}L^{i}f(X_{t})\int_{t}^{t+\Delta t}\int_{t}^{s_{1}}ds_{2}dW^{i}_{s_{1}}+\ldots\]
The general Ito-Taylor expansions can be found in Theorem 5.5.1 of [12]. We will use the Ito-Taylor expansion to develop estimates for \(\mu^{i}\) and \(\sigma^{i,j}\). For the purposes of this paper, we will be able to specialize to a few cases, which will allow us to quantify the error in our estimates while also being simpler to manipulate than the larger expansion.
#### 3.2.1 Weak Expansion
The first specialization of the Ito-Taylor expansion will be a weak expansion, which will allow us to estimate the expected error in our estimate.
\[\mathbb{E}(f(X_{t+\Delta t})|X_{t})=f(X_{t})+\sum_{m=1}^{k}(L^{0})^{m}f(X_{t})\frac{\Delta t^{m}}{m!}+R(X_{t}). \tag{3.4}\]
with \(R(X_{t})=O(\Delta t^{k+1})\).
This expansion follows from Theorem 5.5.1 and Lemma 5.7.1 of [12]. Theorem 5.5.1 gives the general Ito-Taylor expansion, while Lemma 5.7.1 shows that all multiple Ito integrals which contain integration with respect to a component of the Wiener process have zero first moment. The remainder term is then a standard integral.
We will consider the expansion (3.4) with the functions \(f(x)=x^{i}\) to get
\[\mathbb{E}(X^{i}_{t+\Delta t}|X_{t})=X^{i}_{t}+\mu^{i}(X_{t})\Delta t+\sum_{m=2}^{k}(L^{0})^{m-1}\mu^{i}(X_{t})\frac{\Delta t^{m}}{m!}+O(\Delta t^{k+1}) \tag{3.5}\]
to estimate the drift. To estimate the diffusion, we will let \(f(x)=(x^{i}-X^{i}_{t})(x^{j}-X^{j}_{t})\), with \(X_{t}\) held constant at the value at the beginning of the time step, to get
\[\mathbb{E}(f(X_{t+\Delta t})\,|\,X_{t})=2\Sigma^{i,j}(X_{t})\Delta t+g(X_{t})\Delta t^{2}+O(\Delta t^{3}) \tag{3.6}\]
where
\[g=\left(L^{0}\Sigma^{i,j}+\mu^{i}\mu^{j}+\sum_{k=1}^{d}\Sigma^{i,k}\frac{ \partial\mu^{j}}{\partial x^{k}}+\Sigma^{j,k}\frac{\partial\mu^{i}}{\partial x ^{k}}\right).\]
#### 3.2.2 Strong Expansions
We will also use the strong Ito-Taylor expansion, which will give a bound on the variance of our estimates. These immediately follow from Proposition 5.9.1 of [12]. First, if we apply it to \(f(x)=x^{i}\), we have
\[X^{i}_{t+\Delta t}-X^{i}_{t}=\mu^{i}(X_{t})\Delta t+\sum_{m=1}^{d}\sigma^{i,m}(X_{t})\Delta W^{m}_{t}+R_{t}, \tag{3.7}\]
where \(\mathbb{E}(|R_{t}|^{2}\,|\,X_{t})=O(\Delta t^{2})\).
Similarly, we can apply the same proposition to \(f(x)=(x^{i}-X^{i}_{t})(x^{j}-X^{j}_{t})\), which gives us (after rearranging some of the terms)
\[(X^{i}_{t+\Delta t}-X^{i}_{t})(X^{j}_{t+\Delta t}-X^{j}_{t})=2\Sigma^{i,j}(X_{t})\Delta t+\sum_{k,l=1}^{d}(\sigma^{k,i}\sigma^{l,j}(X_{t})+\sigma^{k,j}\sigma^{l,i}(X_{t}))I_{(k,l)}+R_{t}, \tag{3.8}\]
where \(\mathbb{E}(|R_{t}|^{2}|X_{t})=O(\Delta t^{3})\) and \(I_{(k,l)}=\int_{0}^{\Delta t}\int_{0}^{s_{1}}dW^{k}_{s_{2}}dW^{l}_{s_{1}}\). When we create estimates of \(\mu^{i}(X_{t})\) and \(\Sigma^{i,j}(X_{t})\), the expansions (3.7) and (3.8) will be useful in bounding the variance of these two estimates.
**Remark 1**: _For the expansions, it is implicit that we must assume that all (up to the necessary order) of the coefficient functions, \(L^{a_{1}}L^{a_{2}}...L^{a_{n}}f\), satisfy the integrability requirements with respect to the multiple Ito integrals set forth in chapter five of [12]. Additionally, we will also assume that the remainder terms are square integrable with respect to the ergodic measure. In particular, we will assume_
\[\int_{\mathbb{R}^{d}}|R(x)|^{2}d\rho(x)=O(\Delta t^{m+1})\]
_in the weak expansion and_
\[\int_{\mathbb{R}^{d}}R_{2}(x)^{2}d\rho(x)=O(\Delta t)\quad\quad\big{(}\text{ or }\quad O(\Delta t^{2})\big{)}\]
_in the strong expansions, where \(R_{2}(x)=\mathbb{E}(|R_{t}|^{2}\ |\ X_{t}=x)\). This assumption will allow us to take time averages and expect them to be finite. Following the proofs in [12], it can be seen that these can be guaranteed imposing similar conditions on the coefficient functions._
## 4 SINDy for Stochastic Systems
Given data for the drift and diffusion matrix of (3.1), we can set up an optimization problem similar to (2.2). As in the deterministic case, we can approximate \(\mu\) and \(\Sigma\) using finite differences. As before, we assume we have a dictionary \(\theta=[\theta_{1},\theta_{2},...,\theta_{k}]\) and that each of the components of \(\mu\) and \(\Sigma\) lies in the span of the components of \(\theta\):
\[\mu^{i}=\theta\alpha^{i}\qquad\text{ and }\qquad\Sigma^{i,j}=\theta\beta^{i,j}.\]
Suppose we have the data from a trajectory of length \(T\) with sampling period \(\Delta t\). If we let \(\Delta X^{i}_{t_{n}}=X^{i}_{t_{n+1}}-X^{i}_{t_{n}}\), we can approximate the drift using
\[\mu^{i}(X_{t_{m}})\approx\frac{X^{i}_{t_{m+1}}-X^{i}_{t_{m}}}{\Delta t}=\frac{\Delta X^{i}_{t_{m}}}{\Delta t}. \tag{4.1}\]
Similarly, we can approximate the diffusion with
\[\Sigma^{i,j}(X_{t_{m}})\approx\frac{(X^{i}_{t_{m+1}}-X^{i}_{t_{m}})(X^{j}_{t_{ m+1}}-X^{j}_{t_{m}})}{2\Delta t}=\frac{\Delta X^{i}_{t_{m}}\Delta X^{j}_{t_{m}}}{2 \Delta t}. \tag{4.2}\]
It was shown in [1] that we can use the approximations (4.1) and (4.2) to set up the minimization problems
\[\tilde{\alpha}^{i}=\underset{v}{argmin}\sum_{m=0}^{N-1}\left|\frac{\Delta X^{ i}_{t_{m}}}{\Delta t}-\theta(X_{t_{m}})v\right|^{2}. \tag{4.3}\]
and
\[\tilde{\beta}^{i,j}=\underset{v}{argmin}\sum_{m=0}^{N-1}\left|\frac{\Delta X^{ i}_{t_{m}}\Delta X^{j}_{t_{m}}}{2\Delta t}-\theta(X_{t_{m}})v\right|^{2}. \tag{4.4}\]
Under the assumptions set forth in Remark 1, we can show that as \(\Delta t\to 0\) and \(T\to\infty\), the coefficients given by (4.3) and (4.4) converge to the true coefficients; \(\tilde{\alpha}^{i}\to\alpha^{i}\) and \(\tilde{\beta}^{i,j}\to\beta^{i,j}\).
If we define the matrices
\[\Theta=\begin{bmatrix}\theta(X_{t_{0}})\\ \theta(X_{t_{1}})\\ \vdots\\ \theta(X_{t_{N-1}})\end{bmatrix},\quad\text{ and }\quad D^{i}=\begin{bmatrix} \Delta X^{i}_{t_{0}}\\ \Delta X^{i}_{t_{1}}\\ \vdots\\ \Delta X^{i}_{t_{N-1}}\end{bmatrix}, \tag{4.5}\]
We can express (4.3) and (4.4) concisely as
\[\tilde{\alpha}^{i}=\underset{v}{argmin}\left\|\frac{D^{i}}{\Delta t}-\Theta v\right\|\quad\text{ and }\quad\tilde{\beta}^{i,j}=\underset{v}{argmin}\left\|\frac{D^{i}\odot D^{j}}{2\Delta t}-\Theta v\right\|.\]
(Here \(D^{i}\odot D^{j}\) represents the Hadamard, or element-wise, product.) These equations are solved by \(\tilde{\alpha}^{i}=\Delta t^{-1}\Theta^{+}D^{i}\) and \(\tilde{\beta}^{i,j}=(2\Delta t)^{-1}\Theta^{+}(D^{i}\odot D^{j})\), respectively.
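The whole first order procedure can be sketched compactly in code. The function below is an illustration, not the authors' implementation, under the assumption that `theta` maps an `(N, d)` array of states to the `(N, k)` dictionary matrix:

```python
import numpy as np

def estimate_drift_diffusion(X, dt, theta):
    """First order SINDy estimates from a sampled trajectory.

    X : (N+1, d) samples X_{t_n}; dt : sampling period.
    Returns alpha (drift coefficients, one column per component) and a
    dict of beta vectors for each pair (i, j), following (4.3)-(4.4).
    """
    Theta = theta(X[:-1])                 # dictionary at the left endpoints
    D = np.diff(X, axis=0)                # increments X_{t_{n+1}} - X_{t_n}
    Theta_pinv = np.linalg.pinv(Theta)
    alpha = Theta_pinv @ (D / dt)         # tilde-alpha = Theta^+ D / dt
    d = X.shape[1]
    beta = {(i, j): Theta_pinv @ (D[:, i] * D[:, j] / (2.0 * dt))
            for i in range(d) for j in range(i + 1)}
    return alpha, beta
```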
**Theorem 4.1**: _Let \(X_{t}\) be an ergodic drift-diffusion process generated by the SDE (3.1). Consider the optimization problems (4.3) and (4.4) using data from a trajectory of length \(T\) sampled with period \(\Delta t\). Suppose the components of \(\theta\) are linearly independent and span the subspace \(\mathcal{F}\), and that the assumptions on the Ito-Taylor expansions outlined in Remark 1 are met. If \(\mu^{i}\) or \(\Sigma^{i,j}\) lies in \(\mathcal{F}\), then the vector given by the corresponding optimization converges in probability to the true coefficients as \(T\to\infty\) and \(\Delta t\to 0\). That is, \(\tilde{\alpha}^{i}\to\alpha^{i}\) or \(\tilde{\beta}^{i,j}\to\beta^{i,j}\)._
The formal proof of Theorem 4.1 will be subsumed into the stronger Theorems 5.1 and 5.2, which give rates for the convergence. However, to demonstrate the idea of the proof, note that by the assumptions \(\Theta\) has full rank, \(\mu^{i}=\theta\alpha^{i}\), and \(\Sigma^{i,j}=\theta\beta^{i,j}\). Then
\[\tilde{\alpha}^{i}=(\Theta^{*}\Theta)^{-1}\Theta^{*}\frac{D^{i}}{\Delta t}= \left(\frac{1}{N}\Theta^{*}\Theta\right)^{-1}\left(\frac{1}{N\Delta t}\Theta^ {*}D^{i}\right),\]
where \(N=T/\Delta t\) is the number of data samples. The first quantity can be evaluated using ergodicity, as \(N\to\infty\)
\[\frac{1}{N}\Theta^{*}\Theta=\frac{1}{N}\sum_{m=0}^{N-1}\theta^{*}(X_{t_{m}}) \theta(X_{t_{m}})\xrightarrow{N}\langle\theta,\theta\rangle.\]
For the second expression, the definition of the stochastic integral gives us
\[\Theta^{*}D^{i}=\sum_{m=0}^{N-1}\theta^{*}(X_{m})(X^{i}_{t_{m+1}}-X^{i}_{t_{m}} )\xrightarrow{\Delta t}\int_{t_{0}}^{t_{0}+T}\theta^{*}dX^{i}\]
as \(\Delta t\to 0\). Finally, using (3.1) and (3.3), we can show
\[\frac{1}{N\Delta t}\Theta^{*}D^{i}\xrightarrow{\Delta t}\frac{1}{T}\int_{t_{0 }}^{t_{0}+T}\theta^{*}dX^{i}\xrightarrow{T}\langle\mu,\theta\rangle=\langle \theta,\theta\rangle\alpha^{i} \tag{4.6}\]
as \(\Delta t\to 0\) and \(T\to\infty\). The limit as \(\Delta t\to 0\) gives the convergence of the sum to the stochastic integral and the limit as \(T\to\infty\) allows us to sample almost everywhere on the stationary measure for the ergodic convergence. Similarly, we can use the convergence
\[\sum_{m=0}^{N-1}\theta^{*}(X_{t_{m}})(X^{i}_{t_{m+1}}-X^{i}_{t_{m}})(X^{j}_{t_ {m+1}}-X^{j}_{t_{m}})\xrightarrow{\Delta t}\int_{t_{0}}^{t_{0}+T}\theta^{*}d[ X^{i},X^{j}],\quad\Delta t\to 0\]
to show that \(\frac{1}{2N\Delta t}\Theta^{*}(D^{i}\odot D^{j})\to\langle\Sigma^{i,j},\theta \rangle=\langle\theta,\theta\rangle\beta^{i,j}\). (Here \([X,Y]_{t}\) is the quadratic covariation process of \(X_{t}\), and \(Y_{t}\).) This would establish the result, except that we used the iterated limits
\(\Delta t\to 0\) and \(T\to\infty\) in (4.6) without showing the double limit exists. This is where we would use the integrability assumptions in Remark 1, which are used in the proofs of Theorems 5.1 and 5.2.
Theorem 4.1 demonstrates how the least squares solutions converge to the true coefficients of the SDE. However, the SINDy algorithm finds a sparse solution, which can greatly improve the accuracy of the results over the least squares solution. To set this up, the two optimizations (4.3) and (4.4) can be summarized using the normal equations,
\[\Theta^{*}\Theta\tilde{\alpha}^{i}=\frac{1}{\Delta t}\Theta^{*}D^{i} \tag{4.7}\]
and
\[\Theta^{*}\Theta\tilde{\beta}^{i,j}=\frac{1}{2\Delta t}\Theta^{*}(D^{i} \odot D^{j}). \tag{4.8}\]
We can then solve equations (4.7) and (4.8) using a sparse solver, such as the one proposed in [2] to obtain a sparse solution.
## 5 Numerical Analysis of Stochastic SINDy
Theorem 4.1 states that as \(\Delta t\to 0\) and \(T\to\infty\), the coefficients given by (4.3) and (4.4) converge to the true parameters of the SDE (3.1). In this section, we will look at the accuracy and variation of the approximations for finite \(\Delta t\) and \(T\). In this setting, we will use "big O" notation to denote convergence as \(\Delta t\to 0\), and "little o" notation for the convergence as \(T\to\infty\).
The SINDy algorithm will give us vectors of coefficients, \(\tilde{\alpha}^{i}\) and \(\tilde{\beta}^{i,j}\), for the system. We will be interested in the error of these vectors relative to the true coefficients \(\alpha^{i}\) and \(\beta^{i,j}\),
\[err=\tilde{\alpha}^{i}-\alpha^{i}\quad\text{ or }\quad err=\tilde{\beta}^{i,j}- \beta^{i,j}.\]
(We note that this error is specifically for the vector \(\alpha^{i}\) or \(\beta^{i,j}\) being estimated, even though it is not indexed. Since each vector is estimated separately, there should be no confusion.) This error will be a random variable depending on the realization of the system. To evaluate the performance of the algorithms, we will use the mean and variance of this error:
\[err_{mean}=\|\mathbb{E}(err)\|_{2}\quad\text{ and }\quad err_{var}=Var(err)= \mathbb{E}(\|err-\mathbb{E}(err)\|_{2}^{2}).\]
The mean error and variance measure the bias and spread in the estimates \(\tilde{\alpha}^{i}\) and \(\tilde{\beta}^{i,j}\). These errors in the coefficients can be quantified using the errors in the estimates of \(\mu^{i}\) and \(\Sigma^{i,j}\) given in (4.1) and (4.2) at each step. We will present the analysis for the drift coefficients, \(\alpha^{i}\), noting that analysis for the diffusion follows the same path.
### Drift
As mentioned, the error in \(\tilde{\alpha}^{i}\) stems from the error in the approximation in (4.1)
\[\mu^{i}(X_{t_{n}})\approx\frac{X_{t_{n+1}}-X_{t_{n}}}{\Delta t}.\]
We can define the error
\[e_{t_{n}}=\frac{X_{t_{n+1}}^{i}-X_{t_{n}}^{i}}{\Delta t}-\mu^{i}(X_{t_{n}}).\]
The order of the error, \(e_{t}\), at each time step will directly determine the error in the coefficients \(\tilde{\alpha^{i}}\). We can use Ito-Taylor expansions for \(X_{t}\) to bound both \(\mathbb{E}(|e_{t}|)\) and \(\mathbb{E}(|e_{t}|^{2})\). The weak Ito-Taylor expansion (3.4) gives us
\[\mathbb{E}(e_{t}\,|\,X_{t})=\frac{1}{\Delta t}\left(\mu^{i}(X_{t})\Delta t+L^{ 0}\mu^{i}(X_{t})\frac{\Delta t^{2}}{2}+O(\Delta t^{3})\right)-\mu^{i}(X_{t})= L^{0}\mu^{i}(X_{t})\frac{\Delta t}{2}+O(\Delta t^{2}). \tag{5.1}\]
Similarly, we can use the strong truncation (3.7) to obtain
\[e_{t}=\sum_{m=1}^{d}\sigma^{i,m}(X_{t})\frac{\Delta W_{t}^{m}}{\Delta t}+\frac{R_ {t}}{\Delta t},\]
where \(\mathbb{E}(|R_{t}|^{2}|X_{t})=O(\Delta t^{2})\). Then, taking the expectation of \(|e_{t}|^{2}\), we get
\[\mathbb{E}(|e_{t}|^{2}\,|\,X_{t})=\sum_{m=1}^{d}\frac{\sigma^{i,m}(X_{t})^{2}}{ \Delta t}+O\left(\Delta t^{\frac{-1}{2}}\right). \tag{5.2}\]
Now, let \(E\) be the matrix containing the time samples of \(e_{t}\),
\[E=\begin{bmatrix}e_{t_{0}}&e_{t_{1}}&\ldots&e_{t_{N-1}}\end{bmatrix}^{T}=\frac {D^{i}}{\Delta t}-\Theta\alpha^{i},\]
using \(\theta(X_{t})\alpha^{i}=\mu^{i}(X_{t})\). Then we have
\[err=\tilde{\alpha}^{i}-\alpha^{i}=\Theta^{+}\frac{D^{i}}{\Delta t}-\Theta^{+} \Theta\alpha=(\Theta^{*}\Theta)^{-1}\Theta^{*}E. \tag{5.3}\]
Using ergodicity, we have
\[\left(\frac{1}{N}\Theta^{*}\Theta\right)^{-1}=\left(\langle\theta,\theta \rangle+o(1)\right)^{-1}=\langle\theta,\theta\rangle^{-1}+o(1), \tag{5.4}\]
which allows us to evaluate the first term in (5.3):
\[err=\left(\langle\theta,\theta\rangle^{-1}+o(1)\right)\left(\frac{1}{N}\Theta ^{*}E\right). \tag{5.5}\]
Bounding the mean and variance will follow from bounds on the mean and variance of \(\frac{1}{N}\Theta^{*}E\).
**Theorem 5.1**: _Consider the optimization problem given by (4.1) and (4.3). Then the bias is bounded by_
\[err_{mean}\leq\frac{C_{1}}{2}\left(\|L^{0}\mu^{i}\|_{2}+O(\Delta t)+o(1) \right)\Delta t\]
_and_
\[err_{var}\leq\frac{C_{2}}{T}\left(\sum_{m=1}^{d}\|\sigma^{i,m}\|_{4}^{2}+O \left(\Delta t^{\frac{1}{2}}\right)+o(1)\right),\]
_where_
\[C_{1}=\|\langle\theta,\theta\rangle^{-1}\|_{2}\|\theta\|_{2}\quad\text{and} \quad C_{2}=\|\langle\theta,\theta\rangle^{-1}\|_{2}^{2}\|\theta\|_{4}^{2} \tag{5.6}\]
_depend only on the choice of \(\theta\)._
_Proof._ For the mean error, we will need to bound the quantity \(\frac{1}{N}\,\|\mathbb{E}\left(\Theta^{*}E\right)\|\). We have
\[\mathbb{E}\left(\frac{1}{N}\Theta^{*}E\right)=\mathbb{E}\left(\frac{1}{N}\sum _{n=0}^{N-1}\theta^{*}(X_{t_{n}})e_{t_{n}}\right)=\mathbb{E}\left(\frac{1}{N} \sum_{n=0}^{N-1}\theta^{*}(X_{t_{n}})\mathbb{E}(e_{t_{n}}\,|\,X_{t_{n}})\right).\]
Then, using ergodicity and (5.1), we obtain
\[\mathbb{E}\left(\frac{1}{N}\Theta^{*}E\right) =\mathbb{E}\left(\frac{1}{N}\sum_{n=0}^{N-1}\theta^{*}(X_{t_{n}}) \left(\frac{\Delta t}{2}L^{0}\mu^{i}(X_{t_{n}})+O(\Delta t^{2})\right)\right)\] \[=\frac{\Delta t}{2}\left(\langle L^{0}\mu^{i},\theta\rangle+o(1) \right)+O(\Delta t^{2}).\]
Finally, using (5.5), we get
\[\|\mathbb{E}(err)\| =\left\|\left(\langle\theta,\theta\rangle^{-1}+o(1)\right)\right\|_ {2}\left(\frac{\Delta t}{2}\left(\langle L^{0}\mu^{i},\theta\rangle+o(1)\right) +O(\Delta t^{2})\right)\] \[\leq\|\langle\theta,\theta\rangle^{-1}\|_{2}\left(\|\theta\|_{2} \|L^{0}\mu^{i}\|_{2}+O(\Delta t)+o(1)\right)\frac{\Delta t}{2}=C_{1}\left(\|L ^{0}\mu^{i}\|_{2}+O(\Delta t)+o(1)\right)\frac{\Delta t}{2}\]
This bounds the mean error. To find the variance, we have
\[Var\left(\frac{1}{N}\Theta^{*}E\right)\leq\mathbb{E}\left(\left\|\frac{1}{N}\Theta^{*}E\right\|_{2}^{2}\right)=\frac{1}{N^{2}}\,\mathbb{E}\left(\left\|\sum_{n=0}^{N-1}\theta^{*}(X_{t_{n}})e_{t_{n}}\right\|_{2}^{2}\right)\leq\frac{1}{N^{2}}\,\mathbb{E}\left(\sum_{n=0}^{N-1}\|\theta^{*}(X_{t_{n}})\|_{2}^{2}|e_{t_{n}}|^{2}\right)=\frac{1}{N^{2}}\,\mathbb{E}\left(\sum_{n=0}^{N-1}\|\theta(X_{t_{n}})\|_{2}^{2}\,\mathbb{E}\left(|e_{t_{n}}|^{2}\,|\,X_{t_{n}}\right)\right)\]
Now, using (5.2) with this equation, we have
\[Var\left(\frac{1}{N}\Theta^{*}E\right) \leq\mathbb{E}\left(\frac{1}{N^{2}}\sum_{n=0}^{N-1}\|\theta(X_{t _{n}})\|_{2}^{2}\left(\sum_{m=1}^{d}\frac{|\sigma^{i,m}|^{2}}{\Delta t}+O\left( \Delta t^{\frac{-1}{2}}\right)\right)\right)\] \[=\frac{1}{N\Delta t}\left(\sum_{m=1}^{d}\langle(\sigma^{i,m})^{2},\|\theta\|_{2}^{2}\rangle+O\left(\Delta t^{\frac{1}{2}}\right)+o(1)\right)\] \[\leq\frac{1}{T}\|\theta\|_{4}^{2}\left(\sum_{m=1}^{d}\|\sigma^{i, m}\|_{4}^{2}+O\left(\Delta t^{\frac{1}{2}}\right)+o(1)\right).\]
Then
\[Var(err) =\left(\|\langle\theta,\theta\rangle^{-1}\|_{2}^{2}+o(1)\right) \|\theta\|_{4}^{2}\left(\frac{1}{T}\left(\sum_{m=1}^{d}\|\sigma^{i,m}\|_{4}^{2 }+O\left(\Delta t^{\frac{1}{2}}\right)+o(1)\right)\right)\] \[=\frac{\|\langle\theta,\theta\rangle^{-1}\|_{2}^{2}\|\theta\|_{4} ^{2}}{T}\left(\sum_{m=1}^{d}\|\sigma^{i,m}\|_{4}^{2}+O\left(\Delta t^{\frac{1 }{2}}\right)+o(1)\right)\] \[=\frac{C_{2}}{T}\left(\sum_{m=1}^{d}\|\sigma^{i,m}\|_{4}^{2}+O \left(\Delta t^{\frac{1}{2}}\right)+o(1)\right)\]
As shown in Theorem 5.1, in expectation, the accuracy of our estimate depends primarily on the sampling period \(\Delta t\), and not on the length of the trajectory. The length of the trajectory instead controls the variance of the estimate, which is proportional to \(1/T\). Up to the leading term, the variance does not depend on the sampling period. This pattern will persist as we develop higher order methods for estimating the drift, where the sampling frequency determines the bias and the length of the trajectory determines the variance.
### Diffusion
The analysis of the diffusion coefficients follows the same argument. The approximation for \(\Sigma^{i,j}\) given in (4.2) is
\[\Sigma^{i,j}(X_{t_{m}})\approx\frac{(X_{t_{m+1}}^{i}-X_{t_{m}}^{i})(X_{t_{m+1 }}^{j}-X_{t_{m}}^{j})}{2\Delta t}=\frac{\Delta X_{t_{m}}^{i}\Delta X_{t_{m}}^{ j}}{2\Delta t}.\]
Then we can define the error
\[e_{t}=\frac{(X_{t+\Delta t}^{i}-X_{t}^{i})(X_{t+\Delta t}^{j}-X_{t}^{j})}{2 \Delta t}-\Sigma^{i,j}(X_{t}).\]
We can use the weak Ito-Taylor expansion (3.6) to bound \(\mathbb{E}(e_{t}\,|\,X_{t})\):
\[\mathbb{E}(e_{t}\,|\,X_{t})=g(X_{t})\frac{\Delta t}{2}+O(\Delta t^{2}),\qquad g =\left(L^{0}\Sigma^{i,j}+\mu^{i}\mu^{j}+\sum_{k=1}^{d}\Sigma^{i,k}\frac{ \partial\mu^{j}}{\partial x^{k}}+\Sigma^{j,k}\frac{\partial\mu^{i}}{\partial x ^{k}}\right). \tag{5.7}\]
Similarly, the strong Ito-Taylor expansion (3.8) gives us (see appendix)
\[\mathbb{E}(|e_{t}|^{2}\,|\,X_{t})=\Sigma^{i,i}(X_{t})\Sigma^{j,j}(X_{t})+ \Sigma^{i,j}(X_{t})^{2}+O(\Delta t^{\frac{1}{2}}). \tag{5.8}\]
**Theorem 5.2**: _Consider the optimization problem given by (4.2) and (4.4). Then the mean error is bounded by_
\[err_{mean}\leq\frac{C_{1}}{2}(\|g\|+O(\Delta t)+o(1))\Delta t,\]
_where_
\[g=\left(L^{0}\Sigma^{i,j}+\mu^{i}\mu^{j}+\sum_{k=1}^{d}\Sigma^{i,k}\frac{ \partial\mu^{j}}{\partial x^{k}}+\Sigma^{j,k}\frac{\partial\mu^{i}}{\partial x ^{k}}\right).\]
_The variance is bounded by_
\[err_{var}\leq\frac{C_{2}}{4}\left(\left\|\Sigma^{i,i}\Sigma^{j,j}+(\Sigma^{i,j})^{2}\right\|+O(\Delta t^{\frac{1}{2}})+o(1)\right)\frac{\Delta t}{T}.\]
_The constants \(C_{1}\) and \(C_{2}\) are the same as those given in (5.6)._
The proof follows that of Theorem 5.1, except using equations (5.7) and (5.8) to bound \(|\mathbb{E}(e_{t}\,|\,X_{t})|\) and \(\mathbb{E}\left(|e_{t}|^{2}\,|\,X_{t}\right)\), respectively.
Similar to Theorem 5.1, the proof above shows that the mean error converges with order \(\Delta t\). However, unlike the estimate for the drift, when estimating the diffusion the variance is proportional to both \(\Delta t\) and \(1/T\). This will also hold true for higher order estimates of the diffusion.
## 6 Higher Order Methods
From Theorems 5.1 and 5.2 we can see that the quantities \(\Delta t\), \(T\), \(C_{1}\), and \(C_{2}\) control the magnitude of the error. The constants, \(C_{1}\) and \(C_{2}\), depend only on the choice of the dictionary \(\theta\), which determines the conditioning of the problem. The SINDy algorithm also uses a sparsity-promoting algorithm which can improve the conditioning of the problem and force many of the coefficients to zero, which can reduce the error [2, 1]. However, even if the sparsity-promoting algorithm chooses all of the correct coefficients, we have just shown that there is still a limit to the accuracy of the estimation determined by the sampling frequency and trajectory length. The primary purpose of this section is to analyze alternate methods of approximating \(\mu^{i}\) and \(\Sigma^{i,j}\) which can improve the performance of SINDy (with respect to \(\Delta t\)).
The methods above resulted from the first order approximations (4.1) and (4.2) of \(\mu^{i}(X_{t})\) and \(\Sigma^{i,j}(X_{t})\), respectively. Higher order approximations of these data points can in turn lead to more accurate approximations of the functions in the output of SINDy. We can generate better approximations for the drift using multistep difference methods. The use of linear multistep methods (LMMs) to estimate dynamics is investigated in [10] for deterministic systems. While the estimates for the diffusion will be similar, they cannot be achieved strictly using LMMs.
In order to achieve a higher order approximation, we will need to use more data points in the approximation at each time step. As such, we will define
\[\Theta_{n}=\begin{bmatrix}\theta(X_{t_{n}})\\ \theta(X_{t_{n+1}})\\ \vdots\\ \theta(X_{t_{N+n-1}})\end{bmatrix}\quad\text{ and }\quad D_{n}^{i}=\begin{bmatrix}X_{t_{n}}^{i}-X_{t_{0}} ^{i}\\ X_{t_{n+1}}^{i}-X_{t_{1}}^{i}\\ \vdots\\ X_{t_{N+n-1}}^{i}-X_{t_{N-1}}^{i}\end{bmatrix}. \tag{6.1}\]
With this definition, \(\Theta_{n}\) contains the data of \(\theta\) time delayed by \(n\) steps. With the earlier definition of \(\Theta\), we have \(\Theta=\Theta_{0}\). Similarly, \(D_{n}^{i}\) contains the data for the change in \(X\) over \(n\) time steps, with \(D_{1}^{i}=D^{i}\) using the earlier definition of \(D^{i}\).
### Drift
First, we will look to make improvements on estimating the drift. These estimates will be simpler than those for the diffusion. As mentioned, these approximations are directly analogous to the linear multistep methods used in the simulation of deterministic systems.
#### 6.1.1 Second Order Forward difference
The first order forward difference, which is used to approximate \(\mu^{i}\) in Theorem 5.1, is also commonly used to approximate the derivative \(f(x)\) in the differential equation \(\dot{x}=f(x)\). In fact, if we compare the weak Ito-Taylor expansion (3.4) with the deterministic Taylor series for an ODE, (2.4), we see that they are almost identical. There are many higher order methods which are used to approximate \(f\) in the simulation of ODEs. By analogy, we can expect that these methods would give an approximation of the same order for \(\mu^{i}\) (in expectation). One of the simplest of these is the second order forward difference,
\[\mu^{i}(X_{t_{n}})\approx\frac{4(X^{i}_{t_{n+1}}-X^{i}_{t_{n}})-(X^{i}_{t_{n+2}}-X^{i}_{t_{n}})}{2\Delta t}=\frac{-3X_{t_{n}}^{i}+4X_{t_{n+1}}^{i}-X_{t_{n+2}}^{i}}{2\Delta t}. \tag{6.2}\]
Similar to before we can define the error in this approximation to be
\[e_{t}=\frac{-3X_{t}^{i}+4X_{t+\Delta t}^{i}-X_{t+2\Delta t}^{i}}{2\Delta t}- \mu^{i}(X_{t}).\]
Using the weak Ito-Taylor expansion (3.4), it is easy to see that
\[\mathbb{E}(e_{t_{n}}\,|\,X_{t_{n}})=-\frac{(L^{0})^{2}\mu^{i}(X_{t_{n}})}{3} \Delta t^{2}+O(\Delta t^{3}), \tag{6.3}\]
which shows that this method does indeed give a second order approximation of \(\mu\). Using this approximation, we can set up a matrix formulation of (6.2):
\[\Theta_{0}\alpha^{i}\approx\frac{1}{2\Delta t}\left(4D_{1}^{i}-D_{2}^{i}\right),\]
If we set up the normal equations, this becomes
\[\Theta_{0}^{*}\Theta_{0}\tilde{\alpha}^{i}=\frac{1}{2\Delta t}\Theta_{0}^{*} \left(4D_{1}^{i}-D_{2}^{i}\right). \tag{6.4}\]
**Theorem 6.1**: _Consider the approximation \(\tilde{\alpha}^{i}\) obtained from (6.4). The mean error is bounded by_
\[\left\|\mathbb{E}(err)\right\|_{2}\leq\frac{C_{1}}{3}(\|(L^{0})^{2}\mu^{i}\|_{2}+O(\Delta t)+o(1))\Delta t^{2}\]
_and the mean squared error by_
\[\mathbb{E}\left(\|err\|_{2}^{2}\right)\leq\frac{C_{2}}{T}\left(\sum_{j=1}^{d}\|\sigma^{i,j}\|_{4}^{2}+O(\Delta t^{\frac{1}{2}})+o(1)\right).\]
_The constants \(C_{1}\) and \(C_{2}\) are the same as those given in (5.6)._
The proof of Theorem 6.1 is similar to that of Theorem 5.1, but requires some extra algebraic manipulation, so it is included in the appendix.
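A sketch of the corresponding computation is given below; as before, `theta` is assumed to map states to dictionary rows, and a direct linear solve stands in for the sparse solver.

```python
import numpy as np

def drift_fd2(X, dt, theta):
    """Second order forward difference drift estimate, solving (6.4)."""
    Theta0 = theta(X[:-2])            # theta(X_{t_n}) for n = 0, ..., N-1
    D1 = X[1:-1] - X[:-2]             # X_{t_{n+1}} - X_{t_n}
    D2 = X[2:] - X[:-2]               # X_{t_{n+2}} - X_{t_n}
    A = Theta0.T @ Theta0
    rhs = Theta0.T @ (4.0 * D1 - D2) / (2.0 * dt)
    return np.linalg.solve(A, rhs)
```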
**Remark 2**: _These methods can easily be generalized to higher order methods using higher order finite differences, as will be done in section 6.1.3. However, the least squares solution only yields correct results for forward differences. Other finite difference methods can cause certain sums to converge to the wrong stochastic integral. For example, a central difference approximation for \(\mu^{i}\),_
\[\mu^{i}_{t}\approx\frac{X_{t+\Delta t}^{i}-X_{t-\Delta t}^{i}}{2\Delta t},\]
_gives us \(\Theta_{1}\alpha^{i}\approx\frac{1}{2\Delta t}D_{2}^{i}\). The normal equations for the least squares solution_
\[\Theta_{1}^{*}\Theta_{1}\tilde{\alpha}^{i}=\frac{1}{2\Delta t}\Theta_{1}^{*}D_{2} ^{i} \tag{6.5}\]
_gives the wrong results, because as \(\Delta t\to 0\), \(\frac{1}{2}\Theta_{1}^{*}D_{2}^{i}\) converges to the Stratonovich integral instead of the Ito integral,_
\[\frac{1}{2}\Theta_{1}^{*}D_{2}^{i}\to\int_{0}^{T}\theta^{*}(X_{t})\circ dX_{t} ^{i}\neq\int_{0}^{T}\theta^{*}(X_{t})\,dX_{t}^{i},\]
_and \(\tilde{\alpha}^{i}\) will not converge to the correct value. To prevent this, (6.5) can instead be solved using_
\[\Theta_{0}^{*}\Theta_{1}\tilde{\alpha}^{i}=\frac{1}{2\Delta t}\Theta_{0}^{*}D_ {2}^{i},\]
_which gives the proper convergence. This amounts to using \(\Theta_{0}\) as a set of instrumental variables (chapter 7 of [15])._
#### 6.1.2 Trapezoidal Method
The second order method above uses additional measurements of \(X_{t}^{i}\) to provide a more accurate estimate of \(\mu^{i}\). Alternatively, we can use multiple measurements of \(\mu^{i}\) to better approximate the difference \(X_{t+\Delta t}^{i}-X_{t}^{i}\). Consider the first order forward difference given by (4.1).
\[\mu^{i}(X_{t_{n}})\approx\frac{X_{t_{n+1}}^{i}-X_{t_{n}}^{i}}{\Delta t}.\]
Theorem 5.1 used this difference to give an order \(\Delta t\) approximation of \(\mu^{i}\). However, it turns out that \(\frac{1}{2}(\mu^{i}(X_{t})+\mu^{i}(X_{t+\Delta t}))\) gives a much better approximation of this difference:
\[\frac{1}{2}\left(\mu^{i}(X_{t_{n}})+\mu^{i}(X_{t_{n+1}})\right)\approx\frac{X _{t_{n+1}}^{i}-X_{t_{n}}^{i}}{\Delta t}. \tag{6.6}\]
We will call this approximation the trapezoidal approximation, since this is exactly the trapezoidal method used in the numerical simulation of ODEs. If we consider the error in this equation,
\[e_{t}=\frac{X_{t_{n+1}}^{i}-X_{t_{n}}^{i}}{\Delta t}-\frac{1}{2}\left(\mu^{i} (X_{t_{n}})+\mu^{i}(X_{t_{n+1}})\right),\]
we can use the weak Ito-Taylor approximations of \(X_{t}\) and \(\mu^{i}(X_{t})\) to show that
\[\mathbb{E}(e_{t}\,|\,X_{t})=-(L^{0})^{2}\mu^{i}(X_{t})\frac{\Delta t^{2}}{12}+ O(\Delta t^{3}). \tag{6.7}\]
This not only gives us a second order method with respect to \(\Delta t\), but the leading coefficient of the error is much smaller (by a factor of \(1/4\)) than that of the second order forward difference.
To set up the matrix formulation of (6.6), we have
\[\frac{1}{2}\left(\Theta_{0}+\Theta_{1}\right)\alpha^{i}\approx\frac{1}{\Delta t }D_{1}^{i}. \tag{6.8}\]
We can multiply (6.8) by \(\Theta_{0}^{*}\) on each side to obtain
\[\frac{1}{2}\Theta_{0}^{*}(\Theta_{0}+\Theta_{1})\tilde{\alpha}^{i}=\frac{1}{ \Delta t}\Theta_{0}^{*}D_{1}^{i}. \tag{6.9}\]
We can use this equation analogously to the normal equation; we will solve for \(\tilde{\alpha}^{i}\) either directly using matrix inversion or by using a sparse solver.
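A minimal sketch of this solve (under the same assumptions on `theta` as before) follows; as Remark 3 below explains, the system is formed with \(\Theta_{0}^{*}\) on the left rather than solved by ordinary least squares.

```python
import numpy as np

def drift_trapezoidal(X, dt, theta):
    """Trapezoidal drift estimate, solving the k x k system (6.9).

    Theta_0 acts as a set of instrumental variables: multiplying by
    Theta_0^* (not least squares on Theta_0 + Theta_1) gives the
    correct stochastic limit.
    """
    Theta0, Theta1 = theta(X[:-1]), theta(X[1:])
    D1 = X[1:] - X[:-1]
    A = 0.5 * Theta0.T @ (Theta0 + Theta1)
    rhs = Theta0.T @ D1 / dt
    return np.linalg.solve(A, rhs)
```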
**Remark 3**.: _We note that we cannot solve (6.8) using least squares,_
\[\tilde{\alpha}^{i}\neq\frac{2}{\Delta t}(\Theta_{0}+\Theta_{1})^{+}D_{1}^{i}.\]
_Similar to Remark 2, this leads to sums converging to the wrong stochastic integral._
**Theorem 6.2**: _Consider the estimate \(\tilde{\alpha}^{i}\) given by solving (6.9). The mean error is bounded by_
\[err_{mean}\leq C_{1}\frac{\Delta t^{2}}{12}(\|(L^{0})^{2}\mu^{i}\|_{2}+O( \Delta t)+o(1))\]
and
\[err_{var}\leq\frac{C_{2}}{T}\left(\sum_{j=1}^{d}\|\sigma^{i,j}\|_{4}^{2}+O(\Delta t^{\frac{1}{2}})+o(1)\right).\]
_Proof._ Let \(E\) be the matrix containing the samples of \(e_{t}\). We have
\[\frac{1}{\Delta t}D_{1}^{i}=\frac{1}{2}(\Theta_{0}+\Theta_{1})\alpha^{i}+E.\]
Using this in (6.9) gives us
\[\frac{1}{2}\Theta_{0}^{*}(\Theta_{0}+\Theta_{1})\tilde{\alpha^{i}}=\frac{1}{ 2}\Theta_{0}^{*}(\Theta_{0}+\Theta_{1})\alpha^{i}+\Theta_{0}^{*}E,\]
so the error is
\[err=\tilde{\alpha}^{i}-\alpha^{i}=\left(\frac{1}{2}\Theta_{0}^{*}(\Theta_{0}+ \Theta_{1})\right)^{-1}\Theta_{0}^{*}E.\]
Since \(\mathbb{E}(\theta(X_{t+\Delta t})|X_{t})=\theta(X_{t})+O(\Delta t)\), we can use ergodicity to evaluate
\[\frac{1}{2N}\Theta_{0}^{*}(\Theta_{0}+\Theta_{1})\rightarrow\langle\theta, \theta\rangle+O(\Delta t)+o(1).\]
The proof of the first inequality then follows the proof of Theorem 5.1, using (6.7). The second inequality also follows using
\[\mathbb{E}\left(\|e_{t}\|_{2}^{2}\ |\ X_{t}=x\right)\leq\frac{1}{\Delta t}\sum_{ m=1}^{d}|\sigma^{i,m}(x)|^{2}+O(\Delta t^{\frac{-1}{2}}),\]
which can easily be derived using the Ito-Taylor expansions.
#### 6.1.3 General Method for Estimating Drift
We have given methods which give second order estimates of \(\alpha^{i}\). To generate methods which give even higher order approximations, we note the similarities of the above methods to linear multi-step methods used in the numerical simulation of ODEs. Using the general LMM as a guide, we set up a general method for approximating \(\mu^{i}\):
\[\sum_{l=0}^{k}a_{l}\,\mu^{i}(X_{t_{n+l}})\approx\sum_{l=1}^{p}b_{l}\,(X_{t_{n+l}}^{i}-X_{t_{n}}^{i}), \tag{6.10}\]
or
\[\left(\sum_{l=0}^{k}a_{l}\Theta_{l}\right)\alpha^{i}\approx\sum_{l=1}^{p}b_{l }D_{l}^{i}.\]
Keeping Remark 2 in mind, we can solve this using
\[\left(\sum_{l=0}^{k}a_{l}\Theta_{0}^{*}\Theta_{l}\right)\tilde{\alpha}^{i}=\sum_{l=1}^{p}b_{l}\,\Theta_{0}^{*}D_{l}^{i}. \tag{6.11}\]
The coefficients in (6.10) can be chosen to develop higher order methods. However, due to the stochastic nature of the problem, large amounts of data may be required to achieve the order in practice. We will need enough data to average over the randomness in the SDE, and the higher order methods can be sensitive to noise. More detailed investigation into the convergence of certain classes of methods for dynamics discovery can be found in [10] for deterministic systems.
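A generic implementation of (6.11) can be sketched as follows; here the weights `b` are assumed to absorb the \(1/\Delta t\) scaling (for instance, the trapezoidal method corresponds to \(a=[\tfrac{1}{2},\tfrac{1}{2}]\) and \(b=[1/\Delta t]\)).

```python
import numpy as np

def drift_multistep(X, theta, a, b):
    """Drift estimate from the general multistep form (6.10)-(6.11).

    a : weights a_0..a_k on mu^i(X_{t_{n+l}}).
    b : weights b_1..b_p on the increments X_{t_{n+l}} - X_{t_n};
        the 1/dt factor is assumed to be folded into b.
    Theta_0 multiplies both sides, as required by Remark 2.
    """
    k, p = len(a) - 1, len(b)
    N = len(X) - max(k, p)
    Theta0 = theta(X[:N])
    A = sum(a[l] * (Theta0.T @ theta(X[l:N + l])) for l in range(k + 1))
    rhs = sum(b[l - 1] * (Theta0.T @ (X[l:N + l] - X[:N]))
              for l in range(1, p + 1))
    return np.linalg.solve(A, rhs)
```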
### Diffusion
In this section we will discuss improvements to the estimate for the diffusion. For some systems, particularly when the drift is large relative to the diffusion, the first order approximation given above may not be sufficient to obtain an accurate estimate of the diffusion coefficient. Using ideas similar to those of the previous section, we can use the Ito-Taylor expansions to develop more accurate estimates of \(\Sigma^{i,j}(X_{t})\). However, these methods will be more complex; in addition to samples of \(X_{t}\), some of them may also require data from the drift, \(\mu^{i}(X_{t})\) and \(\mu^{j}(X_{t})\).
#### 6.2.1 Drift Subtraction
Before discussing the higher order methods, we can make an improvement upon the first order method. By correcting for the effects of the drift in the first order method, we can make significant improvements to the constant controlling the error. The Ito-Taylor expansion for \(X_{t}\) gives us
\[X_{t+\Delta t}^{i}-X_{t}^{i}=\mu^{i}(X_{t})\Delta t+\sum_{m=1}^{d}\sigma^{i,m}(X_{t})\Delta W_{t}^{m}+R_{t},\]
where \(\Delta W_{t}=W_{t+\Delta t}-W_{t}\) is the increment of a \(d\)-dimensional Wiener process and \(R_{t}\) is the remainder term. This equation, with the remainder term excluded, actually gives the Euler-Maruyama method for simulating SDEs. In essence, the approximation (4.2) uses
\[X_{t+\Delta t}^{i}-X_{t}^{i}\approx\sum_{m=1}^{d}\sigma^{i,m}(X_{t})\Delta W_ {t}^{m}\]
to approximate the increment of the Wiener process. However, (4.2) tosses out the \(\mu^{i}(X_{t})\Delta t\) term because it is of higher order. If we include it, we get the more accurate
\[\sum_{m=1}^{d}\sigma^{i,m}\Delta W_{t}^{m}=(X_{t+\Delta t}^{i}-X_{t}^{i})-\mu^{i}(X_{t})\Delta t-R_{t}. \tag{6.12}\]
We can use this to generate a better approximation of \(\Sigma^{i,j}\),
\[\Sigma^{i,j}(X_{t})\approx\frac{(X_{t+\Delta t}^{i}-X_{t}^{i}-\mu^{i}(X_{t}) \Delta t)(X_{t+\Delta t}^{j}-X_{t}^{j}-\mu^{j}(X_{t})\Delta t)}{2\Delta t}. \tag{6.13}\]
This approximation will be more accurate than (4.2), but it will have the same order with respect to \(\Delta t\). Letting \(e_{t}\) be the error in (6.13), we can use the weak Ito-Taylor expansion to show
\[\mathbb{E}(e_{t}\,|\,X_{t})=f(X_{t})\frac{\Delta t}{2}+O(\Delta t^{2}),\qquad\quad f=L^{0}\Sigma^{i,j}+\sum_{m=1}^{d}\left(\Sigma^{i,m}\frac{\partial\mu^{j}}{\partial x^{m}}+\Sigma^{j,m}\frac{\partial\mu^{i}}{\partial x^{m}}\right).\]
This gives an improvement over (5.7) by removing the \(\mu^{i}\mu^{j}\) term from \(g\) (compare Theorem 5.2). While this is not an increase in order, if the contribution of the drift dominates the diffusion in an SDE, this term gives the main contribution to the error. As we will see in the numerical experiments, this leads to a drastic improvement in accuracy for some problems.
In order to implement this method, we will need an approximation of \(\mu^{i}\). However, we can use the methods above to represent the drift as \(\mu^{i}(X_{t})\approx\theta(X_{t})\tilde{\alpha}^{i}\). We can use this to set up the matrix equations
\[\Theta_{0}^{*}\Theta_{0}\tilde{\beta}^{i,j}=\frac{1}{2\Delta t}\Theta_{0}^{*}\left((D_{1}^{i}-\Delta t\,\Theta_{0}\tilde{\alpha}^{i})\odot(D_{1}^{j}-\Delta t\,\Theta_{0}\tilde{\alpha}^{j})\right), \tag{6.14}\]
and solve for \(\tilde{\beta}^{i,j}\).
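A sketch of this drift subtracted estimate, reusing a previously computed drift fit `alpha` so that \(\mu(X_{t})\approx\theta(X_{t})\alpha\) (the same assumptions on `theta` apply):

```python
import numpy as np

def diffusion_drift_sub(X, dt, theta, alpha):
    """Drift subtracted diffusion estimate, solving (6.14)."""
    Theta0 = theta(X[:-1])
    D1 = X[1:] - X[:-1]
    R = D1 - dt * (Theta0 @ alpha)      # increments corrected for the drift
    A = Theta0.T @ Theta0
    d = X.shape[1]
    return {(i, j): np.linalg.solve(A, Theta0.T @ (R[:, i] * R[:, j]) / (2.0 * dt))
            for i in range(d) for j in range(i + 1)}
```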
**Remark 4**: _The equation (6.14) assumes that the same dictionary \(\theta\) is used to estimate \(\mu^{i},\mu^{j}\) and \(\Sigma^{i,j}\). In general, we could use separate dictionaries to estimate each of the parameters, since all we need are the approximations of the samples of \(\mu^{i}(X_{t})\) and \(\mu^{j}(X_{t})\) to estimate \(\beta^{i,j}\)._
#### 6.2.2 Second Order Forward Difference
While subtracting the drift from the differences \(X^{i}_{t+\Delta t}-X^{i}_{t}\) gives marked improvements, we can also generate a higher order method using a two step forward difference, similar to the drift. The analysis for the estimation of the diffusion constant using the two step forward difference is essentially identical to that of the drift, so we will go through it briefly. Define the approximation
\[\Sigma^{i,j}\approx\frac{4(X^{i}_{t+\Delta t}-X^{i}_{t})(X^{j}_{t+\Delta t}-X^{ j}_{t})-(X^{i}_{t+2\Delta t}-X^{i}_{t})(X^{j}_{t+2\Delta t}-X^{j}_{t})}{4 \Delta t}. \tag{6.15}\]
As usual, letting \(e_{t}\) be the error in this approximation, we can use the Ito-Taylor expansions (3.4) and (3.8) to show that
\[\mathbb{E}(e_{t})=O(\Delta t^{2})\quad\text{ and }\quad\mathbb{E}(|e_{t}|^{2})=O( \Delta t).\]
This gives us a second order method for the diffusion coefficients. We did not include the constants in the order \(\Delta t^{2}\) terms for brevity, since the number of terms in the expressions can get quite large. We can use the approximation (6.15) to set up the matrix equation
\[\Theta^{*}_{0}\Theta_{0}\tilde{\beta}^{i,j}=\frac{1}{4\Delta t}\Theta^{*}_{0} \left(4D^{i}_{1}\odot D^{j}_{1}-D^{i}_{2}\odot D^{j}_{2}\right), \tag{6.16}\]
which we can solve for \(\tilde{\beta}^{i,j}\).
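In code, this mirrors the second order drift estimate, with products of increments in place of the increments themselves (again a sketch under the same assumptions on `theta`):

```python
import numpy as np

def diffusion_fd2(X, dt, theta, i, j):
    """Second order forward difference estimate of Sigma^{i,j}, eq. (6.16)."""
    Theta0 = theta(X[:-2])
    D1 = X[1:-1] - X[:-2]
    D2 = X[2:] - X[:-2]
    rhs = Theta0.T @ (4.0 * D1[:, i] * D1[:, j]
                      - D2[:, i] * D2[:, j]) / (4.0 * dt)
    return np.linalg.solve(Theta0.T @ Theta0, rhs)
```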
**Theorem 6.3**: _Consider the estimate \(\tilde{\beta}^{i,j}\) given by solving (6.16). Then we have_
\[err_{mean}=O(\Delta t^{2})+o(1)\]
_and_
\[err_{var}=\frac{1}{T}O(\Delta t)+o(1/T).\]
The proof of Theorem 6.3 is similar to the previous proofs. Additionally, we only give the leading order of the error, so deriving the bounds for \(\mathbb{E}(e_{t}|X_{t})\) and \(\mathbb{E}(|e_{t}|^{2}|X_{t})\) is simpler than the previous methods.
#### 6.2.3 Trapezoidal Method
Extending the trapezoidal approximation to estimating the diffusion coefficient is slightly trickier. Let \(\Delta X^{i}_{t}=X^{i}_{t+\Delta t}-X^{i}_{t}\). If we attempt to use the analogue of (6.6), we get
\[\Sigma^{i,j}(X_{t_{n+1}})+\Sigma^{i,j}(X_{t_{n}})=\frac{\Delta X^{i}_{t_{n}}\Delta X^{j}_{t_{n}}}{\Delta t}+R_{t_{n}},\]
with
\[\mathbb{E}(R_{t_{n}})=\frac{\Delta t}{2}f(X_{t_{n}})+O(\Delta t^{2}),\qquad f=2\mu^{i}\mu^{j}+\sum_{k=1}^{d}\left(\Sigma^{i,k}\frac{\partial\mu^{i}}{\partial x^{k}}+\Sigma^{j,k}\frac{\partial\mu^{j}}{\partial x^{k}}\right),\]
which is still only an order \(\Delta t\) method. However, we already demonstrated in (6.12) that correcting the difference \(\Delta X^{i}_{t}\) for the drift can improve our approximation of \(\sum_{m=1}^{d}\sigma^{i,m}\Delta W^{m}_{t}\). We will use the same trick here, except we will improve upon (6.12) by using the average values of \(\mu^{i}\) and \(\mu^{j}\) instead of the value at the left endpoint:
\[\sum_{m=1}^{d}\sigma^{i,m}\Delta W^{m}_{t}\approx(X^{i}_{t+\Delta t}-X^{i}_{t})-\frac{\Delta t}{2}(\mu^{i}(X_{t})+\mu^{i}(X_{t+\Delta t})).\]
If we use these differences to generate the trapezoidal method, we get
\[\Sigma^{i,j}(X_{t+\Delta t})+\Sigma^{i,j}(X_{t})\approx\frac{\left(\Delta X^{i}_{t}-\frac{\Delta t}{2}(\mu^{i}(X_{t})+\mu^{i}(X_{t+\Delta t}))\right)\left(\Delta X^{j}_{t}-\frac{\Delta t}{2}(\mu^{j}(X_{t})+\mu^{j}(X_{t+\Delta t}))\right)}{\Delta t}. \tag{6.17}\]
If we consider the error in (6.17), using the appropriate Ito-Taylor expansions we can show
\[|\mathbb{E}(e_{t}\,|\,X_{t})|=O(\Delta t^{2})\quad\quad\text{and}\quad\quad \mathbb{E}(|e_{t}|^{2})=O(\Delta t).\]
Then, using the usual matrix notation, we can set up the equation
\[\Theta_{0}^{*}(\Theta_{0}+\Theta_{1})\tilde{\beta}^{i,j}=\frac{1}{\Delta t}\Theta_{0}^{*}\left(\left(D_{1}^{i}-\frac{\Delta t}{2}(\Theta_{0}+\Theta_{1})\tilde{\alpha}^{i}\right)\odot\left(D_{1}^{j}-\frac{\Delta t}{2}(\Theta_{0}+\Theta_{1})\tilde{\alpha}^{j}\right)\right). \tag{6.18}\]
We can solve this equation to get an order \(\Delta t^{2}\) approximation of \(\beta^{i,j}\).
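A sketch of this solve, with `alpha` taken from the trapezoidal drift estimate (6.9) and the usual assumptions on `theta`:

```python
import numpy as np

def diffusion_trapezoidal(X, dt, theta, alpha):
    """Trapezoidal diffusion estimate, solving (6.18)."""
    Theta0, Theta1 = theta(X[:-1]), theta(X[1:])
    D1 = X[1:] - X[:-1]
    # Increments corrected with the averaged (trapezoidal) drift.
    R = D1 - 0.5 * dt * ((Theta0 + Theta1) @ alpha)
    A = Theta0.T @ (Theta0 + Theta1)
    d = X.shape[1]
    return {(i, j): np.linalg.solve(A, Theta0.T @ (R[:, i] * R[:, j]) / dt)
            for i in range(d) for j in range(i + 1)}
```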
**Theorem 6.4**: _Consider the estimate \(\tilde{\beta}^{i,j}\) given by solving (6.18). Then we have_
\[err_{mean}=O(\Delta t^{2})+o(1)\]
_and_
\[err_{var}=\frac{1}{T}O(\Delta t)+o(1/T).\]
The proof of Theorem 6.4 is similar to the previous proofs, using the appropriate error bounds. Although the order of the error is identical to that of Theorem 6.3, we will see that this method tends to have lower error. We did not include the constant terms for these errors for brevity, since the higher order Ito-Taylor expansions involve many terms.
## 7 Numerical Examples
In this section, we demonstrate the performance of the methods presented above on numerical examples. For each example, we will generate approximations \(\tilde{\alpha}^{i}\approx\alpha^{i}\) and \(\tilde{\beta}^{i,j}\approx\beta^{i,j}\). However, to present the data more simply, instead of computing the mean and mean squared error for each vector \(\tilde{\alpha}^{i}\) and \(\tilde{\beta}^{i,j}\), we will aggregate the errors across all the coefficients. We will compute the mean error, normalized by the norms of \(\alpha^{i}\) and \(\beta^{i,j}\), using
\[Err_{m}=\left(\frac{\sum_{i=1}^{d}\|\mathbb{E}(\tilde{\alpha}^{i})-\alpha^{i} \|_{2}^{2}}{\sum_{i=1}^{d}\|\alpha^{i}\|_{2}^{2}}\right)^{\frac{1}{2}}\quad \text{ or }\quad Err_{m}=\left(\frac{\sum_{i\geq j\geq 1}^{d}\|\mathbb{E}( \tilde{\beta}^{i,j})-\beta^{i,j}\|_{2}^{2}}{\sum_{i\geq j\geq 1}^{d}\|\beta^{i,j}\|_{ 2}^{2}}\right)^{\frac{1}{2}}.\]
Similarly, we will calculate the normalized variance
\[Err_{var}=\frac{\sum_{i=1}^{d}Var\left(\tilde{\alpha}^{i}\right)}{\sum_{i=1}^ {d}\|\alpha^{i}\|_{2}^{2}}\quad\text{ or }\quad Err_{var}=\frac{\sum_{i\geq j\geq 1}^{d} Var\left(\tilde{\beta}^{i,j}\right)}{\sum_{i\geq j\geq 1}^{d}\|\beta^{i,j}\|_{2}^{2}}.\]
Since these errors are based on aggregating the errors for all of the components of \(\alpha^{i}\) or \(\beta^{i,j}\), they will demonstrate the same convergence rates as in Theorems 5.1-6.4. The constants, however, may be different.
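These aggregated errors are straightforward to compute from repeated experiments; a sketch (with hypothetical array shapes) is:

```python
import numpy as np

def normalized_errors(coef_true, coef_hats):
    """Aggregate mean error and variance over independent trajectories.

    coef_true : (k, d) true coefficient vectors, one column per component.
    coef_hats : (n_runs, k, d) estimates from n_runs trajectories.
    Returns (Err_m, Err_var) as defined above.
    """
    mean_est = coef_hats.mean(axis=0)
    norm2 = np.sum(coef_true**2)
    err_m = np.sqrt(np.sum((mean_est - coef_true)**2) / norm2)
    err_var = np.sum((coef_hats - mean_est)**2) / (len(coef_hats) * norm2)
    return err_m, err_var
```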
\begin{table}
\begin{tabular}{|c||c|c||c|c|} \hline
 & \multicolumn{2}{c||}{Drift} & \multicolumn{2}{c|}{Diffusion} \\ \hline \hline
Name & Equation & Leading Error Term & Equation & Error \\ \hline
FD-Ord 1 & (4.7) & \(\frac{C_{1}}{2}\|L^{0}\mu^{i}\|_{2}\Delta t\) & (4.8) & \(O(\Delta t)\) \\ \hline
FD-Ord 2 & (6.4) & \(\frac{C_{1}}{3}\|(L^{0})^{2}\mu^{i}\|_{2}\Delta t^{2}\) & (6.16) & \(O(\Delta t^{2})\) \\ \hline
Trapezoidal & (6.9) & \(\frac{C_{1}}{12}\|(L^{0})^{2}\mu^{i}\|_{2}\Delta t^{2}\) & (6.18) & \(O(\Delta t^{2})\) \\ \hline
Drift-Sub & -- & -- & (6.14) & \(O(\Delta t)\) \\ \hline
\end{tabular}
\end{table}
Table 1: Summary of the methods for estimating the drift \((\mu^{i})\) and the diffusion \((\Sigma^{i,j})\).
For each example, we will estimate the drift and diffusion using each of the methods described. The drift will be estimated using the first and second order forward differences, as well as the trapezoidal approximation. For the diffusion, we will use the first and second order forward differences, the drift-subtracted first order difference, and the trapezoidal method. For the drift-subtracted estimation, we will use the estimate of \(\mu\) generated by the first order forward difference. Similarly, for the trapezoidal approximation of \(\Sigma\), we will use the estimate generated by the trapezoidal approximation of \(\mu\).
### Double Well Potential
Consider the SDE
\[dX_{t}=\left(-X_{t}^{3}+\frac{1}{2}X_{t}\right)dt+\left(1+\frac{1}{4}X_{t}^{2}\right)dW_{t} \tag{7.1}\]
This equation represents a diffusion in the double well potential \(U(x)=\frac{1}{4}x^{4}-\frac{1}{2}x^{2}\). Without the diffusion, the trajectories of this system settle towards one of two fixed points, depending on which basin of attraction they start in. With the stochastic forcing, a trajectory will move around in one basin of attraction until it is sufficiently perturbed to move to the other basin. We also note that for the majority of the trajectory, the state will be near a point where the drift is zero, so the dynamics will be dominated by the diffusion. At these points, the trajectory behaves similarly to Brownian motion.
For the SINDy algorithm, we will use a dictionary of monomials in \(x\) up to degree 14:
\[\theta(x)=\begin{bmatrix}1&x&\ldots&x^{14}\end{bmatrix}.\]
This basis will be used to estimate both the drift and diffusion. To generate the data for the algorithm, we simulated (7.1) using the Euler-Maruyama method 1,000 times with a time step of \(2\times 10^{-4}\) seconds and a duration of 20,000 seconds. The initial condition was drawn randomly for each simulation from the standard normal distribution. The SINDy methods were then run on the data from each simulation for different sampling periods, \(\Delta t\), and lengths of the trajectory, \(T\). We use a minimum \(\Delta t\) of 0.002 so the simulation has a resolution of at least ten steps between each data sample. The truncation parameters for the sparse solver were set at \(\lambda=0.005\) for the drift and \(\lambda=0.001\) for the diffusion.
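A sketch of the data generation step is given below (Euler-Maruyama with the fine internal step; the default duration is shortened to 1,000 seconds so the loop runs quickly, whereas the experiments above use \(T=20{,}000\) seconds):

```python
import numpy as np

def simulate_double_well(T=1_000.0, h=2e-4, seed=0):
    """Euler-Maruyama simulation of the double well SDE (7.1)."""
    rng = np.random.default_rng(seed)
    n = int(T / h)
    x = np.empty(n + 1)
    x[0] = rng.standard_normal()              # x0 ~ N(0, 1)
    dW = rng.standard_normal(n) * np.sqrt(h)  # Wiener increments
    for k in range(n):
        mu = -x[k]**3 + 0.5 * x[k]            # drift
        sigma = 1.0 + 0.25 * x[k]**2          # diffusion
        x[k + 1] = x[k] + mu * h + sigma * dW[k]
    return x                                  # subsample for Delta t >= 0.002
```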
Figure 1: (Left) The mean error in the estimation of the drift coefficients for the double well system (7.1) is plotted as a function of \(\Delta t\). The error is approximated using 1,000 trajectories of length \(T=20,000\) seconds. (Center, Right) The variance for each method is plotted against the sampling period, \(\Delta t\), and the trajectory length, \(T\). The trajectory length is fixed at \(T=20,000\) seconds for the center plot, while the sampling period is fixed at \(\Delta t=4\times 10^{-3}\) for the rightmost plot.
As can be seen from figure 7.1, the expected errors in all three methods converge to zero as \(\Delta t\to 0\). For small \(\Delta t\), the expected estimate was within 1% of the true value. Additionally, the two higher order methods showed that, in expectation, they produce more accurate results and appear to converge more quickly, in line with Theorems 5.1, 6.1, and 6.2. For these methods, the expected error was as much as an order of magnitude smaller, depending on the size of \(\Delta t\).
The variance, however, is rather large relative to the size of the expected error for all three methods. This is likely due to the system tending to settle towards the points \(x=\pm 1/\sqrt{2}\) where the drift is zero. Near these points, the dynamics are dominated by the diffusion, making it difficult to estimate the drift. As can be seen (noting the scale of the center plot), the variance does not change a great amount as \(\Delta t\) decreases, as is predicted for the estimates of the drift. As shown in the rightmost plot, the variance decreases as the length of the trajectory increases. In order to benefit from the higher order methods to the full extent, we would need a long enough trajectory to control the variance.
For the diffusion, figure 7.2 shows again that, as \(\Delta t\to 0\), all of the methods do indeed converge in expectation. The Drift-Sub method slightly outperforms FD-Ord 1; the error is typically reduced by about \(20\%-30\%\). Of the two higher order methods, the trapezoidal method typically yields the best results, often an order of magnitude better than FD-Ord 1. FD-Ord 2 also gives substantial improvements for small \(\Delta t\). Contrary to the drift, the variance in the estimate of the diffusion does decrease as \(\Delta t\) goes to zero. The decrease appears to be roughly proportional to both \(\Delta t\) and \(1/T\), which is in line with Theorems 5.2, 6.3, and 6.4.
### Noisy Van-Der-Pol Oscillator
Consider the ODE
\[\begin{bmatrix}\dot{x}^{1}\\ \dot{x}^{2}\end{bmatrix}=\begin{bmatrix}x^{2}\\ (1-(x^{1})^{2})x^{2}-x^{1}\end{bmatrix}.\]
This is the Van-Der-Pol equation, which describes a nonlinear oscillator. Perturbing this equation by adding noise, we get the SDE
\[\begin{bmatrix}dX_{t}^{1}\\ dX_{t}^{2}\end{bmatrix}=\begin{bmatrix}X_{t}^{2}\\ (1-(X_{t}^{1})^{2})X_{t}^{2}-X_{t}^{1}\end{bmatrix}dt+\sigma(X_{t})dW_{t}, \tag{7.2}\]
Figure 7.2: (Left) The mean error in the estimation of the diffusion coefficients for the double well system (7.1) is plotted as a function of \(\Delta t\). The error is approximated using 1,000 trajectories of length \(T=20,000\) seconds. (Center, Right) The variance for each method is plotted against the sampling period, \(\Delta t\), and the trajectory length, \(T\). The trajectory length is fixed at \(T=20,000\) seconds for the center plot, while the sampling period is fixed at \(\Delta t=4\times 10^{-3}\) for the rightmost plot.
where \(W_{t}\) is a two dimensional Wiener process. For the simulations, we let
\[\sigma(x)=\frac{1}{2}\begin{bmatrix}1+0.3x^{2}&0\\ 0&0.5+0.2x^{1}\end{bmatrix}.\]
We chose this system to represent a different type of limiting behavior. For this system, the dynamics settle around a limit cycle. While they will have a certain amount of randomness, the trajectories will demonstrate an approximately cyclic behavior. In particular, this also means that the drift will rarely be near zero, as opposed to the previous example where the drift was often small.
The dictionary we will use for the SINDy algorithm consists of all monomials in \(x^{1}\) and \(x^{2}\) up to degree 6:
\[\theta(x)=\begin{bmatrix}1&x^{1}&x^{2}&x^{1}x^{2}&\ldots&(x^{1})^{2}(x^{2})^{4 }&x^{1}(x^{2})^{5}&(x^{2})^{6}\end{bmatrix}.\]
This basis will be used to estimate both the drift and diffusion. To generate the data for the algorithm, we simulated (7.2) using the Euler-Maruyama method 1,000 times with a time step of \(2\times 10^{-5}\) seconds and a duration of 1,000 seconds. Each component of the initial condition was drawn randomly for each simulation from the standard normal distribution. The SINDy methods were then run on the data from each simulation for different sampling periods, \(\Delta t\), and lengths of the trajectory, \(T\). As before, we use \(\Delta t\geq 2\times 10^{-4}\) to ensure that the sampling period is at least 10 times the simulation time step. The truncation parameters for the sparse solver were set at \(\lambda=0.05\) for the drift and \(\lambda=0.02\) for the diffusion.
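A minimal sketch of this two-dimensional data generation (our own names and a far shorter duration, for illustration only) is:

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_maruyama(drift, sigma, x0, dt_sim, n_steps):
    # Generic Euler-Maruyama: X_{k+1} = X_k + f(X_k) dt + sigma(X_k) dW_k.
    d = len(x0)
    X = np.empty((n_steps + 1, d))
    X[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt_sim), d)
        X[k + 1] = X[k] + drift(X[k]) * dt_sim + sigma(X[k]) @ dW
    return X

vdp_drift = lambda x: np.array([x[1], (1 - x[0]**2) * x[1] - x[0]])
vdp_sigma = lambda x: 0.5 * np.array([[1 + 0.3 * x[1], 0.0],
                                      [0.0, 0.5 + 0.2 * x[0]]])

X = euler_maruyama(vdp_drift, vdp_sigma, rng.standard_normal(2),
                   dt_sim=2e-5, n_steps=100_000)
```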
In figure 7.3, we first note that the variance very quickly drops to about \(5\times 10^{-5}\) and stays roughly constant as \(\Delta t\) decreases. This falls very much in line with Theorems 5.1, 6.1, and 6.2, which assert that the variance does not depend on the sampling frequency but only decreases with the trajectory length \(T\). For the expected error, the FD-Ord 2 and trapezoidal methods show drastic improvements over FD-Ord 1, with the trapezoidal method reducing the error by almost two orders of magnitude for some values of \(\Delta t\). For the larger \(\Delta t\), the slopes of the graphs demonstrate that these methods converge at twice the order of the first order forward difference, as predicted by Theorems 5.1, 6.1, and 6.2. However, both second order methods quickly reach a point where the performance remains constant at about \(2\times 10^{-4}\). This is due to the lack of data needed to average out
Figure 7.3: (Left) The mean error in the estimation of the drift coefficients for the Van-Der-Pol system (7.2) is plotted as a function of \(\Delta t\). The error is approximated using 1,000 trajectories of length \(T=1,000\) seconds. (Center, Right) The variance for each method is plotted against the sampling period, \(\Delta t\), and the trajectory length, \(T\). The trajectory length is fixed at \(T=1,000\) seconds for the center plot, while the sampling period is fixed at \(\Delta t=8\times 10^{-3}\) for the rightmost plot.
the random variation to sufficient precision. With sufficient data, we would expect the performance to continue to improve proportionally to \(\Delta t^{2}\).
For the diffusion, figure 7.4 demonstrates a greater separation in the performance of the different methods compared to the double well system. Here, the FD-Ord 1 and drift subtracted methods both demonstrate the same first order convergence, as predicted in Theorem 5.2, but the drift subtracted method demonstrates a substantially lower error, ranging from half an order to almost a full order of magnitude better. FD-Ord 2 begins at roughly the same error as FD-Ord 1 for large \(\Delta t\), but converges faster, as predicted by Theorem 6.3, until it gives over an order of magnitude improvement for small \(\Delta t\). Finally, although it is difficult to judge the speed of convergence for the trapezoidal method, it gives the most accurate results across all \(\Delta t\). The variance for all of the methods behaves similarly to the double well example and, as expected, decreases as \(\Delta t\to 0\) and \(T\to\infty\).
### Noisy Lorenz Attractor
Consider the ODE
\[\dot{x}=\begin{bmatrix}\dot{x}^{1}\\ \dot{x}^{2}\\ \dot{x}^{3}\end{bmatrix}=\begin{bmatrix}10(x^{2}-x^{1})\\ x^{1}(28-x^{3})-x^{2}\\ x^{1}x^{2}-\frac{8}{3}x^{3}\end{bmatrix}=f(x).\]
This is the Lorenz system, which is famously a chaotic system exhibiting a strange attractor. If we perturb this equation by adding noise, we get the SDE
\[dX_{t}=f(X_{t})dt+\sigma(X_{t})dW_{t}, \tag{7.3}\]
where \(W_{t}\) is a three dimensional Wiener process. For this example, we let
\[\sigma(x)=\begin{bmatrix}1+\sin(x^{2})&0&\sin(x^{1})\\ 0&1+\sin(x^{3})&0\\ \sin(x^{1})&0&1-\sin(x^{2})\end{bmatrix}.\]
To generate the data for the algorithm, we simulated (7.3) using the Euler-Maruyama method 1,000 times with a time step of \(2\times 10^{-5}\) seconds and a duration of 1,000 seconds. Each component of the initial condition was drawn randomly for each simulation from the standard normal distribution.
Figure 7.4: (Left) The mean error in the estimation of the diffusion coefficients for the Van-Der-Pol system (7.2) is plotted as a function of \(\Delta t\). The error is approximated using 1,000 trajectories of length \(T=1,000\) seconds. (Center, Right) The variance for each method is plotted against the sampling period, \(\Delta t\), and the trajectory length, \(T\). The trajectory length is fixed at \(T=1,000\) seconds for the center plot, while the sampling period is fixed at \(\Delta t=8\times 10^{-3}\) for the rightmost plot.
The SINDy methods were then run on the data from each simulation for different sampling periods, \(\Delta t\), and lengths of the trajectory, \(T\). The truncation parameters for the sparse solver were set at \(\lambda=0.05\) for the drift and \(\lambda=0.02\) for the diffusion.
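The corresponding data generation can reuse the `euler_maruyama` driver and `rng` from the Van-Der-Pol sketch above; only the drift and the sin-based diffusion matrix change (again a shortened illustration of our own, not the full experiment):

```python
import numpy as np

lorenz_drift = lambda x: np.array([10 * (x[1] - x[0]),
                                   x[0] * (28 - x[2]) - x[1],
                                   x[0] * x[1] - (8 / 3) * x[2]])
lorenz_sigma = lambda x: np.array([[1 + np.sin(x[1]), 0.0, np.sin(x[0])],
                                   [0.0, 1 + np.sin(x[2]), 0.0],
                                   [np.sin(x[0]), 0.0, 1 - np.sin(x[1])]])

X = euler_maruyama(lorenz_drift, lorenz_sigma, rng.standard_normal(3),
                   dt_sim=2e-5, n_steps=100_000)
```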
We will use different dictionaries to estimate the drift and diffusion. For the drift, the dictionary consists of all monomials in \(x^{1},\ x^{2}\), and \(x^{3}\) up to degree 5:
\[\theta(x)=\begin{bmatrix}1&x^{1}&x^{2}&\ldots&x^{1}x^{2}(x^{3})^{3}&(x^{2})^{2 }(x^{3})^{3}&x^{2}(x^{3})^{4}&(x^{3})^{5}\end{bmatrix}.\]
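Such dictionaries can be generated mechanically. The sketch below (our own helper, not the paper's code) enumerates all monomials of total degree at most `deg` in the columns of the state matrix `X`; the sin-dictionary used for the diffusion below is obtained by applying the same builder to \(\sin(X)\).

```python
import itertools
import numpy as np

def monomial_dictionary(X, deg):
    # All monomials of total degree <= deg in the columns of X (shape (N, d)),
    # ordered by increasing degree; realizes the dictionaries theta(x) above.
    N, d = X.shape
    cols = []
    for total in range(deg + 1):
        for combo in itertools.combinations_with_replacement(range(d), total):
            col = np.ones(N)
            for j in combo:
                col = col * X[:, j]
            cols.append(col)
    return np.column_stack(cols)

Theta_drift = monomial_dictionary(X, 5)          # drift dictionary
Theta_diff  = monomial_dictionary(np.sin(X), 4)  # sin-dictionary for Sigma
```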
As before, figure 7.5 shows that the variance of the estimate for the drift decreases steadily as \(T\to\infty\), while it approaches a minimum value as \(\Delta t\) decreases and remains constant after reaching that minimum. In terms of the mean error, this example gives the clearest confirmation of the convergence rates demonstrated in Theorems 5.1, 6.1, and 6.2. The slopes of the plots show that the error with FD-Ord 1 is roughly proportional to \(\Delta t\), while the FD-Ord 2 and trapezoidal methods converge at double the rate. For small \(\Delta t\), the second order methods do not seem to improve, due to the lack of sufficient data to compute the averages to high enough precision.
To estimate the diffusion, we used a dictionary consisting of all monomials in \(\sin(x^{1})\), \(\sin(x^{2})\), and \(\sin(x^{3})\) up to degree four:
\[\theta(x)=\begin{bmatrix}1&\sin(x^{1})&\sin(x^{2})&\ldots&\sin(x^{1})\sin(x^{2 })\sin^{2}(x^{3})&\sin(x^{2})\sin^{3}(x^{3})&\sin^{4}(x^{3})\end{bmatrix}.\]
The error plot in figure 7.6 provides the most compelling example of the improvements of the higher order methods for estimating the diffusion. FD-Ord 1 clearly demonstrates its order one convergence as \(\Delta t\to 0\) (Theorem 5.2), but the error is quite large compared to the other methods. Even at our highest sampling frequency, \(\Delta t=2\times 10^{-4}\), we only get moderately accurate results, with an error over 20%. For this system, the drift subtracted method, although still first order, provides great improvements over FD-Ord 1, nearly two orders of magnitude better for most \(\Delta t\). FD-Ord 2 also demonstrates the second order convergence given in Theorem 6.3, giving very accurate results for small \(\Delta t\). Finally, the best performance again comes from the trapezoidal method, which is the most accurate across all \(\Delta t\). As expected from Theorem 6.4, we can see that it converges faster than FD-Ord 1, but the convergence rate is not as clear as that of the other methods.
As for the variance, it decreased for all four methods as \(T\) increased and \(\Delta t\) decreased, as expected. However, the Trapezoidal and drift subtracted methods both showed a substantially
Figure 7.5: (Left) The mean error in the estimation of the drift coefficients for the Lorenz system (7.3) is plotted as a function of \(\Delta t\). The error is approximated using 1,000 trajectories of length \(T=1,000\) seconds. (Center, Right) The variance for each method is plotted against the sampling period, \(\Delta t\), and the trajectory length, \(T\). The trajectory length is fixed at \(T=1,000\) seconds for the center plot, while the sampling period is fixed at \(\Delta t=8\times 10^{-2}\) for the rightmost plot.
lower variance. This is likely because the drift tends to dominate the diffusion in this system. Both the drift subtracted and trapezoidal methods correct for this, preventing the drift from having an effect on the estimate of the diffusion.
## 8 Conclusion
As was shown in this and previous papers ([1], [6], [4]), the SINDy algorithm can be used to accurately estimate the parameters of a stochastic differential equation. However, the significant amount of noise involved requires one to use a great deal of data (i.e. a long time series) and/or methods which improve the robustness of SINDy to noise. Unfortunately, even when SINDy identifies all of the correct dictionary functions present in the dynamics, we showed that the sampling frequency limits the accuracy of the results when using the first order Kramers-Moyal formulas to estimate the drift and diffusion. The necessity for high sampling frequencies, combined with long trajectories, makes SINDy a data-hungry algorithm.
The higher order estimates presented in this paper allow us to overcome the \(O(\Delta t)\) convergence given in [1]. With the higher order methods we can compute accurate estimates of the SDEs using far lower sampling frequencies. In addition to making SINDy a more accurate system identification tool, these improvements also greatly reduce the data requirements of the algorithm. By achieving accurate results at lower sampling frequencies we relax the data-acquisition constraint, which makes SINDy a more feasible system identification method for SDEs.
Figure 7.6: (Left) The mean error in the estimation of the diffusion coefficients for the Lorenz system (7.3) is plotted as a function of \(\Delta t\). The error is approximated using 1,000 trajectories of length \(T=1,000\) seconds. (Center, Right) The variance for each method is plotted against the sampling period, \(\Delta t\), and the trajectory length, \(T\). The trajectory length is fixed at \(T=1,000\) seconds for the center plot, while the sampling period is fixed at \(\Delta t=2\times 10^{-2}\) for the rightmost plot. |
2309.09263 | Zariski invariant for quasi-ordinary hypersurfaces | We introduce an $\tilde{\mathcal{A}}$-invariant for quasi-ordinary parameterizations and use it to describe quasi-ordinary surfaces with
one generalized characteristic exponent admitting a countable moduli. | Rafael Afonso Barbosa, Marcelo Escudeiro Hernandes | 2023-09-17T12:59:10Z | http://arxiv.org/abs/2309.09263v2 | # Zariski invariant for quasi-ordinary hypersurfaces
###### Abstract
We introduce an \(\tilde{\mathcal{A}}\)-invariant for quasi-ordinary parameterizations and use it to describe quasi-ordinary surfaces with one generalized characteristic exponent admitting a countable moduli.
2020 Mathematics Subject Classification: 14B05 (primary), 32S25 (secondary).
key words: Quasi-ordinary hypersurface, Analytical invariants, Generalized Zariski exponents.
## 1 Introduction
In [13] Zariski introduces an analytic invariant for irreducible plane curves (plane branches) that can be determined directly from a parameterization; such an invariant is known as the _Zariski invariant_ or _Zariski exponent_. Considering the topological class and the Zariski exponent, it is possible to describe all plane branches admitting a zero-dimensional moduli space (see [3] and [7], for instance).
Analytic plane curves are particular cases of quasi-ordinary hypersurfaces. An analytic germ \((\mathcal{X},0)\subset\left(\mathbb{C}^{r+1},0\right)\) of hypersurface is called _quasi-ordinary_ (q.o.h. for short) if there are local coordinates \((\underline{X},X_{r+1}):=(X_{1},\ldots,X_{r},X_{r+1})\) such that \((\mathcal{X},0)\), in these coordinates, is given by \(\left\{(\alpha_{1},\ldots,\alpha_{r+1})\in\mathbb{C}^{r+1};f(\alpha_{1},\ldots,\alpha_{r+1})=0\right\}\) where \(f\in\mathbb{C}\{\underline{X}\}\left[X_{r+1}\right]\) is a Weierstrass polynomial with discriminant \(\Delta_{X_{r+1}}f=\underline{X}^{\delta}\cdot u:=X_{1}^{\delta_{1}}\cdot\ldots\cdot X_{r}^{\delta_{r}}\cdot u\) for some unit \(u\in\mathbb{C}\{\underline{X}\}\) and \(\delta=(\delta_{1},\ldots,\delta_{r})\in\mathbb{N}^{r}\). This condition is equivalent to saying that there exists a finite morphism \(\pi:(\mathcal{X},0)\rightarrow(\mathbb{C}^{r},0)\) whose discriminant locus is contained in a normal crossing divisor.
Although q.o.h. are generalizations of plane curves, such hypersurfaces do not have isolated singularities in general. The only quasi-ordinary hypersurfaces with isolated singularities are plane curves and normal surfaces. On the other hand, plane branches and irreducible q.o.h. share some important properties: in both cases, we can obtain parameterizations and determine
the topological class using a finite number of certain exponents. In fact, by the Abhyankar-Jung theorem (see [1]), if \(f\in\mathbb{C}\{\underline{X}\}\left[X_{r+1}\right]\) is an irreducible Weierstrass polynomial of degree \(n\) and it defines a q.o.h. then any root \(\xi\) of \(f\) (called a _quasi-ordinary branch_) belongs to \(\mathbb{C}\left\{\underline{X}^{\frac{1}{n}}\right\}:=\mathbb{C}\left\{X_{1}^{ \frac{1}{n}},\ldots,X_{r}^{\frac{1}{n}}\right\}\). In this way, denoting \(\{\xi_{1},\ldots,\xi_{n}\}\) the set of roots of \(f\) we have
\[\Delta_{X_{r+1}}f=(-1)^{\frac{n(n-1)}{2}}\prod_{i\neq j}(\xi_{i}-\xi_{j})=\underline{X}^{\delta}\cdot u(\underline{X})\in\mathbb{C}\left\{\underline{X}\right\}\ \ \text{with}\ u(\underline{0})\neq 0.\]
In particular, \(\xi_{i}-\xi_{j}=X_{1}^{\frac{\lambda_{1}(i,j)}{n}}\cdot\ldots\cdot X_{r}^{ \frac{\lambda_{r}(i,j)}{n}}u_{ij}\in\mathbb{C}\left\{\underline{X}^{\frac{1}{ n}}\right\}\) with \(u_{ij}\) a unit and \(\lambda_{k}(i,j)\in\mathbb{N}\) for \(k=1,\ldots,r\).
Considering the usual product order \(\preceq\) in \(\mathbb{N}^{r}\), that is, \(\alpha\preceq\beta\) if and only if \(\alpha_{i}\leq\beta_{i}\) for all \(1\leq i\leq r\) and denoting \(\lambda_{1},\ldots,\lambda_{g}\) the distinct \(r\)-tuples \(\lambda(i,j)=(\lambda_{1}(i,j),\ldots,\lambda_{r}(i,j))\) we can reindex them in such a way that \(\lambda_{1}\prec\ldots\prec\lambda_{g}\) (Lemma 5.6, [10]). The elements \(\lambda_{i}\) with \(1\leq i\leq g\) are called _(generalized) characteristic exponents_ of \(f\).
The generalized characteristic exponents play an important role in the topological classification of irreducible q.o.h.. Two q.o.h. \((\mathcal{X},0)\) and \((\mathcal{Y},0)\) in \(\mathbb{C}^{r+1}\) are _topologically equivalent_ as immersed germs if there are neighborhoods \(U\) and \(V\) of the origin and a (germ of) homeomorphism \(\tilde{\Phi}:(\mathbb{C}^{r+1},0)\rightarrow(\mathbb{C}^{r+1},0)\) such that \(\tilde{\Phi}(\mathcal{X}\cap U)=\mathcal{Y}\cap V\). If \(\tilde{\Phi}\) is an analytic isomorphism, we say that \((\mathcal{X},0)\) and \((\mathcal{Y},0)\) are _analytically equivalent_. As a q.o.h. \((\mathcal{X},0)\subset\mathbb{C}^{r+1}\) can be defined by distinct Weierstrass polynomials, the generalized characteristic exponents are not uniquely determined. On the other hand, Lipman and Gau (see [10] and [4]) characterized the topological type of an irreducible q.o.h. by means of the set \(\{n,\lambda_{1},\ldots,\lambda_{g}\}\) associated to a particular quasi-ordinary branch that they called normalized (see Section 2). In particular, we can conclude that the multiplicity of an irreducible q.o.h. is a topological invariant and it is equal to \(\min\{n,\sum_{i=1}^{r}\lambda_{1i}\}\) where \(\lambda_{1}=(\lambda_{11},\ldots,\lambda_{1r})\).
Similarly to irreducible plane curves, we can consider a semigroup \(\Gamma\subset\mathbb{N}^{r}\) that determines and is determined by the set \(\{n,\lambda_{1},\ldots,\lambda_{g}\}\), that is, \(\Gamma\) is also a complete topological invariant. The analytical invariance of \(\Gamma\) was shown by Popescu-Pampu in [12] and Gonzalez-Perez in [6].
As mentioned above, several properties and results about plane branches can be generalized by properly introducing concepts in the context of q.o.h.. For the convenience of the reader, we recall some of these results in Section 2.
In this paper, we explore the notion of _generalized Zariski exponents_ that extends the Zariski invariant for irreducible plane curves introduced in [13]. Such a notion was considered _en passant_ by Panek in her thesis [11] under the supervision of the second author. In Section 3 we show that the generalized Zariski exponents are invariant under the \(\tilde{\mathcal{A}}\)-equivalence of an irreducible normalized q.o.h., where \(\tilde{\mathcal{A}}\) is a subgroup of the well-known group \(\mathcal{A}\) (see Definition 2.3) considered in [8].
Taking into account the generalized Zariski exponents, we explore irreducible q.o.h. whose topological class admits a countable number of distinct \(\tilde{\mathcal{A}}\)-classes; we call such a hypersurface a _quasi-simple_ singularity. For plane branches, quasi-simple singularities are precisely the simple ones, and they were classified by Bruce and Gaffney (see [3] or [7]). In Section 4, we present normal forms for quasi-ordinary surfaces with one generalized characteristic exponent that are quasi-simple with respect to the formal \(\tilde{\mathcal{A}}\)-action.
## 2 Quasi-ordinary parameterizations
In this section, we recall some results related to quasi-ordinary branches, their topological and analytical aspects.
In [9] (see Proposition 1.3) Lipman shows that a nonunit \(\xi=\sum c_{\delta}\underline{X}^{\frac{\delta}{n}}\in\mathbb{C}\left\{ \underline{X}^{\frac{1}{n}}\right\}\) is a quasi-ordinary branch if and only if there exist \(r\)-tuples \(\lambda_{1}\prec\lambda_{2}\prec\cdots\prec\lambda_{g}\) with \(c_{\lambda_{i}}\neq 0\) satisfying \(\lambda_{j}\not\in Q_{j-1}:=n\mathbb{Z}^{r}+\sum_{i=1}^{j-1}\lambda_{i}\mathbb{Z}\) and \(\delta\in n\mathbb{Z}^{r}+\sum_{\lambda_{i}\preceq\delta}\lambda_{i}\mathbb{Z}\) for any \(c_{\delta}\neq 0\). The \(r\)-tuples \(\lambda_{i}\) with \(1\leq i\leq g\) are the _generalized characteristic exponents_ of \(\xi\).
Given a quasi-ordinary branch \(\xi=\sum c_{\delta}\underline{X}^{\frac{\delta}{n}}\in\mathbb{C}\left\{ \underline{X}^{\frac{1}{n}}\right\}\) we denote
\[t_{i}=X_{i}^{\frac{1}{n}},\ 1\leq i\leq r\ \ \text{and}\ \ S(\underline{t})=\sum c _{\delta}\underline{t}^{\delta}\in\mathbb{C}\{\underline{t}\}:=\mathbb{C}\{t _{1},\ldots,t_{r}\}.\]
This allows us to define a \(\mathbb{C}\)-algebra homomorphism
\[\begin{array}{llll}H^{*}:&\mathbb{C}\{X_{1},\ldots,X_{r+1}\}&\longrightarrow &\mathbb{C}\{\underline{t}\}\\ &h(X_{1},\ldots,X_{r+1})&\mapsto&h(t_{1}^{n},\ldots,t_{r}^{n},S(\underline{t}) )\end{array} \tag{1}\]
with \(\frac{\mathbb{C}\{\underline{X},X_{r+1}\}}{\langle f\rangle}\cong Im(H^{*})= \mathbb{C}\{t_{1}^{n},\ldots,t_{r}^{n},S(\underline{t})\}\) where \(f\in\mathbb{C}\{\underline{X}\}[X_{r+1}]\) is the minimal polynomial of \(\xi\).
We call \(H=H_{f}=(t_{1}^{n},\ldots,t_{r}^{n},S(\underline{t}))\) a _quasi-ordinary parameterization_ of \(f\) or \(\xi\).
As we have mentioned in the introduction, a q.o.h. can be defined by distinct Weierstrass polynomials and consequently, we can obtain distinct quasi-ordinary parameterizations and generalized characteristic exponents as well. However, Lipman (see [9] and [10]) proves that any irreducible q.o.h. admits a _normalized_ quasi-ordinary parameterization, that is, a quasi-ordinary parameterization \(H=(t_{1}^{n},\ldots,t_{r}^{n},S(\underline{t}))\) with \(S(\underline{t})=\sum c_{\delta}\underline{t}^{\delta}\) satisfying
1. \(S(\underline{t})=u(\underline{t})\cdot\underline{t}^{\lambda_{1}}\) with \(u(\underline{0})=1\);
2. \(\lambda^{i}:=(\lambda_{1i},\ldots,\lambda_{gi})\geq_{lex}(\lambda_{1j},\ldots,\lambda_{gj})=:\lambda^{j}\) for all \(1\leq i<j\leq r\);
3. if \(\lambda_{1}=(\lambda_{11},0,\ldots,0)\), then \(\lambda_{11}>n\),
and any normalized quasi-ordinary parameterization of a q.o.h. \((\mathcal{X},0)\) admits the same generalized characteristic exponents. In addition, the topological type of an irreducible q.o.h. determines and is determined by \(\{n,\lambda_{1},\ldots,\lambda_{g}\}\) or, equivalently, by its associated semigroup \(\Gamma_{H}\). To describe \(\Gamma_{H}\) we present some notions related to elements in \(\mathbb{C}\{\underline{t}\}\).
**Definition 2.1**.: _Given \(q=\sum c_{\alpha}\underline{t}^{\alpha}\in\mathbb{C}\{\underline{t}\}\setminus\{0\}\) and considering \(supp(q)=\{\alpha;\ c_{\alpha}\neq 0\}\subseteq\mathbb{N}^{r}\) we denote by \(\mathcal{N}(q)\) its Newton polyhedra, that is, the convex closure of the set \(supp(q)+\mathbb{R}^{r}_{+}\) in \(\mathbb{R}^{r}\). We indicate by \(V_{\mathcal{N}}(q)\) the set of vertices of \(\mathcal{N}(q)\)._
_We say that \(q\in\mathbb{C}\{\underline{t}\}\setminus\{0\}\) has dominant exponent \(\mathcal{V}(q)\in\mathbb{N}^{r}\) if \(V_{\mathcal{N}}(q)=\{\mathcal{V}(q)\}\). Given \(A\subseteq\mathbb{C}\{\underline{t}\}\) we write \(\mathcal{V}(A):=\{\mathcal{V}(q);\ q\in A\setminus\{0\}\) admits dominant exponent\(\}\)._
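In particular, \(q\) has a dominant exponent exactly when some element of \(supp(q)\) is componentwise \(\preceq\) every other element, in which case \(\mathcal{N}(q)\) is the translated orthant \(\mathcal{V}(q)+\mathbb{R}^{r}_{+}\). A small Python sketch of this test (our own illustration, with a name of our choosing):

```python
def dominant_exponent(support):
    # Returns the dominant exponent V(q) of a series with the given support
    # (a list of integer tuples), or None when the Newton polyhedron has
    # more than one vertex, i.e. no componentwise minimum exists.
    for s in support:
        if all(all(si <= ti for si, ti in zip(s, t)) for t in support):
            return s
    return None

assert dominant_exponent([(2, 3), (2, 5), (4, 3)]) == (2, 3)
assert dominant_exponent([(2, 5), (4, 3)]) is None  # two vertices
```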
Gonzalez-Perez in [6] shows that
\[\Gamma_{H}:=\mathcal{V}(Im(H^{*}))=\{V_{\mathcal{N}}(H^{*}(h));\ h\in\mathbb{ C}\{\underline{X},X_{r+1}\}\setminus\{0\}\}\subseteq\mathbb{N}^{r}\]
is an additive semigroup generated by \(\nu_{k}\), \(k=1,\ldots,r+g\), given by
\[\nu_{j}=n\theta_{j},\ 1\leq j\leq r,\ \ \ \ \nu_{r+1}=\lambda_{1},\ \ \ \ \nu_{r+i}=n_{i-1}\nu_{r+i-1}+\lambda_{i}-\lambda_{i-1},\ 2\leq i\leq g \tag{2}\]
where \(\{\theta_{j},\ 1\leq j\leq r\}\) is the canonical \(\mathbb{R}\)-basis of \(\mathbb{R}^{r}\) and \(n_{i}=|Q_{i}:Q_{i-1}|\), \(1\leq i\leq g\), that is, the index of the subgroup \(Q_{i-1}=n\mathbb{Z}^{r}+\sum_{j=1}^{i-1}\lambda_{j}\mathbb{Z}\) in \(Q_{i}=Q_{i-1}+\lambda_{i}\mathbb{Z}\). In particular, by (2) we conclude that \(n\mathbb{Z}^{r}+\sum_{j=1}^{i}\nu_{r+j}\mathbb{Z}=Q_{i}=n\mathbb{Z}^{r}+\sum_{ j=1}^{i}\lambda_{j}\mathbb{Z}\). In addition we get \(n_{1}\cdot\ldots\cdot n_{g}=|Q_{1}:Q_{0}|\cdot\ldots\cdot|Q_{g}:Q_{g-1}|=n\).
The semigroup \(\Gamma_{H}\) is called the _associated semigroup_ of \(H\).
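The data \(\{n,\lambda_{1},\ldots,\lambda_{g}\}\mapsto\{\nu_{1},\ldots,\nu_{r+g}\}\) in (2) is effectively computable: since \(Q_{i-1}\supseteq n\mathbb{Z}^{r}\), the index \(n_{i}=|Q_{i}:Q_{i-1}|\) is the least \(m\geq 1\) with \(m\lambda_{i}\in Q_{i-1}\), which can be tested modulo \(n\). The following Python sketch (our own illustration; the function names are ours) implements this:

```python
def subgroup_mod_n(gens, n, r):
    # Closure of the subgroup of (Z/n)^r generated by gens.
    S = {(0,) * r}
    frontier = list(S)
    while frontier:
        nxt = []
        for s in frontier:
            for g in gens:
                t = tuple((a + b) % n for a, b in zip(s, g))
                if t not in S:
                    S.add(t)
                    nxt.append(t)
        frontier = nxt
    return S

def semigroup_generators(n, lambdas):
    # Formula (2): nu_j = n*theta_j, nu_{r+1} = lambda_1, and
    # nu_{r+i} = n_{i-1} nu_{r+i-1} + lambda_i - lambda_{i-1}, where
    # n_i = |Q_i : Q_{i-1}| is the order of lambda_i modulo Q_{i-1}.
    r = len(lambdas[0])
    nus = [tuple(n * (k == j) for k in range(r)) for j in range(r)]
    nus.append(tuple(lambdas[0]))
    n_idx = []
    for i, lam in enumerate(lambdas):
        S = subgroup_mod_n([tuple(x % n for x in l) for l in lambdas[:i]], n, r)
        m = 1
        while tuple((m * x) % n for x in lam) not in S:
            m += 1
        n_idx.append(m)
    for i in range(1, len(lambdas)):
        nus.append(tuple(n_idx[i - 1] * nus[r + i - 1][j]
                         + lambdas[i][j] - lambdas[i - 1][j] for j in range(r)))
    return nus, n_idx

# Plane-branch check (r = 1): n = 4, lambda = (6, 7) gives nu = (4, 6, 13).
print(semigroup_generators(4, [(6,), (7,)]))  # ([(4,), (6,), (13,)], [2, 2])
```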
**Remark 2.2**.: _Given \(\gamma\in Q_{k}\) for some \(k\in\{0,\ldots,g\}\) there are unique \(a_{1},\ldots,a_{r+k}\in\mathbb{Z}\) with \(0\leq a_{r+j}<n_{j}\) for \(j=1,\ldots,k\), such that \(\gamma=\sum_{i=1}^{r+k}a_{i}\nu_{i}\). Such a representation of \(\gamma\) is called the standard representation. Moreover, if \(\gamma=\sum_{i=1}^{r+k}a_{i}\nu_{i}\in Q_{k}\) is given by a standard representation, we have \(\gamma\in\Gamma_{k}:=\langle\nu_{1},\ldots,\nu_{r+k}\rangle=\sum_{i=1}^{r+k}\nu_{i}\mathbb{N}\) if and only if \(a_{i}\geq 0\) for all \(1\leq i\leq r\) (see [2])._
A finer equivalence relation is analytic equivalence. Given two q.o.h. \((\mathcal{X}_{1},0),(\mathcal{X}_{2},0)\subset(\mathbb{C}^{r+1},0)\), defined by Weierstrass polynomials \(f_{1},f_{2}\in\mathbb{C}\{\underline{X}\}[X_{r+1}]\), we say that they are _analytically equivalent_ if and only if
\[\mathbb{C}\{t_{1}^{n},\ldots,t_{r}^{n},S_{1}(\underline{t})\}\cong\frac{ \mathbb{C}\{\underline{X},X_{r+1}\}}{\langle f_{1}\rangle}\cong\frac{\mathbb{ C}\{\underline{X},X_{r+1}\}}{\langle f_{2}\rangle}\cong\mathbb{C}\{t_{1}^{n}, \ldots,t_{r}^{n},S_{2}(\underline{t})\}\]
as \(\mathbb{C}\)-algebras, where \(H_{i}=(t_{1}^{n},\ldots,t_{r}^{n},S_{i}(\underline{t}))\) is a quasi-ordinary parameterization (not necessarily normalized) of \((\mathcal{X}_{i},0)\) for \(i=1,2\). Considering the group \(\mathcal{A}=\mathrm{Iso}(\mathbb{C}^{r+1},0)\times\mathrm{Iso}(\mathbb{C}^{r},0)\) where \(\mathrm{Iso}(\mathbb{C}^{k},0)\) denotes the group of analytic isomorphisms of \((\mathbb{C}^{k},0)\) and identifying \(H_{i}\) with a map germ \(H_{i}:(\mathbb{C}^{r},0)\rightarrow(\mathbb{C}^{r+1},0)\), the analytical equivalence of q.o.h. can be expressed by the \(\mathcal{A}\)-_equivalence_ of \(H_{1}\) and \(H_{2}\), that is, \(H_{2}=\sigma\circ H_{1}\circ\rho^{-1}\) for some \((\sigma,\rho)\in\mathcal{A}\) that we denote \(H_{1}\underset{\mathcal{A}}{\sim}H_{2}\).
In [8] the second author and Panek consider a subgroup \(\widetilde{\mathcal{A}}\) of \(\mathcal{A}\) to detect terms in a quasi-ordinary parameterization that can be eliminated by the \(\widetilde{\mathcal{A}}\)-action (see Proposition 2.15, Theorem 3.1 and Corollary 3.6, [8]). In this work, we introduce an invariant concerning the \(\widetilde{\mathcal{A}}\)-action and we explore it to classify quasi-ordinary hypersurfaces in a given topological class.
For the convenience of the reader, we present the description of the group \(\widetilde{\mathcal{A}}\) and some results concerning it.
**Definition 2.3**.: _Fix a topological class of an irreducible q.o.h. in \((\mathbb{C}^{r+1},\underline{0})\) determined by \(\{n,\lambda_{1},\ldots,\lambda_{g}\}\). We denote by \(\widetilde{\mathcal{A}}\) the subgroup of \(\mathcal{A}\) consisting of elements \((\sigma,\rho)\in\mathcal{A}\) with \(\rho=(t_{1}(c_{1}+\zeta_{1}),\ldots,t_{r}(c_{r}+\zeta_{r}))\) and \(\sigma(\underline{X},X_{r+1})=(\sigma_{1}(\underline{X},X_{r+1}),\ldots, \sigma_{r+1}(\underline{X},X_{r+1}))\) such that_
\[\sigma_{i}(\underline{X},X_{r+1})=a_{i}X_{i}+P_{i},\ \ \ \ \sigma_{r+1}(\underline{X},X_{r+1})=a_{r+1}X_{r+1}+P_{r+1},\] \[P_{i}=X_{i}\epsilon_{i}+X_{r+1}\eta_{i},\ \ \ \ P_{r+1}=X_{r+1}\epsilon_{r+1}+\underline{X}^{\alpha}\eta_{r+1},\ \ \ \alpha=\left(\left\lceil\frac{\lambda_{11}}{n}\right\rceil,\ldots,\left\lceil\frac{\lambda_{1r}}{n}\right\rceil\right)\ \ \text{and}\] \[c_{i},a_{i},a_{r+1}\in\mathbb{C}\setminus\{0\},\ \ \ \zeta_{i}\in\mathcal{M}_{r},\ \ \ \epsilon_{i},\epsilon_{r+1}\in\mathcal{M}_{r+1}; \tag{3}\] \[\eta_{i},\eta_{r+1}\in\mathbb{C}\{\underline{X},X_{r+1}\};\ \ \eta_{i}=0\text{ if }n>\lambda_{1i}\ \ \text{for }\ i=1,\ldots,r,\]
_where \(\mathcal{M}_{r}=\langle\underline{t}\rangle\) and \(\mathcal{M}_{r+1}=\langle\underline{X},X_{r+1}\rangle\) denote the maximal ideals of \(\mathbb{C}\{\underline{t}\}\) and \(\mathbb{C}\{\underline{X},X_{r+1}\}\), respectively._
If \(H_{1}=(t_{1}^{n},\ldots,t_{r}^{n},S_{1}(\underline{t}))\) is a q.o. parameterization, then \((h_{1},\ldots,h_{r+1})=\sigma\circ H_{1}\circ\rho^{-1}\) is not, in general, a quasi-ordinary parameterization for an arbitrary \((\sigma,\rho)\in\widetilde{\mathcal{A}}\). In fact, considering \((\sigma,\rho)\in\widetilde{\mathcal{A}}\) as described in the above definition and denoting \(\rho^{-1}=((\rho^{-1})_{1},\ldots,(\rho^{-1})_{r})\), in order to obtain \(h_{i}=t_{i}^{n}\) for \(1\leq i\leq r\) we must have
\[h_{i}=\left((\rho^{-1})_{i}(\underline{t})\right)^{n}\cdot\left(a_{i}+\epsilon_{i}\circ H_{1}\circ\rho^{-1}(\underline{t})\right)+S_{1}(\rho^{-1}(\underline{t}))\cdot(\eta_{i}\circ H_{1}\circ\rho^{-1}(\underline{t}))=t_{i}^{n}=(\rho_{i}\circ\rho^{-1}(\underline{t}))^{n},\]
and consequently
\[\rho_{i}(\underline{t})=(t_{i}^{n}\cdot(a_{i}+\epsilon_{i}\circ H_{1})+S_{1}( \underline{t})\cdot(\eta_{i}\circ H_{1}))^{\frac{1}{n}}=t_{i}\cdot\left(a_{i} +\frac{H_{1}^{*}(X_{i}\epsilon_{i}+X_{r+1}\eta_{i})}{t_{i}^{n}}\right)^{\frac{ 1}{n}}.\]
Recall that \(\eta_{i}=0\) if \(n>\lambda_{1i}\) (see (3)).
The above computations show that the elements \((\sigma,\rho)\in\widetilde{\mathcal{A}}\) for which \(H_{2}=\sigma\circ H_{1}\circ\rho^{-1}\) is a q.o. parameterization are given by
\[\sigma_{i}(\underline{X},X_{r+1})=a_{i}X_{i}+P_{i},\ \ \ \ \sigma_{r+1}( \underline{X},X_{r+1})=a_{r+1}X_{r+1}+P_{r+1}, \tag{4}\] \[\rho_{i}(\underline{t})=t_{i}\cdot\left(a_{i}+\frac{H_{1}^{*}(P_{ i})}{t_{i}^{n}}\right)^{\frac{1}{n}},\]
with \(a_{i},a_{r+1}\in\mathbb{C}\setminus\{0\}\), \(P_{i}\) and \(P_{r+1}\) satisfying the conditions (3) for \(1\leq i\leq r\).
Notice that the above elements are well behaved with respect to the finite morphism \(\pi:(\mathcal{X},0)\to(\mathbb{C}^{r},0)\) and they are similar to the changes of coordinates presented by Gonzalez-Perez (see Lemma 2.7 in [5]). However, the change of coordinates given in (4) depends on \(H_{1}\) and therefore the set of such elements is not a subgroup of \(\mathcal{A}\) (nor of \(\tilde{\mathcal{A}}\)).
The reader is invited to compare the above description with Proposition 1.2.3 of [7] which addresses the particular case of plane curves.
**Definition 2.4**.: _In what follows we denote_
\[\mathcal{H}=\left\{(\sigma,\rho)\in\tilde{\mathcal{A}};\ \sigma( \underline{X},X_{r+1})=(a_{1}X_{1},\ldots,a_{r+1}X_{r+1}),\rho(\underline{t})=( c_{1}t_{1},\ldots,c_{r}t_{r}),\ a_{i}\neq 0\neq c_{i}\right\}\] \[\widetilde{\mathcal{A}}_{1}=\left\{(\sigma,\rho)\in\tilde{ \mathcal{A}};\ j^{1}\sigma(\underline{X},X_{r+1})=(X_{1}+d_{1}X_{r+1},\ldots,X _{r}+d_{r}X_{r+1},X_{r+1})\text{ and }j^{1}\rho(\underline{t})=(\underline{t})\right\},\]
_where \(d_{i}\in\mathbb{C}\) (\(d_{i}=0\) if \(n>\lambda_{1i}\)) and \(j^{k}h\) is the \(k\)th jet of \(h\)._
Notice that any element \((\sigma,\rho)\in\tilde{\mathcal{A}}\) can be expressed as a composition of an element \((\sigma_{0},\rho_{0})\in\mathcal{H}\) and an element \((\sigma_{1},\rho_{1})\in\tilde{\mathcal{A}}_{1}\).
Given a q.o. parameterization \(H=(t_{1}^{n},\cdots,t_{r}^{n},S(\underline{t}))\) where \(S(\underline{t})=\sum_{\delta\succeq\lambda_{1}}b_{\delta}\underline{t}^{\delta}\) we say that \(\underline{t}^{\gamma}\) with \(\gamma\in supp(S(\underline{t}))\) is _eliminable_ (by \(\tilde{\mathcal{A}}\)-action) if there exists a q.o. parameterization \(H^{\prime}=(t_{1}^{n},\ldots,t_{r}^{n},\sum_{\delta\succeq\lambda_{1}}c_{\delta}\underline{t}^{\delta})\) with \(H^{\prime}\)\(\tilde{\mathcal{A}}\)-equivalent to \(H\) such that \(c_{\gamma}=0\) and \(b_{\delta}=c_{\delta}\), except possibly for \(\delta\succ\gamma\).
If \(H_{1}=(t_{1}^{n},\ldots,t_{r}^{n},S_{1}(\underline{t}))\) is a q.o. parameterization and \((\sigma,\rho)\in\mathcal{H}\) satisfies (4), that is, \(c_{i}^{n}=a_{i}\) for \(1\leq i\leq r\), then \((t_{1}^{n},\ldots,t_{r}^{n},S_{2}(\underline{t}))=:H_{2}=\sigma\circ H_{1}\circ\rho^{-1}(\underline{t})=(t_{1}^{n},\ldots,t_{r}^{n},a_{r+1}S_{1}(c_{1}^{-1}t_{1},\ldots,c_{r}^{-1}t_{r}))\); in particular, \(supp(S_{2}(\underline{t}))=supp(S_{1}(\underline{t}))\). So, elements in \(\mathcal{H}\) do not introduce or eliminate terms in a q.o. parameterization but allow us to normalize some coefficients as described in the following proposition.
**Proposition 2.5** (Proposition 2.5 in [8]).: _Let \(H=(t_{1}^{n},\ldots,t_{r}^{n},S(\underline{t}))\) be a q.o. parameterization. If \(P\subseteq\{\zeta-\lambda_{1};\ \lambda_{1}\neq\zeta\in supp(S(\underline{t}))\} \subset\mathbb{R}^{r}\) is a linearly independent set, then there exists \((\sigma,\rho)\in\mathcal{H}\) such that all terms with exponent belonging to \(\{\delta+\lambda_{1};\ \delta\in P\cup\{\underline{0}\}\}\) in \(\sigma\circ H\circ\rho^{-1}\) are monic._
Let \(\Omega^{r}:=\Omega^{r}_{\mathbb{C}\{\underline{X},X_{r+1}\}}=\sum_{i=1}^{r+1}\mathbb{C}\{\underline{X},X_{r+1}\}dX_{1}\wedge\cdots\wedge\widehat{dX_{i}}\wedge\cdots\wedge dX_{r+1}\) be the \(\mathbb{C}\{\underline{X},X_{r+1}\}\)-module of differential \(r\)-forms and consider the map
\[\Psi_{H}:\Omega^{r} \rightarrow \mathbb{C}\{\underline{t}\} \tag{5}\] \[\omega \mapsto \frac{\underline{t}}{dt_{1}\wedge\cdots\wedge dt_{r}}\left(\sum_ {i=1}^{r+1}H^{*}(h_{i})\bigwedge_{j=1;j\neq i}^{r+1}dH^{*}(X_{j})\right)\]
where \(\omega=\sum_{i=1}^{r+1}h_{i}dX_{1}\wedge\cdots\wedge\widehat{dX_{i}}\wedge \cdots\wedge dX_{r+1}\) and \(H^{*}\) is the homomorphism given in (1).
Similarly to the semigroup \(\Gamma_{H}\) and considering the notion of dominant exponent given in Definition 2.1 we put
\[\Lambda_{H}:=\mathcal{V}(Im(\Psi_{H})).\]
The set \(\Lambda_{H}\) was introduced in [8] considering the \(\frac{\mathbb{C}\{\underline{X},X_{r+1}\}}{\langle f\rangle}\)-module of Kähler \(r\)-forms for a quasi-ordinary hypersurface defined by \(f\) with parameterization \(H\). It follows that \(\Lambda_{H}\) is a \(\Gamma_{H}\)-monomodule, that is, \(\Gamma_{H}+\Lambda_{H}\subset\Lambda_{H}\).
We can use \(\Lambda_{H}\) to identify terms in a quasi-ordinary parameterization that are amenable to elimination through changes of coordinates as given in Definition 2.3, that is, considering the \(\tilde{\mathcal{A}}\)-action. More specifically we highlight:
**Proposition 2.6** (Corollary 3.6 in [8]).: _Let \(H=(t_{1}^{n},\cdots,t_{r}^{n},S(\underline{t}))\) be a q.o. parameterization. If \(\gamma=\mathcal{V}(\Psi_{H}(\omega))\in\Lambda_{H}\) with \(\gamma_{i}\geq n\) and \(\omega=\sum_{i=1}^{r+1}(-1)^{r+1-i}P_{i}dX_{1}\wedge\cdots\wedge\widehat{dX_{i} }\wedge\cdots\wedge dX_{r+1}\in\Omega^{r}\) where \(P_{i}\) is as described in (3) for all \(i=1,\ldots,r\), then \(\underline{t}^{\gamma-\underline{n}}\) is eliminable (by \(\tilde{\mathcal{A}}_{1}\)-action)._
In the next section, we present an \(\tilde{\mathcal{A}}\)-invariant that can be easily computed using a q.o. parameterization.
## 3 Generalized Zariski exponents
Considering an irreducible plane curve \(C\) with parameterization \((t^{n},\sum_{i\geq\lambda_{1}}a_{i}t^{i})\) and associated semigroup \(\Gamma=\langle v_{0}:=n,v_{1}:=\lambda_{1},v_{2},\ldots,v_{g}\rangle\), Zariski (see [13], [14] or [7]) shows that there exists a curve analytically equivalent to \(C\) with parameterization
\[\left(t^{n},t^{\lambda_{1}}+\sum_{i>\lambda_{1}}a^{\prime}_{i}t^{i}\right)\ \text{ with }i\not\in\Gamma\bigcup\{\mathbb{N}\lambda_{1}+2\lambda_{1}-n\}\]
that he calls a _short parameterization_. In addition, given a short parameterization as before, Zariski shows that if \(supp\left(\sum_{i>\lambda_{1}}a^{\prime}_{i}t^{i}\right)\neq\emptyset\) then the minimum element of this set is an analytical invariant called _Zariski exponent_ or _Zariski invariant_. Notice that \(supp\left(\sum_{i>\lambda_{1}}a^{\prime}_{i}t^{i}\right)=\emptyset\) if and only if the plane curve \(C\) is analytically equivalent to a curve with parameterization \((t^{n},t^{\lambda_{1}})\), that is, it is defined by the quasi-homogeneous polynomial \(Y^{n}-X^{\lambda_{1}}\).
In this section, we propose a generalization of the concepts of short parameterization and Zariski exponent for quasi-ordinary hypersurfaces.
**Proposition 3.1**.: _Let \(H=(t_{1}^{n},\ldots,t_{r}^{n},S(\underline{t}))\) be a normalized q.o. parameterization with \(S(\underline{t})=t^{\lambda_{1}}+\sum_{\delta\succ\lambda_{1}}b_{\delta} \underline{t}^{\delta}\) and associated semigroup \(\Gamma\). Given \(\gamma\in supp\left(\sum_{\delta\succ\lambda_{1}}b_{\delta}\underline{t}^{ \delta}\right)\) if_
\[\gamma\in\Gamma\bigcup_{1\leq i\leq r\atop\lambda_{1i}\geq n}(\Gamma+2\lambda_{1}-\nu_{i}) \tag{6}\]
_then \(\underline{t}^{\gamma}\) is eliminable by \(\tilde{\mathcal{A}}_{1}\)-action._
Proof.: Let us consider \(\omega=P_{r+1}dX_{1}\wedge\cdots\wedge dX_{r}\) with \(\mathcal{V}(P_{r+1})=\gamma\in\Gamma\). Notice that \(\gamma\succ\lambda_{1}\) implies that \(P_{r+1}\) satisfies (3). As \(\Psi_{H}(\omega)=n^{r}\cdot H^{*}(P_{r+1})\cdot\underline{t}^{\underline{n}}\), we get \(\mathcal{V}(\Psi_{H}(\omega))=\gamma+\underline{n}\) and, by Proposition 2.6, the term \(\underline{t}^{\gamma}\) is eliminable by \(\tilde{\mathcal{A}}_{1}\)-action.
On the other hand, if \(\lambda_{1i}\geq n\) and \(\gamma=\delta+2\lambda_{1}-\nu_{i}\) with \(\delta\in\Gamma\), we take
\[\omega=P_{i}dX_{1}\wedge\cdots\wedge\widehat{dX_{i}}\wedge\cdots\wedge dX_{r }\wedge dX_{r+1}\ \ \text{with}\ \ P_{i}=X_{r+1}\eta_{i}\ \ \text{and}\ \ \mathcal{V}(\eta_{i})=\delta.\]
In this way,
\[\Psi_{H}(\omega)=n^{r-1}(-1)^{r-i}\cdot H^{*}(X_{r+1}\eta_{i})\cdot\underline{ t}^{\underline{n}-\nu_{i}}\cdot\left(\lambda_{1i}\underline{t}^{\lambda_{1}}+ \sum_{\delta\succ\lambda_{1}}\delta_{i}b_{\delta}\underline{t}^{\delta}\right),\]
that is, \(\gamma=\mathcal{V}(\Psi_{H}(\omega))-\underline{n}\) and, by Proposition 2.6, the term \(\underline{t}^{\gamma}\) is eliminable by \(\tilde{\mathcal{A}}_{1}\)-action.
By the previous proposition, given a normalized q.o. parameterization \(H\) we can consider (possibly infinitely many) changes of coordinates and we obtain \(H^{\prime}=(t_{1}^{n},\ldots,t_{r}^{n},S(\underline{t}))\) such that no exponent in \(S(\underline{t})-\underline{t}^{\lambda_{1}}\) belongs to the union of sets (6). We avoid verifying whether a composition of infinitely many elements of \(\mathcal{A}\) is analytic; for this reason, when we allow possibly infinitely many changes of coordinates to obtain \(H^{\prime}\) from \(H\), we say that \(H^{\prime}\) is _formally \(\mathcal{A}\)-equivalent_ to \(H\).
**Example 3.2**.: _Let us consider a normalized q.o. parameterization \(H=(t_{1}^{2},\ldots,t_{r}^{2},\underline{t}^{\lambda_{1}}+\sum_{\gamma\succ \lambda_{1}}a_{\gamma}\underline{t}^{\gamma})\). As \(2=|Q_{g}:Q_{0}|=n_{1}\cdot\ldots\cdot n_{g}\), we must have \(g=1\), that is, the associated semigroup of \(H\) is \(\Gamma=\langle\nu_{1},\ldots,\nu_{r},\nu_{r+1}\rangle\). By Remark 2.2, we can write any \(\gamma\in Q_{1}\) as \(\gamma=2(a_{1},\ldots,a_{r})+a\cdot\nu_{r+1}\) with \(0\leq a<n_{1}=n=2\) and \(a_{i}\in\mathbb{Z}\) for \(1\leq i\leq r\). In this way, if \(\gamma\succ\lambda_{1}=\nu_{r+1}\) we must have \(a_{i}\geq 0\) for every \(1\leq i\leq r\), that is, \(\gamma\in\Gamma\) and consequently, by Proposition 3.1, \(H\) is formally \(\mathcal{A}\)-equivalent to \((t_{1}^{2},\ldots,t_{r}^{2},\underline{t}^{\lambda_{1}})\)._
**Example 3.3**.: _If a q.o. parameterization \(H\) has \(\lambda_{1}=\underline{1}=(1,\ldots,1)\) and \(n>1\) then, as \(n_{1}\lambda_{1}\in Q_{0}=n\mathbb{Z}^{r}\), we must have \(n_{1}=|Q_{1}:Q_{0}|=n=n_{1}\cdot\ldots\cdot n_{g}\), that is, \(g=1\) and the value semigroup of \(H\) is given by \(\Gamma=\langle\nu_{1},\ldots,\nu_{r},\nu_{r+1}\rangle\). By Remark 2.2, any \(\gamma\in Q_{1}\) can be expressed as \(\gamma=n(a_{1},\ldots,a_{r})+a\cdot\nu_{r+1}\) with \(0\leq a<n\) and \(a_{i}\in\mathbb{Z}\) for \(1\leq i\leq r\). In order to have \(\gamma\succ\lambda_{1}=\nu_{r+1}\) we must have \(a_{i}\geq 0\) for all \(1\leq i\leq r\), that is, \(\gamma\in\Gamma\). In this way, by Proposition 3.1, \(H\) is formally \(\mathcal{A}\)-equivalent to \((t_{1}^{n},\ldots,t_{r}^{n},t_{1}\cdot\ldots\cdot t_{r})\)._
Proposition 3.1 motivates the following definition.
**Definition 3.4**.: _Let \(H=(t_{1}^{n},\ldots,t_{r}^{n},t^{\lambda_{1}}+\sum_{\delta\succ\lambda_{1}}a_{\delta}\underline{t}^{\delta})\) be a normalized q.o. parameterization with value semigroup \(\Gamma\). We say that \(H\) is a quasi-short parameterization if no element of \(supp(\sum_{\delta\succ\lambda_{1}}a_{\delta}\underline{t}^{\delta})\) belongs to \(\Gamma\bigcup_{1\leq i\leq r\atop\lambda_{1i}\geq n}\left(\Gamma+2\lambda_{1}-\nu_{i}\right)\)._
_If \(H=(t_{1}^{n},\ldots,t_{r}^{n},t^{\lambda_{1}}+\sum_{\delta\succ\lambda_{1}}a_{\delta}\underline{t}^{\delta})\) is a quasi-short parameterization and \(\sum_{\delta\succ\lambda_{1}}a_{\delta}\underline{t}^{\delta}\neq 0\), then we call the elements of \(E_{\mathcal{Z}}(H):=\min_{\preceq}supp(\sum_{\delta\succ\lambda_{1}}a_{\delta}\underline{t}^{\delta})\) the generalized Zariski exponents. If \(\sum_{\delta\succ\lambda_{1}}a_{\delta}\underline{t}^{\delta}=0\) we put \(E_{\mathcal{Z}}(H)=\{\underline{\infty}\}\)._
It follows that any q.o. parameterization with generalized Zariski exponents \(E_{\mathcal{Z}}\) is formally \(\mathcal{A}\)-equivalent to a quasi-short parameterization
\[H=\left(t_{1}^{n},\ldots,t_{r}^{n},\underline{t}^{\lambda_{1}}\right)\ \text{if}\ E_{\mathcal{Z}}=\{ \underline{\infty}\}\quad\text{or}\quad H=\left(t_{1}^{n},\ldots,t_{r}^{n}, \underline{t}^{\lambda_{1}}+\sum_{\delta\in E_{\mathcal{Z}}}a_{\delta} \underline{t}^{\delta}u_{\delta}(\underline{t})\right)\ \text{if}\ E_{\mathcal{Z}}\neq\{ \underline{\infty}\}\]
where \(a_{\delta}\in\mathbb{C}\setminus\{0\}\) and \(u_{\delta}(\underline{0})=1\).
**Remark 3.5**.: _If \(g\geq 2\), that is, \(\lambda_{2}\in supp(S(\underline{t}))\), then \(\lambda_{2}\not\in\Gamma\bigcup_{1\leq i\leq r\atop\lambda_{1i}\geq n}\left(\Gamma+2\lambda_{1}-\nu_{i}\right)\). In fact, by (2), we get \(\lambda_{2}=\nu_{r+2}+\nu_{r+1}-n_{1}\lambda_{1}\). As \(2\leq n_{1}=|Q_{1}:Q_{0}|\), it follows that \(n_{1}\lambda_{1}=n\alpha\) with \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\in\left(\mathbb{N}^{r}\right)^{*}:=\mathbb{N}^{r}\setminus\{\underline{0}\}\) and, by Remark 2.2, we conclude that \(\lambda_{2}\not\in\Gamma\). On the other hand, suppose that \(\lambda_{1i}\geq n\), so \(\alpha_{i}\geq 2\). If \(\lambda_{2}\in\Gamma+2\lambda_{1}-\nu_{i}\), that is, \(\lambda_{2}-\lambda_{1}+\nu_{i}\in\Gamma+\lambda_{1}\subset\Gamma\), then, as \(\nu_{r+1}=\lambda_{1}\), we get_
\[\nu_{r+2}-n(\alpha_{1},\ldots,\alpha_{i-1},\alpha_{i}-1,\alpha_{i+1},\ldots, \alpha_{r})=\nu_{r+2}-n\alpha+\nu_{i}=\lambda_{2}-\lambda_{1}+\nu_{i}\in\Gamma\]
_which, by Remark 2.2, is a contradiction. So, \(\lambda_{2}\not\in\Gamma\bigcup_{1\leq i\leq r\atop\lambda_{1i}\geq n}\left(\Gamma+2\lambda_{1}-\nu_{i}\right)\) and for \(g\geq 2\) we always have \(E_{\mathcal{Z}}(H)\neq\{\underline{\infty}\}\)._
Notice that, for \(r=1\), that is, if \(H=(t^{n},S(t))\) is a parameterization of a plane curve, we have \(\lambda_{1}>n\) and \(\mathbb{N}\lambda_{1}+2\lambda_{1}-n\subset\Gamma+2\lambda_{1}-n\), so we recover the notion of short parameterization introduced by Zariski.
**Theorem 3.6**.: _The generalized Zariski exponents are \(\widetilde{\mathcal{A}}\)-invariant._
Proof.: Let \(H_{1}\) and \(H_{2}\) be two quasi-short parameterization with same semigroup \(\Gamma=\langle\nu_{1},\ldots,\nu_{r+g}\rangle\). Let us suppose that \(H_{1}\) and \(H_{2}\) are \(\widetilde{\mathcal{A}}\)-equivalent and they are given by
\[H_{1}=(t_{1}^{n},\ldots,t_{r}^{n},S_{1}(\underline{t}))\quad\text{and}\quad H_ {2}=(t_{1}^{n},\ldots,t_{r}^{n},S_{2}(\underline{t}))\]
with \(S_{1}(\underline{t})=\underline{t}^{\lambda_{1}}+\sum_{\delta\in E}a_{\delta}\underline{t}^{\delta}u_{1}(\underline{t})\), \(S_{2}(\underline{t})=\underline{t}^{\lambda_{1}}+\sum_{\delta\in E}b_{\delta}\underline{t}^{\delta}u_{2}(\underline{t})\) where \(u_{1}(\underline{0})=u_{2}(\underline{0})=1\), \(a_{\delta}\neq 0\) for every \(\delta\in E\) and \(E:=E_{\mathcal{Z}}(H_{1})\neq\{\underline{\infty}\}\) is the set of generalized Zariski exponents of \(H_{1}\).
Considering \((\sigma,\rho)\in\widetilde{\mathcal{A}}\) such that \(\sigma\circ H_{1}\circ\rho^{-1}=H_{2}\), or equivalently \(\sigma\circ H_{1}=H_{2}\circ\rho\), we will show that \(b_{\delta}\neq 0\) for every \(\delta\in E\) and consequently, that the generalized Zariski exponents are \(\widetilde{\mathcal{A}}\)-invariants.
As we have remarked after Definition 2.4, it is sufficient to consider \((\sigma,\rho)\in\tilde{\mathcal{A}}_{1}\), that is,
\[\sigma_{i}(\underline{X},X_{r+1})=X_{i}+P_{i}(\underline{X},X_{r+1}),\quad \ \sigma_{r+1}(\underline{X},X_{r+1})=X_{r+1}+P_{r+1}(\underline{X},X_{r+1}),\]
\[\rho_{i}(\underline{t})=t_{i}\cdot\left(1+\frac{H_{1}^{*}(P_{i}(\underline{X},X_{r+1}))}{t_{i}^{n}}\right)^{\frac{1}{n}},\]
with \(P_{r+1}(\underline{X},X_{r+1})=X_{r+1}\cdot\epsilon_{r+1}+\underline{X}^{ \alpha}\cdot\eta_{r+1}\) and \(P_{i}(\underline{X},X_{r+1})=X_{i}\cdot\epsilon_{i}+X_{r+1}\cdot\eta_{i}\), \(1\leq i\leq r\) satisfying the conditions (3).
Notice that
\[\sigma\circ H_{1}\circ\rho^{-1}(\underline{t})=(t_{1}^{n},\ldots,t_{r}^{n},S_ {1}\circ\rho^{-1}(\underline{t})+P_{r+1}\circ H_{1}\circ\rho^{-1}(\underline{ t}))=(t_{1}^{n},\ldots,t_{r}^{n},S_{2}(\underline{t}))=H_{2}(\underline{t}),\]
that is, we must have
\[S_{1}(\underline{t})+P_{r+1}(H_{1}(\underline{t}))=S_{2}(\rho(\underline{t})). \tag{7}\]
As \(P_{r+1}(\underline{X},X_{r+1})=X_{r+1}\cdot\epsilon_{r+1}+\underline{X}^{\alpha}\cdot\eta_{r+1}\) with \(\epsilon_{r+1}\in\mathcal{M}_{r+1}\), \(\eta_{r+1}\in\mathbb{C}\{\underline{X},X_{r+1}\}\) and \(\alpha=\left(\left\lceil\frac{\lambda_{11}}{n}\right\rceil,\ldots,\left\lceil\frac{\lambda_{1r}}{n}\right\rceil\right)\) we have that any element in \(supp(P_{r+1}(H_{1}(\underline{t})))\) belongs to \(\Gamma\) or \(E+(\mathbb{N}^{r})^{*}\), that is, all terms \(a_{\delta_{j}}\underline{t}^{\delta_{j}}\) with \(\delta_{j}\in E\) remain unchanged on the left-hand side of (7). In addition, as \(\rho_{i}(\underline{t})=t_{i}\cdot\left(1+\frac{H_{1}^{*}(P_{i}(\underline{X},X_{r+1}))}{t_{i}^{n}}\right)^{\frac{1}{n}},\) where \(P_{i}(\underline{X},X_{r+1})=X_{i}\cdot\epsilon_{i}+X_{r+1}\cdot\eta_{i}\), \(\epsilon_{i}\in\mathcal{M}_{r+1}\), \(\eta_{i}\in\mathbb{C}\{\underline{X},X_{r+1}\}\) for \(1\leq i\leq r\) with \(\eta_{i}=0\) if \(n>\lambda_{1i}\), we get
\[S_{2}(\rho(\underline{t}))=\prod_{i=1}^{r}\rho_{i}^{\lambda_{1i}}+\sum_{\delta\in E}b_{\delta}\underline{t}^{\delta}v_{\delta}(\underline{t})\]
with \(v_{\delta}(\underline{0})=1\). As \(P_{i}(\underline{X},X_{r+1})=X_{i}\cdot\epsilon_{i}+X_{r+1}\cdot\eta_{i}\) we can assume that \(\eta_{i}\not\in\langle X_{i}\rangle\).
We will show that \(supp\left(\prod_{i=1}^{r}\rho_{i}^{\lambda_{1i}}\right)\cap E=\emptyset\), that is, \(a_{\delta}=b_{\delta}\) for all \(\delta\in E:=E_{\mathcal{Z}}(H_{1})\).
Using the description of \(\rho_{i}\) for \(1\leq i\leq r\) we have
\[\prod_{i=1}^{r}\rho_{i}^{\lambda_{1i}}=\underline{t}^{\lambda_{1}}\cdot\prod_{\begin{subarray}{c}1\leq i\leq r\\ \lambda_{1i}<n\end{subarray}}(1+H_{1}^{*}(\epsilon_{i}))^{\frac{\lambda_{1i}}{n}}\cdot\prod_{\begin{subarray}{c}1\leq i\leq r\\ \lambda_{1i}\geq n\end{subarray}}\left(1+H_{1}^{*}(\epsilon_{i})+\frac{H_{1}^{*}(X_{r+1}\eta_{i})}{t_{i}^{n}}\right)^{\frac{\lambda_{1i}}{n}}.\]
We analyse \(supp\left(\prod_{i=1}^{r}\rho_{i}^{\lambda_{1i}}\right)\) considering in the above expression the expansion \((1+z(\underline{t}))^{\alpha}=\sum_{k\geq 0}\binom{\alpha}{k}\,z^{k}\) for any \(\alpha\in\mathbb{Q}_{>0}\) and \(z(\underline{t})\in\langle\underline{t}\rangle=\mathcal{M}_{r}\subset\mathbb{ C}\{\underline{t}\}\).
If \(\eta_{i}=0\) for every \(1\leq i\leq r\), then \(supp\left(\prod_{i=1}^{r}\rho_{i}^{\lambda_{1i}}\right)\subset\Gamma\cup(E+( \mathbb{N}^{r})^{*})\) because \(\epsilon_{i}\in\mathcal{M}_{r+1}\). By definition \(\Gamma\cap E=\emptyset\) so, in order to have (7) we must have \(a_{\delta}=b_{\delta}\) for every \(\delta\in E\) and we can not eliminate any generalized Zariski exponent by \(\widetilde{\mathcal{A}}\)-action.
If there exists \(\eta_{i}\neq 0\), which can only happen when \(\lambda_{1i}\geq n\), then, in \(supp\left(\prod_{i=1}^{r}\rho_{i}^{\lambda_{1i}}\right)\), besides the elements in \(\Gamma\cup(E+(\mathbb{N}^{r})^{*})\) we possibly obtain elements of the form \(\lambda_{1}+k(\gamma+\lambda_{1}-\nu_{i})\) where \(\gamma\in V_{\mathcal{N}}(H_{1}^{*}(\eta_{i}))\subset\Gamma\) for \(k\geq 1\).
It is sufficient to analyze the possibility \(\gamma+\lambda_{1}-\nu_{i}\not\in\Gamma\) with \(\gamma\in\Gamma_{1}=\langle\nu_{1},\ldots,\nu_{r+1}\rangle\). In fact, if \(\gamma+\lambda_{1}-\nu_{i}\in\Gamma\) then \(\lambda_{1}+k(\gamma+\lambda_{1}-\nu_{i})\in\Gamma\) for every \(k\geq 1\), consequently \(supp\left(\prod_{i=1}^{r}\rho_{i}^{\lambda_{1i}}\right)\subset\Gamma\). On the other hand if \(g\geq 2\) and \(\gamma=\sum_{i=1}^{r+g}\alpha_{i}\nu_{i}\) with \(\alpha_{j}\neq 0\) for some \(j>r+1\) then it follows, by Remark 3.5 and the inequality \(\lambda_{1i}\geq n\), that \(\lambda_{1}+k(\gamma+\lambda_{1}-\nu_{i})\in E+(\mathbb{N}^{r})^{*}\). In both situations we get \(\lambda_{1}+k(\gamma+\lambda_{1}-\nu_{i})\in\Gamma\cup(E+(\mathbb{N}^{r})^{*})\) and similarly to the case \(\eta_{i}=0\) we conclude that \(a_{\delta}=b_{\delta}\) for every \(\delta\in E\).
Considering \(\gamma+\lambda_{1}-\nu_{i}\not\in\Gamma\) with \(\gamma\in\langle\nu_{1},\ldots,\nu_{r+1}\rangle\) we may have the possibilities:
1. \(\gamma+2\lambda_{1}-\nu_{i}\in E+(\mathbb{N}^{r})^{*}\). In this case, \(\lambda_{1}+k(\gamma+\lambda_{1}-\nu_{i})\in E+(\mathbb{N}^{r})^{*}\) for any \(k\geq 1\) and \(supp\left(\prod_{i=1}^{r}\rho_{i}^{\lambda_{1i}}\right)\subset\Gamma\cup(E+(\mathbb{N}^{r})^{*})\), so \(a_{\delta}=b_{\delta}\) for every \(\delta\in E\).
2. \(\gamma+2\lambda_{1}-\nu_{i}\not\in\Gamma\cup(E+(\mathbb{N}^{r})^{*})\). As \(\delta\not\in\bigcup_{1\leq i\leq r\atop\lambda_{1i}\geq n}\left(\Gamma+2\lambda_{1}-\nu_{i}\right)\) for every \(\delta\in E_{\mathcal{Z}}(H_{1})\) and we do not have any element of \(\Gamma+2\lambda_{1}-\nu_{i}\) on the left-hand side of (7), we cannot have the value \(\gamma\) in \(V_{\mathcal{N}}(H_{1}^{*}(\eta_{i}))\); consequently, for such \(\gamma\), \(\lambda_{1}+k(\gamma+\lambda_{1}-\nu_{i})\not\in supp\left(\prod_{i=1}^{r}\rho_{i}^{\lambda_{1i}}\right)\) for any \(k\geq 1\).
3. \(\gamma+2\lambda_{1}-\nu_{i}\in\Gamma\setminus(E_{\mathcal{Z}}(H_{1})+(\mathbb{N}^{r})^{*})\). As \(\gamma\in\langle\nu_{1},\ldots,\nu_{r+1}\rangle\) and \(\gamma+\lambda_{1}-\nu_{i}\not\in\Gamma\) we may write \(\gamma=\alpha_{r+1}\nu_{r+1}+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{r}\alpha_{j}\nu_{j}\) with \(0\leq\alpha_{r+1}\leq n_{1}\) and \(\alpha_{j}\in\mathbb{N}\) for \(j\in\{1,\ldots,i-1,i+1,\ldots,r\}\). The condition \(\gamma+2\lambda_{1}-\nu_{i}=(\alpha_{r+1}+2)\nu_{r+1}+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{r}\alpha_{j}\nu_{j}-\nu_{i}\in\Gamma\) implies that \(\alpha_{r+1}+2=sn_{1}\) for some \(s\in\mathbb{N}\setminus\{0\}\); recall that \(n_{1}\nu_{r+1}=n_{1}\lambda_{1}\in n\mathbb{N}^{r}\). In this way, as \(n_{1}\geq 2\) and \(\lambda_{1i}\geq n\) we get \[\gamma+2\lambda_{1}-\nu_{i}=(\alpha_{r+1}+2)\nu_{r+1}+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{r}\alpha_{j}\nu_{j}-\nu_{i}=n(\mu_{1},\ldots,\mu_{r})\in n\mathbb{N}^{r}\ \ \text{with}\ \ \mu_{i}\geq 1.\] (8)
Now, for any \(k\geq 1\) we express \(k=2q+p\) with \(q\in\mathbb{N}\), \(p\in\{0,1\}\) and we obtain
\[\lambda_{1}+k(\gamma+\lambda_{1}-\nu_{i}) =\lambda_{1}+p\cdot(\gamma+\lambda_{1}-\nu_{i})+q\cdot\gamma+q\cdot (\gamma+2\lambda_{1}-\nu_{i})-q\cdot\nu_{i}\] \[=\lambda_{1}+p\cdot(\gamma+\lambda_{1}-\nu_{i})+q\cdot\gamma+q \cdot n\cdot(\mu_{1},\ldots,\mu_{r})-q\cdot\nu_{i}.\]
As \(\lambda_{1}+p\cdot(\gamma+\lambda_{1}-\nu_{i})\in\Gamma\) for \(p\in\{0,1\}\) and, by (8), \(\mu_{i}\geq 1\), we conclude (in this case) that \(\lambda_{1}+k(\gamma+\lambda_{1}-\nu_{i})\in\Gamma\) for every \(k\geq 1\). Consequently, with the same argument as in the case \(\eta_{i}=0\), we conclude that \(a_{\delta}=b_{\delta}\) for every \(\delta\in E_{\mathcal{Z}}(H_{1})\).
**Remark 3.7**.: _Notice that the proof of the previous theorem reveals that terms with generalized Zariski exponents cannot be eliminated by changes of coordinates belonging to the group \(\widetilde{\mathcal{A}}\) and such changes of coordinates, except for homotheties, keep the coefficients of such terms unchanged._
By Theorem 3.6 and the above remark, we can rewrite Definition 3.4 as:
**Definition 3.4'**.: _Let \(H=(t_{1}^{n},\cdots,t_{r}^{n},S(\underline{t}))\) be a q.o. parameterization. The set of generalized Zariski exponents of \(H\) is_
\[E_{\mathcal{Z}}(H)=\min_{\preceq}\left\{supp(S(\underline{t}))\setminus\left(\Gamma\bigcup_{1\leq i\leq r\atop\lambda_{1i}\geq n}(\Gamma+2\lambda_{1}-\nu_{i})\right)\right\}\]
_where \(\preceq\) is the product order in \(\mathbb{N}^{r}\) and \(\min_{\preceq}\{\emptyset\}=\{\underline{\infty}\}\)._
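For the surfaces treated in the next section (\(r=2\), one characteristic exponent, hence \(n_{1}=n\) and \(\Gamma=\langle(n,0),(0,n),\lambda_{1}\rangle\)), this definition can be checked mechanically: by Remark 2.2, \(\gamma\in\Gamma\) if and only if \(\gamma=a\lambda_{1}+n(a_{1},a_{2})\) with \(0\leq a<n\) and \(a_{1},a_{2}\geq 0\). The Python sketch below (our own illustration, with names of our choosing) tests eliminability and extracts the minimal non-eliminable exponents; the sample exponents are those appearing in Lemma 4.2 below.

```python
def in_gamma(gamma, n, lam):
    # Gamma-membership for Gamma = <(n,0), (0,n), lam> when n_1 = n: write
    # gamma = a*lam + n*(a1, a2) with 0 <= a < n (unique), check a1, a2 >= 0.
    for a in range(n):
        x, y = gamma[0] - a * lam[0], gamma[1] - a * lam[1]
        if x % n == 0 and y % n == 0:
            return x >= 0 and y >= 0
    return False

def is_eliminable(delta, n, lam):
    # Proposition 3.1: delta is eliminable if delta lies in Gamma, or in
    # Gamma + 2*lam - nu_i for some i with lam[i] >= n.
    if in_gamma(delta, n, lam):
        return True
    for i, nu in enumerate([(n, 0), (0, n)]):
        if lam[i] >= n:
            shifted = (delta[0] - 2 * lam[0] + nu[0],
                       delta[1] - 2 * lam[1] + nu[1])
            if in_gamma(shifted, n, lam):
                return True
    return False

def zariski_exponents(support, n, lam):
    # E_Z(H): minimal (product order) non-eliminable exponents of the support.
    cand = [d for d in support if not is_eliminable(d, n, lam)]
    return [d for d in cand
            if not any(e != d and e[0] <= d[0] and e[1] <= d[1] for e in cand)]

n, lam = 3, (12, 1)
Z = [(12, 8), (15, 5), (18, 2)]  # (n-1)*lam + n*(-4,2), n*(-3,1), n*(-2,0)
print(zariski_exponents([lam] + Z, n, lam))  # all three elements of Z survive
```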
In the next section, we will consider the notion of generalized Zariski exponents to explore the \(\tilde{\mathcal{A}}\)-equivalence of quasi-ordinary surfaces. We will use some technical lemmas and they will be presented in Section 5 to make the presentation more fluid.
## 4 Quasi-simple surface singularities
In this section, we will illustrate how the Zariski exponents can be used to study the \(\tilde{\mathcal{A}}\)-equivalence of quasi-ordinary hypersurfaces. In particular, we characterize quasi-ordinary surfaces with one characteristic exponent that admit a countable number of distinct \(\tilde{\mathcal{A}}\)-classes in the same topological class.
**Definition 4.1**.: _We say that a quasi-ordinary normalized parameterization \(H\) is quasi-simple if there is a countable number of distinct \(\tilde{\mathcal{A}}\)-classes for q.o.h. in the same topological class of \(H\), that is, we have a countable moduli._
Notice that, by Example 3.2 and Example 3.3, any normalized q.o. parameterization with \(n=2\) or \(\lambda_{1}=\underline{1}=(1,\ldots,1)\) is formally \(\tilde{\mathcal{A}}\)-equivalent to \((t_{1}^{2},\ldots,t_{r}^{2},\underline{t}^{\lambda_{1}})\), respectively \((t_{1}^{n},\ldots,t_{r}^{n},t_{1}\cdot\ldots\cdot t_{r})\), and consequently it is quasi-simple.
In the rest of this section, we consider quasi-ordinary surfaces with one generalized characteristic exponent and \(n>2\). In particular, the topological class is completely characterized by the associated semigroup which, in this case, is given by \(\Gamma=\langle\nu_{1}=(n,0),\nu_{2}=(0,n),\nu_{3}=\lambda_{1}=(\lambda_{11},\lambda_{12})\rangle\).
By Proposition 3.1, any quasi-ordinary surface with semigroup \(\Gamma\) is formally \(\tilde{\mathcal{A}}\)-equivalent to a quasi-short parameterization (see Definition 3.4)
\[H=\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+\sum_{\delta\in E_{ \mathcal{Z}}(H)}a_{\delta}\underline{t}^{\delta}u_{\delta}(\underline{t})\right) \tag{9}\]
where \(E_{\mathcal{Z}}(H)\) denotes the generalized Zariski exponents and \(u_{\delta}(\underline{0})=1\). In addition, Proposition 2.5 allows us to normalize at most two coefficients of terms corresponding to generalized Zariski exponents, that is, if there exist \(\delta_{1},\delta_{2}\in E_{\mathcal{Z}}(H)\) with \(\{\delta_{1}-\lambda_{1},\delta_{2}-\lambda_{1}\}\) linearly independent, then \(H\) is \(\tilde{\mathcal{A}}\)-equivalent to a q.o. parameterization as (9) with \(a_{\delta_{1}}=a_{\delta_{2}}=1\). By Remark 3.7, if there is \(\delta\in E_{\mathcal{Z}}(H)\setminus\{\delta_{1},\delta_{2}\}\) then for any \(a_{\delta}\in\mathbb{C}\) we obtain q.o. parameterizations in distinct \(\tilde{\mathcal{A}}_{1}\)-classes, that is, we get uncountably many distinct orbits with respect to the \(\tilde{\mathcal{A}}\) group. As the unique homothety that keeps the coefficients of \(\underline{t}^{\lambda_{1}}\), \(\underline{t}^{\delta_{1}}\) and \(\underline{t}^{\delta_{2}}\) equal to \(1\) is the identity, we conclude that \(H\) is not quasi-simple.
**Lemma 4.2**.: _Let \(H=(t_{1}^{n},t_{2}^{n},S(t_{1},t_{2}))\) be a quasi-short parameterization with unique characteristic exponent \(\lambda_{1}=(\lambda_{11},\lambda_{12})\). If_
\[1)\ \lambda_{11}\geq\frac{4n}{n-2}\qquad\text{or}\qquad 2)\ \lambda_{12}\geq\frac{2n}{n-2}\qquad\text{or}\qquad 3)\ \lambda_{12}\geq\frac{n}{n-2}\text{ and }\lambda_{11}\geq\frac{3n}{n-2}\]
_then \(H\) can admit three generalized Zariski exponents._
Proof.: Suppose that \(\lambda_{11}\geq\frac{4n}{n-2}\). We get
\[(n-1)\lambda_{1}+n(-4,2),\ \ (n-1)\lambda_{1}+n(-3,1),\ \ (n-1)\lambda_{1}+n(-2,0) \succ\lambda_{1}.\]
In fact,
\[(n-1)\lambda_{11}-4n=(n-2)\lambda_{11}-4n+\lambda_{11}\geq\lambda_{11}\quad \text{and}\quad(n-1)\lambda_{12}+2n>\lambda_{12};\]
\[(n-1)\lambda_{11}-3n=(n-2)\lambda_{11}-3n+\lambda_{11}\geq n+\lambda_{11}> \lambda_{11}\quad\text{and}\quad(n-1)\lambda_{12}+n>\lambda_{12};\]
\[(n-1)\lambda_{11}-2n=(n-2)\lambda_{11}-2n+\lambda_{11}\geq 2n+\lambda_{11}> \lambda_{11}\quad\text{and}\quad(n-1)\lambda_{12}\geq\lambda_{12}.\]
As \(Z=\{(n-1)\lambda_{1}+n(-4,2),(n-1)\lambda_{1}+n(-3,1),(n-1)\lambda_{1}+n(-2,0)\}\) is not contained in \(\Gamma\bigcup_{i=1,\lambda_{1i}\geq n}^{2}\left(\Gamma+2\lambda_{1}-\nu_{i}\right)\) and \(min_{\prec}(Z)=Z\), all elements of \(Z\) can occur as generalized Zariski exponents.
Now consider \(\lambda_{12}\geq\frac{2n}{n-2}\). As \(\lambda_{11}\geq\lambda_{12}\), we get \((n-1)\lambda_{1}+n(-2,0)\succ\lambda_{1}\), \((n-1)\lambda_{1}+n(0,-2)\succ\lambda_{1}\) and \((n-1)\lambda_{1}+n(-1,-1)\succ\lambda_{1}\). In fact,
\[(n-1)\lambda_{11}-2n=(n-2)\lambda_{11}-2n+\lambda_{11}\geq\lambda_{11}\ \ \text{and}\ \ (n-1)\lambda_{12}=(n-2)\lambda_{12}+\lambda_{12}\geq 2n+ \lambda_{12}>\lambda_{12};\]
\[(n-1)\lambda_{11}=(n-2)\lambda_{11}+\lambda_{11}\geq 2n+\lambda_{11}>\lambda_{11}\quad\text{and}\quad(n-1)\lambda_{12}-2n=(n-2)\lambda_{12}-2n+\lambda_{12}\geq\lambda_{12};\]
\[(n-1)\lambda_{11}-n=(n-2)\lambda_{11}-n+\lambda_{11}\geq n+\lambda_{11}>\lambda_{11}\quad\text{and}\quad(n-1)\lambda_{12}-n=(n-2)\lambda_{12}-n+\lambda_{12}>\lambda_{12}.\]
Moreover, these elements satisfy all the conditions to be generalized Zariski exponents.
If \(\lambda_{12}\geq\frac{n}{n-2}\) and \(\lambda_{11}\geq\frac{3n}{n-2}\) then
\[(n-1)\lambda_{11}-3n=(n-2)\lambda_{11}-3n+\lambda_{11}\geq\lambda_{11}\ \ \ \text{and}\ \ \ (n-1)\lambda_{12}+n>\lambda_{12};\]
\[(n-1)\lambda_{11}-2n=(n-2)\lambda_{11}-2n+\lambda_{11}>\lambda_{11}\ \ \ \text{and}\ \ \ (n-1)\lambda_{12}>\lambda_{12};\]
\[(n-1)\lambda_{11}-n=(n-2)\lambda_{11}-n+\lambda_{11}>\lambda_{11}\ \ \ \text{and}\ \ \ (n-1)\lambda_{12}-n=(n-2)\lambda_{12}-n+\lambda_{12}\geq\lambda_{12}.\]
So, we can verify that \((n-1)\lambda_{1}+n(-3,1),(n-1)\lambda_{1}+n(-2,0)\) and \((n-1)\lambda_{1}+n(-1,-1)\) are possible generalized Zariski exponents.
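A small numerical sanity check of case 1 for a concrete choice (here \(n=5\) and \(\lambda_{1}=(7,1)\), so \(\lambda_{11}=7\geq\frac{4n}{n-2}=\frac{20}{3}\); the helper is restated so the snippet is self-contained, and the choice of example is ours):

```python
# Checks case 1 of the lemma for n = 5, lambda_1 = (7, 1).

def in_gamma(p, n, lam):
    return any((p[0] - c * lam[0]) >= 0 and (p[1] - c * lam[1]) >= 0
               and (p[0] - c * lam[0]) % n == 0 and (p[1] - c * lam[1]) % n == 0
               for c in range(n))

n, lam = 5, (7, 1)
Z = [tuple((n - 1) * l + n * s for l, s in zip(lam, shift))
     for shift in [(-4, 2), (-3, 1), (-2, 0)]]
assert Z == [(8, 14), (13, 9), (18, 4)]

for d in Z:
    # strictly above lambda_1 in the product order
    assert d[0] >= lam[0] and d[1] >= lam[1] and d != lam
    # not in Gamma, nor in Gamma + 2*lambda_1 - nu_1
    # (only i = 1 matters here, since lambda_12 = 1 < n)
    assert not in_gamma(d, n, lam)
    assert not in_gamma((d[0] - 2 * lam[0] + n, d[1] - 2 * lam[1]), n, lam)

# pairwise incomparable, hence all three can be minimal simultaneously
for x in Z:
    for y in Z:
        assert x == y or not (x[0] <= y[0] and x[1] <= y[1])
```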
By the previous lemma, for a quasi-short parameterization to admit at most two generalized Zariski exponents, which is a necessary condition to be quasi-simple, we must have \(\lambda_{12}<\frac{2n}{n-2}\) and \(\lambda_{11}<\frac{3n}{n-2}\), or \(\lambda_{12}<\frac{n}{n-2}\) if \(\frac{3n}{n-2}\leq\lambda_{11}<\frac{4n}{n-2}\).
In the following proposition we analyze the case \(\lambda_{12}<\frac{n}{n-2}\) and \(\frac{3n}{n-2}\leq\lambda_{11}<\frac{4n}{n-2}\), together with the possibility \(\lambda_{12}=0\) in every situation.
**Proposition 4.3**.: _Let \(H\) be a quasi-short parameterization with \(\Gamma=\langle(n,0),(0,n),\lambda_{1}\rangle\). If_
\[\lambda_{12}=0\ \ \ \ \ \text{or}\ \ \ \ \ \lambda_{12}<\frac{n}{n-2}\ \text{and}\ \frac{3n}{n-2}\leq\lambda_{11}<\frac{4n}{n-2}\]
_then \(H\) is quasi-simple if and only if \(n\in\{3,4\}\) and \((\lambda_{11},\lambda_{12})\neq\left(\frac{3n}{n-2},0\right)\). In this case, \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to_
\[\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+a\underline{t}^{(n-1) \lambda_{1}+n(-2,i)}+b\underline{t}^{(n-1)\lambda_{1}+n(-3,j)}\right)\]
_where \(a,b\in\{0,1\}\), \(i,j\in\mathbb{N}\) with \(a=0\) if \(i\geq j\)._
Proof.: As \(H\) is normalized, if \(\lambda_{12}=0\) we must have \(n<\lambda_{11}\). But \(\lambda_{11}<\frac{4n}{n-2}\) so \(n<6\).
On the other hand, for \(\lambda_{12}\neq 0\), if \(n\geq 6\) and \(\lambda_{11}<\frac{4n}{n-2}\) we get \(\lambda_{11}\leq 6\leq n\). As \(\frac{3n}{n-2}\leq\lambda_{11}\) we have
\[(n-1)\lambda_{1}+n(-3,1)\succ\lambda_{1},\ \ (n-1)\lambda_{1}+n(-2,0)\succ \lambda_{1}\ \ \text{and}\ \ (n-2)\lambda_{1}+n(-2,2)\succ\lambda_{1} \tag{10}\]
because
\[(n-1)\lambda_{11}-3n=(n-2)\lambda_{11}-3n+\lambda_{11}\geq\lambda_{11}\ \ \text{and}\ \ (n-1)\lambda_{12}+n>\lambda_{12},\]
\[(n-1)\lambda_{11}-2n=(n-2)\lambda_{11}-2n+\lambda_{11}>\lambda_{11}\ \ \text{and}\ \ (n-1)\lambda_{12}\geq\lambda_{12},\]
\[(n-2)\lambda_{11}-2n\geq n\geq\lambda_{11}\ \ \text{and}\ \ (n-2)\lambda_{12}+n> \lambda_{12}.\]
Moreover, we can verify that all three elements in (10) can be generalized Zariski exponents for \(H\), so \(H\) cannot be quasi-simple.
If \(n=5\), regardless of whether \(\lambda_{12}=0\) or not, we get \(\lambda_{11}\geq n=5\) and we conclude that all values in (10) are possible generalized Zariski exponents for \(H\), so the parameterization is not quasi-simple.
In this way, to obtain a quasi-simple parameterization we must have \(3\leq n\leq 4\).
Notice that \(\lambda_{12}<\frac{n}{n-2}\leq n\) and \(n<\frac{3n}{n-2}\leq\lambda_{11}\). In this way, the exponents in the quasi-short parameterization \(H\) do not belong to \(\Gamma\cup(\Gamma+2\lambda_{1}-(n,0))\).
By Remark 2.2, any \(\delta\not\in\Gamma\) can be expressed as \(\delta=c_{3}\lambda_{1}+n(c_{1},c_{2})\) with \(0\leq c_{3}<n\) where \(c_{1}<0\) or \(c_{2}<0\). In addition, in order to have \(\delta\succ\lambda_{1}\) we must have \(c_{3}\geq 2\).
If \(n=4\) and \(c_{3}=2\), the condition \(\lambda_{1}\prec\delta=2\lambda_{1}+4(c_{1},c_{2})\) implies \(c_{2}\geq 0\) and \(c_{1}\geq-1\). But such a condition gives us \(\delta\in\Gamma+2\lambda_{1}-(4,0)\) that is not a possible exponent in a quasi-short parameterization.
So, for \(3\leq n\leq 4\) the exponents to be considered in \(H\) are \((n-1)\lambda_{1}+n(c_{1},c_{2})\succ\lambda_{1}\), i.e.,
\[c_{1}\geq-\frac{n-2}{n}\cdot\lambda_{11}>-\frac{n-2}{n}\cdot\frac{4n}{n-2}=-4 \ \ \text{and}\ \ c_{2}\geq-\frac{n-2}{n}\cdot\lambda_{12}>-\frac{n-2}{n}\cdot\frac{n}{n-2}= -1.\]
If \(c_{1}=-1\) and \(c_{2}\geq 0\) we obtain an element in \(\Gamma+2\lambda_{1}-(n,0)\), that is not a possible exponent in a quasi-short parameterization.
If \(\lambda_{12}=0\) and \(\lambda_{11}=\frac{3n}{n-2}\) then \(n_{1}\neq n\) and \(\Gamma\neq\langle(n,0),(0,n),\lambda_{1}\rangle\). So, we can exclude this possibility.
In this way, for \(3\leq n\leq 4\), we can consider a quasi-short parameterization given by
\[\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+\sum_{k\geq 0}a_{k}\underline{t}^{(n-1)\lambda_{1}+n(-2,k)}+\sum_{l\geq 0}b_{l}\underline{t}^{(n-1)\lambda_{1}+n(-3,l)}\right),\]
with \(l>0\) if \(\lambda_{12}=0\) and \(\lambda_{11}=\frac{3n}{n-2}\). In particular \(E_{\mathcal{Z}}(H)\subset\{(n-1)\lambda_{1}+n(-2,k),\ (n-1)\lambda_{1}+n(-3,l);\ k,l \in\mathbb{N}\}\cup\{\underline{\infty}\}\).
Notice that for \(l\leq k\) we get \((n-1)\lambda_{1}+n(-2,k)=(n-1)\lambda_{1}+n(-3,l)+n(1,k-l)\in(n-1)\lambda_{1}+ n(-3,l)+\Gamma\setminus\{(0,0)\}\), that is, \((n-1)\lambda_{1}+n(-2,k)\) is not a generalized Zariski exponent.
Suppose that \(E_{\mathcal{Z}}(H)=\{\delta=(n-1)\lambda_{1}+n(-2,i)\}\) for some \(i\geq 0\), that is, \(b_{l}=0\) for every \(l\geq 0\). As any \(\gamma\succ\delta\) with \(\gamma\in Q_{1}=\mathbb{Z}\lambda_{1}+n\mathbb{Z}^{2}\) is such that \(\gamma\in\Gamma\cup(\Gamma+2\lambda_{1}-\nu_{1})\cup\{\delta+n(0,k);k\geq 1\}\) we can apply (possibly infinitely many times) Proposition 3.1 and Lemma 5.1 in order to eliminate all terms \(\underline{t}^{\gamma}\) with \(\gamma\succ\delta=(n-1)\lambda_{1}+n(-2,i)\) and we obtain \((t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+a_{i}\underline{t}^{(n-1) \lambda_{1}+n(-2,i)})\). In addition, by Proposition 2.5, we conclude that, in this case, \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to \(\big{(}t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+\underline{t}^{(n-1) \lambda_{1}+n(-2,i)}\big{)}\).
If \(E_{\mathcal{Z}}(H)=\{\delta=(n-1)\lambda_{1}+n(-3,i)\}\) for some \(i\geq 0\), that is, \(a_{k}=0\) for every \(k<i\), then for any \(\gamma\in Q_{1}=\mathbb{Z}\lambda_{1}+n\mathbb{Z}^{2}\) with \(\gamma\succ\delta\) we have \(\gamma\in\Gamma\cup(\Gamma+2\lambda_{1}-\nu_{1})\cup\{\delta+\Gamma\setminus \{(0,0)\}\}\).
Applying (possibly infinitely many times) Proposition 3.1 and Lemma 5.1 we can eliminate all terms \(\underline{t}^{\gamma}\) with \(\gamma\succ\delta=(n-1)\lambda_{1}+n(-3,i)\) and, by Proposition 2.5, \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to \(\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+\underline{t}^{(n-1) \lambda_{1}+n(-3,i)}\right)\).
Finally, suppose that \(E_{\mathcal{Z}}(H)=\{\delta_{1}=(n-1)\lambda_{1}+n(-2,i),\delta_{2}=(n-1) \lambda_{1}+n(-3,j)\}\) for some \(0\leq i<j\) then
\[\{\gamma\succ\delta_{1};\;\gamma\in Q_{1}\}\cup\{\gamma\succ\delta_{2};\; \gamma\in Q_{1}\}\subset\Gamma\cup(\Gamma+2\lambda_{1}-\nu_{1})\cup\{\delta_{1 }+n(0,k),\;k>0\}\cup\{\delta_{2}+n(0,k),\;k>0\}.\]
In this situation, as \(\{\delta_{1}-\lambda_{1},\delta_{2}-\lambda_{1}\}\) is a linearly independent set, we can apply (possibly infinitely many times) Proposition 3.1, Lemma 5.2, Proposition 2.5 and we conclude that \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to \(\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+\underline{t}^{(n-1) \lambda_{1}+n(-2,i)}+\underline{t}^{(n-1)\lambda_{1}+n(-3,j)}\right)\).
By Lemma 4.2 and Proposition 4.3, the next step is to consider the cases:
\[0<\lambda_{12}\leq\lambda_{11}<\frac{2n}{n-2}\quad\text{ or }\quad 0<\lambda_{12 }<\frac{2n}{n-2}\leq\lambda_{11}<\frac{3n}{n-2}. \tag{11}\]
**Proposition 4.4**.: _Let \(H=(t_{1}^{n},t_{2}^{n},S(t_{1},t_{2}))\) be a quasi-short parameterization with \(0<\lambda_{12}\leq\lambda_{11}<\frac{2n}{n-2}\), then \(H\) is quasi-simple if and only if_
* \(\Gamma_{H}=\langle(n,0),(0,n),(1,1)\rangle\) _and in this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \(\left(t_{1}^{n},t_{2}^{n},t_{1}t_{2}\right)\)_, that is, a normal surface;_
* \(\Gamma_{H}=\langle(n,0),(0,n),(2,1)\rangle\) _for_ \(6\leq n\leq 7\) _or_ \(\Gamma_{H}=\langle(5,0),(0,5),(3,1)\rangle\)_. In this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \(\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+a\underline{t}^{(n-2) \lambda_{1}+n(-1,i)}+b\underline{t}^{(n-1)\lambda_{1}+n(-1,j)}\right)\) _where_ \(a,b\in\{0,1\}\)_,_ \(i,j\in\mathbb{N}\) _with_ \(b=0\) _if_ \(i\leq j\)_._
* \(\Gamma_{H}=\langle(5,0),(0,5),(2,1)\rangle\) _or_ \(\Gamma_{H}=\langle(4,0),(0,4),(\lambda_{11},1)\rangle\) _where_ \(2\leq\lambda_{11}\leq 3\)_. In this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \(\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+a\underline{t}^{(n-1) \lambda_{1}+n(-1,i)}\right)\) _where_ \(a\in\{0,1\}\) _and_ \(i\in\mathbb{N}\)_._
* \(\Gamma_{H}=\langle(5,0),(0,5),(2,2)\rangle\) _or_ \(\Gamma_{H}=\langle(4,0),(0,4),\lambda_{1}\rangle\) _where_ \(\lambda_{1}\in\{(3,2),(3,3)\}\)_. In this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+a\underline{t}^{(n-1) \lambda_{1}+n(-1,-1)}+b\underline{t}^{(n-1)\lambda_{1}+n(i,-1)}+c\underline{t }^{(n-1)\lambda_{1}+n(-1,j)}\right)\] _where_ \(a,b,c\in\{0,1\}\)_,_ \(i,j\in\mathbb{N}\) _with_ \(b=c=0\) _if_ \(a=1\)_._
* \(\Gamma_{H}=\langle(3,0),(0,3),(\lambda_{11},\lambda_{12})\rangle\) _where_ \(1\leq\lambda_{12}\leq\lambda_{11}\in\{2,3,4,5\}\) _with_ \(\lambda_{1}\neq(3,3)\)_. In this case,_ \(H\) _is_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \((t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}})\) _if_ \(\lambda_{12}<3\) _and_ \((t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}}+a\underline{t}^{2\lambda_{1}+3 (-1,-1)});\;a\in\{0,1\}\) _if_ \(3\leq\lambda_{12}\)_._
Proof.: As \(0<\lambda_{12}\leq\lambda_{11}<\frac{2n}{n-2}\), for \(n\geq 4\) we get \(\lambda_{12}\leq\lambda_{11}<n\) and for \(n\geq 6\) we obtain that \(\lambda_{11}\leq 2\).
If \(\lambda_{11}=1\) then we must have \(\lambda_{12}=1\) and the proposition follows by Example 3.3.
So, in what follows we suppose that \(\lambda_{11}\geq 2\) and we split the cases according to the possible value for \(n>2\).
**Case \(n\geq 6\):** Notice that in this case we must have \(1\leq\lambda_{12}\leq\lambda_{11}=2<n\). In particular, if a parameterization \(H=(t_{1}^{n},t_{2}^{n},S(\underline{t}))\) is quasi-short then any \(\gamma\in supp(S(\underline{t})-\underline{t}^{\lambda_{1}})\) with \(\lambda_{1}\prec\gamma\) does not belong to \(\Gamma\).
For \(\lambda_{12}=2\) we can verify that
\[(n-1)\lambda_{1}+n(-1,0),\ \ (n-1)\lambda_{1}+n(0,-1)\ \ \mbox{and}\ \ (n-2) \lambda_{1}+n(-1,1) \tag{12}\]
are possible generalized Zariski exponents. In addition, for \(n\geq 8\) and \(1\leq\lambda_{12}\leq 2\), we have that
\[(n-1)\lambda_{1}+n(-1,0),\ \ (n-2)\lambda_{1}+n(-1,1)\ \ \mbox{and}\ \ (n-3) \lambda_{1}+n(-1,2) \tag{13}\]
can be considered as generalized Zariski exponents and, as we have remarked, we do not obtain a quasi-simple parameterization. Recall that if \(\lambda_{1}=(2,2)\) then \(n\) must be odd in order to obtain the value semigroup \(\langle(n,0),(0,n),\lambda_{1}\rangle\).
So, in this case, it is sufficient to consider \(6\leq n\leq 7\), \(\lambda_{11}=2\) and \(\lambda_{12}=1\).
By Remark 2.2, any \(\gamma\not\in\Gamma\) can be expressed as \(\gamma=c_{3}\lambda_{1}+n(c_{1},c_{2})\) with \(0\leq c_{3}<n\) where \(c_{1}<0\) or \(c_{2}<0\). Notice that for \(6\leq n\leq 7\) if \(c_{3}\leq n-3\), to obtain \(\gamma=c_{3}(2,1)+n(c_{1},c_{2})\succ\lambda_{1}=(2,1)\) we must have \(c_{1},c_{2}\in\mathbb{N}\), that is, \(\gamma\in\Gamma\). Moreover,
\[\begin{array}{l}(n-1)\lambda_{1}+n(c_{1},c_{2})\succ\lambda_{1}\\ (n-2)\lambda_{1}+n(c_{1},c_{2})\succ\lambda_{1}\end{array}\ \ \Leftrightarrow\ \ c_{1}\geq-1\ \mbox{and}\ c_{2}\geq 0.\]
Consequently, in this case, any quasi-short parameterization can be given by
\[H=\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+\sum_{k\geq 0}a_{k} \underline{t}^{(n-2)\lambda_{1}+n(-1,k)}+\sum_{l\geq 0}b_{l}\underline{t}^{(n-1 )\lambda_{1}+n(-1,l)}\right) \tag{14}\]
and \(E_{\mathcal{Z}}(H)\subset\{(n-2)\lambda_{1}+n(-1,k),\ (n-1)\lambda_{1}+n(-1,l);\ k,l \in\mathbb{N}\}\cup\{\underline{\infty}\}\). Notice that if \(k\leq l\) then \((n-1)\lambda_{1}+n(-1,l)=(n-2)\lambda_{1}+n(-1,k)+\lambda_{1}+n(0,l-k)\), that is, \((n-1)\lambda_{1}+n(-1,l)\) is not a generalized Zariski exponent.
Similarly to Proposition 4.3 we conclude that \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to
\[\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+a\underline{t}^{(n-2) \lambda_{1}+n(-1,i)}+b\underline{t}^{(n-1)\lambda_{1}+n(-1,j)}\right) \tag{15}\]
where \(a,b\in\{0,1\}\), \(i,j\in\mathbb{N}\) with \(b=0\) if \(i\leq j\).
**Case \(n=5\):** In this case we can have \(1\leq\lambda_{12}\leq\lambda_{11}<\frac{2n}{n-2}=\frac{10}{3}<n=5\).
Recall that, by Remark 2.2, any \(\gamma\not\in\Gamma\) can be expressed as \(\gamma=c_{3}\lambda_{1}+5(c_{1},c_{2})\) with \(0\leq c_{3}<5\) where \(c_{1}<0\) or \(c_{2}<0\). In order to \(\gamma\succ\lambda_{1}\) we must have \(c_{3}\in\{3,4\}\) with \(c_{1}=-1\) or \(c_{2}=-1\).
If \(2\leq\lambda_{12}\leq\lambda_{11}=3\) then the elements in (12) are possible generalized Zariski exponents and we do not obtain a quasi-simple surface.
If \(\lambda_{1}=(3,1)\) then we can verify that the possible generalized Zariski exponents are \(4\lambda_{1}+5(-1,c_{2})\) and \(3\lambda_{1}+5(-1,c_{1})\) with \(c_{1},c_{2}\in\mathbb{N}\). So, we can consider quasi-short parameterization given as (14) and consequently, \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to (15).
For \(\lambda_{11}=2\) we have that
\[\gamma=3(2,\lambda_{12})+5(c_{1},c_{2})\succ(2,\lambda_{12})=\lambda_{1} \ \ \Leftrightarrow\ \ c_{1},c_{2}\in\mathbb{N},\]
that is, \(\gamma\in\Gamma\). In this way, the possible generalized Zariski exponents are \(4(2,\lambda_{12})+5(c_{1},c_{2})\not\in\Gamma\) with \(c_{1}\geq-1,c_{2}\geq-1\) if \(\lambda_{12}=2\) and \(c_{1}=-1\), \(c_{2}\geq 0\) for \(\lambda_{12}=1\). Consequently, a quasi-short parameterization can be given by \(H=(t_{1}^{n},t_{2}^{n},S(\underline{t}))\) with
\[S(\underline{t})=\underline{t}^{\lambda_{1}}+\sum_{k\geq 0}a_{k}\underline{t}^{( n-1)\lambda_{1}+n(-1,k)}\ \ \mbox{if}\ \ \lambda_{12}=1 \tag{16}\]
or
\[S(\underline{t})=\underline{t}^{\lambda_{1}}+a\underline{t}^{(n-1)\lambda_{1 }+n(-1,-1)}+\sum_{k\geq 0}a_{k}\underline{t}^{(n-1)\lambda_{1}+n(k,-1)}+ \sum_{l\geq 0}b_{l}\underline{t}^{(n-1)\lambda_{1}+n(-1,l)}\ \ \mbox{if}\ \lambda_{12}=2. \tag{17}\]
If \(H\) is as (16) and \(E_{\mathcal{Z}}(H)=\{(n-1)\lambda_{1}+n(-1,i)\}\) then, similarly to Proposition 4.3, we conclude that, \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to \(\left(t_{1}^{5},t_{2}^{5},\underline{t}^{\lambda_{1}}+\underline{t}^{4 \lambda_{1}+5(-1,i)}\right)\) with \(i\in\mathbb{N}\).
If in (17) we have
1. \(a\neq 0\), then \(E_{\mathcal{Z}}(H)=\{\delta=(n-1)\lambda_{1}+n(-1,-1)\}\) and any \(\gamma\in Q_{1}=\mathbb{Z}\lambda_{1}+n\mathbb{Z}^{2}\) with \(\gamma\succ\delta\) belongs to the set \(\Gamma\cup\{\delta+k\nu_{1};\ k>0\}\cup\{\delta+k\nu_{2};\ k>0\}\).
2. \(a=0\), \(b_{c_{2}}=0\) for all \(c_{2}\geq 0\) and \(E_{\mathcal{Z}}(H)=\{\delta=(n-1)\lambda_{1}+n(i,-1)\}\) for some \(i\geq 0\), then \(\gamma\in Q_{1}=\mathbb{Z}\lambda_{1}+n\mathbb{Z}^{2}\) with \(\gamma\succ\delta\) belongs to the set \(\Gamma\cup\{\delta+k\nu_{1};\ k>0\}\).
3. \(a=0\), \(a_{c_{1}}=0\) for all \(c_{1}\geq 0\) and \(E_{\mathcal{Z}}(H)=\{\delta=(n-1)\lambda_{1}+n(-1,j)\}\) for some \(j\geq 0\), then \(\gamma\in Q_{1}=\mathbb{Z}\lambda_{1}+n\mathbb{Z}^{2}\) with \(\gamma\succ\delta\) belongs to the set \(\Gamma\cup\{\delta+k\nu_{2};\ k>0\}\).
In the above cases we have one generalized Zariski exponent \(\delta\) and, as Proposition 4.3, we obtain that \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to \((t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+\underline{t}^{\delta})\).
If \(a=0\) and \(E_{\mathcal{Z}}(H)=\{\delta_{1}=(n-1)\lambda_{1}+n(i,-1),\delta_{2}=(n-1) \lambda_{1}+n(-1,j)\}\) for some \(i,j\in\mathbb{N}\), then \(\{\gamma\succ\delta_{1};\ \gamma\in Q_{1}\}\cup\{\gamma\succ\delta_{2};\ \gamma\in Q_{1}\}\subset\Gamma\cup\{\delta_{1}+n(k,0),\ k>0\}\cup\{\delta_{2}+n (0,k),\ k>0\}\). In this case, we can apply (possibly infinitely many times) Proposition 3.1, Lemma 5.3, Proposition 2.5 and we conclude that \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to \(\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+\underline{t}^{(n-1) \lambda_{1}+n(i,-1)}+\underline{t}^{(n-1)\lambda_{1}+n(-1,j)}\right)\).
**Case \(n=4\):** Recall that we get \(0<\lambda_{12}\leq\lambda_{11}<\frac{2n}{n-2}=4\), that is \(\lambda_{12}\leq\lambda_{11}<4=n\) and any possible generalized Zariski exponent is expressed as \(\delta=3\lambda_{1}+4(c_{1},c_{2})\succ\lambda_{1}\) with \(c_{1}<0\) or \(c_{2}<0\).
If \(\lambda_{12}=1\) then we must have \(c_{1}=-1\) and \(c_{2}\geq 0\), that is, we can consider the quasi-short parameterization given as (16). In addition, if \(E_{\mathcal{Z}}(H)\neq\{\underline{\infty}\}\) we can proceed as in Proposition 4.3 and conclude that \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to \(\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+\underline{t}^{3\lambda _{1}+4(-1,i)}\right)\) with \(i\in\mathbb{N}\).
Recall that we cannot have \(\lambda_{11}=\lambda_{12}=2\); otherwise the value semigroup is not \(\langle(n,0),(0,n),\lambda_{1}\rangle\). Excluding this case, if \(1<\lambda_{12}\leq\lambda_{11}\leq 3\) then \(\delta\in E_{\mathcal{Z}}(H)\) if and only if \(\delta=3\lambda_{1}+4(c_{1},c_{2})\) with \(c_{1}=-1\) or \(c_{2}=-1\). So, the quasi-short parameterization is given as (17) and we can proceed in the same way as in the case \(n=5\) and \(\lambda_{1}=(2,2)\).
**Case \(n=3\):** Notice that we get \(0<\lambda_{12}\leq\lambda_{11}<\frac{2n}{n-2}=6\), that is \(\lambda_{12}\leq\lambda_{11}\in\{2,3,4,5\}\) with the restriction \(\lambda_{12}<3\) if \(\lambda_{11}=3\) in order to obtain \(\Gamma=\langle(3,0),(0,3),\lambda_{1}\rangle\).
Any \(\lambda_{1}\prec\gamma\in Q_{1}\) is expressed, by Remark 2.2, as \(\gamma=2(\lambda_{11},\lambda_{12})+3(c_{1},c_{2})\succ\lambda_{1}=(\lambda_{ 11},\lambda_{12})\) and considering the possible values for \(\lambda_{1i}\) we obtain
\[c_{i}\geq 0\ \ \text{if}\ \ \lambda_{1i}<3\ \ \ \ \ \text{and}\ \ \ \ \ c_{i}\geq-1\ \ \text{if}\ \ \lambda_{1i}\geq 3.\]
As the possible generalized Zariski exponents do not belong to \(\Gamma\bigcup_{\begin{subarray}{c}1\leq i\leq 2\\ \lambda_{1i}\geq 3\end{subarray}}(\Gamma+2\lambda_{1}-\nu_{i})\), we get
\[E_{\mathcal{Z}}(H)=\{\underline{\infty}\}\ \ \text{if}\ 0<\lambda_{12}<3\ \ \ \ \ \text{and}\ \ \ \ \ \ E_{\mathcal{Z}}(H)\subset\{2\lambda_{1}+3(-1,-1),\underline{\infty}\}\ \ \text{if}\ 3\leq\lambda_{12}.\]
If \(E_{\mathcal{Z}}(H)=\{2\lambda_{1}+3(-1,-1)\}\) then any \(\gamma\succ 2\lambda_{1}+3(-1,-1)\) belongs to \(\Gamma\bigcup_{\begin{subarray}{c}1\leq i\leq 2\\ \lambda_{1i}\geq 3\end{subarray}}(\Gamma+2\lambda_{1}-\nu_{i})\) and can be eliminated. In addition, Proposition 2.5 allows us to normalize the coefficient of the term with the generalized Zariski exponent. Consequently, \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to
\[(t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}})\ \ \text{for}\ \ \lambda_{12}<3\ \ \ \ \ \text{or}\ \ \ \ \ (t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}}+a\underline{t}^{2\lambda_{1}+3 (-1,-1)});\ a\in\{0,1\}\ \ \text{for}\ \ 3\leq\lambda_{12}.\]
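For instance, for \(n=3\) and \(\lambda_{1}=(4,3)\) (so \(3\leq\lambda_{12}\)), a finite scan of \(Q_{1}=\mathbb{Z}\lambda_{1}+3\mathbb{Z}^{2}\) confirms that \(2\lambda_{1}+3(-1,-1)\) is the only minimal non-excluded exponent above \(\lambda_{1}\) (a sketch; the window size and helper names are ours):

```python
# Candidate generalized Zariski exponents for n = 3, lambda_1 = (4, 3).

n, lam = 3, (4, 3)

def in_gamma(p):
    return any((p[0] - c * lam[0]) >= 0 and (p[1] - c * lam[1]) >= 0
               and (p[0] - c * lam[0]) % n == 0 and (p[1] - c * lam[1]) % n == 0
               for c in range(n))

def excluded(p):
    if in_gamma(p):
        return True
    for nu in [(n, 0), (0, n)]:   # both i, since lambda_1i >= 3 here
        if in_gamma((p[0] - 2 * lam[0] + nu[0], p[1] - 2 * lam[1] + nu[1])):
            return True
    return False

# finite window of Q_1 = Z*lambda_1 + 3*Z^2 (c_3 mod 3 suffices)
cands = {(c3 * lam[0] + n * c1, c3 * lam[1] + n * c2)
         for c3 in range(n) for c1 in range(-4, 5) for c2 in range(-4, 5)}
cands = {p for p in cands
         if p[0] >= lam[0] and p[1] >= lam[1] and p != lam and not excluded(p)}
minimal = [p for p in cands
           if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in cands)]
assert sorted(minimal) == [(5, 3)]   # (5,3) = 2*lambda_1 + 3*(-1,-1)
```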
Now we will consider \(0<\lambda_{12}<\frac{2n}{n-2}\leq\lambda_{11}<\frac{3n}{n-2}\).
Notice that for \(n\geq 5\) we obtain \(\lambda_{11}<\frac{3n}{n-2}\leq 5\leq n\) and \(\lambda_{12}<\frac{2n}{n-2}<n\). So, in this case, in a quasi-short parameterization \(H=(t_{1}^{n},t_{2}^{n},S(\underline{t}))\) with \(n\geq 5\) we get \(\gamma\in supp(S(\underline{t})-\underline{t}^{\lambda_{1}})\) implies \(\gamma\not\in\Gamma\) and, by Remark 2.2, any \(\gamma\not\in\Gamma\) can be expressed as \(\gamma=c_{3}\lambda_{1}+n(c_{1},c_{2})\) with \(0\leq c_{3}<n\) where \(c_{1}<0\) or \(c_{2}<0\).
For \(6\leq n\leq 7\) we obtain \(3\leq\lambda_{11}\leq 4\) and \(\lambda_{11}=3\) for \(n\geq 8\). In this way, for \(n\geq 6\) all the elements in (13) can be considered as generalized Zariski exponents.
If \(n=5\) then \(3<\frac{2n}{n-2}\leq\lambda_{11}<\frac{3n}{n-2}=5\), that is, \(\lambda_{11}=4\) and \(0<\lambda_{12}<\frac{2n}{n-2}=\frac{10}{3}\). In this case, \(4\lambda_{1}+5(-2,1),4\lambda_{1}+5(-1,0)\) and \(3\lambda_{1}+5(-1,1)\) can be generalized Zariski exponents.
So, for \(0<\lambda_{12}<\frac{2n}{n-2}\leq\lambda_{11}<\frac{3n}{n-2}\) and \(n\geq 5\) it is possible to get three generalized Zariski exponents and consequently, we cannot have quasi-simple surfaces.
**Proposition 4.5**.: _Let \(H=(t_{1}^{n},t_{2}^{n},S(t_{1},t_{2}))\) be a quasi-short parameterization with \(0<\lambda_{12}<\frac{2n}{n-2}\leq\lambda_{11}<\frac{3n}{n-2}\), then \(H\) is quasi-simple if and only if_
* \(\Gamma_{H}=\langle(3,0),(0,3),\lambda_{1}\rangle\) _where_ \(1\leq\lambda_{12}\leq 5<\lambda_{11}\leq 8\) _with_ \(\lambda_{1}=(\lambda_{11}, \lambda_{12})\neq(6,3)\)_. In this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[\left(t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}}+a\underline{t}^{2 \lambda_{1}+3(-2,i)}\right);\ a\in\{0,1\}\ \text{and}\ i\geq 0\ \ \text{if}\ \ \lambda_{12}\in\{1,2\}\ \ \text{or}\] \[\left(t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}}+a\underline{t}^{2 \lambda_{1}+3(-1,-1)}+b\underline{t}^{2\lambda_{1}+3(-2,i)}\right);\ a,b\in\{0,1 \},i\geq-1\ \text{if}\ \ 3\leq\lambda_{12}\leq 5,\] _with_ \(a=0\) _if_ \(i=-1\) _and_ \(b=1\)_._
* \(\Gamma_{H}=\langle(4,0),(0,4),\lambda_{1}\rangle\) _where_ \(\lambda_{12}\in\{1,2,3\}\) _and_ \(\lambda_{11}\in\{4,5\}\) _with_ \((\lambda_{11},\lambda_{12})\neq(4,2)\)_. In this case,_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+a\underline{t}^{3 \lambda_{1}+4(-2,i)}\right);\ a\in\{0,1\},\ i\in\mathbb{N}\ \ \text{if}\ \ \lambda_{12}=1\ \ \text{or}\] \[\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+a\underline{t}^{3 \lambda_{1}+4(-2,i)}+b\underline{t}^{3\lambda_{1}+4(j,-1)}\right);\ \ \text{if}\ \ \lambda_{12}\in\{2,3\}\] _with_ \(a,b\in\{0,1\}\)_,_ \(i,j\geq-1\) _where_ \(b=0\) _if_ \(i=-1\) _and_ \(a=1\)_._
Proof.: By the comment before the proposition, it is sufficient to consider \(3\leq n\leq 4\).
**Case \(n=3\):** The conditions \(0<\lambda_{12}<\frac{2n}{n-2}\leq\lambda_{11}<\frac{3n}{n-2}\) give us \(0<\lambda_{12}\leq 5\) and \(6\leq\lambda_{11}\leq 8\). Recall that to obtain \(\Gamma=\langle(3,0),(0,3),\lambda_{1}\rangle\) we must have \(\lambda_{1}\neq(6,3)\).
Taking \(\gamma\not\in\Gamma\) we can write \(\gamma=c_{3}\lambda_{1}+3(c_{1},c_{2})\) with \(0\leq c_{3}<3\) where \(c_{1}<0\) or \(c_{2}<0\). To obtain \(\gamma\succ\lambda_{1}\) it is sufficient to consider \(c_{3}=2\), \(c_{1}\geq-2\) and
\[c_{2}\geq 0\ \text{ for }\ \lambda_{12}\in\{1,2\}\ \ \ \ \text{ or }\ \ \ \ c_{2}\geq-1\ \ \text{for}\ \ \lambda_{12}\in\{3,4,5\}.\]
In this case, an exponent \(\gamma\succ\lambda_{1}\) in a quasi-short parameterization is such that
\[\gamma\not\in\Gamma\cup(\Gamma+2\lambda_{1}-(3,0))\ \text{if}\ \ \lambda_{12}<3\ \ \ \text{or }\ \ \ \gamma\not\in\Gamma\cup(\Gamma+2\lambda_{1}-(3,0))\cup(\Gamma+2\lambda_{1}-(0,3)) \ \text{if}\ \ \lambda_{12}\geq 3.\]
So, any quasi-short parameterization with \(n=3\) and \(0<\lambda_{12}<\frac{2n}{n-2}\leq\lambda_{11}<\frac{3n}{n-2}\) is formally \(\tilde{\mathcal{A}}\)-equivalent to
\[\left(t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}}+\sum_{k\geq 0}a_{k} \underline{t}^{2\lambda_{1}+3(-2,k)}\right)\ \ \text{if}\ \ \lambda_{12}\in\{1,2\}\ \ \text{or}\] \[\left(t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}}+a\underline{t}^{2 \lambda_{1}+3(-1,-1)}+\sum_{k\geq-1}a_{k}\underline{t}^{2\lambda_{1}+3(-2,k)} \right)\ \ \text{if}\ \ \lambda_{12}\in\{3,4,5\}.\]
As we proceeded in Proposition 4.3, we conclude that \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to
\[\left(t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}}+a\underline{t}^{2 \lambda_{1}+3(-2,i)}\right)\ a\in\{0,1\},\ i\in\mathbb{N}\ \ \text{if}\ \ 0<\lambda_{12}<3\ \ \text{or}\]
\[\left(t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}}+a\underline{t}^{2 \lambda_{1}+3(-1,-1)}+b\underline{t}^{2\lambda_{1}+3(-2,i)}\right),\ a,b\in\{0,1\},\ i\geq-1\ \ \text{if}\ \ 3\leq\lambda_{12}\leq 5,\]
with \(a=0\) if \(i=-1\) and \(b=1\).
**Case \(n=4\):** As \(0<\lambda_{12}<\frac{2n}{n-2}\leq\lambda_{11}<\frac{3n}{n-2}\) we get \(1\leq\lambda_{12}\leq 3\) and \(4\leq\lambda_{11}\leq 5\) with \(\lambda_{1}\neq(4,2)\). So, in a quasi-short parameterization it is sufficient consider exponents \(\gamma\succ\lambda_{1}\) such that \(\gamma\not\in\Gamma\cup(\Gamma+2\lambda_{1}-\nu_{1})\).
By Remark 2.2, any \(\gamma\not\in\Gamma\) can be expressed \(\gamma=c_{3}\lambda_{1}+4(c_{1},c_{2})\) with \(0\leq c_{3}<4\) where \(c_{1}<0\) or \(c_{2}<0\). In order to \(\gamma\succ\lambda_{1}\) we have to consider \(c_{3}\in\{2,3\}\).
For \(c_{3}=2\) we have \(\gamma=2(\lambda_{11},\lambda_{12})+4(c_{1},c_{2})\succ(\lambda_{11},\lambda_ {12})\) if and only if \(c_{1}\geq-1\) and \(c_{2}\geq 0\). In this case, \(\gamma\in\Gamma\cup(\Gamma+2\lambda_{1}-\nu_{1})\) and we can discard such exponents in a quasi-short parameterization.
Taking \(c_{3}=3\) we have \(\gamma=3(\lambda_{11},\lambda_{12})+4(c_{1},c_{2})\succ(\lambda_{11},\lambda_ {12})\) if and only if \(c_{1}\geq-2\) and
\[c_{2}\geq 0\ \ \text{for}\ \ \lambda_{12}=1\ \ \ \ \ \text{or}\ \ \ \ \ c_{2}\geq-1\ \ \text{for}\ \ \lambda_{12}\in\{2,3\}.\]
If \(c_{1}\geq-1\) and \(c_{2}\geq 0\) we obtain \(\gamma\in\Gamma\cup(\Gamma+2\lambda_{1}-\nu_{1})\). So, any quasi-short parameterization with \(n=4\) and \(0<\lambda_{12}<\frac{2n}{n-2}=4\leq\lambda_{11}<\frac{3n}{n-2}=6\) is \(\tilde{\mathcal{A}}\)-equivalent to
\[\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+\sum_{k\geq 0}a_{k} \underline{t}^{3\lambda_{1}+4(-2,k)}\right)\ \ \text{if}\ \ \lambda_{12}=1; \tag{18}\]
\[\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+\sum_{k\geq-1}b_{k} \underline{t}^{3\lambda_{1}+4(-2,k)}+\sum_{l\geq-1}c_{l}\underline{t}^{3 \lambda_{1}+4(l,-1)}\right)\ \ \text{if}\ \ \lambda_{12}\in\{2,3\}. \tag{19}\]
In addition,
1. If \(E_{\mathcal{Z}}(H)=\{\delta=3\lambda_{1}+4(-2,i)\}\) with \(i\in\mathbb{N}\) in (18) or
2. If \(E_{\mathcal{Z}}(H)=\{\delta=3\lambda_{1}+4(-2,i)\}\) with \(i\geq-1\) in (19) or
3. If \(E_{\mathcal{Z}}(H)=\{\delta=3\lambda_{1}+4(i,-1)\}\) with \(i\geq-1\) in (19)
then, by Lemma 5.1 and Proposition 3.1, we conclude that \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to \(\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+\underline{t}^{\delta }\right)\).
On the other hand, if this is not the case and \(E_{\mathcal{Z}}(H)\neq\{\underline{\infty}\}\), then in (19) we have \(b_{-1}=0\) and \(E_{\mathcal{Z}}(H)=\{3\lambda_{1}+4(-2,i),3\lambda_{1}+4(j,-1)\}\). In this case, we proceed as in Proposition 4.3 and \(H\) is formally \(\tilde{\mathcal{A}}\)-equivalent to \(\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+\underline{t}^{3 \lambda_{1}+4(-2,i)}+\underline{t}^{3\lambda_{1}+4(j,-1)}\right)\).
We summarize the results of this section in the following theorem.
**Theorem 4.6**.: _Let \(H=(t_{1}^{n},t_{2}^{n},S(t_{1},t_{2}))\) be a normalized q.o. parameterization with value semigroup \(\Gamma=\langle(n,0),(0,n),\lambda_{1}=(\lambda_{11},\lambda_{12})\rangle\). Then \(H\) is quasi-simple if and only if we have one of the following cases:_
1. \(n=2\)_. In this case,_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \(\left(t_{1}^{2},t_{2}^{2},\underline{t}^{\lambda_{1}}\right)\)_._
2. \(\lambda_{1}=(1,1)\)_. In this case,_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \((t_{1}^{n},t_{2}^{n},t_{1}t_{2})\)_, i.e., a normal surface._
3. \(n=3\) _and_
  * \(1\leq\lambda_{12}\leq\lambda_{11}\in\{2,3,4,5\}\) _with_ \(\lambda_{1}\neq(3,3)\)_. In this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[(t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}})\;\;\mbox{if}\;\;\lambda_{12 }\in\{1,2\}\;\;\;\;\;\mbox{or}\;\;\;\;\;(t_{1}^{3},t_{2}^{3},\underline{t}^{ \lambda_{1}}+a\underline{t}^{2\lambda_{1}+3(-1,-1)});\;a\in\{0,1\}\;\;\mbox{if} \;\;\;3\leq\lambda_{12}.\]
  * \(1\leq\lambda_{12}\leq 5<\lambda_{11}\leq 8\) _with_ \(\lambda_{1}\neq(6,3)\)_. In this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[\left(t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}}+a\underline{t}^{2\lambda _{1}+3(-2,i)}\right);\;a\in\{0,1\},\,i\in\mathbb{N}\;\;\mbox{if}\;\;\lambda_{12 }\in\{1,2\}\;\;\mbox{or}\] \[\left(t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}}+a\underline{t}^{2\lambda _{1}+3(-1,-1)}+b\underline{t}^{2\lambda_{1}+3(-2,i)}\right)\;\;a,b\in\{0,1\},i \geq-1\;\;\mbox{if}\;\;\lambda_{12}\in\{3,4,5\},\] _with_ \(a=0\) _if_ \(i=-1\) _and_ \(b=1\)_._
  * \(0\leq\lambda_{12}\leq 2\) _and_ \(9\leq\lambda_{11}\leq 11\) _with_ \((\lambda_{11},\lambda_{12})\neq(9,0)\)_. In this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[\left(t_{1}^{3},t_{2}^{3},\underline{t}^{\lambda_{1}}+a\underline{t}^{2\lambda _{1}+3(-2,i)}+b\underline{t}^{2\lambda_{1}+3(-3,j)}\right)\] _where_ \(a,b\in\{0,1\}\)_,_ \(i,j\in\mathbb{N}\) _with_ \(a=0\) _if_ \(i\geq j\)_._
4. \(n=4\) _and_
  * \(\lambda_{1}=(\lambda_{11},1)\) _where_ \(2\leq\lambda_{11}\leq 3\)_. In this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+a\underline{t}^{3\lambda _{1}+4(-1,i)}\right)\;\;\mbox{where}\;a\in\{0,1\}\;\mbox{and}\;i\in\mathbb{N}.\]
  * \(2\leq\lambda_{12}\leq\lambda_{11}\leq 3\) _with_ \((\lambda_{11},\lambda_{12})\neq(2,2)\)_. In this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+a\underline{t}^{3\lambda _{1}+4(-1,-1)}+b\underline{t}^{3\lambda_{1}+4(i,-1)}+c\underline{t}^{3\lambda _{1}+4(-1,j)}\right)\] _where_ \(a,b,c\in\{0,1\}\)_,_ \(i,j\in\mathbb{N}\) _and_ \(b=c=0\) _if_ \(a=1\)_._
  * \(\lambda_{12}\in\{1,2,3\}\) _and_ \(\lambda_{11}\in\{4,5\}\) _with_ \((\lambda_{11},\lambda_{12})\neq(4,2)\)_. In this case,_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+a\underline{t}^{3\lambda _{1}+4(-2,i)}\right);\;a\in\{0,1\},\;i\in\mathbb{N}\;\;\mbox{if}\;\;\lambda_{12 }=1\;\;\mbox{or}\] \[\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+a\underline{t}^{3\lambda _{1}+4(-2,i)}+b\underline{t}^{3\lambda_{1}+4(j,-1)}\right);\;\;\mbox{if}\;\; \lambda_{12}\in\{2,3\}\] _with_ \(a,b\in\{0,1\}\)_,_ \(i,j\geq-1\) _where_ \(b=0\) _if_ \(i=-1\) _and_ \(a=1\)_._
  * \(\lambda_{12}\in\{0,1\}\) _and_ \(\lambda_{11}\in\{6,7\}\) _with_ \((\lambda_{11},\lambda_{12})\neq(6,0)\)_. In this case,_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[\left(t_{1}^{4},t_{2}^{4},\underline{t}^{\lambda_{1}}+a\underline{t}^{3\lambda _{1}+4(-2,i)}+b\underline{t}^{3\lambda_{1}+4(-3,j)}\right)\] _where_ \(a,b\in\{0,1\}\)_,_ \(i,j\in\mathbb{N}\) _with_ \(a=0\) _if_ \(i\geq j\)_._
5. \(n=5\) _and_ \(\lambda_{1}\in\{(2,1),(3,1),(2,2)\}\)_. In this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[\left(t_{1}^{5},t_{2}^{5},\underline{t}^{(2,1)}+a\underline{t}^{4(2,1)+5(-1,i )}\right);\text{ where }a\in\{0,1\}\text{ and }i\in\mathbb{N};\] \[\left(t_{1}^{5},t_{2}^{5},\underline{t}^{(3,1)}+a\underline{t}^{4(3,1 )+5(-1,i)}+b\underline{t}^{3(3,1)+5(-1,j)}\right);\text{ }a,b\in\{0,1\},\text{ }i,j\in\mathbb{N}\text{ with }a=0\text{ if }j\leq i;\] \[\left(t_{1}^{5},t_{2}^{5},\underline{t}^{(2,2)}+a\underline{t}^{4(2,2)+5 (-1,-1)}+b\underline{t}^{4(2,2)+5(i,-1)}+c\underline{t}^{4(2,2)+5(-1,j)}\right); \text{ where }a,b,c\in\{0,1\},\] \[i,j\in\mathbb{N}\text{ with }b=c=0\text{ if }a=1.\]
6. \(6\leq n\leq 7\) _and_ \(\lambda_{1}=(2,1)\)_. In this case_ \(H\) _is formally_ \(\tilde{\mathcal{A}}\)_-equivalent to_ \[\left(t_{1}^{n},t_{2}^{n},\underline{t}^{(2,1)}+a\underline{t}^{(n-1)(2,1)+n (-1,i)}+b\underline{t}^{(n-2)(2,1)+n(-1,j)}\right)\] _where_ \(a,b\in\{0,1\}\)_,_ \(i,j\in\mathbb{N}\) _with_ \(a=0\) _if_ \(j\leq i\)_._
Proof.: It follows from Example 3.2, Example 3.3, Proposition 4.3, Proposition 4.4 and Proposition 4.5.
## 5 Technical Lemmas
In this section, we present some technical lemmas that make use of notation and concepts presented in Section 2 concerning dominant exponents and differential \(2\)-forms. We use these lemmas in Proposition 4.3, Proposition 4.4, and Proposition 4.5 to obtain normal forms of quasi-simple surfaces with respect to the \(\tilde{\mathcal{A}}\) group.
**Lemma 5.1**.: _If \(H=\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+a\underline{t}^{ \delta}u(\underline{t})\right)\) is a q.o. parameterization where \(a\neq 0\) and \(u(\underline{0})=1\) then \(\underline{t}^{\delta+\gamma}\) is eliminable for any \(\gamma\in\Gamma\setminus\{(0,0)\}\)._
Proof.: By Proposition 2.6 it is sufficient to exhibit \(\omega=\sum_{i=1}^{r+1}(-1)^{r+1-i}P_{i}dX_{1}\wedge\cdots\wedge\widehat{dX_{ i}}\wedge\cdots\wedge dX_{r+1}\in\Omega^{r}\) where \(P_{i}\) is as described in (3) for all \(i=1,\ldots,r\) such that \(\mathcal{V}(\omega)=\delta+\gamma+(n,n)\).
Let us take
\[\omega_{0}=\frac{1}{n}\left(s_{1}X_{1}dX_{2}\wedge dX_{3}+s_{2}X_{2}dX_{1} \wedge dX_{3}+\frac{(s_{1}\lambda_{11}-s_{2}\lambda_{12})}{n}X_{3}dX_{1}\wedge dX _{2}\right). \tag{20}\]
Considering the map \(\Psi_{H}\) given in (5) we get
\[\Psi_{H}(\omega_{0})=\left(s_{2}(\delta_{2}-\lambda_{12})-s_{1}(\delta_{1}- \lambda_{11})\right)a\underline{t}^{\delta+(n,n)}u(\underline{t})+a\underline {t}^{\delta+(n,n)}(s_{2}u_{2}(\underline{t})-s_{1}u_{1}(\underline{t})),\]
where \(u_{i}(\underline{t})\) denotes the derivative of \(u(\underline{t})\) with respect to \(t_{i}\).
As \(u(\underline{t})\in\mathbb{C}\{t\}\) is a unit, \(u_{i}(\underline{0})=0\) and consequently, for any \(\alpha\in supp(a\underline{t}^{\delta+(n,n)}(s_{2}u_{2}(\underline{t})-s_{1}u _{1}(\underline{t})))\) we have \(\alpha\succ\delta+(n,n)\). Recall that \(\delta_{1}\neq\lambda_{11}\) or \(\delta_{2}\neq\lambda_{12}\) so, we can choose \(s_{1},s_{2}\in\mathbb{C}\) in such a way that \(\mathcal{V}(\omega_{0})=\delta+(n,n)\).
Now, for any \(\gamma\in\Gamma\setminus\{(0,0)\}\) we take \(\epsilon\in\mathcal{M}_{3}\) such that \(\mathcal{V}(\epsilon)=\gamma\) and, in this way, \(\omega=\epsilon\omega_{0}\in\Omega^{r}\) satisfies the conditions (3) and \(\mathcal{V}(\omega)=\delta+\gamma+(n,n)\). Consequently, by Proposition 2.6, \(\underline{t}^{\delta+\gamma}\) is eliminable by \(\tilde{\mathcal{A}}\)-action.
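The expansion of \(\Psi_{H}(\omega_{0})\) can also be checked symbolically. The map \(\Psi_{H}\) from (5) is not reproduced in this excerpt; the sketch below assumes the normalization \(\Psi_{H}(\omega)=t_{1}t_{2}\cdot(\text{coefficient of }dt_{1}\wedge dt_{2}\text{ in the pullback of }\omega\text{ along }H)\), which reproduces the displayed expansion, and the concrete choices of \(n\), \(\lambda_{1}\), \(\delta\), \(u\), \(s_{1}\), \(s_{2}\) are ours.

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
n, lam, delta = 3, (4, 3), (5, 3)   # concrete choices (ours)
s1, s2, a = -1, 0, 1                # so that s2*(d2-l2) - s1*(d1-l1) = 1 != 0
u = 1 + t1 + t2**2                  # a unit: u(0, 0) = 1

X1, X2 = t1**n, t2**n
X3 = t1**lam[0] * t2**lam[1] + a * t1**delta[0] * t2**delta[1] * u

def wedge(f, g):
    # coefficient of dt1 ^ dt2 in df ^ dg
    return sp.diff(f, t1) * sp.diff(g, t2) - sp.diff(f, t2) * sp.diff(g, t1)

# omega_0 as in (20), pulled back along H and normalized by t1*t2
omega0 = (sp.Rational(1, n)
          * (s1 * X1 * wedge(X2, X3) + s2 * X2 * wedge(X1, X3)
             + sp.Rational(s1 * lam[0] - s2 * lam[1], n) * X3 * wedge(X1, X2)))
psi = sp.Poly(sp.expand(t1 * t2 * omega0), t1, t2)

target = (delta[0] + n, delta[1] + n)   # delta + (n, n) = (8, 6)
# the leading coefficient is  (s2*(d2 - l2) - s1*(d1 - l1)) * a
assert psi.coeff_monomial(t1**target[0] * t2**target[1]) \
       == a * (s2 * (delta[1] - lam[1]) - s1 * (delta[0] - lam[0]))
# the terms with exponent lambda_1 + (n, n) cancel, and every surviving
# monomial dominates delta + (n, n), i.e. V(omega_0) = delta + (n, n)
assert all(e[0] >= target[0] and e[1] >= target[1] for e in psi.monoms())
```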
Notice that if \(\delta\in\Gamma\bigcup_{\begin{subarray}{c}1\leq i\leq 2\\ \lambda_{1i}\geq n\end{subarray}}(\Gamma+2\lambda_{1}-\nu_{i})\) then the above lemma is a particular case of Proposition 3.1. On the other hand, if \(E_{\mathcal{Z}}(H)=\{\delta\}\) then Lemma 5.1 allows us to eliminate terms with exponent in \(\delta+\Gamma\setminus\{(0,0)\}\), and it can be considered as the counterpart, in the quasi-ordinary case, of an elimination criterion for terms in plane curve parameterizations proved by Zariski (see Section 2.3, Chapter III in [14]).
In the same way, the relevance of the next result corresponds to the case \(E_{\mathcal{Z}}(H)=\{\delta_{1},\delta_{2}\}\).
**Lemma 5.2**.: _Let \(H=\left(t_{1}^{n},t_{2}^{n},S(\underline{t})=\underline{t}^{\lambda_{1}}+ \sum_{j\geq 0}a_{j}\underline{t}^{\delta_{1}+j\cdot\nu_{1}}+\sum_{j\geq 0}b_{j} \underline{t}^{\delta_{2}+j\cdot\nu_{1}}\right)\) be a q.o. parameterization where \(a_{0}\neq 0\neq b_{0}\), \(\nu_{1}=(n,0)\) and \(\min_{\preceq}supp(S(\underline{t})-\underline{t}^{\lambda_{1}})=\{\delta_{1},\delta_{2}\}\). If \(\{\delta_{1}-\lambda_{1},\delta_{2}-\lambda_{1}\}\subset\mathbb{R}^{2}\) is a linearly independent set then \(\underline{t}^{\delta_{1}+j\cdot\nu_{1}}\) and \(\underline{t}^{\delta_{2}+j\cdot\nu_{1}}\) are eliminable for any \(j>0\). The same is true if we change \(\nu_{1}\) by \(\nu_{2}=(0,n)\)._
Proof.: As in the previous lemma, it is sufficient to guarantee the existence of differential \(2\)-forms that admit dominant exponents \(\delta_{1}+(n,n)+j\cdot\nu_{1}\) and \(\delta_{2}+(n,n)+j\cdot\nu_{1}\) for any \(j>0\).
Considering \(\omega_{0}\in\Omega^{r}\) given as (20) and the map \(\Psi_{H}\) given in (5) we get
\[\Psi_{H}(\omega_{0})= (s_{2}\cdot(\delta_{12}-\lambda_{12})-s_{1}\cdot(\delta_{11}- \lambda_{11}))\,a_{0}\underline{t}^{\delta_{1}+(n,n)}+\] \[+\sum_{j>0}\left(s_{2}\cdot(\delta_{12}-\lambda_{12})-s_{1}\cdot (\delta_{11}+jn-\lambda_{11})\right)a_{j}\underline{t}^{\delta_{1}+j\cdot\nu_ {1}+(n,n)}+\] \[(s_{2}\cdot(\delta_{22}-\lambda_{12})-s_{1}\cdot(\delta_{21}- \lambda_{11}))\,b_{0}\underline{t}^{\delta_{2}+(n,n)}+\] \[+\sum_{j>0}\left(s_{2}\cdot(\delta_{22}-\lambda_{12})-s_{1}\cdot (\delta_{21}+jn-\lambda_{11})\right)b_{j}\underline{t}^{\delta_{2}+j\cdot\nu_ {1}+(n,n)}.\]
As \(\{\delta_{1}-\lambda_{1},\delta_{2}-\lambda_{1}\}\) is a linearly independent set, the linear systems of equations
\[(*)\ \left\{\begin{array}{l}(\delta_{12}-\lambda_{12})\cdot Z-(\delta_{11}- \lambda_{11})\cdot W=\frac{1}{a_{0}}\\ (\delta_{22}-\lambda_{12})\cdot Z-(\delta_{21}-\lambda_{11})\cdot W=0\end{array} \right.\ \ \text{ and }\ (**)\ \left\{\begin{array}{l}(\delta_{12}-\lambda_{12})\cdot Z-(\delta_{11}- \lambda_{11})\cdot W=0\\ (\delta_{22}-\lambda_{12})\cdot Z-(\delta_{21}-\lambda_{11})\cdot W=\frac{1} {b_{0}}\end{array}\right.\]
admit solutions. Taking a solution \((Z,W)=(s_{2},s_{1})\) for the system \((*)\) and substituting in \(\omega_{0}\) we get the differential \(2\)-form \(\omega_{1}\) with
\[\Psi_{H}(\omega_{1})=\underline{t}^{\delta_{1}+(n,n)}+\sum_{j>0}a^{\prime}_{j }\underline{t}^{\delta_{1}+j\cdot\nu_{1}+(n,n)}+\sum_{j>0}b^{\prime}_{j} \underline{t}^{\delta_{2}+j\cdot\nu_{1}+(n,n)}\]
and considering a solution \((Z,W)=(s_{2},s_{1})\) for the system \((**)\) and substituting in \(\omega_{0}\) we obtain the differential \(2\)-form \(\omega_{2}\) with
\[\Psi_{H}(\omega_{2})=\underline{t}^{\delta_{2}+(n,n)}+\sum_{j>0}a^{\prime\prime }_{j}\underline{t}^{\delta_{1}+j\cdot\nu_{1}+(n,n)}+\sum_{j>0}b^{\prime\prime }_{j}\underline{t}^{\delta_{2}+j\cdot\nu_{1}+(n,n)}\]
for some \(a^{\prime}_{j},a^{\prime\prime}_{j},b^{\prime}_{j},b^{\prime\prime}_{j}\in \mathbb{C}\).
In this way, there are \(c_{k},d_{k}\in\mathbb{C}\) for \(k>0\) such that
\[\mathcal{V}\left(\Psi_{H}\left(\omega_{1}+\sum_{k>0}c_{k}X_{1}^{k}\omega_{2} \right)\right)=\delta_{1}+(n,n)\ \ \text{and}\ \ \mathcal{V}\left(\Psi_{H}\left(\omega_{2}+\sum_{k>0}d_{k}X_{1}^{k}\omega_{1} \right)\right)=\delta_{2}+(n,n). \tag{21}\]
As, for any \(j>0\), the differential \(2\)-forms
\[X_{1}^{j}\left(\omega_{1}+\sum_{k>0}c_{k}X_{1}^{k}\omega_{2}\right)\ \ \text{and}\ \ X_{1}^{j}\left(\omega_{2}+\sum_{k>0}d_{k}X_{1}^{k}\omega_{1}\right)\]
satisfy the conditions (3), by Proposition 2.6 the terms \(\underline{t}^{\delta_{1}+j\cdot\nu_{1}}\) and \(\underline{t}^{\delta_{2}+j\cdot\nu_{1}}\) are eliminable by the \(\tilde{\mathcal{A}}\)-action.
If \(H=\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+\sum_{j\geq 0}a_{j} \underline{t}^{\delta_{1}+j\cdot\nu_{2}}+\sum_{j\geq 0}b_{j}\underline{t}^{ \delta_{2}+j\cdot\nu_{2}}\right)\) then, interchanging \(X_{1}\) and \(X_{2}\) in (21), we obtain that the terms \(\underline{t}^{\delta_{1}+j\cdot\nu_{2}}\) and \(\underline{t}^{\delta_{2}+j\cdot\nu_{2}}\) with \(j>0\) are eliminable by the \(\tilde{\mathcal{A}}\)-action.
We note that the next lemma is a particular case of Proposition 3.1 if \(\lambda_{1i}\geq n\), but it gives us relevant information in the other cases.
**Lemma 5.3**.: _Let \(H=\left(t_{1}^{n},t_{2}^{n},\underline{t}^{\lambda_{1}}+\sum_{j\geq 0}a_{j} \underline{t}^{\delta_{1}+j\cdot\nu_{1}}+\sum_{j\geq 0}b_{j}\underline{t}^{ \delta_{2}+j\cdot\nu_{2}}\right)\) be a q.o. parameterization where \(\delta_{1}=(n-1)\lambda_{1}+n(i,-1),\delta_{2}=(n-1)\lambda_{1}+n(-1,j)\) with \(a_{0}\neq 0\neq b_{0}\) and \(i,j\in\mathbb{N}\). We have that the terms \(\underline{t}^{\delta_{1}+j\cdot\nu_{1}}\) and \(\underline{t}^{\delta_{2}+j\cdot\nu_{2}}\) are eliminable for any \(j>0\)._
Proof.: We will present differential \(2\)-forms with dominant exponents \(\delta_{1}+(n,n)+j\cdot\nu_{1}\) and \(\delta_{2}+(n,n)+j\cdot\nu_{2}\) for any \(j>0\) and, in this way, the result follows by Proposition 2.6.
We take \(\omega_{0}\in\Omega^{r}\) given as (20) and we consider the expansion \(\Psi_{H}(\omega_{0})\). As \(\{\delta_{1}-\lambda_{1},\delta_{2}-\lambda_{1}\}\) is a linearly independent set, the linear systems \((*)\) and \((**)\) admit solutions that give us differential \(2\)-forms \(\omega_{1}\) and \(\omega_{2}\) such that
\[\Psi_{H}(\omega_{1}) =\underline{t}^{\delta_{1}+(n,n)}+\sum_{j>0}a^{\prime}_{j} \underline{t}^{\delta_{1}+j\cdot\nu_{1}+(n,n)}+\sum_{j>0}b^{\prime}_{j} \underline{t}^{\delta_{2}+j\cdot\nu_{2}+(n,n)}\] \[\Psi_{H}(\omega_{2}) =\underline{t}^{\delta_{2}+(n,n)}+\sum_{j>0}a^{\prime\prime}_{j} \underline{t}^{\delta_{1}+j\cdot\nu_{1}+(n,n)}+\sum_{j>0}b^{\prime\prime}_{j} \underline{t}^{\delta_{2}+j\cdot\nu_{2}+(n,n)}\]
for some \(a^{\prime}_{j},a^{\prime\prime}_{j},b^{\prime}_{j},b^{\prime\prime}_{j}\in \mathbb{C}\).
So, we can determine \(c_{k},d_{k}\in\mathbb{C}\) for \(k>0\) such that
\[\mathcal{V}\left(\Psi_{H}\left(\omega_{1}+\sum_{k>0}c_{k}X_{2}^{k}\omega_{2} \right)\right)=\delta_{1}+(n,n)\ \ \text{and}\ \ \mathcal{V}\left(\Psi_{H}\left(\omega_{2}+\sum_{k>0}d_{k}X_{1}^{k}\omega_{1} \right)\right)=\delta_{2}+(n,n).\]
In addition, for any \(j>0\), the differential \(2\)-forms
\[X_{1}^{j}\left(\omega_{1}+\sum_{k>0}c_{k}X_{2}^{k}\omega_{2}\right)\ \ \text{and}\ \ X_{2}^{j}\left(\omega_{2}+\sum_{k>0}d_{k}X_{1}^{k}\omega_{1}\right)\]
satisfy the conditions (3). Consequently, by Proposition 2.6, the terms \(\underline{t}^{\delta_{1}+j\cdot\nu_{1}}\) and \(\underline{t}^{\delta_{2}+j\cdot\nu_{2}}\) are eliminable by the \(\tilde{\mathcal{A}}\)-action.
2306.17800 | Hopf Algebra on Vincular Permutation Patterns | We introduce a new Hopf algebra that operates on pairs of finite interval
partitions and permutations of equal length. This algebra captures vincular
patterns, which involve specifying both the permutation patterns and the
consecutive occurrence of values. Our motivation stems from linear functionals
that encode the number of occurrences of these patterns, and we show that they
behave well with respect to the operations of this Hopf algebra. | Joscha Diehl, Emanuele Verri | 2023-06-30T16:59:44Z | http://arxiv.org/abs/2306.17800v1 | # Hopf Algebra on Vincular Permutation Patterns
###### Abstract
We introduce a new Hopf algebra that operates on pairs of finite interval partitions and permutations of equal length. This algebra captures _vincular patterns_, which involve specifying both the permutation patterns and the consecutive occurrence of values. Our motivation stems from linear functionals that encode the number of occurrences of these patterns, and we show that they behave well with respect to the operations of this Hopf algebra.
###### Contents
* 1 Introduction
* 2 Notation
* 3 Finite interval partitions
* 3.1 Auxiliary operations
* 3.1.1 Gluing partitions
* 3.2 Algebraic operations
* 3.2.1 Products, coproducts
* 3.2.2 Bialgebras on interval partitions
* 3.3 Signature of an interval partition
* 3.3.1 Character property and Chen's identity
* 4 Vincular permutation patterns
* 4.1 Algebraic operations
* 4.1.1 Products, coproducts
* 4.1.2 Bialgebras on vincular permutations
* 4.2 Signature for vincular permutation patterns
* 4.2.1 Character property and Chen's identity
* 5 Summary and outlook
* 5.1 Open question
* A Gluing partitions is associative
* A.1 Assumptions
* A.2 Compression
* A.3 Associativity
## 1 Introduction
_Permutation patterns_ are ubiquitous in discrete mathematics. Much effort is devoted to developing algorithms that count these patterns efficiently, see for example Even-Zohar 2020 and Even-Zohar and Leng 2021.
They have also been successfully used in _time series analysis_ in the popular work from Bandt and Pompe 2002, where the authors introduced the concept of _permutation entropy_. For a discrete time series, consider only the order of the values. As an example, a time series such as
\[0.1\;\;0.5\;\;0.6\;\;0.3\;\;0.9\;\;0.7\]
can be "reduced" to the permutation \(134265\in\mathbf{S}_{6}\). Permutation entropy is based on specific permutation patterns, namely consecutive patterns. If we fix an order for the patterns, say 2, we observe
Let \(\sigma\in\mathbf{S}_{m}\) and \(\tau\in\mathbf{S}_{n}\); then
\[\Delta_{\Uparrow}(\sigma):=\sum_{\begin{subarray}{c}A,B\subset[m]: \\ A\cup B=[m]\end{subarray}}\mathbf{st}\left(\sigma\,_{|A}\right)\otimes\mathbf{st} \left(\sigma\,_{|B}\right),\] \[\sigma\circ\tau:=\sigma_{1}\cdots\sigma_{m}(\tau_{1}+m)\cdots( \tau_{n}+m).\]
where, for example
\[\Delta_{\Uparrow}(21) =\mathsf{e}\otimes 21+2\cdot 1\otimes 1+2\cdot 1\otimes 21+21 \otimes\mathsf{e}+2\cdot 21\otimes 1+21\otimes 21,\] \[21\circ 12 =2134.\]
and it is there shown that \(\mathcal{H}_{\mathsf{per}}:=(\bigoplus_{n\in\mathbb{N}}\mathbb{Q}[\mathbf{S}_ {n}],\Uparrow,\Delta_{\circ})\) is a _filtered_ Hopf algebra. This construction relates to occurrences of permutation patterns. Denote with \(\mathbf{S}:=\bigcup_{n}\mathbf{S}_{n}\), the set of all permutations, and consider the family of linear functionals \((\mathbf{pattern}(\sigma))_{\sigma\in\mathbf{S}}\), defined on basis elements, \(\Lambda\in\mathbf{S}\) as
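The small expansion of \(\Delta_{\Uparrow}(21)\) above can be reproduced by enumerating the pairs \((A,B)\) directly (a sketch; helper names are ours):

```python
from itertools import chain, combinations
from collections import Counter

def st(vals):
    """Standardization: replace the values by their ranks."""
    vals = tuple(vals)
    ranks = {v: i + 1 for i, v in enumerate(sorted(vals))}
    return tuple(ranks[v] for v in vals)

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

sigma = (2, 1)                       # the permutation 21
m = len(sigma)
terms = Counter()
for A in subsets(range(m)):
    for B in subsets(range(m)):
        if set(A) | set(B) == set(range(m)):
            terms[(st(sigma[i] for i in A), st(sigma[i] for i in B))] += 1

# e(x)21 + 2*1(x)1 + 2*1(x)21 + 21(x)e + 2*21(x)1 + 21(x)21
assert terms == Counter({((), (2, 1)): 1, ((1,), (1,)): 2,
                         ((1,), (2, 1)): 2, ((2, 1), ()): 1,
                         ((2, 1), (1,)): 2, ((2, 1), (2, 1)): 1})
```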
\[\left\langle\mathbf{pattern}(\sigma),\Lambda\right\rangle:=\#\{A\subset[| \Lambda|]|\;\mathbf{st}(\Lambda\,_{|A})=\sigma\},\]
i.e. the occurrences of \(\sigma\) as a _pattern_ on \(\Lambda\). We can endow \(\mathbb{Q}^{\mathbf{S}}\), the set of all functions from \(\mathbf{S}\) to \(\mathbb{Q}\), with a \(\mathbb{Q}\)-algebra structure using the pointwise product
\[\forall f,g\in\mathbb{Q}^{\mathbf{S}}:\forall\sigma\in\mathbf{S}:\;\;(f+g)( \sigma):=f(\sigma)+g(\sigma),\,(f\cdot g)(\sigma):=f(\sigma)\cdot g(\sigma).\]
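For concreteness, a naive counter for the functionals \(\mathbf{pattern}(\sigma)\) (a sketch, far from the efficient algorithms mentioned in the introduction; helper names are ours):

```python
from itertools import combinations

def st(vals):
    vals = tuple(vals)
    ranks = {v: i + 1 for i, v in enumerate(sorted(vals))}
    return tuple(ranks[v] for v in vals)

def pattern_count(sigma, Lam):
    """<pattern(sigma), Lam>: number of index sets A with st(Lam|_A) = sigma."""
    return sum(1 for A in combinations(range(len(Lam)), len(sigma))
               if st(Lam[i] for i in A) == tuple(sigma))

# e.g. on Lam = 132:
assert pattern_count((1,), (1, 3, 2)) == 3
assert pattern_count((1, 2), (1, 3, 2)) == 2
assert pattern_count((2, 1), (1, 3, 2)) == 1
```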
Then Vargas showed that
\[\left(\bigoplus_{n}\mathbb{Q}[\mathbf{S}_{n}],\Uparrow\right) \to\left(\mathbb{Q}^{\mathbf{S}},\cdot\right),\] \[\sigma \mapsto\mathbf{pattern}(\sigma)\]
is an (injective) algebra homomorphism, i.e., for all \(\Lambda\in\mathbf{S}\)
\[\left\langle\mathbf{pattern}(\sigma\Uparrow\tau),\Lambda\right\rangle= \left\langle\mathbf{pattern}(\sigma),\Lambda\right\rangle\!\left\langle \mathbf{pattern}(\tau),\Lambda\right\rangle\!.\]
Taking on a slightly different approach, we fix \(\Lambda\in\mathbf{S}\), vary the pattern \(\sigma\) and define the family of functionals
\[\left\langle\mathsf{PC}(\Lambda),\sigma\right\rangle:=\#\{A\subset[|\Lambda| ]|\;\mathbf{st}(\Lambda\,_{|A})=\sigma\},\]
and call \(\mathsf{PC}(\Lambda)\) the _signature_ of the permutation \(\Lambda\). The term "signature" is motivated by the following identity
\[\forall\sigma,\tau\in\mathbf{S}:\;\left\langle\mathsf{PC}(\Lambda),\sigma \right\rangle\!\left\langle\mathsf{PC}(\Lambda),\tau\right\rangle=\left\langle \mathsf{PC}(\Lambda),\sigma\Uparrow\tau\right\rangle \tag{1}\]
which is reminiscent of the shuffle identity for the signature of a path (Hambly and Lyons 2010; Ree 1958; Chen 1957). For example
\[\begin{split}\left\langle\mathsf{PC}(132),1\right\rangle\cdot\left\langle \mathsf{PC}(132),1\right\rangle&=|\{\{1\},\{2\},\{3\}\}\times\{\{1\},\{2\},\{3 \}\}|\\ &=|\{(\{1\},\{1\}),(\{2\},\{2\}),(\{3\},\{3\})\}|\\ &\quad+|\{(\{1\},\{2\}),(\{2\},\{1\}),(\{1\},\{3\}),(\{3\},\{1\})\}|\\ &\quad+|\{(\{2\},\{3\}),(\{3\},\{2\})\}|\\ &=\left\langle\mathsf{PC}(132),1\right\rangle+2\left\langle\mathsf{PC}(132),12 \right\rangle+2\left\langle\mathsf{PC}(132),21\right\rangle\\ &=\left\langle\mathsf{PC}(132),1+2\,12+2\,21\right\rangle\\ &=\left\langle\mathsf{PC}(132),1\Uparrow 1\right\rangle.\end{split}\]
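The bucketing of pairs \((A,B)\) by the pattern of \(\Lambda_{|A\cup B}\) that underlies this computation can be verified mechanically (a sketch; helper names are ours):

```python
from itertools import combinations
from collections import Counter

def st(vals):
    vals = tuple(vals)
    ranks = {v: i + 1 for i, v in enumerate(sorted(vals))}
    return tuple(ranks[v] for v in vals)

Lam = (1, 3, 2)
def occurrences(s):
    return [A for r in range(len(Lam) + 1)
            for A in combinations(range(len(Lam)), r)
            if st(Lam[i] for i in A) == s]

# all 3 * 3 pairs of occurrences of the pattern 1, bucketed by st(Lam|_{A u B})
buckets = Counter(st(Lam[i] for i in sorted(set(A) | set(B)))
                  for A in occurrences((1,)) for B in occurrences((1,)))
# 9 = 1*<PC(132),1> + 2*<PC(132),12> + 2*<PC(132),21> = 3 + 2*2 + 2*1
assert buckets == Counter({(1,): 3, (1, 2): 4, (2, 1): 2})
assert sum(buckets.values()) == 9
```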
This terminology is also motivated by an identity similar to _Chen's identity_ (Lyons,
Qian, et al. 2002, p.10)
\[\forall\Lambda,\Upsilon\in\mathbf{S}:\;\mathsf{PC}(\Lambda\circ\Upsilon)= \mathsf{PC}(\Lambda)\circ\mathsf{PC}(\Upsilon). \tag{2}\]
Inspired by the work of Bandt and Pompe 2002 and Vargas 2014, the aim of this work is to generalize the \(\Uparrow\)-Hopf algebra to encode more general patterns. Given a permutation \(\Lambda\), we search for permutation patterns \(\sigma\in\mathbf{S}_{n_{k}}\)
\[\fbox{$\sigma_{1}\,\cdots\,\sigma_{n_{1}}$}\;\;\fbox{$\sigma_{n_{1}+1}\, \cdots\,\sigma_{n_{2}}$}\;\;\cdots\;\;\fbox{$\sigma_{n_{k-1}+1}\,\cdots\, \sigma_{n_{k}}$}\]
where, for all \(j=1,\ldots,k\) (with \(n_{0}:=0\)), the entries \(\sigma_{n_{j-1}+1},\ldots,\sigma_{n_{j}}\) need to occur consecutively in time. These patterns are known in the literature as _vincular permutation patterns_. They seem to have been first introduced in Babson and Steingrimsson 2000; see also Branden and Claesson 2011 using the more recent term "vincular pattern".
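A brute-force counter for this notion (a sketch; the encoding of a vincular pattern by its underlying permutation together with the list of block lengths is ours):

```python
def st(vals):
    vals = tuple(vals)
    ranks = {v: i + 1 for i, v in enumerate(sorted(vals))}
    return tuple(ranks[v] for v in vals)

def vincular_count(sigma, blocks, Lam):
    """Occurrences of the vincular pattern given by the permutation `sigma`
    and the block lengths `blocks` = (n_1, n_2-n_1, ..., n_k-n_{k-1});
    inside each block the chosen positions must be consecutive in Lam."""
    assert sum(blocks) == len(sigma)
    N, count = len(Lam), 0

    def rec(b, start, chosen):
        nonlocal count
        if b == len(blocks):
            if st(Lam[i] for i in chosen) == tuple(sigma):
                count += 1
            return
        L = blocks[b]
        for p in range(start, N - L + 1):
            rec(b + 1, p + L, chosen + list(range(p, p + L)))

    rec(0, 0, [])
    return count

# blocks of size 1 recover classical patterns; a single full block
# recovers the consecutive patterns used for permutation entropy:
assert vincular_count((1, 2), (1, 1), (1, 3, 2)) == 2
assert vincular_count((2, 1), (2,), (1, 3, 2)) == 1
```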
As an example, let \(2\,1\,3\) be the pattern we search for on \(134265\).
is isomorphic to a Hopf algebra on words, where the letters correspond to blocks of a partition. Finally, we "combine" our Hopf algebra on interval partitions with the superinfiltration \(\Uparrow\)-Hopf algebra and define a new Hopf algebra on vincular permutation patterns,
\[\mathcal{H}_{\text{\bf vinc}}:=(\bigoplus_{n\in\mathbb{N}}\mathbb{Q}[\mathbf{ StIP}_{n}\times\mathbf{S}_{n}],\triangledown,\Delta_{\bullet}).\]
The \(\triangledown\) product originates from our \(\sharp\) product on partitions and the \(\Uparrow\) product on permutations, see Vargas 2014. The coproduct is also a "combination" of the other two. The Hopf algebra \(\mathcal{H}_{\text{\bf vinc}}\) generalizes \(\mathcal{H}_{\text{\bf per}}\) and \(\mathcal{H}_{\text{\bf int}}\) as well. A corresponding family of signatures is also introduced.
## 2 Notation
* We denote the natural numbers, including \(0\), with \(\mathbb{N}\) and the strictly positive natural numbers with \(\mathbb{N}_{\geq 1}\).
* Let \(A,B\) be sets. If \(A\cap B=\emptyset\), then we write \(A\uplus B:=A\cup B\). This notation comes in handy when we assume that the sets are disjoint. With this notation, \[|\biguplus_{i\in I}A_{i}|=\sum_{i\in I}|A_{i}|.\] if only finitely many sets \(A_{i}\) are non-empty and \(\forall i\in I:|A_{i}|<\infty\).
* For \(n\in\mathbb{N}_{\geq 1}\) we denote \[[n]:=\{1,\ldots,n\}.\] We set \[[0]:=\emptyset.\]
* We say that \(\mathcal{I}\) is a _(finite) partition_\(\mathcal{I}=\{\mathcal{I}_{1},...,\mathcal{I}_{I}\}\) of a set \(A\) if the (finite) family \(\mathcal{I}_{i},i\in I\), consists of non-empty, pairwise disjoint sets, and their union is \(A\), \(\bigcup_{i}\mathcal{I}_{i}=A\). All partitions considered in this work are _finite interval partitions_ of _finite subsets_ of the positive integers. Let \(A\subset\mathbb{N}_{\geq 1}\), with \(|A|<\infty\). We say that \(\mathcal{I}\) is an **interval partition** of \(A\) if \(\mathcal{I}\) is a partition of \(A\), whose blocks \(\mathcal{I}_{j}\) are intervals on \(\mathbb{N}_{\geq 1}\). As an example, if \(A=\{2,4,5\}\), then \(\{\{4,5\},\{2\}\}\) is an interval partition of \(A\), while \(\{\{2,4\},\{5\}\}\) is not. For a partition \(\mathcal{I}\) of a set \(A\) we also write \[\bigcup\mathcal{I}:=\bigcup_{i}\mathcal{I}_{i}=A,\] for its ground-set.
* The set \[\mathbf{IP}:=\{X|X\text{ is an interval partition of some finite subset of }\mathbb{N}_{\geq 1}\}\] contains "labeled" finite interval partitions.
* We also fix \(A\subset\mathbb{N}_{\geq 1}\) with \(|A|<\infty\) and write \[\mathbf{IP}(A):=\{X|X\text{ is an interval partition of }A\}.\]
* The subset \[\mathbf{StIP}:=\biguplus_{n\in\mathbb{N}}\mathbf{StIP}_{n}\subset\mathbf{IP}\] where \(\mathbf{StIP}_{n}\,:=\,\{A|A\,\text{ is an interval partition of }\,[n]\}\), contains "unlabeled" or _standardized_ interval partitions. Notice that \(\mathbf{IP}(\emptyset)=\{\emptyset\}\), i.e., the unique partition for the empty set \(\emptyset\) is the empty set itself, which means \[\mathbf{StIP}_{0}:=\{\emptyset\}.\] Also, we have: \[\mathbf{StIP}=\biguplus_{\ell\in\mathbb{N}}\mathbf{NB}_{\ell}\] where \[\mathbf{NB}_{\ell}:=\{\mathfrak{s}\in\mathbf{StIP}|\mathfrak{s}=\{\mathfrak{ s}_{1},...,\mathfrak{s}_{\ell}\}\}\] i.e., all standardized partitions having \(\ell\) blocks.
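For example, \(\mathbf{StIP}_{2}=\{\{\{1,2\}\},\{\{1\},\{2\}\}\}\); in general, \(|\mathbf{StIP}_{n}|=2^{n-1}\) for \(n\geq 1\), since standardized interval partitions of \([n]\) correspond to compositions of \(n\).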
* We use gothic letters to denote elements of \(\mathbf{StIP}\), for example \(\mathfrak{s}\in\mathbf{StIP}\). We use capital calligraphical letters when we consider any element of \(\mathbf{IP}\), for example \(\mathcal{I}\in\mathbf{IP}\).
* We define a partial order relation on \(\mathbf{IP}\), as follows \[\mathcal{I}\leq\mathcal{J}\iff\bigcup\mathcal{I}=\bigcup\mathcal{J}\ \wedge\ \forall i\in I:\exists j\in J:\mathcal{I}_{i}\subset\mathcal{J}_{j},\] i.e., \(\mathcal{I}\) refines \(\mathcal{J}\). If \(\mathcal{I}\) and \(\mathcal{J}\) are not comparable, we write \(\mathcal{I}\perp\mathcal{J}\). As an example: \(\{\{2\},\{3\},\{1\}\}\leq\{\{2,3\},\{1\}\}\), while \(\{\{2\},\{3\},\{1\}\}\perp\{\{5\},\{3\},\{4\}\}\) and \(\{\{1,2\},\{3\}\}\perp\{\{1\},\{2,3\}\}\).
* We denote with \(\mathbf{S}_{n}\) the symmetric group of order \(n\), \[\mathbf{S}_{n}:=\{f|\ f:[n]\rightarrow[n]\text{ is a bijection}\}.\] Let \(\mathbf{S}:=\bigcup_{n}\mathbf{S}_{n}\) denote the set of all permutations. Notice that \(\mathbf{S}_{0}:=\{f|f:\,\emptyset\rightarrow\emptyset\}\) contains the unique empty permutation. We denote elements of \(\mathbf{S}\), with Greek letters, for example, \(\sigma\in\mathbf{S}\). We use one-line notation, for example, \[\mathbf{S}_{2}=\{12,21\}.\]
* Let \(A\) denote a set which we call _alphabet_. We call its elements _letters_. Denote with \(A^{*}\) the set of words on the alphabet, i.e. finite sequences of elements of \(A\). If \(A\) is a totally ordered set, define the _standardization_ of a word \(w\in A^{*}\) as the relative order of the letters which appear in \(w\). If a letter appears more than once, we order its occurrences from left to right. For example, if \(A:=\mathbb{N}\) with the usual order \[\mathbf{st}\left(234121\right)=356142.\] Permutations written in one-line notation can be seen as (particular) words on positive integers. When we restrict \(\sigma\in\mathbf{S}_{n}\) to a subset \(A\subset[n]\), with \(|A|=k\), we can standardize the subword to get an element of \(\mathbf{S}_{k}\). As an example \[\mathbf{st}\left(645123\left.\right|_{\left\{2,4,5\right\}}\right)=\mathbf{st}\left(412\right)=312.\]
* If \(\sigma\in\mathbf{S}_{n}\), then \(|\sigma|:=n\). Analogously, if \(\mathfrak{s}\in\mathbf{StIP}_{n}\), then \(|\mathfrak{s}|:=n\).
## 3 Finite interval partitions
### Auxiliary operations
Before introducing the algebraic operations, we need some auxiliary operations. Let \(A\) be a finite subset of \(\mathbb{N}_{\geq 1}\). The following operation yields the coarsest interval partition of \(A\).
**Definition 3.1** (Cliques).: \[\mathbf{cliques}:\{A|A\subset\mathbb{N}_{\geq 1},|A|<\infty\} \rightarrow\mathbf{IP}\] \[A \mapsto\{[x]_{\sim(\mathsf{Succ}_{A})}|x\in A\}\]
where \((x,y)\in\mathsf{Succ}_{A}\subset A\times A\iff y=x+1\) and \(\langle\mathsf{Succ}_{A}\rangle\) is the equivalence relation generated by \(\mathsf{Succ}_{A}\).
**Example 3.2**.: \[\mathbf{cliques}(\{2,4,5,6\})=\{\{2\},\{4,5,6\}\}\]
\[\mathbf{cliques}(\emptyset)=\emptyset.\]
**Remark 3.3**.: _Let \(\mathcal{I}\) be any partition of \(A\), then_
\[\mathcal{I}\text{ is an interval partition of }A\ \Leftrightarrow\mathcal{I}\leq \mathbf{cliques}(A).\]
Now fix an interval partition \(\mathcal{I}\in\mathbf{IP}\) and a finite subset of positive integers, \(A\subset\mathbb{N}_{\geq 1}\). We define an interval partition of \(A\cap\bigcup\mathcal{I}\) as follows.
**Definition 3.4** (Cliques of \(A\) through \(\mathcal{I}\)).: Let \(\mathcal{I}\in\mathbf{IP}\) and \(A\subset\mathbb{N}_{\geq 1}\) with \(|A|<\infty\).
\[\mathcal{I}(A):=\{\mathcal{I}_{j}\cap c|\ c\in\mathbf{cliques}(A),j\in J\} \setminus\{\emptyset\}.\]
**Remark 3.5**.: _Since the intersection of two intervals is again an interval, \(\mathcal{I}(A)\) is a set of intervals. These intervals are pairwise disjoint since \(\mathcal{I}\) and \(\mathbf{cliques}(A)\) are partitions and this implies that \(\mathcal{I}_{i}\cap c\) and \(\mathcal{I}_{j}\cap c^{\prime}\) are also pairwise disjoint. We clearly have \(\bigcup\mathcal{I}(A)=A\cap\bigcup\mathcal{I}\). Therefore, by Remark 3.3, one has_
\[\mathcal{I}(A)\leq\mathbf{cliques}(A\cap\bigcup\mathcal{I}).\]
**Example 3.6**.: _For \(\mathcal{I}:=\{\{2,3,4\}\}\) and \(A:=\{2,4,5\}\) we have_
\[\mathcal{I}(A)=\{\{2\},\{4\}\}.\]
_If \(\mathcal{I}:=\{\{2,3\},\{4,5\},\{6,7\},\{8\}\}\) and \(A:=\{2,4,5,6\}\) we have_
\[\mathcal{I}(A)=\{\{2\},\{4,5\},\{6\}\}<\{\{2\},\{4,5,6\}\}=\mathbf{cliques}(A \cap\bigcup\mathcal{I}).\]
_In case \(A=\emptyset\)_
\[\mathcal{I}(A)=\emptyset.\]
Finally, we can convert elements of \(\mathbf{IP}\), into elements of \(\mathbf{StIP}\) as follows.
**Definition 3.7** (Standardization of a partition).: \[\mathbf{std}:\mathbf{IP}\rightarrow\mathbf{StIP}\]
Let \(i_{1},\ldots,i_{\ell_{1}},\ldots,i_{\ell_{2}},\ldots,i_{\ell_{k}}\in\mathbb{ N}_{\geq 1}\), where \(k\in\mathbb{N}\), such that
\[i_{1}\prec\cdots\prec i_{\ell_{1}}<i_{\ell_{1}+1}\prec\cdots\prec i_{\ell_{2} }<\cdots<i_{\ell_{k-1}+1}\prec\cdots\prec i_{\ell_{k}}\]
where \(i\prec j:\iff j-i=1\). In other words, we consider an interval partition with \(k\) blocks. Then define
\[\mathbf{std}\left(\left\{\{i_{1},...,i_{\ell_{1}}\},\{i_{\ell_{1 }+1},...,i_{\ell_{2}}\},...,\{i_{\ell_{k-1}+1},...,i_{\ell_{k}}\}\right\}\right)\] \[:=\{\{1,...,\ell_{1}\},\{\ell_{1}+1,...,\ell_{2}\},...,\{\ell_{k -1}+1,...,\ell_{k}\}\}\]
with
\[1\prec\cdots\prec\ell_{1}\prec\ell_{1}+1\prec\cdots\prec\ell_{2}\prec\cdots \prec\ell_{k-1}+1\prec\cdots\prec\ell_{k}.\]
**Example 3.8**.: \[\mathbf{std}(\{\{2\},\{4,5\},\{6\}\})=\{\{1\},\{2,3\},\{4\}\}\]
**Remark 3.9**.: _Let \(\mathcal{I}\in\mathbf{IP}\). We can turn \(A\subset\bigcup\mathcal{I}\), into a standardized interval partition, namely_
\[\mathbf{std}(\mathcal{I}(A)).\]
**Example 3.10**.: _If \(\mathcal{I}:=\{\{2,3\},\{4,5\},\{6,7\},\{8\}\}\) and \(A:=\{2,4,5,6\}\), we have_
\[\mathbf{std}(\mathcal{I}(A))=\{\{1\},\{2,3\},\{4\}\}.\]
#### 3.1.1 Gluing partitions
We now introduce a binary operation on interval partitions.
**Definition 3.11** (Gluing partitions).: Let \(\mathcal{I},\mathcal{J}\in\mathbf{IP}\). First define the relation
\[R_{\mathcal{I},\mathcal{J}}\subset(\mathcal{I}\cup\mathcal{J}) \times(\mathcal{I}\cup\mathcal{J})\] \[(E,F)\in R_{\mathcal{I},\mathcal{J}}\iff E\cap F\neq\emptyset.\]
Then define the binary operation as
\[\mathcal{I}\cdot\textbf{glue}\ \mathcal{J}:=\left\{\bigcup_{X\in[T]_{\sim(R_{ \mathcal{I},\mathcal{J}})}}X\Bigg{|}\ T\in\mathcal{I}\cup\mathcal{J}\right\},\]
where \(\sim\langle R_{\mathcal{I},\mathcal{J}}\rangle\) is the smallest equivalence relation which contains \(R_{\mathcal{I},\mathcal{J}}\).
**Example 3.12**.: _Let \(\mathcal{I}=\{\{1\},\{2\},\{3,4\},\{5,6,7\}\}\) and \(\mathcal{J}=\{\{2,3\},\{4,5\}\}\). Then_
\[\mathcal{I}\cdot\textbf{glue}\ \mathcal{J}=\{\{1\},\{2,3,4,5,6,7\}\}.\]
_In this case the relation \(R_{\mathcal{I},\mathcal{J}}\) links \(\{2\}\sim\{2,3\}\sim\{3,4\}\sim\{4,5\}\sim\{5,6,7\}\), while \(\{1\}\) intersects no other block and stays in a class of its own._
We now state several lemmas that clarify the behavior of the gluing operation.
**Lemma 3.13**.: _Let \(\mathcal{I},\mathcal{J}\in\mathbf{IP}\). Then_
\[\mathcal{I}\cdot\textbf{glue}\ \mathcal{J}\in\mathbf{IP}\left(\bigcup \mathcal{I}\cup\bigcup\mathcal{J}\right).\]
_In particular \(\forall\mathcal{I},\mathcal{J}\in\mathbf{IP},\ \ \mathcal{I}\cdot\textbf{glue}\ \mathcal{J}\in\mathbf{IP}.\)_
Proof.: Let \(T\in\mathcal{I}\cup\mathcal{J}\) and consider its equivalence class
\[[T]_{\sim\langle R_{\mathcal{I},\mathcal{J}}\rangle}.\]
From the definition of \(\langle R_{\mathcal{I},\mathcal{J}}\rangle\) it follows that we can write
\[[T]_{\sim\langle R_{\mathcal{I},\mathcal{J}}\rangle}=\{X_{1},...,X_{n}\}\ \text{where}\ X_{i}\in\mathcal{I}\lor X_{i}\in\mathcal{J}\]
for some positive integer \(n\), where \(\forall i\in\{1,...,n-1\}:X_{i}\cap X_{i+1}\neq\emptyset\). Obviously, the union of two intersecting intervals is again an interval. It then follows by induction that
\[\bigcup_{i=1}^{n}X_{i}\]
is an interval. It remains to show that
\[\mathcal{I}\cdot_{\mbox{\bf glue}}\mathcal{J}=\left\{\bigcup_{X\in[T]_{ \sim(R_{\mathcal{I},\mathcal{J}})}}X\Bigg{|}\,T\in\mathcal{I}\cup\mathcal{J}\right\}\]
is a partition of \(\bigcup\mathcal{I}\cup\bigcup\mathcal{J}\). Clearly
\[\bigcup_{T\in\mathcal{I}\cup\mathcal{J}}\bigcup_{X\in[T]_{\sim(R_{ \mathcal{I},\mathcal{J}})}}X=\bigcup\mathcal{I}\cup\bigcup\mathcal{J}.\]
Since
\[\left\{[T]_{\sim(R_{\mathcal{I},\mathcal{J}})}\Bigg{|}\,T\in\mathcal{I}\cup \mathcal{J}\right\}\]
is a partition of the blocks of \(\mathcal{I}\cup\mathcal{J}\), we have that
\[\forall T\in\mathcal{I}\cup\mathcal{J}:[T]_{\sim(R_{\mathcal{I}, \mathcal{J}})}\neq\emptyset.\]
and therefore \(\bigcup_{X\in[T]_{\sim(R_{\mathcal{I},\mathcal{J}})}}X\neq\emptyset\). Now let
\[\bigcup_{X\in[T]_{\sim(R_{\mathcal{I},\mathcal{J}})}}X\cap\bigcup_{X\in[S]_{ \sim(R_{\mathcal{I},\mathcal{J}})}}X\neq\emptyset.\]
This means \(\exists X\in[T]_{\sim(R_{\mathcal{I},\mathcal{J}})}\) and \(\exists Y\in[S]_{\sim(R_{\mathcal{I},\mathcal{J}})}\) such that \(X\cap Y\neq\emptyset\). Therefore \([T]_{\sim(R_{\mathcal{I},\mathcal{J}})}=[S]_{\sim(R_{\mathcal{I},\mathcal{J}})}\), which means
\[\bigcup_{X\in[T]_{\sim(R_{\mathcal{I},\mathcal{J}})}}X=\bigcup_{X\in[S]_{\sim( R_{\mathcal{I},\mathcal{J}})}}X.\]
This operation is associative.
**Lemma 3.14**.: _Let \(\mathcal{I},\mathcal{I}^{\prime},\mathcal{I}^{\prime\prime}\in\mathbf{IP}\). Then_
\[(\mathcal{I}\cdot_{\mbox{\bf glue}}\mathcal{I}^{\prime})\cdot_{\mbox{\bf glue}} \mathcal{I}^{\prime\prime}=\mathcal{I}\cdot_{\mbox{\bf glue}}(\mathcal{I}^{ \prime}\cdot_{\mbox{\bf glue}}\mathcal{I}^{\prime\prime}).\]
For a proof of this result, which can be seen as an instance of a more general phenomenon, see Lemma A.10 in Appendix A.
**Lemma 3.15**.: _Let \(A,A^{\prime}\subset\mathbb{N}_{\geq 1}\) be finite subsets. Let \(\mathcal{I},\mathcal{J}\in\mathbf{IP}(A)\) and \(\mathcal{I}^{\prime},\mathcal{J}^{\prime}\in\mathbf{IP}(A^{\prime})\). If_
\[\mathcal{I}\leq\mathcal{J}\text{ and }\mathcal{I}^{\prime}\leq\mathcal{J}^{ \prime},\]
_then_
\[\mathcal{I}\cdot_{\textbf{glue}}\mathcal{I}^{\prime}\leq\mathcal{J}\cdot_{ \textbf{glue}}\mathcal{J}^{\prime}.\]
Proof.: Let \(z\in\mathcal{I}\cdot_{\textbf{glue}}\mathcal{I}^{\prime}\). Then
\[z=\bigcup_{i=1}^{m}X_{i},\]
where \(\forall i\in\{1,...,m-1\}:X_{i}\cap X_{i+1}\neq\emptyset\) and \(X_{i}\in\mathcal{I}\) or \(X_{i}\in\mathcal{I}^{\prime}\). From the hypothesis, we know that
\[\forall i\in\{1,...,m\}:\exists Y_{i}\in\mathcal{J}\cup\mathcal{J}^{\prime}: X_{i}\subset Y_{i}.\]
Now since
\[\forall i\in\{1,...,m-1\}:Y_{i}\cap Y_{i+1}\supset X_{i}\cap X_{i+1}\neq\emptyset,\]
We have that \(\exists w\in\mathcal{J}\cdot_{\textbf{glue}}\mathcal{J}^{\prime}\) such that \(z\subset\bigcup_{i=1}^{m}Y_{i}\subset w\) and we are done.
**Corollary 3.16**.: _Let \(\mathcal{I}\in\mathbf{IP}\), \(A,A^{\prime}\subset\mathbb{N}_{\geq 1},|A|,|A^{\prime}|<\infty\). Then_
\[\mathcal{I}(A)\cdot_{\textbf{glue}}\mathcal{I}(A^{\prime})\leq\mathcal{I}(A \cup A^{\prime}).\]
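_The inequality can be strict: for \(\mathcal{I}=\{\{1,2\}\}\), \(A=\{1\}\) and \(A^{\prime}=\{2\}\) we get \(\mathcal{I}(A)\cdot_{\mathbf{glue}}\mathcal{I}(A^{\prime})=\{\{1\}\}\cdot_{\mathbf{glue}}\{\{2\}\}=\{\{1\},\{2\}\}<\{\{1,2\}\}=\mathcal{I}(A\cup A^{\prime})\)._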
Proof.: Let \(\mathcal{W}^{\prime}=\mathcal{Z}^{\prime}=\mathcal{I}(A\cup A^{\prime})\), \(\mathcal{W}=\mathcal{I}(A)\) and \(\mathcal{Z}=\mathcal{I}(A^{\prime})\). We just need to show that \(\mathcal{W}\leq\mathcal{W}^{\prime}\) and \(\mathcal{Z}\leq\mathcal{Z}^{\prime}\) hold. We can then apply Lemma 3.15: it follows immediately from the definition of \(\cdot_{\textbf{glue}}\) that \(\mathcal{W}^{\prime}\cdot_{\textbf{glue}}\mathcal{Z}^{\prime}=\mathcal{I}(A\cup A^{\prime})\cdot_{\textbf{glue}}\mathcal{I}(A\cup A^{\prime})=\mathcal{I}(A\cup A^{\prime})\). First, for any \(c\in\textbf{cliques}(A)\) and any \(x\in c\)
\[c=[x]_{\sim\langle\textbf{Succ}_{A}\rangle}\subset[x]_{\sim\langle\textbf{Succ }_{A\cup A^{\prime}}\rangle}\in\textbf{cliques}(A\cup A^{\prime}).\]
Let \(X\in\mathcal{I}(A)\). Claim: there is a \(Y\in\mathcal{I}(A\cup A^{\prime})\) with
\[X\subset Y.\]
Indeed, we can write \(X=c\cap\mathcal{I}_{j}\) for some \(c\in\textbf{cliques}(A),\mathcal{I}_{j}\in\mathcal{I}\). Then, for any \(x\in c\),
\[X=(c\cap\mathcal{I}_{j})\subset([x]_{\sim\langle\textbf{Succ}_{A\cup A^{ \prime}}\rangle}\cap\mathcal{I}_{j})=:Y\in\mathcal{I}(A\cup A^{\prime}).\]
Analogously, for every \(X\in\mathcal{I}(A^{\prime})\) there is a \(Y\in\mathcal{I}(A\cup A^{\prime})\) with \(X\subset Y\)
**Lemma 3.17**.: _Let \(\mathcal{I},\mathcal{I}^{\prime}\in\mathbf{IP}\). Consider_
\[\mathcal{J}=\mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}.\]
_Then_
\[\mathcal{I}\leq\mathcal{J}(\bigcup\mathcal{I})\text{ and }\mathcal{I}^{ \prime}\leq\mathcal{J}(\bigcup\mathcal{I}^{\prime}).\]
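_For instance, for \(\mathcal{I}=\{\{1\},\{2\}\}\) and \(\mathcal{I}^{\prime}=\{\{2,3\}\}\) we get \(\mathcal{J}=\{\{1\},\{2,3\}\}\), with \(\mathcal{J}(\{1,2\})=\{\{1\},\{2\}\}=\mathcal{I}\) and \(\mathcal{J}(\{2,3\})=\{\{2,3\}\}=\mathcal{I}^{\prime}\). The inequality can be strict: for \(\mathcal{I}=\{\{1\},\{2\}\}\) and \(\mathcal{I}^{\prime}=\{\{1,2\}\}\) we get \(\mathcal{J}=\{\{1,2\}\}\) and \(\mathcal{I}<\mathcal{J}(\{1,2\})=\{\{1,2\}\}\)._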
Proof.: Let \(z\in\mathcal{J}\), then
\[z=\bigcup_{i=1}^{n}X_{i}\]
where \(\forall i\in\{1,...,n-1\}\), \(X_{i}\cap X_{i+1}\neq\emptyset\) and \(X_{i}\in\mathcal{I}\) or \(X_{i}\in\mathcal{I}^{\prime}\). Now recall that
\[\mathcal{J}(\bigcup\mathcal{I})=\{c\cap\mathcal{J}_{j}|c\in\mathbf{cliques}(\bigcup\mathcal{I}),j\in J\}\setminus\{\emptyset\},\qquad\mathcal{J}(\bigcup\mathcal{I}^{\prime})=\{c\cap\mathcal{J}_{j}|c\in\mathbf{cliques}(\bigcup\mathcal{I}^{\prime}),j\in J\}\setminus\{\emptyset\}.\]
Let \(w\in\mathcal{I}\), then obviously there exists a \(z\in\mathcal{J}\) such that \(w\subset z\). Also by definition of \(\mathbf{cliques}(\bigcup\mathcal{I})\), we know that \(\mathcal{I}\leq\mathbf{cliques}(\bigcup\mathcal{I})\) and therefore \(\exists c\in\mathbf{cliques}(\bigcup\mathcal{I})\) such that \(w\subset c\). Therefore \(w\subset z\cap c\) which means that \(\mathcal{I}\leq\mathcal{J}(\bigcup\mathcal{I})\). Analogously one shows that \(\mathcal{I}^{\prime}\leq\mathcal{J}(\bigcup\mathcal{I}^{\prime})\).
### Algebraic operations
We now define algebraic operations on \(\mathbf{StIP}\). We first define the free \(\mathbb{Q}\)-vector space. It can be graded according to cardinalities or the number of blocks.
**Definition 3.18** (\(\mathbb{Q}\)-vector space over interval partitions).: \[\mathcal{H}_{\mathbf{int}} :=\bigoplus_{n\in\mathbb{N}}\mathbb{Q}[\mathbf{StIP}_{n}]\] (3) \[=\bigoplus_{\ell\in\mathbb{N}}\mathbb{Q}[\mathbf{NB}_{\ell}].\] (4)
#### 3.2.1 Products, coproducts
We now define a (non-commutative) product on interval partitions.
**Definition 3.19**.: Let \(\mathfrak{s}:=\{\mathfrak{s}_{1},...,\mathfrak{s}_{m}\},\mathfrak{t}:=\{ \mathfrak{t}_{1},...,\mathfrak{t}_{n}\}\in\mathbf{StIP}\). Then define
\[\bullet:\mathcal{H}_{\mathbf{int}}\otimes\mathcal{H}_{\mathbf{int}}\rightarrow\mathcal{H}_{\mathbf{int}},\qquad\mathfrak{s}\bullet\mathfrak{t}:=\{\mathfrak{s}_{1},...,\mathfrak{s}_{m},\mathfrak{t}^{\prime}_{1},...,\mathfrak{t}^{\prime}_{n}\}\]
where for \(i=1,...,n\), \(\mathfrak{t}^{\prime}_{i}:=|\bigcup\mathfrak{s}|+\mathfrak{t}_{i}\), and extend linearly.
**Example 3.20**.: \[\{\{1,2\}\}\bullet\{\{1\},\{2\}\}=\{\{1,2\},\{3\},\{4\}\}\]
_To denote elements of \(\mathbf{StIP}\), we will also simply write_
\[\square\square\ \bullet\ \square\ \square:=\{\{1,2\}\}\bullet\{\{1\},\{2\}\}=\{\{1,2\},\{3\},\{4\}\}=\square\square\ \square\ \square\]
_and \(\mathsf{e}:=\emptyset\)._
**Proposition 3.21**.: _The \(\bullet\) product is associative._
Proof.: The proof is immediate and is left to the reader.
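(Indeed, both \((\mathfrak{s}\bullet\mathfrak{t})\bullet\mathfrak{u}\) and \(\mathfrak{s}\bullet(\mathfrak{t}\bullet\mathfrak{u})\) shift the blocks of \(\mathfrak{t}\) by \(|\bigcup\mathfrak{s}|\) and those of \(\mathfrak{u}\) by \(|\bigcup\mathfrak{s}|+|\bigcup\mathfrak{t}|\).)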
We now define a coproduct which is based on the gluing operation.
**Definition 3.22**.: Let \(\mathsf{s}\in\mathbf{StIP}\).
We define a (cocommutative) coproduct as follows
\[\Delta_{\triangleq}(\mathsf{s}):=\sum_{A\cup A^{\prime}=\bigcup\mathsf{s}}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(A),\ \mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathsf{s}\end{subarray}}\mathbf{std}(\mathcal{I})\otimes\mathbf{std}(\mathcal{I}^{\prime})\]
and extend linearly. Note that here the sets \(A\) and \(A^{\prime}\) may overlap. Recall that from Lemma 3.17, it follows that \(\mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathsf{s}\) implies \(\mathcal{I}\leq\mathsf{s}(A)\) and \(\mathcal{I}^{\prime}\leq\mathsf{s}(A^{\prime})\).
**Example 3.23**.: \[\begin{split}\Delta_{\triangleq}(\{\{1,2\}\})&=\mathsf{e}\otimes\{\{1,2\}\}+\{\{1,2\}\}\otimes\mathsf{e}+2\,\{\{1\}\}\otimes\{\{1,2\}\}+2\,\{\{1,2\}\}\otimes\{\{1\}\}\\&\quad+\{\{1,2\}\}\otimes\{\{1,2\}\}+\{\{1,2\}\}\otimes\{\{1\},\{2\}\}+\{\{1\},\{2\}\}\otimes\{\{1,2\}\}\end{split}\]
\[\begin{split}\Delta_{\triangleq}(\{\{1\},\{2\}\})&=\mathsf{e}\otimes\{\{1\},\{2\}\}+\{\{1\},\{2\}\}\otimes\mathsf{e}+2\,\{\{1\}\}\otimes\{\{1\}\}+2\,\{\{1\}\}\otimes\{\{1\},\{2\}\}\\&\quad+2\,\{\{1\},\{2\}\}\otimes\{\{1\}\}+\{\{1\},\{2\}\}\otimes\{\{1\},\{2\}\}\end{split}\]
**Remark 3.24** (Shuffle coproduct).: _Notice that \(\Delta_{\triangleq}\) has a "co-shuffle part". Indeed_
\[\Delta_{\sqcup\!\!\sqcup}(\mathsf{s}):=\sum_{A\uplus A^{\prime}=\bigcup\mathsf{s}}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(A),\ \mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathsf{s}\end{subarray}}\mathbf{std}(\mathcal{I})\otimes\mathbf{std}(\mathcal{I}^{\prime})\]
_is a shuffle coproduct. As an example_
\[\Delta_{\sqcup\!\!\sqcup}(\ \square\ \square\ )=\mathsf{e}\otimes\square\,\square\,+\square\, \square\otimes\mathsf{e}+2\ \square\otimes\square\,.\]
_More precisely, \((\mathcal{H}_{\textsf{int}},\Delta_{\shuffle})\) is isomorphic to a coalgebra on words, where letters are positive integers, see Remark 3.33. For example_
\[\Delta_{\sqcup\!\!\sqcup,\,\textsf{words}}(1\blacklozenge 1)=\mathsf{e}\otimes 1\blacklozenge 1+1\blacklozenge 1\otimes\mathsf{e}+2\;1\otimes 1.\]

**Proposition 3.25** (Coassociativity).: _The coproduct \(\Delta_{\triangleq}\) is coassociative._

The proof rests on the following invariance property: the sum appearing in Definition 3.22 depends on a labeled interval partition only through its standardization.

**Lemma 3.26**.: _Let \(\mathcal{W},\mathcal{Z}\in\mathbf{IP}\) with \(\mathbf{std}(\mathcal{W})=\mathbf{std}(\mathcal{Z})\). Then_
\[\sum_{A\cup A^{\prime}=\bigcup\mathcal{W}}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(A),\ \mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathcal{W}\end{subarray}}\mathbf{std}(\mathcal{I})\otimes\mathbf{std}(\mathcal{I}^{\prime})=\sum_{B\cup B^{\prime}=\bigcup\mathcal{Z}}\ \sum_{\begin{subarray}{c}\mathcal{J}\in\mathbf{IP}(B),\ \mathcal{J}^{\prime}\in\mathbf{IP}(B^{\prime})\\ \mathcal{J}\cdot_{\mathbf{glue}}\mathcal{J}^{\prime}=\mathcal{Z}\end{subarray}}\mathbf{std}(\mathcal{J})\otimes\mathbf{std}(\mathcal{J}^{\prime}).\]

Proof.: Write \(\bigcup\mathcal{W}=\{w_{1}<\cdots<w_{N}\}\) and \(\bigcup\mathcal{Z}=\{z_{1}<\cdots<z_{N}\}\), and let \(f(w_{i}):=z_{i}\) denote the unique order isomorphism between them, so that \(f(\mathcal{W})=\mathcal{Z}\). Given \(A,A^{\prime}\subset\bigcup\mathcal{W}\), \(\mathcal{I}\in\mathbf{IP}(A)\) and \(\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\), write \(\mathcal{I}=\{\{w_{h_{1}},\ldots,w_{h_{2}}\},\ldots,\{w_{h_{l-1}+1},\ldots,w_{h_{l}}\}\}\) and \(\mathcal{I}^{\prime}=\{\{w_{s_{1}},\ldots,w_{s_{2}}\},\ldots,\{w_{s_{m-1}+1},\ldots,w_{s_{m}}\}\}\), and define
\[f(\mathcal{I}):=\{\{f(w_{h_{1}}),\ldots,f(w_{h_{2}})\},\{f(w_{h_{2}+1}),\ldots,f(w_{h_{3}})\},\ldots,\{f(w_{h_{l-1}+1}),\ldots,f(w_{h_{l}})\}\}\]
\[=\{\{z_{h_{1}},\ldots,z_{h_{2}}\},\{z_{h_{2}+1},\ldots,z_{h_{3}}\}, \ldots,\{z_{h_{l-1}+1},\ldots,z_{h_{l}}\}\},\] \[f(\mathcal{I}^{\prime}):=\{\{f(w_{s_{1}}),\ldots,f(w_{s_{2}})\}, \{f(w_{s_{2}+1}),\ldots,f(w_{s_{3}})\},\ldots,\{f(w_{s_{m-1}+1}),\ldots,f(w_{s_ {m}})\}\}\] \[=\{\{z_{s_{1}},\ldots,z_{s_{2}}\},\{z_{s_{2}+1},\ldots,z_{s_{3}}\},\ldots,\{z_{s_{m-1}+1},\ldots,z_{s_{m}}\}\}.\]
Now, we clearly have
\[A\cup A^{\prime}=\bigcup\mathcal{W}\iff f(A)\cup f(A^{\prime})=\bigcup \mathcal{Z}\]
and also
\[\mathcal{I}\cdot_{\mbox{\bf glue}}\mathcal{I}^{\prime}=\mathcal{W}\iff f( \mathcal{I})\cdot_{\mbox{\bf glue}}f(\mathcal{I}^{\prime})=\mathcal{Z}.\]
since
\[\{w_{h_{i-1}+1},\ldots,w_{h_{i}}\}\cap\{w_{s_{j-1}+1},\ldots,w_{s_{j}}\}\neq \emptyset\iff\{z_{h_{i-1}+1},\ldots,z_{h_{i}}\}\cap\{z_{s_{j-1}+1},\ldots,z_{s _{j}}\}\neq\emptyset.\]
Moreover
\[\mbox{\bf std}(\mathcal{I})=\mbox{\bf std}(\{\{w_{h_{1}},\ldots,w_ {h_{2}}\},\{w_{h_{2}+1},\ldots,w_{h_{3}}\},\ldots,\{w_{h_{l-1}+1},\ldots,w_{h_{ l}}\}\})\] \[=\mbox{\bf std}(\{\{z_{h_{1}},\ldots,z_{h_{2}}\},\{z_{h_{2}+1}, \ldots,z_{h_{3}}\},\ldots,\{z_{h_{l-1}+1},\ldots,z_{h_{l}}\}\})=\mbox{\bf std} (f(\mathcal{I})).\]
Proof of Proposition 3.25.: Let \(\mathfrak{s}\in\mbox{\bf StIP}\). Since the gluing operation is associative, see Lemma 3.14, the expression
\[\sum_{A\cup A^{\prime}\cup A^{\prime\prime}=\bigcup\mathfrak{s}}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(A),\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime}),\mathcal{I}^{\prime\prime}\in\mathbf{IP}(A^{\prime\prime})\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime\prime}=\mathfrak{s}\end{subarray}}\mathbf{std}(\mathcal{I})\otimes\mathbf{std}(\mathcal{I}^{\prime})\otimes\mathbf{std}(\mathcal{I}^{\prime\prime})\]
is well-defined. We have
\[\begin{split}&\sum_{A\cup A^{\prime}\cup A^{\prime\prime}=\bigcup\mathfrak{s}}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(A),\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime}),\mathcal{I}^{\prime\prime}\in\mathbf{IP}(A^{\prime\prime})\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime\prime}=\mathfrak{s}\end{subarray}}\mathbf{std}(\mathcal{I})\otimes\mathbf{std}(\mathcal{I}^{\prime})\otimes\mathbf{std}(\mathcal{I}^{\prime\prime})\\&=\sum_{B\cup A^{\prime\prime}=\bigcup\mathfrak{s}}\ \sum_{\begin{subarray}{c}\mathcal{J}\in\mathbf{IP}(B),\mathcal{I}^{\prime\prime}\in\mathbf{IP}(A^{\prime\prime})\\ \mathcal{J}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime\prime}=\mathfrak{s}\end{subarray}}\left(\sum_{A\cup A^{\prime}=B}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(A),\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathcal{J}\end{subarray}}\mathbf{std}(\mathcal{I})\otimes\mathbf{std}(\mathcal{I}^{\prime})\right)\otimes\mathbf{std}(\mathcal{I}^{\prime\prime})\\&\overset{\text{Lemma 3.26}}{=}\sum_{B\cup A^{\prime\prime}=\bigcup\mathfrak{s}}\ \sum_{\begin{subarray}{c}\mathcal{J}\in\mathbf{IP}(B),\mathcal{I}^{\prime\prime}\in\mathbf{IP}(A^{\prime\prime})\\ \mathcal{J}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime\prime}=\mathfrak{s}\end{subarray}}\left(\sum_{X\cup Y=\bigcup\mathbf{std}(\mathcal{J})}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(X),\mathcal{I}^{\prime}\in\mathbf{IP}(Y)\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathbf{std}(\mathcal{J})\end{subarray}}\mathbf{std}(\mathcal{I})\otimes\mathbf{std}(\mathcal{I}^{\prime})\right)\otimes\mathbf{std}(\mathcal{I}^{\prime\prime})\\&=(\Delta_{\triangleq}\otimes\mathrm{id})\circ\Delta_{\triangleq}(\mathfrak{s}).\end{split}\]
Similarly, we have
\[\sum_{A\cup A^{\prime}\cup A^{\prime\prime}=\bigcup\mathfrak{s}}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(A),\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime}),\mathcal{I}^{\prime\prime}\in\mathbf{IP}(A^{\prime\prime})\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime\prime}=\mathfrak{s}\end{subarray}}\mathbf{std}(\mathcal{I})\otimes\mathbf{std}(\mathcal{I}^{\prime})\otimes\mathbf{std}(\mathcal{I}^{\prime\prime})=(\mathrm{id}\otimes\Delta_{\triangleq})\circ\Delta_{\triangleq}(\mathfrak{s}).\]
**Remark 3.27**.: _The dual associative product is given by_
\[\begin{split}\mathfrak{s}\triangleq\mathfrak{t}&:=\sum_{\mathfrak{g}\in\mathbf{StIP}}\left\langle\mathfrak{s}\otimes\mathfrak{t},\Delta_{\triangleq}(\mathfrak{g})\right\rangle\mathfrak{g}\\&=\sum_{\mathfrak{g}\in\mathbf{StIP}}\ \sum_{A\cup A^{\prime}=\bigcup\mathfrak{g}}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(A),\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathbf{std}(\mathcal{I})=\mathfrak{s},\ \mathbf{std}(\mathcal{I}^{\prime})=\mathfrak{t}\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathfrak{g}\end{subarray}}\mathfrak{g}\\&=\sum_{n\in\mathbb{N}}\ \sum_{A\cup A^{\prime}=[n]}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(A),\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathbf{std}(\mathcal{I})=\mathfrak{s},\ \mathbf{std}(\mathcal{I}^{\prime})=\mathfrak{t}\end{subarray}}\mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}.\end{split}\]
**Example 3.28**.: \[\{\{1,2\}\}\triangleq\{\{1,2\}\}=2\,\{\{1,2\},\{3,4\}\}+2\,\{\{1,2,3\}\}+\{\{1,2\}\}\]
**Remark 3.29** (Section coefficients).: _We can write_
\[\begin{split}\left\langle\mathfrak{s}\otimes\mathfrak{t},\Delta_{\triangleq}(\mathfrak{g})\right\rangle=\#\{A,A^{\prime}\subset\bigcup\mathfrak{g}\ |&\ A\cup A^{\prime}=\bigcup\mathfrak{g},\ \exists!\ \mathcal{I}\in\mathbf{IP}(A),\exists!\ \mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime}),\\&\ \mathbf{std}(\mathcal{I})=\mathfrak{s},\mathbf{std}(\mathcal{I}^{\prime})=\mathfrak{t},\mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathfrak{g}\}.\end{split}\]
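_For instance, by Example 3.23, \(\left\langle\{\{1\}\}\otimes\{\{1,2\}\},\Delta_{\triangleq}(\{\{1,2\}\})\right\rangle=2\): the two covers are \((A,A^{\prime})=(\{1\},\{1,2\})\) and \((A,A^{\prime})=(\{2\},\{1,2\})\)._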
#### 3.2.2 Bialgebras on interval partitions
We now show that the algebraic operations are compatible. The unit and the counit maps are given by
\[u:\mathbb{Q}\rightarrow\mathcal{H}_{\mathbf{int}},\quad u(1):=\mathsf{e},\qquad\qquad\varepsilon:\mathcal{H}_{\mathbf{int}}\to\mathbb{Q},\quad\varepsilon(x):=\begin{cases}x&\text{if }x\in\mathbb{Q}[\mathbf{StIP}_{0}]\\ 0&\text{else.}\end{cases}\]
**Theorem 3.30** (Bialgebras on interval partitions).: _The following holds:_
* \((\mathcal{H}_{\mathbf{int}},\bullet,\Delta_{\triangleq},u,\varepsilon)\) _is a bialgebra._
* \((\mathcal{H}_{\mathbf{int}},\triangleq,\Delta_{\bullet},u,\varepsilon)\) _is a connected filtered Hopf algebra, according to both Equation (_3_) and Equation (_4_)._
**Lemma 3.31**.: _Let \(A,B\subset\mathbb{N}_{\geq 1}\), \(|A|,|B|<\infty\), \(\mathfrak{s},\mathfrak{t}\in\mathbf{StIP}\), and \(\mathcal{I}\in\mathbf{IP}(A),\mathcal{I}^{\prime}\in\mathbf{IP}(B)\). Assume_
\[\mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathfrak{s}\bullet \mathfrak{t}.\]
_Recall that_
\[\mathfrak{s}\bullet\mathfrak{t}:=\{\mathfrak{s}_{1},...,\mathfrak{s}_{m}, \mathfrak{t}^{\prime}_{1},...,\mathfrak{t}^{\prime}_{n}\}\]
_where for \(i=1,...,n\), \(\mathfrak{t}^{\prime}_{i}:=|\bigcup\mathfrak{s}|+\mathfrak{t}_{i}\). Then_
1. \[\mathcal{I}=\mathcal{I}(A\cap\bigcup\mathfrak{s})\uplus\mathcal{I}(A\cap\bigcup\mathfrak{t}^{\prime}),\qquad\mathcal{I}^{\prime}=\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{s})\uplus\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{t}^{\prime}),\]
2. \[\mathcal{I}(A\cap\bigcup\mathfrak{s})\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{s})=\mathfrak{s},\qquad\mathcal{I}(A\cap\bigcup\mathfrak{t}^{\prime})\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{t}^{\prime})=\mathfrak{t}^{\prime},\]
3. \[\mathbf{std}(\mathcal{I}) =\mathbf{std}(\mathcal{I}(A\cap\bigcup\mathfrak{s}))\bullet \mathbf{std}(\mathcal{I}(A\cap\bigcup\mathfrak{t}^{\prime})),\] \[\mathbf{std}(\mathcal{I}^{\prime}) =\mathbf{std}(\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{s})) \bullet\mathbf{std}(\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{t}^{\prime})).\]
Proof.: First, we have
\[\begin{split}\mathcal{I}&=\mathcal{I}(A)=\mathcal{I}\big((A\cap\bigcup\mathfrak{s})\uplus(A\cap\bigcup\mathfrak{t}^{\prime})\big)\\&=\{\mathcal{I}_{j}\cap c\,|\,j\in J,\ c\in\mathbf{cliques}((A\cap\bigcup\mathfrak{s})\uplus(A\cap\bigcup\mathfrak{t}^{\prime}))\}\setminus\{\emptyset\}\\&=\{\mathcal{I}_{j}\cap c\,|\,j\in J,\ c\in\mathbf{cliques}(A\cap\bigcup\mathfrak{s})\uplus\mathbf{cliques}(A\cap\bigcup\mathfrak{t}^{\prime})\}\setminus\{\emptyset\}\\&=\{\mathcal{I}_{j}\cap c\,|\,j\in J,\ c\in\mathbf{cliques}(A\cap\bigcup\mathfrak{s})\}\setminus\{\emptyset\}\ \uplus\ \{\mathcal{I}_{j}\cap c\,|\,j\in J,\ c\in\mathbf{cliques}(A\cap\bigcup\mathfrak{t}^{\prime})\}\setminus\{\emptyset\}\\&=\mathcal{I}(A\cap\bigcup\mathfrak{s})\uplus\mathcal{I}(A\cap\bigcup\mathfrak{t}^{\prime}).\end{split}\]
(For the third equality, note that by Lemma 3.17 we have \(\mathcal{I}\leq(\mathfrak{s}\bullet\mathfrak{t})(A)\), so no block of \(\mathcal{I}\) meets both \(\bigcup\mathfrak{s}\) and \(\bigcup\mathfrak{t}^{\prime}\).) Analogously one shows that \(\mathcal{I}^{\prime}=\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{s})\uplus\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{t}^{\prime})\). Therefore
\[\big(\mathcal{I}(A\cap\bigcup\mathfrak{s})\uplus\mathcal{I}(A\cap\bigcup\mathfrak{t}^{\prime})\big)\cdot_{\mathbf{glue}}\big(\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{s})\uplus\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{t}^{\prime})\big)=\mathfrak{s}\bullet\mathfrak{t},\]
which holds if and only if
\[\mathcal{I}(A\cap\bigcup\mathfrak{s})\cdot_{\mathbf{glue}}\mathcal{I}^{ \prime}(B\cap\bigcup\mathfrak{s})=\mathfrak{s},\]
and
\[\mathcal{I}(A\cap\bigcup\mathfrak{t}^{\prime})\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{t}^{\prime})=\mathfrak{t}^{\prime}.\]
Finally, since we can write
\[\mathcal{I}=\{I_{s,1},\ldots,I_{s,m},I_{t^{\prime},1},\ldots,I_{t^{\prime},n}\}\]
with
\[\mathcal{I}(A\cap\bigcup\mathfrak{s})=\{I_{s,1},\ldots,I_{s,m}\},\qquad\mathcal{I}(A\cap\bigcup\mathfrak{t}^{\prime})=\{I_{t^{\prime},1},\ldots,I_{t^{\prime},n}\},\]
we have
\[\begin{split}\mathbf{std}(\mathcal{I})&=\mathbf{std}(\{I_{s,1},\ldots,I_{s,m}\})\uplus\Big(|\bigcup_{j=1}^{m}I_{s,j}|+\mathbf{std}(\{I_{t^{\prime},1},\ldots,I_{t^{\prime},n}\})\Big)\\&=\mathbf{std}(\mathcal{I}(A\cap\bigcup\mathfrak{s}))\bullet\mathbf{std}(\mathcal{I}(A\cap\bigcup\mathfrak{t}^{\prime})).\end{split}\]
Analogously one shows that \(\mathbf{std}(\mathcal{I}^{\prime})=\mathbf{std}(\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{s}))\bullet\mathbf{std}(\mathcal{I}^{\prime}(B\cap\bigcup\mathfrak{t}^{\prime}))\).
Proof of Theorem 3.30.: We show the compatibility \(\Delta_{\triangleq}(\mathfrak{s}\bullet\mathfrak{t})=\Delta_{\triangleq}(\mathfrak{s})\bullet\Delta_{\triangleq}(\mathfrak{t})\). We have
\[\Delta_{\triangleq}(\mathfrak{s}\bullet\mathfrak{t})=\sum_{A\cup A^{\prime}=\bigcup(\mathfrak{s}\bullet\mathfrak{t})}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(A),\ \mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathfrak{s}\bullet\mathfrak{t}\end{subarray}}\mathbf{std}(\mathcal{I})\otimes\mathbf{std}(\mathcal{I}^{\prime}).\]
Thanks to Lemma 3.31, we can write
\[\Delta_{\uplus}(\mathfrak{s}\bullet\mathfrak{t})\]
\[=\sum_{\begin{subarray}{c}(A\cap\bigcup\mathfrak{s})\cup(A^{\prime}\cap \bigcup\mathfrak{s})=\bigcup\mathfrak{s}\\ (A\cap\bigcup\mathfrak{t}^{\prime})\cup(A^{\prime}\cap\bigcup\mathfrak{t}^{ \prime})=\bigcup\mathfrak{t}^{\prime}\ \mathcal{I}(A^{\prime}\cup\mathfrak{s})\cdot \operatorname{\mathbf{glue}}^{\mathcal{I}^{\prime}}(A^{\prime}\cap\bigcup \mathfrak{s})=\mathfrak{s}\\ \mathcal{I}(A^{\prime}\cap\bigcup\mathfrak{t}^{\prime})\cdot\operatorname{ \mathbf{glue}}^{\mathcal{I}^{\prime}}(A^{\prime}\cap\bigcup\mathfrak{t}^{ \prime})=\mathfrak{t}^{\prime}\\ \end{subarray}}\sum_{\begin{subarray}{c}\mathcal{I}\in\operatorname{\mathbf{ IP}}(A_{1}),\mathcal{I}^{\prime}\in\operatorname{\mathbf{IP}}(A^{\prime})\\ \mathcal{I}_{2}\in\operatorname{\mathbf{IP}}(A_{2}),\mathcal{I}^{\prime}_{2} \in\operatorname{\mathbf{IP}}(A^{\prime}_{2})\\ \mathcal{I}_{1}\cdot\operatorname{\mathbf{glue}}^{\mathcal{I}^{\prime}_{1}= \mathfrak{s}},\ \mathcal{I}_{2}\cdot\operatorname{\mathbf{glue}}^{\mathcal{I}^{\prime}_{2}= \mathfrak{t}^{\prime}}\end{subarray}}\operatorname{\mathbf{std}}(\mathcal{I} _{1})\bullet\sum_{\begin{subarray}{c}\mathcal{I}_{2}\in\operatorname{\mathbf{ IP}}(A_{2})\\ \mathcal{I}_{2}^{\prime}\in\operatorname{\mathbf{IP}}(A^{\prime}_{2})\\ \mathcal{I}_{2}\cdot\operatorname{\mathbf{glue}}^{\mathcal{I}^{\prime}_{2}= \mathfrak{t}^{\prime}}\end{subarray}}\operatorname{\mathbf{std}}(\mathcal{I} _{2})\otimes\operatorname{\mathbf{std}}(\mathcal{I}^{\prime}_{2})\]
Using Lemma 3.26, we have
\[\sum_{A_{2}\cup A_{2}^{\prime}=\bigcup\mathfrak{t}^{\prime}}\ \sum_{\begin{subarray}{c}\mathcal{I}_{2}\in\mathbf{IP}(A_{2}),\mathcal{I}_{2}^{\prime}\in\mathbf{IP}(A_{2}^{\prime})\\ \mathcal{I}_{2}\cdot_{\mathbf{glue}}\mathcal{I}_{2}^{\prime}=\mathfrak{t}^{\prime}\end{subarray}}\mathbf{std}(\mathcal{I}_{2})\otimes\mathbf{std}(\mathcal{I}_{2}^{\prime})=\sum_{B\cup B^{\prime}=\bigcup\mathfrak{t}}\ \sum_{\begin{subarray}{c}\mathcal{J}\in\mathbf{IP}(B),\mathcal{J}^{\prime}\in\mathbf{IP}(B^{\prime})\\ \mathcal{J}\cdot_{\mathbf{glue}}\mathcal{J}^{\prime}=\mathfrak{t}\end{subarray}}\mathbf{std}(\mathcal{J})\otimes\mathbf{std}(\mathcal{J}^{\prime}).\]
Therefore
\[\Delta_{\triangleq}(\mathfrak{s}\bullet\mathfrak{t})=\left(\sum_{A\cup A^{\prime}=\bigcup\mathfrak{s}}\ \sum_{\begin{subarray}{c}\mathcal{I}\in\mathbf{IP}(A),\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathfrak{s}\end{subarray}}\mathbf{std}(\mathcal{I})\otimes\mathbf{std}(\mathcal{I}^{\prime})\right)\bullet\left(\sum_{B\cup B^{\prime}=\bigcup\mathfrak{t}}\ \sum_{\begin{subarray}{c}\mathcal{J}\in\mathbf{IP}(B),\mathcal{J}^{\prime}\in\mathbf{IP}(B^{\prime})\\ \mathcal{J}\cdot_{\mathbf{glue}}\mathcal{J}^{\prime}=\mathfrak{t}\end{subarray}}\mathbf{std}(\mathcal{J})\otimes\mathbf{std}(\mathcal{J}^{\prime})\right)=\Delta_{\triangleq}(\mathfrak{s})\bullet\Delta_{\triangleq}(\mathfrak{t}).\]
The following remark is about how, under certain assumptions, one can learn about the structure of an algebra by endowing it with a Hopf algebra structure.
**Remark 3.32** (Polynomial algebra).: _We recall the well-known result that any connected filtered commutative Hopf algebra over a field of characteristic zero is isomorphic, as an algebra, to a polynomial algebra. See Cartier and Patras 2021[Theorem 4.4.1]. Therefore, \((\mathcal{H}_{\text{int}},\triangleq)\) is a free commutative algebra._
**Remark 3.33**.: _We provide another description of the Hopf algebra from Theorem 3.30, by showing that it is isomorphic to a Hopf algebra on words, where the letters are positive integers. Let \(V:=\mathbb{Q}[\mathbb{N}_{\geq 1}]\), then_
\[\Phi:\bigoplus_{n\in\mathbb{N}}V^{\otimes n}\to(\mathcal{H}_{\text{int}}, \bullet),\]
_defined as the unique extension of_
\[\Phi(n):=\{\{1,2,...,n\}\}\]
_to a map of associative algebras, is a linear isomorphism. We denote with \(\blacklozenge\) the concatenation of tensors. If we restrict \(\Phi\) to words on positive integers, which form a linear basis, we obtain a bijection onto a basis for the codomain, the set \(\mathbf{StIP}\). Indeed, we have:_
\[\Phi(n_{1}\blacklozenge n_{2}\blacklozenge\;\cdots\blacklozenge n _{k}) =\Phi(n_{1})\bullet\Phi(n_{2})\bullet\cdots\bullet\Phi(n_{k})\] \[=\left\{\{1,2,...,n_{1}\},\{n_{1}+1,\dots,n_{1}+n_{2}\},\dots, \{\sum_{j=1}^{k-1}n_{j}+1,...,\sum_{j=1}^{k}n_{j}\}\right\}.\]
_As an example_
\[\Phi(1\blacklozenge 3\blacklozenge 2)=\left\{\{1\},\{2,3,4\},\{5,6\}\right\}.\]
_We can use \(\Phi\) to define a product on words, which is just a relabeling of \(\triangleq\). We define on basis elements_
\[m_{1}\blacklozenge m_{2}\blacklozenge\;\cdots\blacklozenge m_{\ell} \triangleq_{\text{\tiny{\rm{wrd}}}}n_{1}\blacklozenge n_{2} \blacklozenge\;\cdots\blacklozenge n_{k}\] \[:=\Phi^{-1}\left(\Phi(m_{1}\blacklozenge m_{2}\blacklozenge\; \cdots\blacklozenge m_{\ell})\triangleq\Phi(n_{1}\blacklozenge n_{2} \blacklozenge\;\cdots\blacklozenge n_{k})\right).\]
_For example_
\[\begin{split}2\triangleq_{\text{\tiny{\rm{wrd}}}}2&:=\Phi^{-1}(\Phi(2)\triangleq\Phi(2))\\&=\Phi^{-1}\big(\{\{1,2\}\}\triangleq\{\{1,2\}\}\big)\\&=\Phi^{-1}\big(2\,\{\{1,2\},\{3,4\}\}+2\,\{\{1,2,3\}\}+\{\{1,2\}\}\big)\\&=2\,2\blacklozenge 2+2\,3+2,\\ 1\blacklozenge 2\triangleq_{\text{\tiny{\rm{wrd}}}}1\blacklozenge 1&=3\,1\blacklozenge 2\blacklozenge 1+3\,1\blacklozenge 2+1\blacklozenge 2\blacklozenge 1\blacklozenge 1\\&\quad+6\,1\blacklozenge 1\blacklozenge 2+3\,1\blacklozenge 1\blacklozenge 1\blacklozenge 2+2\,1\blacklozenge 1\blacklozenge 2\blacklozenge 1.\end{split}\]
_Then, in analogy to Theorem 3.30:_
* \((\bigoplus_{n\in\mathbb{N}}V^{\otimes n},\bullet,\Delta_{\triangleq_{\text{\tiny{\rm{wrd}}}}})\) _is a bialgebra,_
* \((\bigoplus_{n\in\mathbb{N}}V^{\otimes n},\triangleq_{\text{\tiny{\rm{wrd}}}},\Delta_{\bullet})\) _is a connected filtered (by the length of the words, see Equation (_4_)) Hopf algebra._
### Signature of an interval partition
We define a family of linear maps on the dual of \(\mathcal{H}_{\text{\tiny{int}}}\), indexed by \(\mathfrak{L}\in\mathbf{StIP}\). These maps encode "occurrences" of other standardized interval partitions in \(\mathfrak{L}\).
**Definition 3.34** (Signature of an interval partition).: For \(\mathfrak{L}\in\mathbf{StIP}\), define the linear map \(\mathsf{IPC}(\mathfrak{L}):\mathcal{H}_{\text{\tiny{int}}}\to\mathbb{Q}\) via
\[\left\langle\mathsf{IPC}(\mathfrak{L}),\mathfrak{s}\right\rangle:=\#\{A \subset\bigcup\mathfrak{L}|\;\mathbf{std}(\mathfrak{L}(A))\geq\mathfrak{s} \},\quad\mathfrak{s}\in\mathbf{StIP}.\]
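For instance, for \(\mathfrak{L}=\{\{1\},\{2\},\{3,4\}\}\) one finds \(\left\langle\mathsf{IPC}(\mathfrak{L}),\{\{1\},\{2\}\}\right\rangle=6\), since \(\mathbf{std}(\mathfrak{L}(A))\geq\{\{1\},\{2\}\}\) for every two-element subset \(A\subset[4]\), while \(\left\langle\mathsf{IPC}(\mathfrak{L}),\{\{1,2\}\}\right\rangle=1\), realized only by \(A=\{3,4\}\).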
**Remark 3.35**.: _The condition \(\mathbf{std}(\mathfrak{L}(A))\geq\mathfrak{s}\) tells us that we count \(\mathfrak{s}\), each time we find a pattern that is equal to \(\mathfrak{s}\) or coarser. Let \(\mathfrak{L}=\{\{1\},\{2\},\{3,4\}\}\), and \(\mathfrak{s}=\{\{1\},\{2\}\}\)._
\[\text{For }A=\{3,4\},\;\mathbf{std}(\mathfrak{L}(A))=\{\{1,2\}\}\geq\{\{1\},\{2 \}\}.\]
_Thus, when we look for a fine pattern (arbitrary gaps allowed), we also count coarser occurrences (consecutive values), whereas when we look for a coarse pattern, we discard occurrences with gaps and keep only consecutive values. Let \(\mathfrak{L}=\{\{1\},\{2\},\{3,4\}\}\), and \(\mathfrak{s}=\{\{1,2\},\{3\}\}\)._
\[\text{For }A=\{2,3,4\},\;\mathbf{std}(\mathfrak{L}(A))=\{\{1\},\{2,3\}\} \perp\{\{1,2\},\{3\}\},\]
_i.e., the structure of the gaps needs to be respected._
**Remark 3.36**.: _When we count the occurrences of an interval partition, \(\mathfrak{s}=\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{k}\) (conveniently written taking the product \(\bullet\) of single blocks partitions, \(\mathbf{n}_{j}:=\{[n_{j}]\},n_{j}\in\mathbb{N}_{\geq 1}\)) in a single block partition, i.e. \(\mathfrak{L}=\mathbf{N}:=\{[N]\},N\in\mathbb{N}_{\geq 1}\), we have_
\[\left\langle\mathsf{IPC}(\mathfrak{L}),\mathfrak{s}\right\rangle=\binom{N-(n _{1}+...+n_{k})+k}{k}. \tag{5}\]
_which is the number of weak \((k+1)\)-compositions of \(N-(n_{1}+...+n_{k})\). For example, if we have \(N=9\), \(n_{1}=3\), \(n_{2}=5\), the three possible constellations (the single remaining gap placed before, between, or after the two blocks) correspond to \((1,0,0),(0,1,0),(0,0,1)\) respectively: i.e. the weak \(3\)-compositions of \(1\)._
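The counting in Definition 3.34 and Equation (5) is easy to check by brute force. The following is a small Python sketch (the function names are ours and purely illustrative) that computes \(\langle\mathsf{IPC}(\mathfrak{L}),\mathfrak{s}\rangle\) directly from the definitions, with partitions encoded as lists of blocks:

```python
from itertools import chain, combinations

def cliques(A):
    """Coarsest interval partition of a finite set A (Definition 3.1)."""
    blocks, block = [], []
    for x in sorted(A):
        if block and x != block[-1] + 1:  # a gap starts a new clique
            blocks.append(block)
            block = []
        block.append(x)
    return blocks + ([block] if block else [])

def restrict(L, A):
    """The interval partition L(A) of Definition 3.4."""
    return [sorted(set(b) & set(c)) for c in cliques(A) for b in L
            if set(b) & set(c)]

def std(P):
    """Standardization of an interval partition (Definition 3.7)."""
    relabel = {x: i + 1 for i, x in enumerate(sorted(chain(*P)))}
    return [[relabel[x] for x in b] for b in sorted(P)]

def ipc(L, s):
    """<IPC(L), s>: the number of A with std(L(A)) >= s (Definition 3.34)."""
    refines = lambda f, g: all(any(set(b) <= set(c) for c in g) for b in f)
    ground = sorted(chain(*L))
    size = sum(len(b) for b in s)  # only |A| = |ground set of s| can contribute
    return sum(refines(std(s), std(restrict(L, A)))
               for A in combinations(ground, size))

# Sanity checks against the counts discussed in the text:
assert ipc([[1], [2], [3, 4]], [[1], [2]]) == 6
assert ipc([[1], [2], [3, 4]], [[1, 2]]) == 1
assert ipc([list(range(1, 10))], [[1, 2, 3], [4, 5, 6, 7, 8]]) == 3  # Eq. (5)
```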
#### 3.3.1 Character property and Chen's identity
We call these maps _signatures_ for interval partitions, motivated by the following result.
**Theorem 3.37** (Character property).: _The maps \(\left(\mathsf{IPC}(\mathfrak{L})\right)_{\mathfrak{L}}\) are characters under the product \(\triangleq\). Let \(\mathfrak{L}\in\mathbf{StIP}_{N}\), with \(N\in\mathbb{N}\). For all \(\mathfrak{s},\mathfrak{t}\in\mathbf{StIP}\), we have_
\[\left\langle\mathsf{IPC}(\mathfrak{L}),\mathfrak{s}\right\rangle\cdot\left\langle \mathsf{IPC}(\mathfrak{L}),\mathfrak{t}\right\rangle=\left\langle\mathsf{IPC }(\mathfrak{L}),\,\mathfrak{s}\triangleq\mathfrak{t}\right\rangle. \tag{6}\]
**Remark 3.38** (Single blocks).: _Let \(\mathbf{N}:=\{[N]\}\), \(\mathbf{m}:=\{[m]\}\), \(\mathbf{n}:=\{[n]\}\). Then we have_
\[\left\langle\mathsf{IPC}(\mathbf{N}),\mathbf{m}\right\rangle\cdot\left\langle \mathsf{IPC}(\mathbf{N}),\mathbf{n}\right\rangle=(N-m+1)(N-n+1)\]
_thanks to Remark 3.36. Since_
\[\mathbf{m}\triangleq\mathbf{n}\] \[=\mathbf{m}\bullet\mathbf{n}+\mathbf{n}\bullet\mathbf{m}+2\sum_{ i=1}^{\min(m,n)-1}\{[m+n-i]\}\] \[+(\max(m,n)-\min(m,n)+1)\{[\max(m,n)]\}\]
_using again Remark 3.36, we obtain_
\[\left\langle\mathsf{IPC}(\mathbf{N}),\,\mathbf{m}\triangleq \mathbf{n}\right\rangle\] \[=2\binom{N-(m+n)+2}{2}+2\sum_{i=1}^{\min(m,n)-1}(N-(m+n-i)+1)\] \[+(\max(m,n)-\min(m,n)+1)(N-\max(m,n)+1)\]
_It is easy to show that_
\[\left\langle\mathsf{IPC}(\mathbf{N}),\mathbf{m}\right\rangle\cdot\left\langle\mathsf{IPC}(\mathbf{N}),\mathbf{n}\right\rangle=\left\langle\mathsf{IPC}(\mathbf{N}),\,\mathbf{m}\triangleq\mathbf{n}\right\rangle.\]
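_Indeed, assuming \(m\geq n\) and writing \(p:=N-m+1\), the three contributions are \((p-n+1)(p-n)\), \((2p-n)(n-1)\) and \((m-n+1)p\), whose sum is \(p^{2}+(m-n)p=p(N-n+1)=(N-m+1)(N-n+1)\)._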
We now state several lemmas needed for the proof of Theorem 3.37.
**Lemma 3.39** (\(\mathfrak{s}\)-refinement of \(\mathcal{I}\)).: _Let \(\mathcal{I}\in\mathbf{IP}\) and \(\mathfrak{s}\in\mathbf{StIP}\). If_
\[\mathbf{std}(\mathcal{I})\geq\mathfrak{s},\]
_then there exists a unique refinement \(\mathcal{I}_{\mathfrak{s}}\) of \(\mathcal{I}\) such that_
\[\mathbf{std}(\mathcal{I}_{\mathfrak{s}})=\mathfrak{s}.\]
**Example 3.40**.: _Let \(\mathcal{I}=\{\{5\},\{6\},\{10,11,12\}\}\) and \(\mathfrak{s}=\{\{1\},\{2\},\{3,4\},\{5\}\}\). Then:_
\[\mathbf{std}(\mathcal{I})\geq\mathfrak{s}\]
_and_
\[\mathcal{I}_{\mathfrak{s}}=\{\{5\},\{6\},\{10,11\},\{12\}\}\]
Proof.: Let
\[\mathbf{std}(\mathcal{I})\geq\mathfrak{s}.\]
We recall the notation used in Definition 3.7. Let \(i_{1},\ldots,i_{\ell_{1}},\ldots,i_{\ell_{2}},\ldots,i_{\ell_{k}}\in\mathbb{N} _{\geq 1}\) such that
\[i_{1}\prec\cdots\prec i_{\ell_{1}}<i_{\ell_{1}+1}\prec\cdots\prec i_{\ell_{2}} <\cdots<i_{\ell_{k-1}+1}\prec\cdots\prec i_{\ell_{k}},\]
where \(i\prec j\iff j-i=1\). Then
\[\mathbf{std}(\mathcal{I}):=\mathbf{std}(\{\{i_{1},\ldots,i_{\ell _{1}}\},\{i_{\ell_{1}+1},\ldots,i_{\ell_{2}}\},\ldots,\{i_{\ell_{k-1}+1}, \ldots,i_{\ell_{k}}\}\})\] \[:=\{\{1,\ldots,\ell_{1}\},\{\ell_{1}+1,\ldots,\ell_{2}\}, \ldots,\{\ell_{k-1}+1,\ldots,\ell_{k}\}\}\] \[\geq\{\{1|,\ldots,|\ell_{1}\},\{\ell_{1}+1|,\ldots,|\ell_{2}\}, \ldots,\{\ell_{k-1}+1|,\ldots,|\ell_{k}\}\}=\mathfrak{s}.\]
where
\[1\prec\cdots\prec\ell_{1}\prec\ell_{1}+1\prec\cdots\prec\ell_{2}<\cdots<\ell _{k-1}+1\prec\cdots\prec\ell_{k}\]
and the \(|\) indicate potential "cuts", which lead to a different \(\mathfrak{s}\in\mathbf{StIP}\). Clearly only
\[\{\{i_{1}|,\ldots,|i_{\ell_{1}}\},\{i_{\ell_{1}+1}|,\ldots,|i_{\ell_{2}}\}, \ldots,\{i_{\ell_{k-1}+1}|,\ldots,|i_{\ell_{k}}\}\}\]
does the job.
**Lemma 3.41**.: _Let \(\mathfrak{s},\mathfrak{t},\mathfrak{L}\in\mathbf{StIP}\). Then the following holds_
\[\begin{split}&\Big\{A,B\subset\bigcup\mathfrak{L}\;\Big|\;\mathbf{std}(\mathfrak{L}(A))\geq\mathfrak{s},\ \mathbf{std}(\mathfrak{L}(B))\geq\mathfrak{t}\Big\}\\&=\biguplus_{\begin{subarray}{c}(\mathfrak{g},C)\in\mathbf{StIP}\times 2^{\bigcup\mathfrak{L}}\\ \mathbf{std}(\mathfrak{L}(C))\geq\mathfrak{g}\end{subarray}}\Big\{A,B\subset\bigcup\mathfrak{L}\;\Big|\;A\cup B=C,\ \mathbf{std}(\mathfrak{L}(A))\geq\mathfrak{s},\ \mathbf{std}(\mathfrak{L}(B))\geq\mathfrak{t},\ \mathbf{std}(\mathfrak{L}(A)_{\mathfrak{s}}\cdot_{\mathbf{glue}}\mathfrak{L}(B)_{\mathfrak{t}})=\mathfrak{g}\Big\},\end{split}\]
_where the union is taken over disjoint sets (possibly empty)._
Proof.: Let \((A,B)\) be an element of the LHS. Then there exists a unique pair \((\mathfrak{g},C)\) such that it belongs to one of the sets on the RHS. We have \(C:=A\cup B\) and \(\mathfrak{g}:=\mathbf{std}(\mathfrak{L}(A)_{\mathfrak{s}}\cdot\mathbf{glue} \ \mathfrak{L}(B)_{\mathbf{t}})\). The inequality \(\mathbf{std}(\mathfrak{L}(C))\geq\mathfrak{g}\) holds since
\[\mathbf{std}(\mathfrak{L}(C))\geq\mathbf{std}(\mathfrak{L}(A)\cdot\mathbf{glue }\ \mathfrak{L}(B))\geq\mathbf{std}(\mathfrak{L}(A)_{\mathfrak{s}}\cdot\mathbf{glue }\ \mathfrak{L}(B)_{\mathbf{t}})=\mathfrak{g}.\]
From left to right: the first inequality is due to Corollary 3.16, while the second is due to Lemma 3.15.
The other inclusion is trivial.
**Lemma 3.42** (Zooming in).: _Let \(\mathfrak{s},\mathfrak{t},\mathfrak{L}\in\mathbf{StIP}\), \((\mathfrak{g},C)\in\mathbf{StIP}\times 2^{\bigcup\mathfrak{L}}\). Assume that \(\mathbf{std}(\mathfrak{L}(C))=:\mathfrak{h}\geq\mathfrak{g}\). Then_
\[\#\left\{\,A,A^{\prime}\subset\bigcup\mathfrak{L}\ \Big{|}\ A\cup A^{\prime}=C,\mathbf{std}( \mathfrak{L}(A))\geq\mathfrak{s},\mathbf{std}(\mathfrak{L}(A^{\prime}))\geq \mathfrak{t},\mathbf{std}(\mathfrak{L}(A)_{\mathfrak{s}}\cdot\mathbf{glue}\ \mathfrak{L}(A^{\prime})_{\mathbf{t}})=\mathfrak{g}\right\}\] \[=\] \[\#\left\{\,A,A^{\prime}\subset\bigcup\mathfrak{h}\ \Big{|}\ A\cup A^{\prime}=\bigcup \mathfrak{h},\ \mathbf{std}(\mathfrak{h}(A))\geq\mathfrak{s},\mathbf{std}( \mathfrak{h}(A^{\prime}))\geq\mathfrak{t},\mathfrak{h}(A)_{\mathfrak{s}}\cdot \mathbf{glue}\ \mathfrak{h}(A^{\prime})_{\mathbf{t}}=\mathfrak{g}\right\}.\]
Proof.: Immediate.
**Lemma 3.43** (Invariance).: _Let \(\mathfrak{g},\mathfrak{h}\in\mathbf{StIP}\) such that \(\mathfrak{h}\geq\mathfrak{g}\). Then_
\[\left\{\,A,A^{\prime}\subset\bigcup\mathfrak{h}\ |A\cup A^{ \prime}=\bigcup\mathfrak{h},\ \mathbf{std}(\mathfrak{h}(A))\geq\mathfrak{s},\mathbf{std}( \mathfrak{h}(A^{\prime}))\geq\mathfrak{t},\mathfrak{h}(A)_{\mathfrak{s}}\cdot \mathbf{glue}\ \mathfrak{h}(A^{\prime})_{\mathbf{t}}=\mathfrak{g}\right\}\] \[=\] \[\{A,A^{\prime}\subset\bigcup\mathfrak{g}\ |A\cup A^{ \prime}=\bigcup\mathfrak{g},\ \mathbf{std}(\mathfrak{g}(A))\geq\mathfrak{s},\mathbf{std}( \mathfrak{g}(A^{\prime}))\geq\mathfrak{t},\mathfrak{g}(A)_{\mathfrak{s}}\cdot \mathbf{glue}\ \mathfrak{g}(A^{\prime})_{\mathbf{t}}=\mathfrak{g}\}.\]
Proof.: Denote the set on the left-hand side with \(\mathrm{Set}_{0}\) and the one on the right-hand side with \(\mathrm{Set}_{1}\). Let \((A,A^{\prime})\in\mathrm{Set}_{1}\). Since \(\mathfrak{h}\geq\mathfrak{g}\) means that \(\forall i\in I:\exists j\in J:\mathfrak{g}_{i}\subset\mathfrak{h}_{j}\) and since
\[\mathfrak{h}(A):=\{c\cap\mathfrak{h}_{j}|c\in\mathbf{cliques}(A),j\in J\}\setminus\{\emptyset\},\qquad\mathfrak{g}(A):=\{c\cap\mathfrak{g}_{i}|c\in\mathbf{cliques}(A),i\in I\}\setminus\{\emptyset\},\]
we have \(\mathfrak{h}(A)\geq\mathfrak{g}(A)\) and, analogously, \(\mathfrak{h}(A^{\prime})\geq\mathfrak{g}(A^{\prime})\). This means that \(\mathbf{std}(\mathfrak{h}(A))\geq\mathbf{std}(\mathfrak{g}(A))\geq\mathfrak{s}\) and \(\mathbf{std}(\mathfrak{h}(A^{\prime}))\geq\mathbf{std}(\mathfrak{g}(A^{ \prime}))\geq\mathfrak{t}\). Therefore \(\mathfrak{h}(A)\) and \(\mathfrak{h}(A^{\prime})\) can be refined so that \(\mathfrak{h}(A)_{\mathfrak{s}}\cdot\mathbf{glue}\ \mathfrak{h}(A^{\prime})_{\mathbf{t}}=\mathfrak{g}(A)_{\mathfrak{s}}\cdot \mathbf{glue}\ \mathfrak{g}(A^{\prime})_{\mathbf{t}}=\mathfrak{g}\), see Lemma 3.39. So, \((A,A^{\prime})\in\mathrm{Set}_{0}\). For the other direction, let \((A,A^{\prime})\in\mathrm{Set}_{0}\). Since \(\mathfrak{h}(A)_{\mathfrak{s}}\cdot\mathbf{glue}\ \mathfrak{h}(A^{\prime})_{\mathbf{t}}= \mathfrak{g}\), using Lemma 3.17, we have
\[\mathfrak{g}(A)\geq\mathfrak{h}(A)_{\mathfrak{s}}\] \[\mathfrak{g}(A^{\prime})\geq\mathfrak{h}(A^{\prime})_{\mathbf{t}}.\]
Then it is immediate to verify that \((A,A^{\prime})\in\mathrm{Set}_{1}\).
**Lemma 3.44** (Recovering section coefficients).: _Let \(\mathfrak{s},\mathfrak{t},\mathfrak{g}\in\mathbf{StIP}\). Then:_
\[\left\{A,A^{\prime}\subset\bigcup\mathfrak{g}\left|A\cup A^{\prime}= \bigcup\mathfrak{g},\ \mathbf{std}(\mathfrak{g}(A))\geq\mathfrak{s},\mathbf{std}(\mathfrak{g}(A^{ \prime}))\geq\mathfrak{t},\mathfrak{g}(A)_{\mathfrak{s}}\cdot\mathbf{glue}\ \mathfrak{g}(A^{\prime})_{\mathfrak{t}}=\mathfrak{g}\right\}\] \[=\] \[\left\{A,A^{\prime}\subset\bigcup\mathfrak{g}\left|A\cup A^{ \prime}=\bigcup\mathfrak{g},\ \exists!\ \mathcal{I}\in\mathbf{IP}(A),\exists!\ \mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime}),\mathbf{std}( \mathcal{I})=\mathfrak{s},\mathbf{std}(\mathcal{I}^{\prime})=\mathfrak{t}, \mathcal{I}\cdot\mathbf{glue}\ \mathcal{I}^{\prime}=\mathfrak{g}\right\}.\]
_In particular, see Remark 3.29_
\[\#\{A,A^{\prime}\subset\bigcup\mathfrak{g}\ |A\cup A^{\prime}=\bigcup\mathfrak{g},\ \mathbf{std}(\mathfrak{g}(A))\geq\mathfrak{s},\mathbf{std}(\mathfrak{g}(A^{\prime}))\geq\mathfrak{t},\mathfrak{g}(A)_{\mathfrak{s}}\cdot_{\mathbf{glue}}\mathfrak{g}(A^{\prime})_{\mathfrak{t}}=\mathfrak{g}\}=\langle\mathfrak{s}\otimes\mathfrak{t},\Delta_{\triangleq}(\mathfrak{g})\rangle.\]
Proof.: Denote the set on the left-hand side with \(\mathrm{Set}_{0}\) and the one on the right-hand side with \(\mathrm{Set}_{1}\). Let \((A,A^{\prime})\in\mathrm{Set}_{0}\), then \(A\cup A^{\prime}=\bigcup\mathfrak{g}\), \(\mathfrak{g}(A)_{\mathfrak{s}}\in\mathbf{IP}(A)\) and \(\mathfrak{g}(A^{\prime})_{\mathfrak{t}}\in\mathbf{IP}(A^{\prime})\), \(\mathbf{std}(\mathfrak{g}(A)_{\mathfrak{s}})=\mathfrak{s}\), \(\mathbf{std}(\mathfrak{g}(A^{\prime})_{\mathfrak{t}})=\mathfrak{t}\), \(\mathfrak{g}(A)_{\mathfrak{s}}\cdot_{\mathbf{glue}}\mathfrak{g}(A^{\prime})_{\mathfrak{t}}=\mathfrak{g}\). Let \((A,A^{\prime})\in\mathrm{Set}_{1}\), then \(A\cup A^{\prime}=\bigcup\mathfrak{g}\). Since \(\mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathfrak{g}\), with \(\mathcal{I}\in\mathbf{IP}(A)\) and \(\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\), we have \(\mathcal{I}\leq\mathfrak{g}(A)\), \(\mathcal{I}^{\prime}\leq\mathfrak{g}(A^{\prime})\), and \(\mathfrak{s}=\mathbf{std}(\mathcal{I})\leq\mathbf{std}(\mathfrak{g}(A))\), \(\mathfrak{t}=\mathbf{std}(\mathcal{I}^{\prime})\leq\mathbf{std}(\mathfrak{g}(A^{\prime}))\) as well. Therefore, using Lemma 3.39, we have \(\mathcal{I}=\mathfrak{g}(A)_{\mathfrak{s}}\) and \(\mathcal{I}^{\prime}=\mathfrak{g}(A^{\prime})_{\mathfrak{t}}\).
Proof of Theorem 3.37.: \[\begin{split}&\left\langle\mathsf{IPC}(\mathfrak{L}),\mathfrak{s}\right\rangle\left\langle\mathsf{IPC}(\mathfrak{L}),\mathfrak{t}\right\rangle\\&=|\{A,B\subset\bigcup\mathfrak{L}\ |\ \mathbf{std}(\mathfrak{L}(A))\geq\mathfrak{s},\mathbf{std}(\mathfrak{L}(B))\geq\mathfrak{t}\}|\\&=\Big|\biguplus_{\begin{subarray}{c}(\mathfrak{g},C)\in\mathbf{StIP}\times 2^{\bigcup\mathfrak{L}}\\ \mathbf{std}(\mathfrak{L}(C))\geq\mathfrak{g}\end{subarray}}\{A,B\subset\bigcup\mathfrak{L}\ |\ A\cup B=C,\ \mathbf{std}(\mathfrak{L}(A))\geq\mathfrak{s},\ \mathbf{std}(\mathfrak{L}(B))\geq\mathfrak{t},\ \mathbf{std}(\mathfrak{L}(A)_{\mathfrak{s}}\cdot_{\mathbf{glue}}\mathfrak{L}(B)_{\mathfrak{t}})=\mathfrak{g}\}\Big|\\&=\sum_{\begin{subarray}{c}(\mathfrak{g},C)\in\mathbf{StIP}\times 2^{\bigcup\mathfrak{L}}\\ \mathbf{std}(\mathfrak{L}(C))\geq\mathfrak{g}\end{subarray}}\langle\mathfrak{s}\otimes\mathfrak{t},\Delta_{\triangleq}(\mathfrak{g})\rangle=\sum_{\mathfrak{g}\in\mathbf{StIP}}\langle\mathfrak{s}\otimes\mathfrak{t},\Delta_{\triangleq}(\mathfrak{g})\rangle\sum_{\begin{subarray}{c}C\subset\bigcup\mathfrak{L}\\ \mathbf{std}(\mathfrak{L}(C))\geq\mathfrak{g}\end{subarray}}1\\&=\sum_{\mathfrak{g}\in\mathbf{StIP}}\langle\mathfrak{s}\otimes\mathfrak{t},\Delta_{\triangleq}(\mathfrak{g})\rangle\Big\langle\mathsf{IPC}(\mathfrak{L}),\mathfrak{g}\Big\rangle=\left\langle\mathsf{IPC}(\mathfrak{L}),\sum_{\mathfrak{g}\in\mathbf{StIP}}\langle\mathfrak{s}\otimes\mathfrak{t},\Delta_{\triangleq}(\mathfrak{g})\rangle\mathfrak{g}\right\rangle=\left\langle\mathsf{IPC}(\mathfrak{L}),\mathfrak{s}\triangleq\mathfrak{t}\right\rangle.\end{split}\]
We used Lemma 3.41 for the second equality and Lemmas 3.42 to 3.44 for the third one.
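The counting on the left-hand side of Theorem 3.37 can be checked mechanically for small instances. The following Python sketch is ours, not part of the original text; it assumes the conventions used above: \(\mathfrak{L}(A)\) splits \(A\) into maximal runs of consecutive integers inside each block of \(\mathfrak{L}\), \(\mathbf{std}\) relabels the ground set order-preservingly, and \(P\geq\mathfrak{s}\) means that every block of \(\mathfrak{s}\) lies inside a block of \(P\). All function names are hypothetical.

```python
from itertools import chain, combinations

def induced(L, A):
    """L(A): maximal runs of consecutive integers of A inside each block of L."""
    blocks = []
    for b in L:
        run = []
        for x in sorted(b):
            if x in A and run and x == run[-1] + 1:
                run.append(x)
            elif x in A:
                if run:
                    blocks.append(tuple(run))
                run = [x]
            else:
                if run:
                    blocks.append(tuple(run))
                run = []
        if run:
            blocks.append(tuple(run))
    return frozenset(blocks)

def std(I):
    """Relabel the ground set of I to {1, ..., n}, preserving order."""
    ground = sorted(x for b in I for x in b)
    r = {v: i + 1 for i, v in enumerate(ground)}
    return frozenset(tuple(r[x] for x in b) for b in I)

def coarser_eq(P, s):
    """P >= s: same ground set, and every block of s lies inside a block of P."""
    if {x for b in P for x in b} != {x for b in s for x in b}:
        return False
    return all(any(set(bs) <= set(bp) for bp in P) for bs in s)

def ipc(L, s):
    """<IPC(L), s> = #{A subset of the ground set of L : std(L(A)) >= s}."""
    ground = sorted(x for b in L for x in b)
    subsets = chain.from_iterable(
        combinations(ground, k) for k in range(len(ground) + 1))
    return sum(1 for A in subsets if coarser_eq(std(induced(L, set(A))), s))

L = frozenset({(1, 2, 3, 4)})        # the interval partition {[4]}
s = frozenset({(1,)})                # the pattern {[1]}
t = frozenset({(1,), (2,)})          # the pattern {[1]} * {[1]}
print(ipc(L, s), ipc(L, t))          # 4 and 6; note 4*4 == 4 + 2*6
```

The printed counts are exactly the ones used in the worked example below.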
**Example 3.45**.: _Let \(\mathfrak{L}=\{\{1,2,3,4\}\}\) and \(\mathfrak{s}=\mathfrak{t}=\{\{1\}\}\). Then \(\langle\mathsf{IPC}(\mathfrak{L}),\{\{1\}\}\rangle=4\) (the four singletons), \(\langle\mathsf{IPC}(\mathfrak{L}),\{\{1\},\{2\}\}\rangle=6\) (all two-element subsets), and \(\{\{1\}\}\uplus\{\{1\}\}=\{\{1\}\}+2\,\{\{1\},\{2\}\}\), so that_

\[\left\langle\mathsf{IPC}(\mathfrak{L}),\{\{1\}\}\right\rangle\left\langle\mathsf{IPC}(\mathfrak{L}),\{\{1\}\}\right\rangle=16=4+2\cdot 6=\left\langle\mathsf{IPC}(\mathfrak{L}),\{\{1\}\}\uplus\{\{1\}\}\right\rangle,\]

_which verifies Equation (6)._
We also have a form of Chen's identity.
**Theorem 3.46** (Chen's identity).: _The maps \((\mathsf{IPC}(\mathfrak{L}))_{\mathfrak{L}}\) satisfy a form of Chen's identity, i.e., for all \(\mathfrak{s},\mathfrak{L},\mathfrak{M}\in\mathbf{StIP}\):_
\[\left\langle\mathsf{IPC}(\mathfrak{L}\bullet\mathfrak{M}),\mathfrak{s} \right\rangle=\left\langle\mathsf{IPC}(\mathfrak{L})\otimes\mathsf{IPC}( \mathfrak{M}),\Delta_{\bullet}(\mathfrak{s})\right\rangle \tag{7}\]
**Remark 3.47**.: _As a consequence of Chen's identity, any pairing \(\left\langle\mathsf{IPC}(\mathfrak{L}),\mathfrak{s}\right\rangle\) can be computed explicitly as a sum of products of numbers of the form of Equation (5). Let \(\mathfrak{L}=\mathbf{N}_{1}\bullet\cdots\bullet\mathbf{N}_{m}\) and \(\mathfrak{s}=\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{k}\), where \(m,k\in\mathbb{N}\) and \(\forall i\in[m],\forall j\in[k]:N_{i}\in\mathbb{N}_{\geq 1},\ n_{j}\in\mathbb{N}_{\geq 1},\ \mathbf{N}_{i}:=\{[N_{i}]\},\ \mathbf{n}_{j}:=\{[n_{j}]\}\). Then_

\[\begin{split}\left\langle\mathsf{IPC}(\mathfrak{L}),\mathfrak{s}\right\rangle&=\left\langle\mathsf{IPC}(\mathbf{N}_{1})\otimes\cdots\otimes\mathsf{IPC}(\mathbf{N}_{m}),\Delta_{\bullet}^{m}(\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{k})\right\rangle\\ &=\sum_{0\leq i_{1}\leq\cdots\leq i_{m-1}\leq k}\left\langle\mathsf{IPC}(\mathbf{N}_{1}),\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{i_{1}}\right\rangle\left\langle\mathsf{IPC}(\mathbf{N}_{2}),\mathbf{n}_{i_{1}+1}\bullet\cdots\bullet\mathbf{n}_{i_{2}}\right\rangle\cdots\left\langle\mathsf{IPC}(\mathbf{N}_{m}),\mathbf{n}_{i_{m-1}+1}\bullet\cdots\bullet\mathbf{n}_{k}\right\rangle\\ &=\sum_{0\leq i_{1}\leq\cdots\leq i_{m-1}\leq k}\binom{N_{1}-\sum_{j=1}^{i_{1}}n_{j}+i_{1}}{i_{1}}\binom{N_{2}-\sum_{j=i_{1}+1}^{i_{2}}n_{j}+i_{2}-i_{1}}{i_{2}-i_{1}}\cdots\binom{N_{m}-\sum_{j=i_{m-1}+1}^{k}n_{j}+k-i_{m-1}}{k-i_{m-1}}.\end{split}\]
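The closed-form expression above translates directly into code. The following sketch is ours (the function name is hypothetical); it evaluates the pairing from the block sizes alone.

```python
from itertools import combinations_with_replacement
from math import comb

def ipc_pairing(Ns, ns):
    """<IPC(N_1 * ... * N_m), n_1 * ... * n_k> via the nested binomial sum;
    Ns and ns are the lists of block sizes N_i and n_j."""
    m, k = len(Ns), len(ns)
    total = 0
    for cut in combinations_with_replacement(range(k + 1), m - 1):
        idx = (0,) + cut + (k,)            # 0 = i_0 <= i_1 <= ... <= i_m = k
        prod = 1
        for r in range(1, m + 1):
            c = idx[r] - idx[r - 1]        # number of n-blocks placed in N_r
            top = Ns[r - 1] - sum(ns[idx[r - 1]:idx[r]]) + c
            if top < c:                    # no admissible placement
                prod = 0
                break
            prod *= comb(top, c)
        total += prod
    return total

# e.g. <IPC({[2]} * {[3]}), {[1]} * {[2]}> = 5, which agrees with a
# brute-force count over subsets as in the sketch after Theorem 3.37
print(ipc_pairing([2, 3], [1, 2]))
```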
To prove Theorem 3.46, we need three lemmas.
**Lemma 3.48**.: _Let \(\mathfrak{t}\in\mathbf{StIP}\)_
\[\mathfrak{t} :=\{[n_{1}]\}\bullet\{[n_{2}]\}\bullet\cdots\bullet\{[n_{k}]\}\] \[=\left\{\{1,\ldots,n_{1}\},\{n_{1}+1,\ldots,n_{1}+n_{2}\},\ldots,\{\sum\limits_{i=1}^{k-1}n_{i}+1,\ldots,\sum\limits_{i=1}^{k}n_{i}\}\right\}.\]
_where \(n_{1},\ldots,n_{k}\in\mathbb{N}_{\geq 1}\) and \(k\in\mathbb{N}\). Let \(\mathfrak{s}\in\mathbf{StIP}\) such that \(\mathfrak{t}\leq\mathfrak{s}\). Then \(\mathfrak{s}\) can be written as_
\[\mathfrak{s}:=\{[\sum\limits_{i=1}^{j_{1}}n_{i}]\}\bullet\{[\sum \limits_{i=j_{1}+1}^{j_{2}}n_{i}]\}\bullet\cdots\bullet\{[\sum\limits_{i=j_{N -1}+1}^{j_{N}}n_{i}]\}\]
\[=\{\{1,\ldots,\sum_{i=1}^{j_{1}}n_{i}\},\{\sum_{i=1}^{j_{1}}n_{i}+1, \ldots,\sum_{i=1}^{j_{2}}n_{i}\},\ldots,\{\sum_{i=1}^{j_{N-1}}n_{i}+1,\ldots, \sum_{i=1}^{j_{N}}n_{i}\}\}.\]
_where_
\[j_{1},\ldots,j_{N}\in\mathbb{N},1\leq j_{1}<\cdots<j_{N}=k,1\leq N\leq k.\]
_In particular, for \(t\in\{0,1,\ldots,N\}\), setting \(j_{0}:=0\), we have_
\[\{[n_{1}]\}\bullet\cdots\bullet\{[n_{j_{t}}]\}\leq\{[\sum_{i=1}^{j_{1}}n_{i}] \}\bullet\cdots\bullet\{[\sum_{i=j_{t-1}+1}^{j_{t}}n_{i}]\},\]
_and_
\[\{[n_{j_{t}+1}]\}\bullet\cdots\bullet\{[n_{j_{N}}]\}\leq\{[\sum_{i=j_{t}+1}^{ j_{t+1}}n_{i}]\}\bullet\cdots\bullet\{[\sum_{i=j_{N-1}+1}^{j_{N}}n_{i}]\},\]
_where \(\{[n_{1}]\}\bullet\cdots\bullet\{[n_{0}]\}:=\mathsf{e}\) and \(\{[n_{j_{N}+1}]\}\bullet\cdots\bullet\{[n_{j_{N}}]\}:=\mathsf{e}\), i.e., empty products denote the unit._
Proof.: The fact that \(\mathfrak{s}\) is of the given form is simply an explicit description of the interval partitions which are coarser than \(\mathfrak{t}\).
**Lemma 3.49**.: _Let \(\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{k}\in\mathbf{StIP}\), where \(k\in\mathbb{N}\) and \(\forall r\in[k]:n_{r}\in\mathbb{N}_{\geq 1},\ \mathbf{n}_{r}:=\{[n_{r}]\}\). Let \(\mathfrak{L},\ \mathfrak{M}\in\mathbf{StIP}\). Then we have_

\[\begin{split}&\Big{\{}A\subset\bigcup(\mathfrak{L}\bullet\mathfrak{M})\ \Big{|}\ \mathbf{std}((\mathfrak{L}\bullet\mathfrak{M})(A))\geq\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{k}\Big{\}}\\ &=\biguplus_{0\leq r\leq k}\Big{\{}A\subset\bigcup(\mathfrak{L}\bullet\mathfrak{M})\ \Big{|}\ \mathbf{std}((\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{L}))\geq\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{r},\\ &\qquad\qquad\mathbf{std}((\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{M}^{\prime}))\geq\mathbf{n}_{r+1}\bullet\cdots\bullet\mathbf{n}_{k}\Big{\}}.\end{split}\]
First notice that
\[\mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A)\right)\geq n_{1}\bullet n _{2}\bullet\cdots\bullet n_{k}\]
is equivalent to
\[(\mathfrak{L}\bullet\mathfrak{M})(A)\geq(\mathfrak{L}\bullet \mathfrak{M})(A)_{n_{1}\bullet n_{2}\bullet\cdots\bullet n_{k}}\]
because of Lemma 3.39. From Lemma 3.48 we know that there exists a unique \(t\in\{0,1,\ldots,N\}\) and therefore a unique \(j_{t}\), where
\[j_{0}:=0\] \[j_{1},\ldots,j_{N}\in\mathbb{N},1\leq j_{1}<\cdots<j_{N}=k,1\leq N \leq k.\]
such that, using also the definition of \(\bullet\) and the definition of \((\mathfrak{L}\bullet\mathfrak{M})(A)\), we can write
\[(\mathfrak{L}\bullet\mathfrak{M})(A)_{n_{1}\bullet n_{2}\bullet \cdots\bullet n_{k}}= \{\{\ell_{1},...,\ell_{n_{1}}\},\{\ell_{n_{1}+1},...,\ell_{n_{ 1}+n_{2}}\},...,\{\ell_{\sum_{i=1}^{j_{t-1}}n_{i}+1},...,\ell_{\sum_{i=1}^{j_ {t}}n_{i}}\},\] \[\{m^{\prime}_{\sum_{i=1}^{j_{t}}n_{i}+1},...,m^{\prime}_{\sum_{i= 1}^{j_{t}+1}n_{i}}\},\{m^{\prime}_{\sum_{i=1}^{j_{t}+1}n_{i}+1},...,m^{\prime }_{\sum_{i=1}^{j_{t}+2}n_{i}}\},\] \[...,\{m^{\prime}_{\sum_{i=1}^{k-1}n_{i}+1},...,m^{\prime}_{\sum_ {i=1}^{k}n_{i}}\}\}.\]
where
\[\ell_{1}\prec\cdots\prec\ell_{n_{1}}<\ell_{n_{1}+1}\prec\cdots \prec\ell_{n_{1}+n_{2}}<\cdots<\ell_{\sum_{i=1}^{j_{t-1}}n_{i}+1}\prec\cdots \prec\ell_{\sum_{i=1}^{j_{t}}n_{i}}\] \[<m^{\prime}_{\sum_{i=1}^{j_{t}}n_{i}+1}\prec\cdots\prec m^{\prime }_{\sum_{i=1}^{j_{t}+1}n_{i}}<m^{\prime}_{\sum_{i=1}^{j_{t}+1}n_{i}+1}\prec \cdots\prec m^{\prime}_{\sum_{i=1}^{j_{t}+2}n_{i}}<\cdots\] \[\cdots<m^{\prime}_{\sum_{i=1}^{k-1}n_{i}+1}\prec\cdots\prec m^{ \prime}_{\sum_{i=1}^{k}n_{i}}.\]
and
\[(\mathfrak{L}\bullet\mathfrak{M})(A)= \{\{\ell_{1},\ldots,\ell_{\sum_{i=1}^{j_{1}}n_{i}}\},\{\ell_{ \sum_{i=1}^{j_{1}}n_{i}+1},\cdots,\ell_{\sum_{i=1}^{j_{2}}n_{i}}\},\ldots,\{ \ell_{\sum_{i=1}^{j_{t-1}}n_{i}+1},\cdots,\ell_{\sum_{i=1}^{j_{t}}n_{i}}\},\] \[\{m^{\prime}_{\sum_{i=1}^{j_{t}}n_{i}+1},\ldots,m^{\prime}_{\sum _{i=1}^{j_{t+1}}n_{i}}\},\{m^{\prime}_{\sum_{i=1}^{j_{t+1}}n_{i}+1},\cdots,m^{ \prime}_{\sum_{i=1}^{j_{t+2}}n_{i}}\},\] \[\ldots,\{m^{\prime}_{\sum_{i=1}^{j_{N-1}}n_{i}+1},\ldots,m^{ \prime}_{\sum_{i=1}^{j_{N}}n_{i}}\}\}\]
where
\[\ell_{1}\prec\cdots\prec\ell_{\sum_{i=1}^{j_{1}}n_{i}}<\ell_{ \sum_{i=1}^{j_{1}}n_{i}+1}\prec\cdots\prec\ell_{\sum_{i=1}^{j_{2}}n_{i}}<\cdots \prec\ell_{\sum_{i=1}^{j_{t-1}}n_{i}+1}\prec\cdots\prec\ell_{\sum_{i=1}^{j_ {t}}n_{i}}\] \[<m^{\prime}_{\sum_{i=1}^{j_{t}}n_{i}+1}\prec\cdots\prec m^{ \prime}_{\sum_{i=1}^{j_{t+1}}n_{i}}<m^{\prime}_{\sum_{i=1}^{j_{t+1}}n_{i}+1} \prec\cdots\prec m^{\prime}_{\sum_{i=1}^{j_{t+2}}n_{i}}<\cdots\] \[\cdots<m^{\prime}_{\sum_{i=1}^{j_{N-1}}n_{i}+1}\prec\cdots\prec m ^{\prime}_{\sum_{i=1}^{j_{N}}n_{i}}\]
where the \(\ell\)'s are elements of \(\bigcup\mathfrak{L}\) and the \(m^{\prime}\)'s are elements of \(\bigcup\mathfrak{M}^{\prime}\). Recall that \(a\prec b\iff b=a+1\).
If we consider
\[(\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{L})= \{\{\ell_{1},\ldots,\ell_{\sum_{i=1}^{j_{1}}n_{i}}\},\{\ell_{ \sum_{i=1}^{j_{1}}n_{i}+1},\ldots,\ell_{\sum_{i=1}^{j_{2}}n_{i}}\},\] \[\ldots,\{\ell_{\sum_{i=1}^{j_{t-1}}n_{i}+1},\ldots,\ell_{\sum_{i=1 }^{j_{t}}n_{i}}\}\}\]
and
\[(\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{M}^{\prime})= \{\{m^{\prime}_{\sum_{i=1}^{j_{t}}n_{i}+1},\ldots,m^{\prime}_{ \sum_{i=1}^{j_{t+1}}n_{i}}\},\{m^{\prime}_{\sum_{i=1}^{j_{t+1}}n_{i}+1}, \ldots,m^{\prime}_{\sum_{i=1}^{j_{t+2}}n_{i}}\},\] \[\ldots,\{m^{\prime}_{\sum_{i=1}^{j_{N-1}}n_{i}+1},\ldots,m^{ \prime}_{\sum_{i=1}^{j_{N}}n_{i}}\}\}.\]
they satisfy
\[\mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup \mathfrak{L})\right) \geq\{[n_{1}]\}\bullet\cdots\bullet\{[n_{j_{t}}]\}\] \[\mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A\cap \bigcup\mathfrak{M}^{\prime})\right) \geq\{[n_{j_{t}+1}]\}\bullet\cdots\bullet\{[n_{k}]\}.\]
**Lemma 3.50**.: _Let \(\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{k}\in\mathbf{StIP}\), where \(k\in\mathbb{N}\) and \(\forall r\in[k]:n_{r}\in\mathbb{N}_{\geq 1},\ \mathbf{n}_{r}:=\{[n_{r}]\}\). Let \(\mathfrak{L},\ \mathfrak{M}\in\mathbf{StIP}\). Then we have_

\[\begin{split}&\Big{|}\biguplus_{0\leq r\leq k}\Big{\{}A\subset\bigcup(\mathfrak{L}\bullet\mathfrak{M})\ \Big{|}\ \mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{L})\right)\geq\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{r},\ \mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{M}^{\prime})\right)\geq\mathbf{n}_{r+1}\bullet\cdots\bullet\mathbf{n}_{k}\Big{\}}\Big{|}\\ &=\Big{|}\biguplus_{0\leq r\leq k}\Big{\{}A\subset\bigcup\mathfrak{L}\ \Big{|}\ \mathbf{std}\left(\mathfrak{L}(A)\right)\geq\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{r}\Big{\}}\times\Big{\{}B\subset\bigcup\mathfrak{M}\ \Big{|}\ \mathbf{std}\left(\mathfrak{M}(B)\right)\geq\mathbf{n}_{r+1}\bullet\cdots\bullet\mathbf{n}_{k}\Big{\}}\Big{|}.\end{split}\]
Proof.: We explicitly give a bijection between the two sets. For some \(0\leq r\leq k\), let

\[(A,B)\in\{A\subset\bigcup\mathfrak{L}\ |\ \mathbf{std}\left(\mathfrak{L}(A)\right)\geq\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{r}\}\times\{B\subset\bigcup\mathfrak{M}\ |\ \mathbf{std}\left(\mathfrak{M}(B)\right)\geq\mathbf{n}_{r+1}\bullet\cdots\bullet\mathbf{n}_{k}\}\]
and define the map
\[(A,B)\mapsto A\cup(B+\big{|}\bigcup\mathfrak{L}|).\]
Then \((B+\big{|}\bigcup\mathfrak{L}|)\subset\bigcup\mathfrak{M}^{\prime}\); if \(B=\emptyset\), set \(B+\big{|}\bigcup\mathfrak{L}|:=\emptyset\). We have
\[\mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})((A\cup(B+\big{|}\bigcup \mathfrak{L}|))\cap\bigcup\mathfrak{L})\right)=\mathbf{std}\left((\mathfrak{L} \bullet\mathfrak{M})(A\cap\bigcup\mathfrak{L})\right)\geq\mathbf{n}_{1} \bullet\cdots\bullet\mathbf{n}_{r}\]
and
\[\mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})((A\cup(B+\left|\bigcup\mathfrak{L}\right|))\cap\bigcup\mathfrak{M}^{\prime})\right)=\mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(B+\left|\bigcup\mathfrak{L}\right|)\right)=\mathbf{std}\left(\mathfrak{M}(B)\right)\geq\mathbf{n}_{r+1}\bullet\cdots\bullet\mathbf{n}_{k}.\]
Now, let
\[A\in\biguplus_{0\leq r\leq k}\{A\subset\bigcup(\mathfrak{L}\bullet\mathfrak{M})\ |\ \mathbf{std}((\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{L}))\geq\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{r},\ \mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{M}^{\prime})\right)\geq\mathbf{n}_{r+1}\bullet\cdots\bullet\mathbf{n}_{k}\}.\]
\[A\mapsto(A\cap(\bigcup\mathfrak{L}),(A\cap(\bigcup\mathfrak{M}^{\prime}))- \left|\bigcup\mathfrak{L}|\right)\]
and if \((A\cap(\bigcup\mathfrak{M}^{\prime}))=\emptyset\), set \((A\cap(\bigcup\mathfrak{M}^{\prime}))-\left|\bigcup\mathfrak{L}\right|=\emptyset\), which is clearly the inverse of the other map.
Proof of Theorem 3.46.: We use Lemmas 3.48 to 3.50.
\[\begin{split}&\left\langle\mathsf{IPC}(\mathfrak{L}\bullet\mathfrak{M}),\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{k}\right\rangle\\ &=\#\{A\subset\bigcup(\mathfrak{L}\bullet\mathfrak{M})\ |\ \mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A)\right)\geq\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{k}\}\\ &=\sum_{0\leq i\leq k}\#\{A\subset\bigcup(\mathfrak{L}\bullet\mathfrak{M})\ |\ \mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{L})\right)\geq\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{i},\ \mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{M}^{\prime})\right)\geq\mathbf{n}_{i+1}\bullet\cdots\bullet\mathbf{n}_{k}\}\\ &=\sum_{0\leq i\leq k}\#\Big{(}\{A\subset\bigcup\mathfrak{L}\ |\ \mathbf{std}\left(\mathfrak{L}(A)\right)\geq\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{i}\}\times\{B\subset\bigcup\mathfrak{M}\ |\ \mathbf{std}\left(\mathfrak{M}(B)\right)\geq\mathbf{n}_{i+1}\bullet\cdots\bullet\mathbf{n}_{k}\}\Big{)}\\ &=\sum_{0\leq i\leq k}\#\{A\subset\bigcup\mathfrak{L}\ |\ \mathbf{std}(\mathfrak{L}(A))\geq\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{i}\}\ \#\{B\subset\bigcup\mathfrak{M}\ |\ \mathbf{std}(\mathfrak{M}(B))\geq\mathbf{n}_{i+1}\bullet\cdots\bullet\mathbf{n}_{k}\}\\ &=\sum_{0\leq i\leq k}\left\langle\mathsf{IPC}(\mathfrak{L}),\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{i}\right\rangle\Big{\langle}\mathsf{IPC}(\mathfrak{M}),\mathbf{n}_{i+1}\bullet\cdots\bullet\mathbf{n}_{k}\Big{\rangle}\\ &=\Big{\langle}\mathsf{IPC}(\mathfrak{L})\otimes\mathsf{IPC}(\mathfrak{M}),\Delta_{\bullet}(\mathfrak{s})\Big{\rangle},\end{split}\]

where \(\mathfrak{s}:=\mathbf{n}_{1}\bullet\cdots\bullet\mathbf{n}_{k}\).
## 4 Vincular permutation patterns
In this section, we define a Hopf algebra on pairs which consist of an interval partition of \([n]\) and a permutation of length \(n\). A pair corresponds to a certain vincular pattern, since the intervals express which letters of the permutation pattern are required to occur in adjacent positions. It incorporates both the Hopf algebra of Section 3 and the Hopf algebra appearing in Vargas 2014. The underlying set is now given by
\[\bigcup_{n\in\mathbb{N}}\mathbf{StIP}_{n}\times\mathbf{S}_{n},\]
general elements of which we denote with \((\mathfrak{s},\sigma)\). We consider the free \(\mathbb{Q}\)-vector space over it.
**Definition 4.1** (\(\mathbb{Q}\)-vector space over vincular permutations).: \[\mathcal{H}_{\mathsf{vine}}:=\bigoplus_{n\in\mathbb{N}}\mathbb{Q}[\mathbf{ StIP}_{n}\times\mathbf{S}_{n}].\] (8)
graded by the size of the partitions (or alternatively, the length of permutations).
We now introduce algebraic operations on \(\mathcal{H}_{\mathsf{vine}}\). This can be seen as a "combination" of our Hopf algebra on interval partitions, \(\mathcal{H}_{\mathsf{int}}\), and Vargas' superinfiltration Hopf algebra, \(\mathcal{H}_{\mathsf{per}}\).
### Algebraic operations
We recall the operations of the superinfiltration Hopf algebra introduced in Vargas 2014.
**Definition 4.2** (Superinfiltration product, Vargas 2014).: Let \(\sigma,\tau\in\mathbf{S}\).
\[\sigma\Uparrow\tau:=\sum_{\gamma\in\mathbf{S}}\ \sum_{\substack{A\cup B=[|\gamma|]\\ \mathbf{st}(\gamma|_{A})=\sigma,\,\mathbf{st}(\gamma|_{B})=\tau}}\gamma.\]
**Definition 4.3** (Vargas 2014).: Let \(\sigma\in\mathbf{S}\).
\[\Delta_{\circ}(\sigma):=\sum_{\alpha\circ\beta=\sigma}\alpha\otimes\beta,\]
where for \(\alpha\in\mathbf{S}_{m}\) and \(\beta\in\mathbf{S}_{n}\) we have \(\alpha\circ\beta:=\alpha_{1}\dots\alpha_{m}(\beta_{1}+m)\dots(\beta_{n}+m)\).
Then \((\mathcal{H}_{\mathsf{per}},\Uparrow,\Delta_{\circ})\) is a connected filtered Hopf algebra, see Vargas 2014, Corollary 4.8.
**Remark 4.4** (Malvenuto-Reutenauer Hopf algebra on permutations).: _In Vargas 2014, the supershuffle coproduct_
\[\Delta_{\,\underline{\shuffle}\,}(\sigma):=\sum_{\begin{subarray}{c}A,B\subset[ \left|\sigma\right|]:\\ A\uplus B=[\left|\sigma\right|]\end{subarray}}\mathbf{st}\left(\sigma_{\, \mid A}\right)\otimes\mathbf{st}\left(\sigma_{\,\mid B}\right).\]
_also appears. Vargas showed that \((\mathcal{H}_{\text{per}},\underline{\shuffle},\Delta_{\circ})\) is a Hopf algebra. The supershuffle, \(\underline{\shuffle}\), incorporates the algebraic operations of the Malvenuto-Reutenauer Hopf algebra on permutations, \((\mathcal{H}_{\text{per}},\ast^{\prime},\Delta_{*})\), Malvenuto and Reutenauer 1995. If \(\sigma\in\mathbf{S}_{m},\tau\in\mathbf{S}_{n}\),_

\[\sigma\ast^{\prime}\tau:=\sigma\,\shuffle\,\tau_{m},\qquad\Delta_{*}(\sigma):=\sum_{i=0}^{m}\mathbf{st}(\sigma_{1}\cdots\sigma_{i})\otimes\mathbf{st}(\sigma_{i+1}\cdots\sigma_{m}),\]
_where \(\,\shuffle\,\) is the usual shuffle product on words and \(\tau_{m}:=(\tau_{1}+m)\cdots(\tau_{n}+m)\). We present a few computations as examples._
\[\Delta_{*}(1243) =\mathsf{e}\otimes 1243+1\otimes 132+12\otimes 21+123\otimes 1+124 3\otimes\mathsf{e},\] \[12\,\underline{\shuffle}\,21 =1243+1324+2\;1342+2\;1423+3\;1432+2134+2\;2314\] \[+3\;2341+2413+2\;2431+2\;3124\] \[+3142+3\;3214+2\;3241+3421\] \[+3\;4123+2\;4132+2\;4213+4231+4312,\] \[12\ast 21 =1243+1342+1432+2341+2431+3421,\] \[12\ast^{\prime}21 =1243+1423+4123+1432+4312+4132.\]
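These expansions can be checked mechanically. The following Python sketch is ours; it computes \(\sigma\,\underline{\shuffle}\,\tau\) under the assumption that \(\underline{\shuffle}\) is dual to \(\Delta_{\underline{\shuffle}}\), i.e., the coefficient of \(\gamma\) counts the pairs \((A,B)\) with \(A\uplus B=[|\gamma|]\), \(\mathbf{st}(\gamma|_{A})=\sigma\) and \(\mathbf{st}(\gamma|_{B})=\tau\). Allowing \(A\) and \(B\) to overlap instead of being disjoint would yield the superinfiltration \(\Uparrow\).

```python
from itertools import permutations, combinations
from collections import Counter

def st(word):
    """Standardization: relabel the letters by their relative order."""
    r = {v: i + 1 for i, v in enumerate(sorted(word))}
    return tuple(r[v] for v in word)

def supershuffle(sigma, tau):
    """Coefficients of sigma supershuffle tau, by brute force."""
    n = len(sigma) + len(tau)
    out = Counter()
    for gamma in permutations(range(1, n + 1)):
        for A in combinations(range(n), len(sigma)):
            B = tuple(i for i in range(n) if i not in A)
            if st([gamma[i] for i in A]) == tuple(sigma) and \
               st([gamma[i] for i in B]) == tuple(tau):
                out[gamma] += 1
    return out

coeffs = supershuffle((1, 2), (2, 1))
print(coeffs[(1, 3, 4, 2)])      # 2, matching the term  2 * 1342  above
print(sum(coeffs.values()))      # 36, consistent with the listed coefficients
```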
#### 4.1.1 Products, coproducts
We now endow \(\mathcal{H}_{\text{vine}}\) with a Hopf algebra structure.
The product combines the products \(\bullet\) (Definition 3.19) and \(\circ\) (Definition 4.3).
**Definition 4.5**.: We define on basis elements, \((\mathfrak{s},\sigma),(\mathfrak{t},\tau)\in\bigcup_{n\in\mathbb{N}}\mathbf{ StIP}_{n}\times\mathbf{S}_{n}\)
\[(\mathfrak{s},\sigma)\blacksquare(\mathfrak{t},\tau):=(\mathfrak{s}\bullet\mathfrak{t},\sigma\circ\tau).\]
and then extend linearly.
**Proposition 4.6** (Associativity).: _The product \(\blacksquare\) is associative._
Proof.: Since \(\bullet\) and \(\circ\) are associative products, the result follows.
**Example 4.7** (Product).: \[(\{\{1\}\},1)\blacksquare(\{\{1,2\}\},21)=(\{\{1\},\{2,3\}\},132).\]
**Remark 4.8**.: _Its dual coproduct is_
\[\Delta_{\bullet}((\mathfrak{s},\sigma)):=\sum_{\substack{\mathfrak{a}\bullet\mathfrak{b}=\mathfrak{s}\\ \alpha\circ\beta=\sigma\\ \alpha\in\mathbf{S}_{|\bigcup\mathfrak{a}|},\ \beta\in\mathbf{S}_{|\bigcup\mathfrak{b}|}}}(\mathfrak{a},\alpha)\otimes(\mathfrak{b},\beta).\]
_For example_
\[\Delta_{\bullet}\left(\framebox{1 2}\right)=\mathsf{e}\otimes\framebox{1 2}+\framebox{1 2}\otimes\mathsf{e},\]
\[\Delta_{\bullet}\left(\framebox{2}\,\framebox{1}\right)=\mathsf{e}\otimes\framebox{2}\,\framebox{1}+\framebox{2}\,\framebox{1}\otimes\mathsf{e},\]
\[\Delta_{\bullet}\left(\framebox{1}\,\framebox{2}\right)=\mathsf{e}\otimes\framebox{1}\,\framebox{2}+\framebox{1}\otimes\framebox{1}+\framebox{1}\,\framebox{2}\otimes\mathsf{e}.\]
The following definition incorporates Definition 3.22 and Definition 4.2.
**Definition 4.9**.: On basis elements, \(\bigcup_{n\in\mathbb{N}}\mathbf{StIP}_{n}\times\mathbf{S}_{n}\), we define
\[\Delta_{\psi}\left((\mathfrak{s},\sigma)\right):=\sum_{A\cup A^{\prime}=\bigcup\mathfrak{s}}\ \sum_{\substack{\mathcal{I}\in\mathbf{IP}(A),\,\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathfrak{s}}}(\mathbf{std}\left(\mathcal{I}\right),\mathbf{st}(\sigma|_{A}))\otimes(\mathbf{std}\left(\mathcal{I}^{\prime}\right),\mathbf{st}(\sigma|_{A^{\prime}})).\]
and extend linearly. Its dual product is defined as
\[(\mathfrak{s},\sigma)^{\psi}(\mathfrak{t},\tau):=\sum_{(\mathfrak{g},\gamma)\in\bigcup_{n\in\mathbb{N}}\mathbf{StIP}_{n}\times\mathbf{S}_{n}}\ \sum_{A\cup A^{\prime}=\bigcup\mathfrak{g}}\ \sum_{\substack{\mathcal{I}\in\mathbf{IP}(A),\,\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathbf{std}(\mathcal{I})=\mathfrak{s},\,\mathbf{std}(\mathcal{I}^{\prime})=\mathfrak{t}\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathfrak{g}\\ \mathbf{st}(\gamma|_{A})=\sigma,\,\mathbf{st}(\gamma|_{A^{\prime}})=\tau}}(\mathfrak{g},\gamma).\]
For readability, we will also just write:
\[(\{\{1\},\{2\}\},12)=:\framebox{1}\,\framebox{2}\]
and so on.
**Proposition 4.10** (Coassociativity).: _The coproduct \(\Delta_{\psi}\) is coassociative._
Proof.: The proof is analogous to the one of Proposition 3.25 and therefore omitted.
**Example 4.11** (Coproduct).: \[\Delta_{\psi}\left(\framebox{1}\,\framebox{2}\right)=\mathsf{e}\otimes\framebox{1}\,\framebox{2}+\framebox{1}\,\framebox{2}\otimes\mathsf{e}+2\,\framebox{1}\otimes\framebox{1}+2\,\framebox{1}\otimes\framebox{1}\,\framebox{2}+2\,\framebox{1}\,\framebox{2}\otimes\framebox{1}+\framebox{1}\,\framebox{2}\otimes\framebox{1}\,\framebox{2}.\]
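Example 4.11 can be reproduced by direct enumeration. The sketch below is ours (all names hypothetical); it assumes that \(\mathbf{IP}(A)\) consists of the partitions of \(A\) into blocks of consecutive integers and that \(\cdot_{\mathbf{glue}}\) merges intersecting blocks transitively, as in Appendix A. The input \((\{\{1\},\{2\}\},12)\) is \(\framebox{1}\,\framebox{2}\) in the boxed shorthand.

```python
from itertools import chain, combinations
from collections import Counter

def interval_partitions(A):
    """Partitions of the finite set A into blocks of consecutive integers."""
    A = sorted(A)
    runs, cur = [], []
    for x in A:
        if cur and x == cur[-1] + 1:
            cur.append(x)
        else:
            if cur:
                runs.append(cur)
            cur = [x]
    if cur:
        runs.append(cur)

    def splits(run):                       # all ways to cut one run into blocks
        if not run:
            yield []
            return
        for i in range(1, len(run) + 1):
            for rest in splits(run[i:]):
                yield [tuple(run[:i])] + rest

    parts = [[]]
    for r in runs:
        parts = [p + q for p in parts for q in splits(r)]
    return [frozenset(p) for p in parts]

def glue(I, J):
    """Merge blocks of I and J that intersect, transitively."""
    blocks = [set(b) for b in I] + [set(b) for b in J]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return frozenset(tuple(sorted(b)) for b in blocks)

def std(I):
    ground = sorted(x for b in I for x in b)
    r = {v: i + 1 for i, v in enumerate(ground)}
    return frozenset(tuple(r[x] for x in b) for b in I)

def st(word):
    r = {v: i + 1 for i, v in enumerate(sorted(word))}
    return tuple(r[v] for v in word)

def delta_psi(s, sigma):
    """Brute-force coproduct of Definition 4.9 applied to (s, sigma)."""
    ground = sorted(x for b in s for x in b)
    subsets = list(chain.from_iterable(
        combinations(ground, k) for k in range(len(ground) + 1)))
    out = Counter()
    for A in subsets:
        for A2 in subsets:
            if set(A) | set(A2) != set(ground):
                continue
            for I in interval_partitions(A):
                for J in interval_partitions(A2):
                    if glue(I, J) != s:
                        continue
                    left = (std(I), st([sigma[a - 1] for a in A]))
                    right = (std(J), st([sigma[a - 1] for a in A2]))
                    out[(left, right)] += 1
    return out

# the six tensor terms of Example 4.11, with multiplicities 1, 1, 2, 2, 2, 1:
for term, c in delta_psi(frozenset({(1,), (2,)}), (1, 2)).items():
    print(c, term)
```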
**Example 4.12** (Product).: _For instance,_

\[(\{\{1\}\},1)^{\psi}(\{\{1\}\},1)=(\{\{1\}\},1)+2\,(\{\{1\},\{2\}\},12)+2\,(\{\{1\},\{2\}\},21),\]

_that is, in the boxed shorthand, \(\framebox{1}\,{}^{\psi}\,\framebox{1}=\framebox{1}+2\,\framebox{1}\,\framebox{2}+2\,\framebox{2}\,\framebox{1}\)._
#### 4.1.2 Bialgebras on vincular permutations
We have an analogous result to Theorem 3.30.
**Theorem 4.13** (Bialgebras on vincular permutations).: _The following holds:_
* \((\mathcal{H}_{\mathsf{vine}},\blacksquare,\Delta_{\psi})\) _is a bialgebra;_
* \((\mathcal{H}_{\mathsf{vine}},\psi,\Delta_{\bullet})\) _is a bialgebra._

Proof.: For the first statement it suffices to check that \(\Delta_{\psi}\) is multiplicative with respect to \(\blacksquare\); the second statement then follows by duality. Let \((\mathfrak{s},\sigma)\in\mathbf{StIP}_{m}\times\mathbf{S}_{m}\) and \((\mathfrak{t},\tau)\in\mathbf{StIP}_{n}\times\mathbf{S}_{n}\), and write \(\mathfrak{t}+m\) for the shift of every entry of \(\mathfrak{t}\) by \(m\). The expression \(\Delta_{\psi}((\mathfrak{s},\sigma)\blacksquare(\mathfrak{t},\tau))\)
is equal to
\[\sum_{\substack{A_{1}\cup A_{1}^{\prime}=\{1,\ldots,m\}\\ A_{2}\cup A_{2}^{\prime}=\{m+1,\ldots,m+n\}}}\ \sum_{\substack{\mathcal{I}_{1}\in\mathbf{IP}(A_{1}),\,\mathcal{I}_{1}^{\prime}\in\mathbf{IP}(A_{1}^{\prime})\\ \mathcal{I}_{2}\in\mathbf{IP}(A_{2}),\,\mathcal{I}_{2}^{\prime}\in\mathbf{IP}(A_{2}^{\prime})\\ \mathcal{I}_{1}\cdot_{\mathbf{glue}}\mathcal{I}_{1}^{\prime}=\mathfrak{s},\ \mathcal{I}_{2}\cdot_{\mathbf{glue}}\mathcal{I}_{2}^{\prime}=\mathfrak{t}+m}}\big{(}\mathbf{std}(\mathcal{I}_{1}),\mathbf{st}(\sigma|_{A_{1}})\big{)}\blacksquare\big{(}\mathbf{std}(\mathcal{I}_{2}),\mathbf{st}(\tau_{m}|_{A_{2}})\big{)}\otimes\big{(}\mathbf{std}(\mathcal{I}_{1}^{\prime}),\mathbf{st}(\sigma|_{A_{1}^{\prime}})\big{)}\blacksquare\big{(}\mathbf{std}(\mathcal{I}_{2}^{\prime}),\mathbf{st}(\tau_{m}|_{A_{2}^{\prime}})\big{)},\]
which, using arguments similar to the proof of Theorem 3.30, can be shown to be equal to
\[\begin{split}&\Delta_{\psi}((\mathfrak{s},\sigma))\blacksquare\Delta_{\psi}((\mathfrak{t},\tau))\\ &=\Bigg{(}\sum_{A\cup A^{\prime}=\bigcup\mathfrak{s}}\ \sum_{\substack{\mathcal{I}\in\mathbf{IP}(A),\,\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathfrak{s}}}(\mathbf{std}(\mathcal{I}),\mathbf{st}(\sigma|_{A}))\otimes(\mathbf{std}(\mathcal{I}^{\prime}),\mathbf{st}(\sigma|_{A^{\prime}}))\Bigg{)}\\ &\qquad\blacksquare\Bigg{(}\sum_{B\cup B^{\prime}=\bigcup\mathfrak{t}}\ \sum_{\substack{\mathcal{J}\in\mathbf{IP}(B),\,\mathcal{J}^{\prime}\in\mathbf{IP}(B^{\prime})\\ \mathcal{J}\cdot_{\mathbf{glue}}\mathcal{J}^{\prime}=\mathfrak{t}}}(\mathbf{std}(\mathcal{J}),\mathbf{st}(\tau|_{B}))\otimes(\mathbf{std}(\mathcal{J}^{\prime}),\mathbf{st}(\tau|_{B^{\prime}}))\Bigg{)},\end{split}\]

where \(\blacksquare\) acts factorwise on tensors; the two sums agree after the identifications \(A_{1}=A\), \(A_{1}^{\prime}=A^{\prime}\), \(A_{2}=B+m\), \(A_{2}^{\prime}=B^{\prime}+m\). This proves the first statement.
The two Hopf algebras on interval partitions and on permutations embed into \(\mathcal{H}_{\mathsf{vine}}\) as follows.

**Proposition 4.14**.: _The linear map \(\psi:\mathcal{H}_{\mathsf{int}}\to\mathcal{H}_{\mathsf{vine}}\) given on basis elements by \(\psi(\mathfrak{s}):=\sum_{\gamma\in\mathbf{S}_{|\bigcup\mathfrak{s}|}}(\mathfrak{s},\gamma)\) is multiplicative, i.e., \(\psi(\mathfrak{s}\uplus\mathfrak{t})=\psi(\mathfrak{s})^{\psi}\psi(\mathfrak{t})\) for all \(\mathfrak{s},\mathfrak{t}\in\mathbf{StIP}\)._

Proof of Proposition 4.14.: Let \(\mathfrak{s}\in\mathbf{StIP}_{k}\) and \(\mathfrak{t}\in\mathbf{StIP}_{k^{\prime}}\). We have

\[\psi(\mathfrak{s}\uplus\mathfrak{t})=\sum_{n\in\mathbb{N}}\sum_{A\cup A^{\prime}=[n]}\ \sum_{\substack{\mathcal{I}\in\mathbf{IP}(A),\,\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathbf{std}(\mathcal{I})=\mathfrak{s},\,\mathbf{std}(\mathcal{I}^{\prime})=\mathfrak{t}}}\ \sum_{\gamma\in\mathbf{S}_{n}}(\mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime},\gamma)\]
and
\[\begin{split}\psi(\mathfrak{s})^{\psi}\psi(\mathfrak{t})&=\sum_{\substack{\sigma\in\mathbf{S}_{k}\\ \tau\in\mathbf{S}_{k^{\prime}}}}\sum_{n\in\mathbb{N}}\sum_{\substack{A\cup A^{\prime}=[n]\\ |A|=k,\,|A^{\prime}|=k^{\prime}}}\sum_{\substack{\mathcal{I}\in\mathbf{IP}(A),\,\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathbf{std}(\mathcal{I})=\mathfrak{s},\,\mathbf{std}(\mathcal{I}^{\prime})=\mathfrak{t}}}\sum_{\substack{\gamma\in\mathbf{S}_{n}\\ \mathbf{st}(\gamma|_{A})=\sigma,\,\mathbf{st}(\gamma|_{A^{\prime}})=\tau}}(\mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime},\gamma)\\ &=\sum_{n\in\mathbb{N}}\sum_{\substack{A\cup A^{\prime}=[n]\\ |A|=k,\,|A^{\prime}|=k^{\prime}}}\sum_{\substack{\mathcal{I}\in\mathbf{IP}(A),\,\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathbf{std}(\mathcal{I})=\mathfrak{s},\,\mathbf{std}(\mathcal{I}^{\prime})=\mathfrak{t}}}\sum_{\gamma\in\mathbf{S}_{n}}(\mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime},\gamma)\\ &=\psi(\mathfrak{s}\uplus\mathfrak{t}),\end{split}\]

since every \(\gamma\in\mathbf{S}_{n}\) satisfies \(\mathbf{st}(\gamma|_{A})=\sigma\) and \(\mathbf{st}(\gamma|_{A^{\prime}})=\tau\) for exactly one pair \((\sigma,\tau)\), and since the constraints \(|A|=k\), \(|A^{\prime}|=k^{\prime}\) are implied by \(\mathbf{std}(\mathcal{I})=\mathfrak{s}\), \(\mathbf{std}(\mathcal{I}^{\prime})=\mathfrak{t}\).
**Proposition 4.16**.: _The linear map \(\phi:\mathcal{H}_{\mathsf{per}}\to\mathcal{H}_{\mathsf{vine}}\) given on basis elements by \(\phi(\sigma):=(\{\{i\}\,|\,1\leq i\leq|\sigma|\},\sigma)\) is a morphism of bialgebras from \((\mathcal{H}_{\mathsf{per}},\Uparrow,\Delta_{\circ})\) to \((\mathcal{H}_{\mathsf{vine}},\psi,\Delta_{\bullet})\)._

For the proof, we need two lemmas.
**Lemma 4.17**.: _Let \(A,A^{\prime}\subset\mathbb{N}_{\geq 1}\) with \(|A|,|A^{\prime}|<\infty\), and let \(\mathcal{I}\in\mathbf{IP}(A),\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\) be the singletons' partitions of \(A\) and \(A^{\prime}\) respectively. Then_
\[\mathcal{I}\cdot_{\mbox{\bf glue}}\mathcal{I}^{\prime}\in\mathbf{IP}(A\cup A^ {\prime})\]
_is the singletons' partition of \(A\cup A^{\prime}\)._
Proof.: It follows immediately from the definition of \(\cdot_{\mbox{\bf glue}}\).
**Lemma 4.18**.: _Let \(n\in\mathbb{N}\). Then_
\[\Big{|}\biguplus_{A\cup A^{\prime}=[n]}\{\mathcal{I}\in\mathbf{IP}(A),\,\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\ |\ \mathbf{std}(\mathcal{I})=\{\{i\}|1\leq i\leq|A|\},\ \mathbf{std}(\mathcal{I}^{\prime})=\{\{i\}|1\leq i\leq|A^{\prime}|\},\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\{\{1\},\ldots,\{n\}\}\}\Big{|}=\Big{|}\{A,A^{\prime}\subset[n]\ |\ A\cup A^{\prime}=[n]\}\Big{|}.\]
Proof.: It follows immediately, since for \(A\cup A^{\prime}=[n]\)

\[|\{\mathcal{I}\in\mathbf{IP}(A),\,\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\ |\ \mathbf{std}(\mathcal{I})=\{\{i\}|1\leq i\leq|A|\},\ \mathbf{std}(\mathcal{I}^{\prime})=\{\{i\}|1\leq i\leq|A^{\prime}|\},\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\{\{1\},\ldots,\{n\}\}\}|=1.\]
Proof of Proposition 4.16.: Let \(\sigma,\tau\in\mathbf{S}\).
\[\begin{split}\phi(\sigma)^{\psi}\phi(\tau)&=\sum_{(\mathfrak{g},\gamma)}\ \sum_{A\cup A^{\prime}=\bigcup\mathfrak{g}}\ \sum_{\substack{\mathcal{I}\in\mathbf{IP}(A),\,\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathbf{std}(\mathcal{I})=\{\{i\}|1\leq i\leq|\sigma|\},\ \mathbf{std}(\mathcal{I}^{\prime})=\{\{i\}|1\leq i\leq|\tau|\}\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\mathfrak{g},\ \mathbf{st}(\gamma|_{A})=\sigma,\ \mathbf{st}(\gamma|_{A^{\prime}})=\tau}}(\mathfrak{g},\gamma)\\ &\overset{\text{Lemma 4.17}}{=}\sum_{n\in\mathbb{N}}\sum_{\gamma\in\mathbf{S}_{n}}\sum_{A\cup A^{\prime}=[n]}\ \sum_{\substack{\mathcal{I}\in\mathbf{IP}(A),\,\mathcal{I}^{\prime}\in\mathbf{IP}(A^{\prime})\\ \mathbf{std}(\mathcal{I})=\{\{i\}|1\leq i\leq|\sigma|\},\ \mathbf{std}(\mathcal{I}^{\prime})=\{\{i\}|1\leq i\leq|\tau|\}\\ \mathcal{I}\cdot_{\mathbf{glue}}\mathcal{I}^{\prime}=\{\{i\}|1\leq i\leq n\},\ \mathbf{st}(\gamma|_{A})=\sigma,\ \mathbf{st}(\gamma|_{A^{\prime}})=\tau}}(\{\{i\}|1\leq i\leq n\},\gamma)\\ &\overset{\text{Lemma 4.18}}{=}\sum_{n\in\mathbb{N}}\sum_{\gamma\in\mathbf{S}_{n}}\sum_{\substack{A\cup A^{\prime}=[n]\\ \mathbf{st}(\gamma|_{A})=\sigma,\ \mathbf{st}(\gamma|_{A^{\prime}})=\tau}}(\{\{i\}|1\leq i\leq n\},\gamma)\\ &=\phi(\sigma\Uparrow\tau).\end{split}\]
For the coalgebra part,

\[\Delta_{\bullet}((\{\{i\}|1\leq i\leq|\sigma|\},\sigma))=\sum_{\substack{|\alpha|+|\beta|=|\sigma|\\ \alpha\circ\beta=\sigma}}(\{\{i\}|1\leq i\leq|\alpha|\},\alpha)\otimes(\{\{i\}|1\leq i\leq|\beta|\},\beta)=(\phi\otimes\phi)\circ\Delta_{\circ}(\sigma).\]
### Signature for vincular permutation patterns
In an analogous fashion to Definition 3.34, define, for \((\mathfrak{L},\Lambda)\in\mathbf{StIP}_{N}\times\mathbf{S}_{N}\) and \((\mathfrak{s},\sigma)\in\bigcup_{n\in\mathbb{N}}\mathbf{StIP}_{n}\times\mathbf{S}_{n}\),

\[\Big{\langle}\mathsf{GPC}((\mathfrak{L},\Lambda)),(\mathfrak{s},\sigma)\Big{\rangle}:=\#\{A\subset[N]\ |\ \mathbf{std}(\mathfrak{L}(A))\geq\mathfrak{s},\ \mathbf{st}(\Lambda|_{A})=\sigma\}.\]
and extend linearly to \(\mathcal{H}_{\mathsf{vine}}\). These patterns are known in the literature as _vincular patterns_. They appear in Babson and Steingrimsson 2000.
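For plain vincular pattern counting, one may take \(\mathfrak{L}\) to be the one-block partition, so that the blocks of \(\mathfrak{s}\) mark the positions of the pattern that must be adjacent in \(\Lambda\). Below is a self-contained brute-force sketch; the code and the worked instance are ours, under the same conventions for \(\mathfrak{L}(A)\), \(\mathbf{std}\) and \(\geq\) as in the earlier sketch.

```python
from itertools import chain, combinations

def st(word):
    r = {v: i + 1 for i, v in enumerate(sorted(word))}
    return tuple(r[v] for v in word)

def induced_std(L, A):
    """std(L(A)): maximal runs of consecutive integers of A inside blocks
    of L, relabelled to the ground set {1, ..., |A|}."""
    A = set(A)
    rel = {v: i + 1 for i, v in enumerate(sorted(A))}
    out = []
    for b in L:
        run = []
        for x in sorted(b):
            if x in A and run and x == run[-1] + 1:
                run.append(x)
            elif x in A:
                if run:
                    out.append(run)
                run = [x]
            else:
                if run:
                    out.append(run)
                run = []
        if run:
            out.append(run)
    return frozenset(tuple(rel[x] for x in r) for r in out)

def gpc(L, Lam, s, sigma):
    """<GPC((L, Lam)), (s, sigma)> by enumerating position sets A."""
    N = len(Lam)
    cnt = 0
    for A in chain.from_iterable(combinations(range(1, N + 1), k)
                                 for k in range(N + 1)):
        P = induced_std(L, A)
        ge = ({x for b in P for x in b} == {x for b in s for x in b}
              and all(any(set(bs) <= set(bp) for bp in P) for bs in s))
        if ge and st([Lam[a - 1] for a in A]) == tuple(sigma):
            cnt += 1
    return cnt

# occurrences of the vincular pattern ({{1},{2,3}}, 132) in 24153:
L = frozenset({(1, 2, 3, 4, 5)})            # trivial one-block partition
print(gpc(L, (2, 4, 1, 5, 3), frozenset({(1,), (2, 3)}), (1, 3, 2)))  # -> 2
```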
#### 4.2.1 Character property and Chen's identity
Again, we have analogous results to Theorem 3.37 and Theorem 3.46.
**Theorem 4.19** (Character property).: _Let \((\mathfrak{L},\Lambda)\in\mathbf{StIP}_{N}\times\mathbf{S}_{N}\), with \(N\in\mathbb{N}\). Then, \(\forall(\mathfrak{s},\sigma),(\mathfrak{t},\tau)\in\bigcup_{n\in\mathbb{N}}\mathbf{StIP}_{n}\times\mathbf{S}_{n}\),_

\[\Big{\langle}\mathsf{GPC}((\mathfrak{L},\Lambda)),(\mathfrak{s},\sigma)\Big{\rangle}\Big{\langle}\mathsf{GPC}((\mathfrak{L},\Lambda)),(\mathfrak{t},\tau)\Big{\rangle}=\Big{\langle}\mathsf{GPC}((\mathfrak{L},\Lambda)),(\mathfrak{s},\sigma)^{\psi}(\mathfrak{t},\tau)\Big{\rangle}.\]
Proof.: Using arguments analogous to the proof of Theorem 3.37, we have
\[\begin{split}&|\{A,B\subset[N]\ |\ \mathbf{std}(\mathfrak{L}(A))\geq\mathfrak{s},\ \mathbf{std}(\mathfrak{L}(B))\geq\mathfrak{t},\ \mathbf{st}(\Lambda|_{A})=\sigma,\ \mathbf{st}(\Lambda|_{B})=\tau\}|\\ &=\Big{|}\biguplus_{(\mathfrak{g},\gamma,C)\in\bigcup_{n\in\mathbb{N}}\mathbf{StIP}_{n}\times\mathbf{S}_{n}\times 2^{[N]}}\{A,B\subset[N]\ |\ A\cup B=C,\ \mathbf{std}(\mathfrak{L}(A))\geq\mathfrak{s},\ \mathbf{std}(\mathfrak{L}(B))\geq\mathfrak{t},\ \mathbf{std}(\mathfrak{L}(C))\geq\mathfrak{g},\\ &\qquad\qquad\mathbf{std}(\mathfrak{L}(A)_{\mathfrak{s}}\cdot_{\mathbf{glue}}\mathfrak{L}(B)_{\mathfrak{t}})=\mathfrak{g},\ \mathbf{st}(\Lambda|_{A})=\sigma,\ \mathbf{st}(\Lambda|_{B})=\tau,\ \mathbf{st}(\Lambda|_{C})=\gamma\}\Big{|}\end{split}\]

\[\begin{split}&=\sum_{\substack{(\mathfrak{g},\gamma,C)\in\bigcup_{n\in\mathbb{N}}\mathbf{StIP}_{n}\times\mathbf{S}_{n}\times 2^{[N]}\\ \mathbf{std}(\mathfrak{L}(C))\geq\mathfrak{g},\ \mathbf{st}(\Lambda|_{C})=\gamma}}\langle(\mathfrak{s},\sigma)^{\psi}(\mathfrak{t},\tau),(\mathfrak{g},\gamma)\rangle\\ &=\sum_{(\mathfrak{g},\gamma)\in\bigcup_{n\in\mathbb{N}}\mathbf{StIP}_{n}\times\mathbf{S}_{n}}\langle(\mathfrak{s},\sigma)^{\psi}(\mathfrak{t},\tau),(\mathfrak{g},\gamma)\rangle\sum_{\substack{C\in 2^{[N]}\\ \mathbf{std}(\mathfrak{L}(C))\geq\mathfrak{g},\ \mathbf{st}(\Lambda|_{C})=\gamma}}1\\ &=\sum_{(\mathfrak{g},\gamma)\in\bigcup_{n\in\mathbb{N}}\mathbf{StIP}_{n}\times\mathbf{S}_{n}}\langle(\mathfrak{s},\sigma)^{\psi}(\mathfrak{t},\tau),(\mathfrak{g},\gamma)\rangle\Big{\langle}\mathsf{GPC}((\mathfrak{L},\Lambda)),(\mathfrak{g},\gamma)\Big{\rangle}\\ &=\Big{\langle}\mathsf{GPC}((\mathfrak{L},\Lambda)),(\mathfrak{s},\sigma)^{\psi}(\mathfrak{t},\tau)\Big{\rangle}.\end{split}\]
**Theorem 4.20** (Chen's identity).: _Let \(N_{1},N_{2}\in\mathbb{N}\) and let \((\mathfrak{L},\Lambda)\in\mathbf{StIP}_{N_{1}}\times\mathbf{S}_{N_{1}}\) and \((\mathfrak{M},M)\in\mathbf{StIP}_{N_{2}}\times\mathbf{S}_{N_{2}}\). Then, \(\forall(\mathfrak{s},\sigma)\in\bigcup_{n\in\mathbb{N}}\mathbf{StIP}_{n}\times\mathbf{S}_{n}\),_

\[\Big{\langle}\mathsf{GPC}((\mathfrak{L},\Lambda)\blacksquare(\mathfrak{M},M)),(\mathfrak{s},\sigma)\Big{\rangle}=\Big{\langle}\mathsf{GPC}((\mathfrak{L},\Lambda))\otimes\mathsf{GPC}((\mathfrak{M},M)),\Delta_{\bullet}((\mathfrak{s},\sigma))\Big{\rangle}.\]
Proof.: Recall that \(\mathfrak{L}\bullet\mathfrak{M}=\{\mathfrak{L}_{1},\ldots,\mathfrak{L}_{p},\mathfrak{M}^{\prime}_{1},\ldots,\mathfrak{M}^{\prime}_{q}\}\), where \(\mathfrak{M}^{\prime}_{i}:=\mathfrak{M}_{i}+\left|\bigcup\mathfrak{L}\right|\), and that \(\sigma\circ\tau=\sigma_{1}\cdots\sigma_{m}(\tau_{1}+m)\cdots(\tau_{n}+m)\). We have
\[\begin{split}&|\{A\subset[N_{1}+N_{2}]\ |\ \mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A)\right)\geq\mathfrak{s},\ \mathbf{st}((\Lambda\circ M)|_{A})=\sigma\}|\\ &=\Big{|}\biguplus_{\substack{\mathfrak{a}\bullet\mathfrak{b}=\mathfrak{s}\\ \alpha\circ\beta=\sigma\\ \alpha\in\mathbf{S}_{|\bigcup\mathfrak{a}|},\,\beta\in\mathbf{S}_{|\bigcup\mathfrak{b}|}}}\{A\subset[N_{1}+N_{2}]\ |\ \mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{L})\right)\geq\mathfrak{a},\ \mathbf{st}((\Lambda\circ M)|_{A\cap\bigcup\mathfrak{L}})=\alpha,\\ &\qquad\qquad\mathbf{std}\left((\mathfrak{L}\bullet\mathfrak{M})(A\cap\bigcup\mathfrak{M}^{\prime})\right)\geq\mathfrak{b},\ \mathbf{st}((\Lambda\circ M)|_{A\cap\bigcup\mathfrak{M}^{\prime}})=\beta\}\Big{|},\end{split}\]
using arguments similar to the proof of Theorem 3.46: indeed, there exist unique \(\mathfrak{a},\mathfrak{b}\in\mathbf{IP}\) and \(\alpha,\beta\) which "split" \(\mathfrak{s}\) and \(\sigma\), respectively.
**Example 4.21**.: _For \((\mathfrak{L},\Lambda)=(\{\{1,2,3\}\},231)\) and \((\mathfrak{s},\sigma)=(\{\{1,2\}\},21)\),_

\[\Big{\langle}\mathsf{GPC}((\{\{1,2,3\}\},231)),(\{\{1,2\}\},21)\Big{\rangle}=1,\]

_the only admissible position set being \(A=\{2,3\}\): the two positions are consecutive and \(\mathbf{st}(31)=21\)._
This Hopf algebra can be equivalently seen as a Hopf algebra on words of positive integers where the filtered product possesses a shuffle part and the coproduct is deconcatenation, see Remark 3.33. We define a family of linear functionals parametrized by interval partitions and show that they are characters and satisfy a Chen-type identity.

The underlying combinatorics here is quite simple, since the number of "occurrences" of one word in another is given by a closed-form expression depending on the letters of the two words, see Remark 3.36.

Finally, in Section 4, we introduce the Hopf algebra on vincular patterns, which is built upon our Hopf algebra on interval partitions and the superinfiltration Hopf algebra introduced in Vargas 2014. We also extend the definition of the functionals from Section 3 to store the number of occurrences of vincular patterns. These maps are shown again to behave like _signatures_, satisfying identities which are reminiscent of the shuffle and Chen identities for paths.
### Open questions
We are interested in the following open questions.
* Is there a recursive definition for the product?
* Both of our Hopf algebras are free commutative as algebras, see Cartier and Patras 2021, Theorem 4.4.1. Are there "interesting" sets of free commutative generators? An analogy we have in mind: the shuffle Hopf algebra is connected and graded and therefore automatically isomorphic, as an algebra, to a polynomial algebra; nevertheless, one can also explicitly show that the shuffle algebra is a polynomial algebra over the set of Lyndon words.
* Is the algebra on interval partitions (equivalently, the algebra on words of positive integers, see Remark 3.33) isomorphic to the shuffle algebra?
* Since the Hopf algebras in this work are connected and filtered, one can compute the antipode using the well-known Takeuchi formula, see Takeuchi 1971, Lemma 14, but cancellations can occur; indeed, this happens in our case.
* Are there cancellation-free formulas for the antipodes of our Hopf algebras? In Penaguiao and Vargas 2022, the authors provide a cancellation-free formula for the antipode in their setting.
* The Hopf algebra \(\mathcal{H}_{\mathsf{vine}}\) "originates" from \(\mathcal{H}_{\mathsf{int}}\) and \(\mathcal{H}_{\mathsf{per}}\). Is this an instance of a more general construction?
## Acknowledgements
The authors would like to thank Anders Claesson for pointing out, on the occasion of a talk by one of the authors at the ACPMS (https://www.math.ntnu.no/acpms/), that the patterns used here are known in the literature as "vincular permutation patterns".
## Appendix A Gluing partitions is associative
The aim of this section is to show that the binary operation from Definition 3.11 is associative. We found it easier to formulate the operation more abstractly first. We will obtain the desired statement as a special case, Lemma A.10.
### Assumptions
Let \(\Omega\) be a non-empty set and consider maps
\[Q:\Omega\times\Omega\rightarrow\{0,1\}\] \[m:2^{\Omega}\setminus\{\emptyset\}\rightarrow\Omega.\]
We assume
P1. \(\forall x\in\Omega:Q(x,x)=1\),

P2. \(\forall x,y\in\Omega:Q(x,y)=1\implies Q(y,x)=1\),

P3. \(\forall A\in 2^{\Omega}\setminus\{\emptyset\}:\forall y\in\Omega:Q(m(A),y)=1\iff\exists x\in A:Q(x,y)=1\),

P4. \(\forall A,B\in 2^{\Omega}\setminus\{\emptyset\}:\ m(A\cup B)=m(\{m(A)\}\cup B)\).
**Remark A.1**.: _Let \(A_{1},...,A_{n},B\in 2^{\Omega}\setminus\{\emptyset\}\). As a consequence of P4, one has_
\[m(\bigcup_{i=1}^{n}A_{i}\cup B)=m(\bigcup_{i=1}^{n}\{m(A_{i})\}\cup B)\]
### Compression
**Definition A.2** (Compression of \(A\)).: Let \(\mathsf{compr}:2^{\Omega}\setminus\{\emptyset\}\to 2^{\Omega} \setminus\{\emptyset\}\) where
\[\mathsf{compr}(A)=\{m([x]_{\sim_{A}})|x\in A\}\]
and \(\sim_{A}\) is the smallest equivalence relation on \(A\) which contains the relation \(R\subset A\times A\) defined as:
\[(x,y)\in R\iff Q(x,y)=1.\]
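A direct implementation of \(\mathsf{compr}\) is straightforward: form the classes of the smallest equivalence relation containing \(R\) by repeatedly merging \(Q\)-related classes, then apply \(m\) blockwise. The following Python sketch is ours (names hypothetical); the instance with \(Q\) = "blocks intersect" and \(m\) = union is the one used in Lemma A.10 to recover \(\cdot_{\mathbf{glue}}\).

```python
def compr(A, Q, m):
    """Compression of the finite set A: classes of the smallest equivalence
    relation containing Q, each class then replaced by m(class)."""
    classes = [{x} for x in A]
    merged = True
    while merged:
        merged = False
        for i in range(len(classes)):
            for j in range(i + 1, len(classes)):
                if any(Q(x, y) for x in classes[i] for y in classes[j]):
                    classes[i] |= classes.pop(j)
                    merged = True
                    break
            if merged:
                break
    return {m(frozenset(c)) for c in classes}

# the instance of Lemma A.10: blocks are Q-related iff they intersect,
# and m takes unions, so compr glues overlapping blocks together
Q = lambda x, y: bool(x & y)
m = lambda c: frozenset().union(*c)
print(compr({frozenset({1, 2}), frozenset({2, 3}), frozenset({5})}, Q, m))
# {frozenset({1, 2, 3}), frozenset({5})}
```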
Before stating Proposition A.6, we need three lemmas.
**Lemma A.3**.: _Let \(A\in 2^{\Omega}\setminus\{\emptyset\}\). Then_
1. \(\forall x,y\in A:Q(x,y)=1\implies[x]_{\sim_{A}}=[y]_{\sim_{A}}\)__
2. \(\forall x,y\in A:Q(m([x]_{\sim_{A}}),m([y]_{\sim_{A}}))=1\iff[x]_{\sim_{A}}=[y]_ {\sim_{A}}\)__
3. \(\forall x,y\in A:Q(x,m([y]_{\sim_{A}}))=1\iff[x]_{\sim_{A}}=[y]_{\sim_{A}}\)__
4. \(\forall x,y\in A:m([x]_{\sim_{A}})=m([y]_{\sim_{A}})\iff[x]_{\sim_{A}}=[y]_{ \sim_{A}}\)__
5. \(\forall x,y\in A:m([x]_{\sim_{A}})\sim_{\mathsf{compr}(A)}m([y]_{\sim_{A}}) \iff m([x]_{\sim_{A}})=m([y]_{\sim_{A}})\)__
Proof.:
_i._ follows from the definition of \(\sim_{A}\).
_ii._ We have
\[[x]_{\sim_{A}}=[y]_{\sim_{A}}\implies Q(m([x]_{\sim_{A}}),m([y]_{\sim_{A}}))= Q(m([x]_{\sim_{A}}),m([x]_{\sim_{A}}))=1\]
by P1. On the other hand
\[Q(m([x]_{\sim_{A}}),m([y]_{\sim_{A}}))=1\]
implies, by P3, that \(\exists w\in[x]_{\sim_{A}}\) such that \(Q(w,m([y]_{\sim_{A}}))=1\), which implies, by P2 and P3, that \(\exists z\in[y]_{\sim_{A}}\) such that \(Q(w,z)=1\). Therefore \(x\sim_{A}w\sim_{A}z\sim_{A}y\), i.e. \([x]_{\sim_{A}}=[y]_{\sim_{A}}\).
_iii._
If \(Q(x,m([y]_{\sim_{A}}))=1\) then, P3, there is \(w\in[y]_{\sim_{A}}\) such that \(Q(x,w)=1\). Then \(x\sim_{A}w\sim_{A}y\), i.e. \([x]_{\sim_{A}}=[y]_{\sim_{A}}\).
If \([x]_{\sim_{A}}=[y]_{\sim_{A}}\), then \(Q(x,m([y]_{\sim_{A}}))=Q(x,m([x]_{\sim_{A}}))=1\), by P3, since \(x\in[x]_{\sim_{A}}\).
_iv._ If \(m([x]_{\sim_{A}})=m([y]_{\sim_{A}})\), then \(Q(m([x]_{\sim_{A}}),m([y]_{\sim_{A}}))=1\) and therefore, by ii., \([x]_{\sim_{A}}=[y]_{\sim_{A}}\). The other implication is immediate.
\(v\). Let \(m([x]_{\sim_{A}})\sim_{\mathsf{compr}(A)}m([y]_{\sim_{A}})\), then \(\exists w_{1},...,w_{n}\in A\) such that
\[\forall i\in\{1,...,n-1\}:Q(m([w_{i}]_{\sim_{A}}),m([w_{i+1}]_{\sim_{A}}))=1\]
and \(Q(m([x]_{\sim_{A}}),m([w_{1}]_{\sim_{A}}))=1\) and \(Q(m([w_{n}]_{\sim_{A}}),m([y]_{\sim_{A}}))=1\). But then, by _ii._ and _iv._, we have
\[m([x]_{\sim_{A}})=m([w_{1}]_{\sim_{A}})=\cdots=m([w_{n}]_{\sim_{A}})=m([y]_{ \sim_{A}}).\]
**Lemma A.4**.: _Let \(A\in 2^{\Omega}\setminus\{\emptyset\}\). Then_
1. \(\forall x,y\in A:x\sim_{A}y\iff x\sim_{\mathsf{compr}(A)\cup A}y\)__
2. \(\forall x,y\in A:x\sim_{A}y\iff m([x]_{\sim_{A}})\sim_{\mathsf{compr}(A)\cup A}y\)__
3. \(\forall x,y\in A:x\sim_{A}y\iff x\sim_{\mathsf{compr}(A)\cup A}m([y]_{\sim_{A}})\)__
_iv._ \(\forall x,y\in A:x\sim_{A}y\iff m([x]_{\sim_{A}})\sim_{\mathsf{compr}(A)\cup A}m([y]_ {\sim_{A}})\)__
_v._ \(\forall x,y\in A:x\sim_{A}y\iff m([x]_{\sim_{A}})=m([y]_{\sim_{A}})\)__
Proof.: These are consequences of properties P1-P3, of the fact that the sets involved are equivalence classes, and of the previous lemma.
**Lemma A.5**.: _For \(A\in 2^{\Omega}\setminus\{\emptyset\}\), \(x,y\in A\),_
\[m([x]_{\sim_{A}})\in[y]_{\sim_{A}}\Rightarrow[x]_{\sim_{A}}=[y]_{\sim_{A}}\]
Proof.: Since \(m([x]_{\sim_{A}})\in[y]_{\sim_{A}}\), properties P1-P3 give \(Q(m([x]_{\sim_{A}}),m([y]_{\sim_{A}}))=1\), and then Lemma A.3 _ii._ yields \([x]_{\sim_{A}}=[y]_{\sim_{A}}\).
The map \(\mathsf{compr}\) satisfies a form of idempotence.
**Proposition A.6**.: _For \(A\in 2^{\Omega}\setminus\{\emptyset\}\),_
\[\mathsf{compr}(A)=\mathsf{compr}(\mathsf{compr}(A)\cup A).\]
Proof.: We define the map
\[g:\{[x]_{\sim_{A}}|x\in A\} \to\{[z]_{\sim_{\mathsf{compr}(A)\cup A}}|z\in\mathsf{compr}(A) \cup A\}\] \[[x]_{\sim_{A}} \mapsto[x]_{\sim_{A}}\cup\{m([x]_{\sim_{A}})\}\]
and show that it is surjective (using Lemma A.5, one can show that it is also injective, but we do not need this here). We need to show that

\[\{[x]_{\sim_{A}}\cup\{m([x]_{\sim_{A}})\}\ |\ x\in A\}=\{[z]_{\sim_{\mathsf{compr}(A)\cup A}}\ |\ z\in\mathsf{compr}(A)\cup A\}.\]
Let \(x\in A\), then
\[[x]_{\sim_{A}}\cup\{m([x]_{\sim_{A}})\}\subset[x]_{\sim_{\mathsf{compr}(A)\cup A }}.\]
Indeed, this follows from Lemma A.4
\[y\sim_{A}x\implies y\sim_{\mathsf{compr}(A)\cup A}x\]
and, since \(Q(m([x]_{\sim_{A}}),x)=1\), from Lemma A.3
\[m([x]_{\sim_{A}})\sim_{\mathsf{compr}(A)\cup A}x.\]
For the other direction, let \(w\in[x]_{\sim_{\mathsf{compr}(A)\cup A}}\) and \(w\in A\). Then, from Lemma A.4, we know that \(w\sim_{\mathsf{compr}(A)\cup A}x\) implies \(w\sim_{A}x\), and \(w\in[x]_{\sim_{A}}\). If \(w\in[x]_{\sim_{\mathsf{compr}(A)\cup A}}\) and \(w\in\mathsf{compr}(A)\), we can write \(w=m([u]_{\sim_{A}})\) for some \(u\in A\). Then, from the previous Lemma A.4, we know that \(m([u]_{\sim_{A}})\sim_{\mathsf{compr}(A)\cup A}x\), implies \([u]_{\sim_{A}}=[x]_{\sim_{A}}\), and \(w=m([x]_{\sim_{A}})\). Therefore
\[[x]_{\sim_{A}}\cup\{m([x]_{\sim_{A}})\}=[x]_{\sim_{\mathsf{compr}(A)\cup A}} \in\{[z]_{\sim_{\mathsf{compr}(A)\cup A}}|z\in\mathsf{compr}(A)\cup A\}.\]
Now let \(z\in A\), we have
\[[z]_{\sim_{\mathsf{compr}(A)\cup A}}=[z]_{\sim_{A}}\cup\{m([z]_{\sim_{A}})\}\]
and in case \(z\in\mathsf{compr}(A)\), which means \(z=m([u]_{\sim_{A}})\) for some \(u\in A\),
\[[z]_{\sim_{\mathsf{compr}(A)\cup A}}=[u]_{\sim_{A}}\cup\{m([u]_{\sim_{A}})\}\]
using analogous arguments used to show the previous inclusion. This shows
\[g(\{[x]_{\sim_{A}}|x\in A\})=\{[x]_{\sim_{A}}\cup\{m([x]_{\sim_{A}})\}|x\in A \}=\{[z]_{\sim_{\mathsf{compr}(A)\cup A}}|z\in\mathsf{compr}(A)\cup A\}\]
If we use property P4 of \(m\), we get the desired result
\[\{m([x]_{\sim_{A}})|x\in A\} =\{m([x]_{\sim_{A}}\cup[x]_{\sim_{A}})|x\in A\}\] \[=\{m\left([x]_{\sim_{A}}\cup\{m([x]_{\sim_{A}})\}\right)|x\in A\}\] \[=\{m([z]_{\sim_{\mathsf{compr}(A)\cup A}})|z\in\mathsf{compr}(A) \cup A\}.\]
We now state a lemma used in the proof of the upcoming Proposition A.8.
**Lemma A.7**.: _Let \(A,B\in 2^{\Omega}\setminus\{\emptyset\}\). Then_
\[\forall x,y\in A:x\sim_{A\cup B}y\iff m([x]_{\sim_{A}})\sim_{ \mathsf{compr}(A)\cup B}m([y]_{\sim_{A}})\] \[\forall x\in A:\forall y\in B:x\sim_{A\cup B}y\iff m([x]_{\sim_{A }})\sim_{\mathsf{compr}(A)\cup B}y\] \[\forall x,y\in B:x\sim_{A\cup B}y\iff x\sim_{\mathsf{compr}(A) \cup B}y\]
Proof.: These statements are similar to the ones of Lemma A.3 and Lemma A.4, and the proof is also similar.
We now show the main result.
**Proposition A.8**.: _Let \(A,B\in 2^{\Omega}\setminus\{\emptyset\}\). Then_
\[\mathsf{compr}(A\cup B)=\mathsf{compr}(\mathsf{compr}(A)\cup B)\]
Proof.: Let \(x\in A\). We can write
\[[m([x]_{\sim_{A}})]_{\sim_{\mathsf{compr}(A)\cup B}}\] \[=\{u\in\mathsf{compr}(A)|u\sim_{\mathsf{compr}(A)\cup B}m([x]_{ \sim_{A}})\}\cup\{b\in B|b\sim_{\mathsf{compr}(A)\cup B}m([x]_{\sim_{A}})\}.\]
We now show that
\[\{u\in\mathsf{compr}(A)\ |\ u\sim_{\mathsf{compr}(A)\cup B}m([x]_{\sim_{A}})\}=\{m([a]_{\sim_{A}})\ |\ a\in A,\ a\sim_{A\cup B}x\}.\]
Thanks to Lemma A.7, \(a\in A\) such that \(a\sim_{A\cup B}x\), implies \(m([a]_{\sim_{A}})\sim_{\mathsf{compr}(A)\cup B}m([x]_{\sim_{A}})\). Therefore we have
\[\{m([a]_{\sim_{A}})|a\in A,a\sim_{A\cup B}x\}\subset\{u\in\mathsf{compr}(A)|u \sim_{\mathsf{compr}(A)\cup B}m([x]_{\sim_{A}})\}.\]
Now let \(u\in\{u\in\mathsf{compr}(A)|u\sim_{\mathsf{compr}(A)\cup B}m([x]_{\sim_{A}})\}\), which means \(u=m([z]_{\sim_{A}})\) for some \(z\in A\). Then we have
\[m([z]_{\sim_{A}})\sim_{\mathsf{compr}(A)\cup B}m([x]_{\sim_{A}}) \implies z\sim_{A\cup B}x,\]
which means
\[m([z]_{\sim_{A}})\in\{m([a]_{\sim_{A}})|a\in A,a\sim_{A\cup B}x\}.\]
Therefore
\[\{m([a]_{\sim_{A}})|a\in A,a\sim_{A\cup B}x\}\supset\{u\in\mathsf{compr}(A)|u \sim_{\mathsf{compr}(A)\cup B}m([x]_{\sim_{A}})\}.\]
Similarly, we can also show that
\[\{b\in B|b\sim_{\mathsf{compr}(A)\cup B}m([x]_{\sim_{A}})\}=\{b \in B|b\sim_{A\cup B}x\}.\]
We can now finally write
\[m([m([x]_{\sim_{A}})]_{\sim_{\mathsf{compr}(A)\cup B}})=m\left(\{m([a]_{\sim_{A}})\ |\ a\in A,\ a\sim_{A\cup B}x\}\cup\{b\in B\ |\ b\sim_{A\cup B}x\}\right) \tag{10}\]
We also obviously have
\[m([x]_{\sim_{A\cup B}})=m\left(\{a\in A|a\sim_{A\cup B}x\}\cup \{b\in B|b\sim_{A\cup B}x\}\right) \tag{11}\]
We now show that Equation (10) and Equation (11) are equal. Notice that it is quite straightforward to verify that
\[\{a\in A|a\sim_{A\cup B}x\}=\bigcup_{\begin{subarray}{c}a\in A\\ a\sim_{A\cup B}x\end{subarray}}[a]_{\sim_{A}}\]
and using property P4 of \(m\) yields:
\[m\left(\bigcup_{\begin{subarray}{c}a\in A\\ a\sim_{A\cup B}x\end{subarray}}[a]_{\sim_{A}}\right) =m\left(\bigcup_{\begin{subarray}{c}a\in A\\ a\sim_{A\cup B}x\end{subarray}}\{m([a]_{\sim_{A}})\}\right) =m\left(\{m([a]_{\sim_{A}})|a\in A,a\sim_{A\cup B}x\}\right)\]
and therefore
\[m([x]_{\sim_{A\cup B}})=m([m([x]_{\sim_{A}})]_{\sim_{\mathsf{ compr}(A)\cup B}}). \tag{12}\]
Now let \(x\in B\). We can write
\[[x]_{\sim_{A\cup B}}=\{a\in A|a\sim_{A\cup B}x\}\cup\{b\in B|b \sim_{A\cup B}x\}\]
\[[x]_{\sim_{\mathsf{compr}(A)\cup B}}=\{y\in\mathsf{compr}(A)|y\sim_{\mathsf{ compr}(A)\cup B}x\}\cup\{b\in B|b\sim_{\mathsf{compr}(A)\cup B}x\}\]
With steps similar to the ones used to show Equation (12), one obtains
\[m([x]_{\sim_{A\cup B}})=m([x]_{\sim_{\mathsf{compr}(A)\cup B}}).\]
Since
\[\forall x\in A:m([x]_{\sim_{A\cup B}})=m([m([x]_{\sim_{A}})]_{ \sim_{\mathsf{compr}(A)\cup B}}),\] \[\forall x\in B:m([x]_{\sim_{A\cup B}})=m([x]_{\sim_{\mathsf{compr}( A)\cup B}}),\]
we have shown that any element in \(\mathsf{compr}(A\cup B)\) can be written as an element of \(\mathsf{compr}(\mathsf{compr}(A)\cup B)\) and vice-versa, which finishes the proof.
### Associativity
Define \(M:2^{\Omega}\setminus\{\emptyset\}\times 2^{\Omega}\setminus\{\emptyset\} \to 2^{\Omega}\setminus\{\emptyset\}\) as
\[M(A,B):=\mathsf{compr}(A\cup B).\]
\((2^{\Omega}\setminus\{\emptyset\},M)\) is a semigroup as illustrated in the following proposition.
**Proposition A.9** (Associativity of M).: \[\forall A,B,C\in 2^{\Omega}\setminus\{\emptyset\}:M(A,M(B,C))=M(M(A,B),C)\]
Proof.: As a consequence of Proposition A.8 one has
\[\mathsf{compr}(A\cup\mathsf{compr}(B\cup C))=\mathsf{compr}(A\cup B\cup C)= \mathsf{compr}(\mathsf{compr}(A\cup B)\cup C)\]
We are now ready to show that the operation \(\cdot_{\mbox{\bf glue}}\) (see Definition 3.11) is associative.
**Lemma A.10**.: _Let \(\mathcal{I},\mathcal{I}^{\prime},\mathcal{I}^{\prime\prime}\in\mathbf{IP}\). Then:_
\[(\mathcal{I}\cdot_{\mbox{\bf glue}}\mathcal{I}^{\prime})\cdot_{\mbox{\bf glue }}\mathcal{I}^{\prime\prime}=\mathcal{I}\cdot_{\mbox{\bf glue}}(\mathcal{I}^{ \prime}\cdot_{\mbox{\bf glue}}\mathcal{I}^{\prime\prime})\]
Proof.: Set \(\Omega:=2^{\mathbb{N}_{\geq 1}}\setminus\{\emptyset\}\), and define
\[\forall x,y\in\Omega:\,Q(x,y):=\begin{cases}1,&\mbox{if}\;x\cap y\neq\emptyset \\ 0,&\mbox{else}\end{cases}\]
\[\forall A\in 2^{\Omega}\setminus\{\emptyset\}:\,m(A):=\bigcup_{x\in A}x\]
These satisfy assumptions P1-P4 from Appendix A.1. Indeed, P2 follows from the commutativity of \(\cap\), P1 follows from the idempotence of \(\cap\), P3 follows by definition of the union, and regarding P4, for \(A,B\in 2^{\Omega}\setminus\{\emptyset\}\)
\[m\left(\{m(A)\}\cup B\right) =m(\{\bigcup_{x\in A}x\}\cup B)\] \[=m(\{\bigcup_{x\in A}x\}\cup\{y|y\in B\})\] \[=\bigcup_{x\in A}x\cup\bigcup_{y\in B}y\] \[=\bigcup_{z\in A\cup B}z\] \[=m(A\cup B),\]
as claimed.
We hence get from Proposition A.9 that
\[M:2^{\Omega}\setminus\{\emptyset\}\times 2^{\Omega}\setminus\{\emptyset\} \to 2^{\Omega}\setminus\{\emptyset\}\]
is associative. Now from Lemma 3.13, it follows that, in particular, \(\forall\mathcal{I},\mathcal{J}\in\mathbf{IP}\setminus\{\emptyset\}\)
\[M(\mathcal{I},\mathcal{J})=\mathcal{I}\cdot_{\mbox{\bf glue }}\mathcal{J}\in\mathbf{IP}\setminus\{\emptyset\}.\]
Therefore we can restrict and corestrict \(M\)
\[M:\mathbf{IP}\setminus\{\emptyset\}\times\mathbf{IP}\setminus\{\emptyset\} \rightarrow\mathbf{IP}\setminus\{\emptyset\},\]
which is automatically associative. From Definition 3.11 it follows immediately that \(\forall\mathcal{I}\in\mathbf{IP}\)
\[\mathcal{I}\cdot_{\mbox{\bf glue }}\emptyset=\mathcal{I}.\]
Therefore \(\forall\mathcal{I},\mathcal{I}^{\prime},\mathcal{I}^{\prime\prime}\in\mathbf{IP}\)
\[(\mathcal{I}\cdot_{\mbox{\bf glue }}\mathcal{I}^{\prime})\cdot_{\mbox{\bf glue }}\mathcal{I}^{\prime\prime}=\mathcal{I}\cdot_{\mbox{\bf glue }}(\mathcal{I}^{\prime}\cdot_{\mbox{\bf glue }}\mathcal{I}^{\prime\prime}).\]
**Remark A.11**.: _Another example of \(m\) and \(Q\) satisfying the properties P1-P4 is as follows. Let \(\Omega:=\mathbb{N}_{\geq 1}\) and set_
\[\forall x,y\in\Omega,\;Q(x,y) :=1\] \[\forall A\in 2^{\Omega}\setminus\{\emptyset\},\;m(A) :=\max(A).\]
_Properties P1-P3 are trivially satisfied since \(Q\) is always 1 and P4 obviously holds since_
\[\max(A\cup B)=\max(\{\max(A)\}\cup B).\]
_Now let \(\Omega:=\mathbb{N}^{k}\setminus\{(0,\ldots,0)\}\) where \(k>1\). Set_
\[Q((x_{1},\ldots,x_{k}),(y_{1},\ldots,y_{k}))=\begin{cases}1,&\text{if }\exists i:x_{i}\neq 0 \wedge y_{i}\neq 0\\ 0,&\text{else}.\end{cases}\]
_and_
\[m(A):=(\max^{(1)}(A),\ldots,\max^{(k)}(A))\]
_where \(\max^{(i)}(A)\) is the maximum element of the i-th component of all the elements in \(A\). One can show that P1-P4 hold also for this example._
|
2302.14626 | Relativistic calculations of the energies of the low-lying $1sns$,
$1snp$, $1snd$ states and the probabilities of the one-photon $1snl\to
1sn'l'$ transitions in heliumlike uranium | For heliumlike uranium, the energies of the singly-excited $1sns$, $1snp$,
and $1snd$ states with $n\leq 4$ and the probabilities of the one-photon
$1s3d\to 1s2p$, $1s3p\to 1s2s$, $1s3p\to 1s2p$ and $1s4d\to 1s2p$ transitions
are evaluated. The calculations are performed within the Breit approximation
using the configuration-interaction method in the basis of the Dirac-Fock-Sturm
orbitals. The QED corrections to the energy levels are calculated employing the
model-QED-operator approach. The nuclear recoil, frequency-dependent
Breit-interaction, nuclear polarization, and nuclear deformation corrections
are taken into account as well. | N. K. Dulaev, M. Y. Kaygorodov, A. V. Malyshev, I. I. Tupitsyn, V. M. Shabaev | 2023-02-28T15:06:36Z | http://arxiv.org/abs/2302.14626v1 | Relativistic calculations of the energies of the low-lying \(1sns\), \(1snp\), \(1snd\) states and the probabilities of the one-photon \(1snl\to 1sn^{\prime}l^{\prime}\) transitions in heliumlike uranium
###### Abstract
For heliumlike uranium, the energies of the singly-excited \(1sns\), \(1snp\), and \(1snd\) states with \(n\leq 4\) and the probabilities of the one-photon \(1s3d\to 1s2p\), \(1s3p\to 1s2s\), \(1s3p\to 1s2p\) and \(1s4d\to 1s2p\) transitions are evaluated. The calculations are performed within the Breit approximation using the configuration-interaction method in the basis of the Dirac-Fock-Sturm orbitals. The QED corrections to the energy levels are calculated employing the model-QED-operator approach. The nuclear recoil, frequency-dependent Breit-interaction, nuclear polarization, and nuclear deformation corrections are taken into account as well.
## I Introduction
The study of highly charged ions plays an important role in modern physics [1; 2; 3; 4; 5; 6]. The comparison of the various properties of highly charged ions measured in high-precision experiments with the results of theoretical calculations makes it possible to test the methods of quantum electrodynamics (QED) in the strong-coupling regime and to improve the accuracy of the fundamental constants and nuclear-structure parameters. The investigation of highly charged ions with two electrons -- heliumlike ions -- is of particular interest, since they are the simplest atomic systems in which the interelectronic-interaction effects are manifested.
During the last decades, the experimental accuracy of the transition-energy measurements in highly charged ions has been significantly improved. For instance, the uncertainty of the Lamb-shift measurement in H-like uranium constitutes 2% [7; 8]. Even better precision has been achieved in experiments with Li-like ions [9; 10; 11; 12]. The high-precision measurements of the transition energies in He-like ions were performed in a number of works [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35].

The calculations in the present work are performed within the Breit approximation using the configuration-interaction method in the basis of the Dirac-Fock-Sturm orbitals (CI-DFS). The starting point is the Dirac-Coulomb-Breit (DCB) Hamiltonian

\[\hat{H}^{\rm DCB}=\Lambda^{(+)}\left[\hat{H}^{\rm D}+\hat{V}_{\rm C}+\hat{V}_{\rm B}\right]\Lambda^{(+)}, \tag{1}\]

where \(\hat{H}^{\rm D}\) is the sum of the one-electron Dirac Hamiltonians,

\[\hat{H}^{\rm D}=\sum_{i=1}^{N}\hat{h}_{i}^{\rm D}, \tag{2}\]

with
\[\hat{h}_{i}^{\rm D}=c(\mathbf{\alpha}_{i}\cdot\mathbf{p}_{i})+mc^{2}(\beta-1)+V_{\rm nucl}( r_{i}), \tag{3}\]
where \(\mathbf{p}\) is the momentum operator, \(\mathbf{\alpha}\) and \(\beta\) are the Dirac matrices, and \(V_{\rm nucl}\) is the potential of the nucleus. In the calculations, the Fermi nuclear-charge-distribution model is employed and the root-mean-square radius for the uranium nucleus is taken from Ref. [36]. The operator \(\hat{V}_{\rm C}\) is the sum of the two-electron Coulomb-interaction operators
\[\hat{V}_{\rm C}=\frac{1}{2}\sum_{i\neq j}^{N}\frac{1}{r_{ij}},\quad r_{ij}=| \mathbf{r}_{i}-\mathbf{r}_{j}|, \tag{4}\]
the operator \(\hat{V}_{\rm B}\) is the sum of the Breit-interaction operators
\[\hat{V}_{\rm B}=-\frac{1}{2}\sum_{i\neq j}^{N}\frac{1}{2r_{ij}}\Big{[}\mathbf{ \alpha}_{i}\cdot\mathbf{\alpha}_{j}+\frac{(\mathbf{\alpha}_{i}\cdot\mathbf{r}_{ij})(\mathbf{ \alpha}_{j}\cdot\mathbf{r}_{ij})}{r_{ij}^{2}}\Big{]}. \tag{5}\]
In the Hamiltonian (1), \(\Lambda^{(+)}\) is the product of the one-electron projectors onto the positive-energy eigenstates of the Dirac-Fock (DF) operator.
In the CI-DFS method, the many-electron wave function \(\Psi(JM_{J})\) with the total angular momentum \(J\) and its projection \(M_{J}\) is expanded in the basis of the configuration-state functions (CSFs) \(\Phi_{\alpha}(JM_{J})\),
\[\Psi(JM_{J})=\sum_{\alpha}C_{\alpha}(JM_{J})\Phi_{\alpha}(JM_{J}). \tag{6}\]
The CSFs are the eigenfunctions of the operator \(\hat{J}^{2}\). They are constructed as the appropriate linear combinations of the Slater determinants. The mixing coefficients \(C_{\alpha}(JM_{J})\) are determined from the solution of the matrix eigenvalue problem
\[H^{\rm DCB}C(JM_{J})=E^{\rm DCB}(J)C(JM_{J}), \tag{7}\]
where \(H^{\rm DCB}\) is the Hamiltonian matrix in the basis of the CSFs and \(C(JM_{J})\) is the vector of the mixing coefficients.
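To make the structure of Eq. (7) concrete, the following minimal Python sketch diagonalizes a small symmetric stand-in matrix and extracts its lowest eigenpairs; the random matrix, its dimension, and the number of requested levels are illustrative placeholders for the actual DCB matrix elements in the CSF basis.

```python
import numpy as np
from scipy.linalg import eigh

# Toy stand-in for Eq. (7): diagonalize a symmetric "Hamiltonian" matrix in a
# small CSF basis and read off the lowest eigenvalues E(J) and mixing
# coefficients C(J M_J).  The random matrix is a placeholder for the actual
# matrix elements <Phi_a|H^DCB|Phi_b>.
rng = np.random.default_rng(0)
n_csf = 200                                  # hypothetical basis size
A = rng.standard_normal((n_csf, n_csf))
H = (A + A.T) / 2                            # real symmetric stand-in matrix

E, C = eigh(H, subset_by_index=[0, 4])       # five lowest eigenpairs
print(E)                                     # E[0] plays the role of E^DCB(J)
```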
The one-electron basis is constructed as follows. For the occupied \(nl\) states and for the low-lying virtual \(n^{\prime}l^{\prime}\) states, where \(n^{\prime}\leq n\) and \(l^{\prime}\leq l\), the orbitals are obtained as numerical solutions of the DF equations. All other virtual orbitals correspond to the solutions of the DF equations in the finite basis set of the Sturmian functions. The Sturmian functions are the numerical solutions of the Dirac-Fock-Sturm equation
\[(\hat{h}^{\rm DF}-\varepsilon_{0})\phi_{j}=\lambda_{j}W(r)\phi_{j}, \tag{8}\]
where \(\hat{h}^{\rm DF}\) is the DF Hamiltonian, \(\varepsilon_{0}\) is the one-electron reference energy of the occupied DF \(ns\), \(np\) or \(nd\) orbital, and \(W(r)\) is the weight function
\[W(r)=\left[\frac{1-e^{-(ar)^{2}}}{(ar)^{2}}\right]^{n}. \tag{9}\]
The parameters \(a\) and \(n\) are adjusted to achieve the fastest convergence of the energy \(E(J)\) with respect to the number of the virtual orbitals.
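For illustration, a direct transcription of the weight function of Eq. (9) is given below; the values of \(a\) and \(n\) in the example call are arbitrary and would in practice be tuned for convergence as described above.

```python
import numpy as np

def sturm_weight(r, a, n):
    """Weight function W(r) of Eq. (9): W(0) = 1, W(r) ~ (a r)^(-2n) at large r."""
    r = np.asarray(r, dtype=float)
    x = (a * r) ** 2
    w = np.ones_like(x)                  # series limit of (1 - e^-x)/x at x -> 0
    mask = x > 1e-12
    w[mask] = ((1.0 - np.exp(-x[mask])) / x[mask]) ** n
    return w

print(sturm_weight([0.0, 1.0, 5.0, 20.0], a=0.5, n=2))  # illustrative a and n
```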
The QED corrections are calculated using the model-QED-operator approach [37; 38; 39]. The model-QED operator \(\hat{V}^{\rm Q}\) is constructed in such a way as to reproduce the exact values of the diagonal and off-diagonal matrix elements of the one-loop QED contributions for the low-lying states of H-like ions. The practical application of the model-QED operator \(\hat{V}^{\rm Q}\) consists in adding it to the DCB Hamiltonian \(\hat{H}^{\rm DCB}\),
\[\hat{H}^{\rm DCBQ}=\Lambda_{\rm Q}^{(+)}\left[\hat{H}^{\rm D}+\hat{V}^{\rm C}+ \hat{V}^{\rm B}+\hat{V}^{\rm Q}\right]\Lambda_{\rm Q}^{(+)}, \tag{10}\]
and then finding the lowest eigenvalues of the matrix \(H^{\rm DCBQ}\)
\[H^{\rm DCBQ}C(JM)=E^{\rm DCBQ}(J)C(JM). \tag{11}\]
We should note that the operator \(\hat{V}^{\rm Q}\) is included in the Hamiltonian \(\hat{h}^{\rm DF}\) at the basis-construction stage; therefore, the projectors \(\Lambda_{\rm Q}^{(+)}\) in Eq. (10) differ from the projectors \(\Lambda^{(+)}\) in Eq. (1). The QED correction to the energy of a level is determined as the difference of the total energies
\[\Delta E^{\rm QED}(J)=E^{\rm DCBQ}(J)-E^{\rm DCB}(J). \tag{12}\]
The described procedure allows one to partially take into account the screened QED corrections within the multi-configuration calculations.
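The logic of Eqs. (10)-(12) amounts to diagonalizing the Hamiltonian twice, with and without the model-QED term, and taking the difference of the lowest eigenvalues. The toy sketch below illustrates this with random stand-in matrices; it deliberately ignores the subtlety noted above that the projectors (and hence the basis) differ between the two Hamiltonians.

```python
import numpy as np
from scipy.linalg import eigh

# Toy version of Eqs. (10)-(12): the QED correction is the difference of the
# lowest eigenvalues obtained with and without the model-QED term.  Random
# matrices stand in for H^DCB and V^Q.
rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))
H_dcb = (A + A.T) / 2
B = 1e-3 * rng.standard_normal((n, n))
V_q = (B + B.T) / 2                          # "small" QED perturbation

E_dcb = eigh(H_dcb, eigvals_only=True)[0]
E_dcbq = eigh(H_dcb + V_q, eigvals_only=True)[0]
print(E_dcbq - E_dcb)                        # Delta E^QED, Eq. (12)
```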
The nuclear recoil effect, caused by the finite mass of the nucleus, leads to the shift of the energy levels. Within the lowest-order relativistic approximation and to first order in the electron-to-nucleus mass ratio \(m/M\), the nuclear-recoil Hamiltonian can be written as [40; 41; 42; 43]
\[\hat{H}^{\rm MS}=\frac{1}{2M}\sum_{i,j}\left[\mathbf{p}_{i}\cdot\mathbf{p}_{j}-\frac{ \alpha Z}{r_{i}}\left(\mathbf{\alpha}_{i}+\frac{(\mathbf{\alpha}_{i}\cdot\mathbf{r}_{i}) \mathbf{r}_{i}}{r_{i}^{2}}\right)\cdot\mathbf{p}_{j}\right]. \tag{13}\]
The QED corrections to the nuclear recoil effect were calculated earlier (see, e.g., Refs. [44; 45; 46; 40; 41; 47; 48] and references therein). In the present paper, the nuclear recoil correction to the energy level, \(\Delta E^{\rm MS}\), is defined as the sum of the expectation value of the operator \(\hat{H}^{\rm MS}\), evaluated using the correlated many-electron function \(\Psi(JM_{J})\), and the corresponding one-electron QED corrections.
The frequency-dependent Breit-interaction correction to the energy level, \(\Delta E^{\rm FB}\), is calculated as follows. Let us consider the one-photon-exchange operator
\[I(\omega)=\alpha_{1}^{\mu}\alpha_{2}^{\nu}D_{\mu\nu}(\omega,\mathbf{r}_{12}), \tag{14}\]
where \(D_{\mu\nu}\) is the photon propagator in the Coulomb gauge
\[D_{00}(\omega,\mathbf{r}_{12})= \frac{1}{r_{12}},\quad D_{i0}=D_{0i}=0,\quad i=1,2,3,\] \[D_{il}(\omega,\mathbf{r}_{12})= 4\pi\int\frac{d\mathbf{k}}{(2\pi)^{3}}\frac{\exp(i\mathbf{k}\cdot\mathbf{r}_{12 })}{\omega^{2}-\mathbf{k}^{2}+i0}\left(\delta_{il}-\frac{k_{i}k_{l}}{\mathbf{k}^{2}} \right), \tag{15}\] \[i,l=1,2,3.\]
Considering the \(\omega\to 0\) limit in Eq. (15), we obtain the standard form of the Breit interaction (5). The correction \(\Delta E^{\rm FB}\) is evaluated as the expectation value of the symmetrized one-photon-exchange operator [49; 50; 51; 52; 53] with the wave functions obtained by the CI-DFS method.
The nuclear polarization and nuclear deformation corrections to the energy levels of He-like uranium are calculated according to Refs. [54; 55; 56; 57; 25].
Let us consider the probability of the transition of the many-electron system from the state \(\beta\) with the total angular momentum \(J_{\beta}\) to the state \(\alpha\) with the total angular momentum \(J_{\alpha}\). The probability of spontaneous emission of a photon with the frequency \(\omega\) and the multipolarity \(\lambda L\) (\(\lambda=E\) for the electric-type transitions and \(\lambda=M\) for the magnetic-type transitions) is given by the expression [58]
\[A_{L}^{(\lambda)}(\beta,\alpha)=2\alpha\omega\frac{2L+1}{2J_{\beta}+1}\left| \langle\alpha||T_{L}^{(\lambda)}||\beta\rangle\right|^{2}, \tag{16}\]
where \(\langle\alpha||T_{L}^{(\lambda)}||\beta\rangle\) is the reduced matrix element of the multipole transition operator \(T_{L}^{(\lambda)}\). To calculate the transition probabilities, the many-electron functions \(\Psi_{\alpha}\) and \(\Psi_{\beta}\) obtained by means of the CI-DFS method are used. The many-electron functions are evaluated for the DCB Hamiltonian with the model-QED operator included (10). Therefore, the calculated transition probabilities partially incorporate the QED corrections.
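For orientation, Eq. (16) can be evaluated numerically as sketched below, assuming atomic units (\(\hbar=m_{e}=e=1\)), so that the rate is converted to s\({}^{-1}\) via the atomic unit of time. The reduced matrix element in the example call is a made-up number for illustration only; the actual values come from the CI-DFS wave functions.

```python
# Sketch of Eq. (16) in atomic units; the rate is converted to s^-1 via the
# atomic unit of time.  The reduced matrix element below is hypothetical.
FINE_STRUCTURE = 1.0 / 137.035999
HARTREE_EV = 27.211386
AU_TIME_S = 2.4188843e-17

def transition_rate(delta_e_ev, L, J_beta, reduced_me_au):
    omega = delta_e_ev / HARTREE_EV                      # photon frequency, a.u.
    a_au = (2 * FINE_STRUCTURE * omega * (2 * L + 1) / (2 * J_beta + 1)
            * abs(reduced_me_au) ** 2)
    return a_au / AU_TIME_S                              # a.u. -> s^-1

print(transition_rate(delta_e_ev=16355.3, L=1, J_beta=3, reduced_me_au=0.1))
```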
## III Numerical results and discussion
For heliumlike uranium, the systematic calculations of the energies of the \(1sns\), \(1snp\), \(1snd\) states with \(n\leq 4\) are carried out. When constructing the many-electron basis within the CI-DFS method, all the possible single and double excitations from the reference configuration, which corresponds to the occupied state, into a space spanned by a given number of virtual orbitals are considered. The orbitals with \(n\leq 17\) for each quantum number \(l\leq 11\) are used as a one-electron basis set in the calculations of the total energy \(E^{\rm DCB}\): the total number of the one-electron functions is \(138\). For each considered state, the uncertainty associated with the incompleteness of the one-electron basis is determined from the analysis of the convergence of the total energy \(E^{\rm DCB}\) with respect to the number of the one-electron functions. It is established that the uncertainty of \(E^{\rm DCB}\) due to the finite size of the basis does not exceed a value of the order of \(0.1\) eV for all the considered states.
Further, various corrections to the energies \(E^{\rm DCB}\) are calculated. The QED corrections are evaluated using the model-QED-operator approach according to Eqs. (10) -- (12). When calculating the QED correction, a significantly smaller basis of the one-electron functions is used, since the correction \(\Delta E^{\rm QED}\) converges with respect to the number of the one-electron functions faster than the total energy \(E^{\rm DCB}\). The orbitals with \(n\leq 13\) for each quantum number \(l\leq 4\) are used in the QED-correction calculations; the total number of the used functions is \(55\). The uncertainty of the evaluated QED corrections associated with the incompleteness of the basis set constitutes approximately \(0.01\) eV for all the considered states. Additionally, the nuclear recoil correction, \(\Delta E^{\rm MS}\), and the frequency-dependent Breit-interaction correction, \(\Delta E^{\rm FB}\), are calculated. These corrections are evaluated using an even smaller number of virtual orbitals; however, the numerical accuracy for these corrections is several orders of magnitude higher than that for the energy \(E^{\rm DCB}\).
In Table 1 for the ground state of heliumlike uranium, the values of the energy obtained using the DCB Hamiltonian, \(E^{\rm DCB}\), the QED correction, \(\Delta E^{\rm QED}\), the nuclear recoil correction, \(\Delta E^{\rm MS}\), the frequency-dependent Breit-interaction correction, \(\Delta E^{\rm FB}\), the nuclear polarization and deformation corrections, \(\Delta E^{\rm PD}\), as well as the total energy including all the corrections, \(E^{\rm tot}\), are presented. The obtained results are compared with the data from Ref. [25]. In Ref. [25], the rigorous calculations of the one-electron (self-energy and vacuum polarization) and screened QED corrections, the two-photon-exchange contribution, and also the higher-order interelectronic-interaction contributions within the Breit approximation were performed. Moreover, the contributions of the one-electron two-loop diagrams were taken into account there. The calculations of the nuclear recoil effect in both the present work and Ref. [25] have been carried out taking into account the corresponding QED contribution.
The DCB energy obtained in the present work is compared with the DCB energy from Ref. [25] calculated using the projectors \(\Lambda^{(+)}\) in Eq. (1) which correspond to the Dirac equation (3). The value of the QED corrections calculated in the present work is compared with the sum of the one-electron and screened QED corrections calculated in Ref. [25] using the local Dirac-Fock (LDF) potential. The frequency-dependent Breit-interaction correction is compared with the value of the one-photon-exchange contribution from Ref. [25]. Our total energy is compared with the total value from Ref. [25], which also includes the nuclear polarization and deformation corrections.
The comparison shows that the results of the present calculations for the ground state of heliumlike uranium are in agreement with the results of Ref. [25]. Indeed, the DCB energy of the ground state of heliumlike uranium calculated in the present work is \(-261910.84\) eV, while in Ref. [25] this quantity equals \(-261910.73\) eV, which is within our estimated numerical uncertainty of \(0.1\) eV. The correction \(\Delta E^{\rm FB}\) in Ref. [25] is strictly zero, since the one-photon exchange between the \(1s\) electrons occurs at the zero frequency of the virtual photon, \(\omega=0\). In the present work, however, the value of \(\Delta E^{\rm FB}\) deviates from zero due to the mixing of states which have different energies. For the ground state of heliumlike uranium, the QED correction calculated in the present work by means of the model-QED-operator method is \(527.00\) eV, which agrees at the \(0.5\%\) level with the sum of the one-electron and screened QED corrections, \(523.01\) eV, obtained in Ref. [25]. The value of the nuclear recoil correction, \(\Delta E^{\rm MS}\), equals \(0.92\) eV, in good agreement with the result of \(0.93\) eV obtained in Ref. [25]. The difference of the total ground-state energies of heliumlike uranium \(E^{\rm tot}_{1s1s}\) obtained in the present work and in Ref. [25] is about \(2.5\) eV; it is mainly due to the lack of an accurate treatment of the two-photon-exchange contribution, the approximate treatment of the screened QED contributions, and the only partial account of the two-loop diagrams in the present work.
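As a quick bookkeeping check, the contributions quoted above (and listed in Table 1) can be summed in a few lines of Python to reproduce the total ground-state energy:

```python
# Consistency check for Table 1: the contributions sum to the total energy.
contributions = {
    "E_DCB":  -261_910.84,
    "dE_FB":       -0.02,
    "dE_QED":     527.00,
    "dE_MS":        0.92,
    "dE_PD":       -0.62,
}
print(f"{sum(contributions.values()):.2f} eV")   # -261383.56 eV
```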
In Table 2 for the excited \((1s2s)_{0}\), \((1s2s)_{1}\), \((1s2p_{1/2})_{0}\), \((1s2p_{3/2})_{2}\), \((1s2p_{1/2})_{1}\), \((1s2p_{3/2})_{1}\) states of heliumlike uranium, the results of the calculations of the energies obtained using the DCB Hamiltonian, \(E^{\rm DCB}\), the QED corrections, \(\Delta E^{\rm QED}\), the nuclear recoil corrections, \(\Delta E^{\rm MS}\), the frequency-dependent Breit-interaction corrections, \(\Delta E^{\rm FB}\), the nuclear polarization and deformation corrections, \(\Delta E^{\rm PD}\), are presented. The total energies, \(E^{\rm tot}\), which include all the corrections, and energies relative to the ground state, \(E^{\rm tot}-E^{\rm tot}_{1s1s}\), are given. The latter results are compared with the related values from Ref. [25].
\begin{table}
\begin{tabular}{l|r} \hline Contribution & Value \\ \hline \(E^{\rm DCB}\) & \(-261\,910.84\) \\ \(\Delta E^{\rm FB}\) & \(-0.02\) \\ \(\Delta E^{\rm QED}\) & \(527.00\) \\ \(\Delta E^{\rm MS}\) & \(0.92\) \\ \(\Delta E^{\rm PD}\) & \(-0.62\) \\ \(E^{\rm tot}_{1s1s}\) & \(-261\,383.56\) \\ \(E^{\rm tot}_{1s1s}\) [25] & \(-261\,386.15\) \\ \hline \end{tabular}
\end{table}
Table 1: The ground-state energy of heliumlike uranium calculated using the Dirac-Coulomb-Breit Hamiltonian, \(E^{\rm DCB}\), and various corrections to this value: the frequency-dependent Breit interaction correction, \(\Delta E^{\rm FB}\), the QED correction, \(\Delta E^{\rm QED}\), the nuclear recoil correction, \(\Delta E^{\rm MS}\), the nuclear polarization and deformation corrections, \(\Delta E^{\rm PD}\). The value \(E^{\rm tot}_{1s1s}\) is the total energy (eV). The total energy is compared with the result of Ref. [25].
Table 2 shows that the results of the present work are in reasonable agreement with the results of Ref. [25]. The energies \(E^{\rm DCB}\) obtained in the present work are consistent with the results of Ref. [25] within the uncertainty of 0.1 eV estimated for the Dirac-Coulomb-Breit equation solutions. For various excited states, as in the case of the ground state, the difference of the contributions \(\Delta E^{\rm QED}\) between this work and Ref. [25] is about 0.5%. For the total energies \(E^{\rm tot}\), the difference between the final results does not exceed 2.5 eV. The systematic deviation decreases if we consider the transition energy to the ground state -- the difference between the results becomes about 1 eV. The reasons of these deviations are the same as for the ground-state values: in the present work, the QED corrections are taken into account approximately and the two-photon-exchange contribution beyond the Breit approximation is excluded from the consideration.
\begin{table}
\begin{tabular}{l|l|r|l|l|r} \hline State & Contribution & Value & State & Contribution & Value \\ \hline & \(E^{\rm DCB}\) & \(-165\,418.06\) & & \(E^{\rm DCB}\) & \(-161\,115.78\) \\ & \(\Delta E^{\rm FB}\) & \(0.67\) & & \(\Delta E^{\rm FB}\) & \(-7.05\) \\ & \(\Delta E^{\rm QED}\) & \(314.79\) & & \(\Delta E^{\rm QED}\) & \(275.05\) \\ \((1s2s)_{0}\) & \(\Delta E^{\rm MS}\) & \(0.58\) & \((1s2p_{3/2})_{2}\) & \(\Delta E^{\rm MS}\) & \(0.50\) \\ & \(\Delta E^{\rm PD}\) & \(-0.37\) & & \(\Delta E^{\rm PD}\) & \(-0.31\) \\ & \(E^{\rm tot}\) & \(-165\,102.39\) & & \(E^{\rm tot}\) & \(-160\,847.60\) \\ & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) & \(96\,281.17\) & & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) [25] & \(100\,536.95\) \\ \hline & \(E^{\rm DCB}\) & \(-165\,673.15\) & & \(E^{\rm DCB}\) & \(-165\,488.45\) \\ & \(\Delta E^{\rm FB}\) & \(0.23\) & & \(\Delta E^{\rm FB}\) & \(0.10\) \\ & \(\Delta E^{\rm QED}\) & \(315.89\) & & \(\Delta E^{\rm QED}\) & \(272.92\) \\ \((1s2s)_{1}\) & \(\Delta E^{\rm MS}\) & \(0.58\) & \((1s2p_{1/2})_{1}\) & \(\Delta E^{\rm MS}\) & \(0.54\) \\ & \(\Delta E^{\rm PD}\) & \(-0.37\) & & \(\Delta E^{\rm PD}\) & \(-0.32\) \\ & \(E^{\rm tot}\) & \(-165\,356.82\) & & \(E^{\rm tot}\) & \(-165\,215.20\) \\ & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) & \(96\,026.74\) & & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) & \(96\,168.36\) \\ & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) [25] & \(96\,027.07(54)\) & & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) [25] & \(96\,169.43(54)\) \\ \hline & \(E^{\rm DCB}\) & \(-165\,379.04\) & & \(E^{\rm DCB}\) & \(-161\,052.09\) \\ & \(\Delta E^{\rm FB}\) & \(0.32\) & & \(\Delta E^{\rm FB}\) & \(2.96\) \\ & \(\Delta E^{\rm QED}\) & \(272.73\) & & \(\Delta E^{\rm QED}\) & \(275.16\) \\ \((1s2p_{1/2})_{0}\) & \(\Delta E^{\rm MS}\) & \(0.53\) & \((1s2p_{3/2})_{1}\) & \(\Delta E^{\rm MS}\) & \(0.54\) \\ & \(\Delta E^{\rm PD}\) & \(-0.32\) & & \(\Delta E^{\rm PD}\) & \(-0.31\) \\ & \(E^{\rm tot}\) & \(-165\,105.77\) & & \(E^{\rm tot}\) & \(-160\,773.74\) \\ & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) & \(96\,277.79\) & & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) & \(100\,609.82\) \\ & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) [25] & \(96\,279.01(54)\) & & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) [25] & \(100\,610.68(54)\) \\ \hline \end{tabular}
\end{table}
Table 2: The energies of the \(1s2s\) and \(1s2p\) states of heliumlike uranium calculated using the Dirac-Coulomb-Breit Hamiltonian, \(E^{\rm DCB}\), and various corrections to these values: the frequency-dependent Breit-interaction corrections, \(\Delta E^{\rm FB}\), the QED corrections, \(\Delta E^{\rm QED}\), the nuclear recoil corrections, \(\Delta E^{\rm MS}\), the nuclear polarization and deformation corrections, \(\Delta E^{\rm PD}\). \(E^{\rm tot}\) are the total energies and \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) are the total energies relative to the ground state (eV). The latter values are compared with the total results from Ref. [25].
In Table 3, the results for the energies of the \((1sns)_{0}\), \((1sns)_{1}\), \((1snp_{1/2})_{0}\), \((1snp_{1/2})_{1}\), \((1snp_{3/2})_{1}\), \((1snp_{3/2})_{2}\) states with \(n=3,4\) are presented. For each state, the values of the individual contributions, the total binding energy, and the energy relative to the ground state are given. The transition energies to the ground state are compared with the results of Ref. [26]. In Ref. [26], the interelectronic interaction was treated within the Breit approximation using the configuration-interaction method. Additionally, the frequency-dependent Breit-interaction and nuclear recoil corrections were taken into account there. The one-loop QED corrections were treated in Ref. [26] using the same model-QED-operator approach as employed here. Furthermore, the two-loop QED corrections were considered there. In Ref. [26], the uncertainty of the theoretical calculations of the transition energies was estimated to be at the level of 1 eV. From Table 3 it can be seen that, for the transitions considered, our results are in reasonable agreement with the results of Ref. [26]: the differences for all the transition energies do not exceed 2 eV.
Finally, in Table 4 the results for the energies of the \((1snd_{3/2})_{1}\), \((1snd_{3/2})_{2}\), \((1snd_{5/2})_{2}\), \((1snd_{5/2})_{3}\) states with \(n=3,4\) are shown. For each state, the values of the individual contributions, the total energy, and the energy of the singly-excited state relative to the ground state are presented. Based on the comparison of the present results for the transition energies to the ground state from the \(1sns\) and \(1snp\) states with \(n=1,2\) and with \(n=3,4\) with the theoretical predictions of Ref. [25] and Ref. [26], respectively, we estimate the uncertainty of the obtained results for the corresponding transitions from the \(1snd\) states with \(n=3,4\) to be at the level of 2 eV. This uncertainty includes the error due to the QED effects beyond the model-QED-operator approach and the error due to the uncertainty of the root-mean-square radius of the \({}^{238}\)U nucleus.
\begin{table}
\begin{tabular}{c|l|r|r|r|r|r|r} \hline \(n\) & Contribution & \((1sns)_{0}\) & \((1sns)_{1}\) & \((1snp_{1/2})_{0}\) & \((1snp_{3/2})_{2}\) & \((1snp_{1/2})_{1}\) & \((1snp_{3/2})_{1}\) \\ \hline \multirow{8}{*}{\(n=3\)} & \(E^{\rm DCB}\) & \(-146\,389.84\) & \(-146\,456.53\) & \(-146\,380.77\) & \(-145\,104.14\) & \(-146\,408.38\) & \(-145\,084.29\) \\ & \(\Delta E^{\rm FB}\) & \(0.24\) & \(0.08\) & \(0.55\) & \(-2.06\) & \(0.18\) & \(0.92\) \\ & \(\Delta E^{\rm QED}\) & \(281.11\) & \(281.34\) & \(269.13\) & \(269.52\) & \(269.08\) & \(269.54\) \\ & \(\Delta E^{\rm MS}\) & \(0.51\) & \(0.51\) & \(0.50\) & \(0.49\) & \(0.50\) & \(0.50\) \\ & \(\Delta E^{\rm PD}\) & \(-0.33\) & \(-0.33\) & \(-0.31\) & \(-0.31\) & \(-0.31\) & \(-0.31\) \\ & \(E^{\rm tot}\) & \(-146\,108.30\) & \(-146\,174.94\) & \(-146\,110.90\) & \(-144\,836.51\) & \(-146\,138.93\) & \(-144\,813.64\) \\ & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) & \(115\,275.26\) & \(115\,208.62\) & \(115\,272.66\) & \(116\,547.05\) & \(115\,244.63\) & \(116\,569.92\) \\ & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) [26] & \(115\,276.70\) & \(115\,209.77\) & \(115\,273.83\) & \(116\,548.37\) & \(115\,245.92\) & \(116\,571.21\) \\ \hline \multirow{8}{*}{\(n=4\)} & \(E^{\rm DCB}\) & \(-139\,923.01\) & \(-139\,949.50\) & \(-139\,919.59\) & \(-139\,387.14\) & \(-139\,930.42\) & \(-139\,378.70\) \\ & \(\Delta E^{\rm FB}\) & \(0.10\) & \(0.03\) & \(0.28\) & \(-0.86\) & \(0.09\) & \(0.39\) \\ & \(\Delta E^{\rm QED}\) & \(272.64\) & \(272.71\) & \(267.71\) & \(267.93\) & \(267.73\) & \(267.95\) \\ & \(\Delta E^{\rm MS}\) & \(0.49\) & \(0.49\) & \(0.48\) & \(0.48\) & \(0.48\) & \(0.48\) \\ & \(\Delta E^{\rm PD}\) & \(-0.31\) & \(-0.31\) & \(-0.31\) & \(-0.31\) & \(-0.31\) & \(-0.31\) \\ & \(E^{\rm tot}\) & \(-139\,650.09\) & \(-139\,676.58\) & \(-139\,651.43\) & \(-139\,119.89\) & \(-139\,662.43\) & \(-139\,110.19\) \\ & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) & \(121\,733.47\) & \(121\,706.98\) & \(121\,732.13\) & \(122\,263.67\) & \(121\,721.13\) & \(122\,273.37\) \\ & \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) [26] & \(121\,734.83\) & \(121\,708.20\) & \(121\,733.31\) & \(122\,264.98\) & \(121\,722.35\) & \(122\,274.65\) \\ \hline \end{tabular}
\end{table}
Table 3: The energies of the \(1sns\) and \(1snp\) states with \(n=3,4\) of heliumlike uranium calculated using the Dirac-Coulomb-Breit Hamiltonian, \(E^{\rm DCB}\), and various corrections to these values: the frequency-dependent Breit-interaction corrections, \(\Delta E^{\rm FB}\), the QED corrections, \(\Delta E^{\rm QED}\), the nuclear recoil corrections, \(\Delta E^{\rm MS}\), the nuclear polarization and deformation corrections, \(\Delta E^{\rm PD}\). \(E^{\rm tot}\) are the total energies and \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) are the total energies relative to the ground state (eV). The latter values are compared with the total results from Ref. [26].
In Table 5, the one-photon transition probabilities for the \(1s3d\to 1s2p\), \(1s3p\to 1s2s\), \(1s3p\to 1s2p\), and \(1s4d\to 1s2p\) transitions with the lowest possible multipolarities are presented. The transition energies are obtained from the results given in Tables 1 -- 4. In the calculations of the transition probabilities, the orbitals with \(n\leq 14\) for \(l=0\), \(n\leq 11\) for \(l=1\), and \(n\leq 9\) for \(l=2,3\) are used; the total number of the one-electron functions is 37. The dipole \(E1\) \((1s3d_{5/2})_{3}\rightarrow(1s2p_{3/2})_{2}\) and \((1s3d_{3/2})_{2}\rightarrow(1s2p_{1/2})_{1}\) transitions have the largest probabilities, which are approximately \(0.44\cdot 10^{16}\) s\({}^{-1}\). The energies of these transitions are equal to 16 355.3 eV and 20 389.1 eV, respectively.
## IV Summary
In the present work, the energies of the \(1sns\), \(1snp\), and \(1snd\) states with \(n\leq 4\) of heliumlike uranium are calculated using the configuration-interaction method in the basis of the Dirac-Fock-Sturm orbitals. The energies and probabilities of the one-photon \(1s3d\to 1s2p\), \(1s3p\to 1s2s\), \(1s3p\to 1s2p\), and \(1s4d\to 1s2p\) transitions with the lowest possible multipolarities are evaluated. The QED corrections to the energies of the states are taken into account using the model-QED-operator approach. In addition, the nuclear recoil, frequency-dependent Breit-interaction, and nuclear polarization and deformation corrections to the state and transition energies are calculated.
## Acknowledgment
This work was supported by the Russian Science Foundation (Grant No 22-62-00004, [https://rscf.ru/project/22-62-00004/](https://rscf.ru/project/22-62-00004/)).
\begin{table}
\begin{tabular}{l|c|c|c} \hline Transition \(\beta\rightarrow\alpha\) & \(\lambda L\) & \(\Delta E_{\beta\alpha}\) & \(A_{L}^{(\lambda)}(\beta,\alpha)\) \\ \hline \((3d_{3/2})_{2}\rightarrow(2p_{3/2})_{1}\) & E1 & \(15\,947.7\) & \(0.747\cdot 10^{14}\) \\ \((3d_{3/2})_{2}\rightarrow(2p_{3/2})_{2}\) & E1 & \(16\,021.5\) & \(0.645\cdot 10^{15}\) \\ \((3d_{3/2})_{1}\rightarrow(2p_{3/2})_{2}\) & E1 & \(16\,029.9\) & \(0.119\cdot 10^{15}\) \\ \((3d_{5/2})_{3}\rightarrow(2p_{3/2})_{1}\) & M2 & \(16\,281.5\) & \(0.149\cdot 10^{12}\) \\ \((3d_{5/2})_{2}\rightarrow(2p_{3/2})_{1}\) & E1 & \(16\,286.7\) & \(0.390\cdot 10^{16}\) \\ \((3d_{5/2})_{3}\rightarrow(2p_{3/2})_{2}\) & E1 & \(16\,355.3\) & \(0.436\cdot 10^{16}\) \\ \((3d_{5/2})_{2}\rightarrow(2p_{3/2})_{2}\) & E1 & \(16\,360.5\) & \(0.437\cdot 10^{15}\) \\ \((3d_{3/2})_{1}\rightarrow(2p_{1/2})_{0}\) & E1 & \(20\,288.1\) & \(0.294\cdot 10^{16}\) \\ \((3p_{3/2})_{1}\rightarrow(2s_{1/2})_{0}\) & E1 & \(20\,288.8\) & \(0.776\cdot 10^{15}\) \\ \((3p_{3/2})_{1}\rightarrow(2p_{1/2})_{0}\) & M1 & \(20\,292.1\) & \(0.224\cdot 10^{12}\) \\ \((3d_{3/2})_{2}\rightarrow(2p_{1/2})_{1}\) & E1 & \(20\,389.1\) & \(0.443\cdot 10^{16}\) \\ \((3d_{3/2})_{1}\rightarrow(2p_{1/2})_{1}\) & E1 & \(20\,397.5\) & \(0.148\cdot 10^{16}\) \\ \((3p_{3/2})_{2}\rightarrow(2s_{1/2})_{1}\) & E1 & \(20\,520.3\) & \(0.114\cdot 10^{16}\) \\ \((3p_{3/2})_{1}\rightarrow(2s_{1/2})_{1}\) & E1 & \(20\,543.2\) & \(0.373\cdot 10^{15}\) \\ \((3d_{5/2})_{2}\rightarrow(2p_{1/2})_{0}\) & M2 & \(20\,618.7\) & \(0.215\cdot 10^{12}\) \\ \((4d_{5/2})_{2}\rightarrow(2p_{3/2})_{1}\) & E1 & \(21\,802.7\) & \(0.127\cdot 10^{16}\) \\ \((4d_{5/2})_{3}\rightarrow(2p_{3/2})_{2}\) & E1 & \(21\,874.4\) & \(0.141\cdot 10^{16}\) \\ \hline \end{tabular}
\end{table}
Table 5: The probabilities of the one-photon transitions with the lowest possible multipolarity \(\lambda L\), \(A_{L}^{(\lambda)}(\beta,\alpha)\) (s\({}^{-1}\)), and the transition energies, \(\Delta E_{\beta\alpha}\) (eV), for heliumlike uranium. The \(1s\) orbital is omitted in the designations of the initial and final states for the sake of brevity.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \(n\) & Contribution & \((1snd_{3/2})_{2}\) & \((1snd_{3/2})_{1}\) & \((1snd_{5/2})_{3}\) & \((1snd_{5/2})_{2}\) \\ \hline \multirow{5}{*}{\(n=3\)} & \(E^{\rm DCB}\) & \(-145\,092.76\) & \(-145\,084.47\) & \(-144\,759.69\) & \(-144\,754.50\) \\ & \(\Delta E^{\rm FB}\) & \(0.02\) & \(0.12\) & \(0.03\) & \(0.02\) \\ & \(\Delta E^{\rm QED}\) & \(266.48\) & \(266.49\) & \(267.20\) & \(267.21\) \\ & \(\Delta E^{\rm MS}\) & \(0.49\) & \(0.49\) & \(0.49\) & \(0.49\) \\ & \(\Delta E^{\rm PD}\) & \(-0.31\) & \(-0.31\) & \(-0.31\) & \(-0.31\) \\ & \(E^{\rm tot}\) & \(-144\,826.07\) & \(-144\,817.69\) & \(-144\,492.28\) & \(-144\,487.08\) \\ & \(E^{\rm tot}-E^{\rm tot}_{\rm 1s1s}\) & \(116\,557.49\) & \(116\,565.87\) & \(116\,891.28\) & \(116\,896.48\) \\ \hline \multirow{5}{*}{\(n=4\)} & \(E^{\rm DCB}\) & \(-139\,382.25\) & \(-139\,378.74\) & \(-139\,240.45\) & \(-139\,238.15\) \\ & \(\Delta E^{\rm FB}\) & \(0.01\) & \(0.07\) & \(0.02\) & \(0.00\) \\ & \(\Delta E^{\rm QED}\) & \(266.62\) & \(266.63\) & \(267.04\) & \(266.92\) \\ & \(\Delta E^{\rm MS}\) & \(0.48\) & \(0.48\) & \(0.48\) & \(0.48\) \\ & \(\Delta E^{\rm PD}\) & \(-0.31\) & \(-0.31\) & \(-0.31\) & \(-0.31\) \\ & \(E^{\rm tot}\) & \(-139\,115.44\) & \(-139\,111.87\) & \(-138\,973.22\) & \(-138\,971.06\) \\ & \(E^{\rm tot}-E^{\rm tot}_{\rm 1s1s}\) & \(122\,268.12\) & \(122\,271.69\) & \(122\,410.34\) & \(122\,412.50\) \\ \end{tabular}
\end{table}
Table 4: The energies of the \(1snd\) states with \(n\leq 4\) of heliumlike uranium calculated using the Dirac-Coulomb-Breit Hamiltonian, \(E^{\rm DCB}\), and various corrections to these values: the frequency-dependent Breit-interaction corrections, \(\Delta E^{\rm FB}\), the QED corrections, \(\Delta E^{\rm QED}\), the nuclear recoil corrections, \(\Delta E^{\rm MS}\), the nuclear polarization and deformation corrections, \(\Delta E^{\rm PD}\). \(E^{\rm tot}\) are the total energies and \(E^{\rm tot}-E^{\rm tot}_{1s1s}\) are the total energies relative to the ground state (eV).
2309.04747 | When to Learn What: Model-Adaptive Data Augmentation Curriculum | Data augmentation (DA) is widely used to improve the generalization of neural
networks by enforcing the invariances and symmetries to pre-defined
transformations applied to input data. However, a fixed augmentation policy may
have different effects on each sample in different training stages but existing
approaches cannot adjust the policy to be adaptive to each sample and the
training model. In this paper, we propose Model Adaptive Data Augmentation
(MADAug) that jointly trains an augmentation policy network to teach the model
when to learn what. Unlike previous work, MADAug selects augmentation operators
for each input image by a model-adaptive policy varying between training
stages, producing a data augmentation curriculum optimized for better
generalization. In MADAug, we train the policy through a bi-level optimization
scheme, which aims to minimize a validation-set loss of a model trained using
the policy-produced data augmentations. We conduct an extensive evaluation of
MADAug on multiple image classification tasks and network architectures with
thorough comparisons to existing DA approaches. MADAug outperforms or is on par
with other baselines and exhibits better fairness: it brings improvement to all
classes and more to the difficult ones. Moreover, MADAug learned policy shows
better performance when transferred to fine-grained datasets. In addition, the
auto-optimized policy in MADAug gradually introduces increasing perturbations
and naturally forms an easy-to-hard curriculum. | Chengkai Hou, Jieyu Zhang, Tianyi Zhou | 2023-09-09T10:35:27Z | http://arxiv.org/abs/2309.04747v2 | # When to Learn What: Model-Adaptive Data Augmentation Curriculum
###### Abstract
Data augmentation (DA) is widely used to improve the generalization of neural networks by enforcing the invariances and symmetries to pre-defined transformations applied to input data. However, a fixed augmentation policy may have different effects on each sample in different training stages but existing approaches cannot adjust the policy to be adaptive to each sample and the training model. In this paper, we propose "Model-Adaptive Data Augmentation (MADAug)" that jointly trains an augmentation policy network to teach the model "**when to learn what**". Unlike previous work, MADAug selects augmentation operators for each input image by a model-adaptive policy varying between training stages, producing a data augmentation curriculum optimized for better generalization. In MADAug, we train the policy through a bi-level optimization scheme, which aims to minimize a validation-set loss of a model trained using the policy-produced data augmentations. We conduct an extensive evaluation of MADAug on multiple image classification tasks and network architectures with thorough comparisons to existing DA approaches. MADAug outperforms or is on par with other baselines and exhibits better fairness: it brings improvement to all classes and more to the difficult ones. Moreover, MADAug learned policy shows better performance when transferred to fine-grained datasets. In addition, the auto-optimized policy in MADAug gradually introduces increasing perturbations and naturally forms an easy-to-hard curriculum. Our code is available at [https://github.com/JackHck/MADAug](https://github.com/JackHck/MADAug).
## 1 Introduction
Data augmentation is a widely used strategy to increase the diversity of training data, which improves model generalization, especially in image recognition tasks [21, 35, 17]. Unlike previous works that apply manually-designed augmentation operations [6, 44, 47, 23, 4, 24], recent researchers have resorted to searching for a data augmentation policy for a target dataset or its samples. Despite the success of these learnable and dataset-dependent augmentation policies, they are fixed once learned and thus non-adaptive to either different samples or models at different training stages, resulting in biases across different data regions [2] or inefficient training.
In this paper, we study two fundamental problems towards developing a data-and-model-adaptive data augmentation policy that determines a curriculum of "when to learn what" to train a model: **(1)**_when to apply data augmentation in training?_ **(2)**_what data augmentations should be applied to each training sample at different training stages?_
First, applying data augmentation does not always bring improvement over the whole course of training. For example, we observed that a model tends to learn faster during earlier training stages without using data augmentation. We hypothesize that models at the early stage of training cannot even recognize the original images, so excessively augmented images are not conducive to the convergence of the models. Motivated by this observation, we first design a strategy called monotonic curriculum to progressively introduce more augmented data into the training. In particular, we gradually increase the probability of applying data augmentation to each sample by following the _Tanh_ function (see Figure 1), so the model can be quickly improved in earlier stages without distractions from augmentations while reaching a better performance in later stages through learning from augmented data.
Secondly, a fixed augmentation policy is not optimal for every sample or training stage. Although the monotonic curriculum gradually increases the augmented data as the model improves, it does not determine which augmentations applied to each sample can bring the most improvement to the model training. Intuitively, the model can learn more from diverse data augmentations. Moreover, the difficulty of augmented data also has a great impact on the training, and it depends on both the augmentations and the sample they are applied to. For example, "simple" augmentation is preferred in the early stages to accelerate model convergence, but more challenging aug
mented data provide additional information for learning more robust features for better generalization in the later stage. One plausible strategy is leveraging expert knowledge and advice to adjust the augmentation operations and their strengths [29, 47, 34, 14]. In this paper, instead of relying on human experts, we regard the evaluation of the current model on a validation set as an expert to guide the optimization of augmentation policies applied to each sample in different training stages. As illustrated in Figure 1, we utilize a policy network to produce the augmentations for each sample (_i.e_., data-adaptive) used to train the task model, while the training objective of the policy network is to minimize the validation loss of the task model (_i.e_., model-adaptive). This is a challenging bi-level optimization problem [5]. To address it, we train the task model on adaptive augmentations of training data and update the policy network to minimize the validation loss in an online manner. Thereby, the policy network is dynamically adapted to different training stages of the task model and generates customized augmentations for each sample. This results in a curriculum of data augmentations optimized for improving the generalization performance of the task model.
Our main contributions can be summarized as follows:
* A **monotonic curriculum** gradually introducing more data augmentation to the training process.
* **MADAug** that trains a data augmentation policy network on the fly with the task model training. The policy automatically selects augmentations for each training sample and for different training stages.
* Experiments on CIFAR-10/100, SVHN, and ImageNet demonstrate that MADAug consistently brings greater improvement to task models than existing data augmentation methods in terms of test-set performance.
* The augmentation policy network learned by MADAug on one dataset is **transferable to unseen datasets and downstream tasks**, producing better models than other baselines.
## 2 Related Work
Random crop and horizontal flip operations are commonly employed as standard data augmentation techniques for images in deep learning. Recently, significant advancements in data augmentation techniques have substantially increased the accuracy of image recognition tasks [46, 41, 43, 38, 9, 16, 17]. However, data augmentations may only be applicable to certain domains; heuristically transplanting transformations that are effective in one domain into another could even have the opposite effect [2]. Thus, the exploration of optimal data augmentation policies necessitates specialized domain knowledge.
AutoAugment [6] adopts reinforcement learning to automatically find an effective augmentation policy. However, AutoAugment requires thousands of GPU hours to find the policies in a reduced setting and limits the randomness of the augmentation policies. To tackle these challenges, searching for optimal data augmentation strategies has become a prominent research topic and many methods have been proposed [46, 36, 13, 24, 14, 23, 25, 22, 44, 18, 45, 4].
These methods can be broadly classified into two distinct categories: fixed augmentation policies and online augmentation policies. The first category of methods [13, 23, 24, 47, 6, 45, 4] employs subsets of the training data and/or smaller models to efficiently discover fixed augmentation policies. However, the limited randomness in these policies makes it challenging to generate suitable samples for various stages of training; thus, fixed augmentation policies are suboptimal. The second category of methods [7, 36, 26, 22, 30, 44, 25] focuses on directly finding dynamic augmentation policies for the task model. This strategy is increasingly recognized as the primary choice for data augmentation search.

Figure 1: **MADAug** applies a monotonic curriculum to gradually introduce more data augmentations to the task model training and uses a policy network to choose augmentations for each training sample. MADAug trains the policy to minimize the validation loss of the task model, so the augmentations are model-adaptive and optimized for different training stages.
RandAugment [7] and TrivialAugment [30] are typical methods of the second type for finding online augmentations. They randomly select the augmentation parameters without relying on any external knowledge or prior information. Other methods, such as Adversarial AutoAugment [44], generate adversarial augmentations by maximizing the training loss. However, the inherent instability of adversarial augmentations, without appropriate constraints, poses a risk of distorting the intrinsic meanings of images. To avoid this collapse, TeachAugment [36] utilizes "teacher knowledge" to effectively restrict adversarial augmentations. However, Adversarial AutoAugment [44] and TeachAugment [36] both offer "hard" augmentations rather than "adaptive" augmentations, which are not effective in enhancing model generalization at the early training stage: models early in training cannot even recognize the original images, and "hard" augmentations hinder their convergence. Thus, in our paper, we gradually apply data augmentations to samples and track the model performance on the validation set to adjust the policies through bi-level optimization during model training.
## 3 Method
In this section, we first propose monotonic curriculum which progressively introduces more augmented samples as the training epoch increases. We then introduce the policy network that generates model-adaptive data augmentations and study how to train it through bi-level optimization with the task model.
### When to Augment: Monotonic Curriculum
Previous studies [6, 23, 24, 13] have applied data augmentations throughout the whole model training process. However, at the early stage of model training, the model cannot even recognize the original images. In this case, is data augmentation effective? In Figure 2, the test accuracy of a model trained on the Reduced CIFAR-10 dataset drops in the first \(\sim 70\) epochs when human-designed data augmentations are applied. To address this problem, at the beginning of model training, we only apply augmentations to a randomly sampled subset of training images while keeping the rest as original. In the later training stages, we apply a monotonic curriculum that gradually increases the proportion of images to be augmented, or equivalently the probability of applying augmentation. Specifically, the proportion/probability \(p(t)\) increases with the number of epochs by following a schedule defined by \(\tanh\), i.e.,
\[p(t)=\tanh(t/\tau) \tag{1}\]
where \(t\) is the current training epoch number and \(\tau\) is a manually adjustable hyperparameter that controls the growth of the proportion. Therefore, the early-stage model is mainly trained on the original images without augmentations, which helps the premature model converge quickly. As training proceeds, however, the model has fully learned the original images and its training can benefit more from the augmented images. To validate the effectiveness of our strategy, we compare it with training without any augmentation policy and with fixed human-designed augmentation policies; our method effectively boosts model performance during various training stages (see Figure 2).
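A minimal sketch of this schedule is given below; the function and variable names are ours, and the `policy` argument stands in for whatever augmentation is to be applied.

```python
import math, random

def augment_probability(t, tau):
    """Eq. (1): probability of applying augmentation at epoch t."""
    return math.tanh(t / tau)

def maybe_augment(image, t, tau, policy):
    # early epochs: mostly original images; later epochs: mostly augmented ones
    if random.random() < augment_probability(t, tau):
        return policy(image)
    return image
```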
### What Augmentations to Apply: Model-Adaptive Data Augmentation
Instead of constantly applying the same data augmentation policies to all samples over the whole training process, adjusting the policy for each sample and model in different training stages can provide better guidance to the task model and thus accelerate its training towards better validation accuracy.
Figure 2: Test accuracy on Reduced CIFAR-10. **No Augmentation** does not apply any augmentations. **Human-designed Augmentation** always applies human pre-defined augmentations. **Monotonic Curriculum** gradually increases the probability of applying human-designed augmentations.

Following AdaAug [4], we assign an augmentation probability vector \(p\) and a magnitude vector \(\lambda\) to each sample. The probability vector \(p\) contains the probability \(p_{i}\) of applying each augmentation-\(i\), _i.e_., \(\sum_{i=1}^{n}p_{i}=1\), where there are \(n\) possible augmentation operations. The magnitude vector \(\lambda\) contains the associated augmentation strengths such that \(\lambda_{i}\in[0,1]\). During training, for every training image \(x\), we draw \(k\) operations without replacement according to \(p\) and build an augmentation policy from them and their magnitudes in \(\lambda\). In particular, the \(t\)-th sampled operator-\(j\) is applied to the image with magnitude \(\lambda_{j}\), yielding the augmented image \(\Gamma^{t}(x)\triangleq\tau_{j}(x;\lambda_{j})\). By composing the \(k\) sampled augmentations, the final augmented image \(\gamma(x)\) can be written as:

\[\begin{split}&\Gamma^{t}(x)=\tau_{j}(x;\lambda_{j}),\quad j\sim p,\\ &\gamma(x)=\Gamma^{k}\circ\cdots\circ\Gamma^{1}(x),\end{split} \tag{2}\]

where \(\circ\) is the compositional operator.
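A possible implementation of the sampling and composition in Eq. (2) is sketched below; the `ops` list of callables \(\tau_{j}(x,\lambda_{j})\) and the random generator are placeholders, not the paper's exact code.

```python
import numpy as np

def apply_policy(x, ops, p, lam, k=2, seed=None):
    """Eq. (2): draw k distinct operations according to p and compose them."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(ops), size=k, replace=False, p=p)
    for j in idx:                    # gamma(x) = Gamma^k o ... o Gamma^1 (x)
        x = ops[j](x, lam[j])
    return x
```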
An arbitrary augmentation policy is not guaranteed to improve the performance of a task model but a brute-force search is not practically feasible. Hence, we optimize a policy model producing the optimal augmentation probability vector \(p\) and magnitude vector \(\lambda\) for each image at different training stages. For image \(x\), we define \(f(x;w)\) as the task model with parameter \(w\) and \(g_{w}(x)\) as the intermediate-layer representation of image \(x\) extracted from the task model \(f(x;w)\). The policy model \(p(\cdot;\theta)\) with parameters \(\theta\) takes the extracted feature \(g_{w}(x)\) as input and outputs the probability vector \(p\) and magnitude vector \(\lambda\) for the image \(x\). The parameter \(w\) of the task model is optimized by minimizing the following training loss on the training set \(\mathcal{D}^{tr}=\{x_{i},y_{i}\}_{i=1}^{N^{tr}}\):
\[w=\operatorname*{arg\,min}_{w}\mathcal{L}^{tr}(w;\theta)=\frac{1}{N^{tr}}\sum _{i=1}^{N^{tr}}\mathcal{L}_{CE}(f(\gamma(x_{i});w),y_{i}), \tag{3}\]
where the augmented training image \(\gamma(x_{i})\) is generated by the policy network \(p(g_{w}(x_{i});\theta)\) and \(\mathcal{L}_{CE}(\cdot,\cdot)\) is the cross-entropy loss. The policy model is to produce augmentation policies applied to the training of the task model and its optimization objective is to minimize the trained task model's loss on a validation set, i.e., \(\mathcal{D}^{val}=\left\{x_{i}^{val},y_{i}^{val}\right\}_{i=1}^{N^{val}}\). The above problem can be formulated as the bi-level optimization [5] below:
\[\begin{split}&\min_{\theta}\quad\mathcal{L}^{val}(w^{*}(\theta))= \frac{1}{N^{val}}\sum_{i=1}^{N^{val}}\mathcal{L}_{i}^{val}(w^{*}(\theta))\\ & s.t.\quad w^{*}(\theta)=\operatorname*{arg\,min}_{w}\mathcal{L }^{tr}(w;\theta)\end{split} \tag{4}\]
where \(\mathcal{L}_{i}^{val}(w^{*}(\theta))=\mathcal{L}_{CE}(f(x_{i}^{val};w^{*}(\theta)),y_{i}^{val})\). Bi-level optimization is challenging because the lower-level optimization (i.e., the optimization of \(w\)) does not have a closed-form solution that can be substituted into the higher-level optimization (i.e., the optimization of \(\theta\)). Recent works [33, 34, 47, 14, 29] address this problem (4) by alternating minimization. In this paper, we employ the same strategy as [34, 47, 1].
### Joint Training of Task model & MADAug Policy
To address the bi-level optimization, we alternately update \(\theta\) and \(w\) by first optimizing the policy network \(\theta\) for a task model \(\hat{w}\) achieved by one-step training and then update \(w\) using the augmentations produced by the new policy network \(\theta\).
We split the original training set into two disjoint sets, _i.e_., a training set and a validation set. Each iteration trains the model on a mini-batch of \(n^{tr}\) images \(\mathcal{D}^{tr}_{m_{i}}=\{x_{i},y_{i}\}_{i=1}^{n^{tr}}\) drawn from the training set. Let \(\mathcal{L}^{tr}(w_{t};\theta_{t})=\mathcal{L}_{CE}(f(\gamma(x_{i});w_{t}),y_ {i})\) denote the lower-level objective for optimizing \(w_{t}\). We apply one-step gradient descent on \(w_{t}\) to achieve a closed-form surrogate of the lower-level problem solution, i.e.,
\[\hat{w}_{t}=w_{t}-\alpha\frac{1}{n^{tr}}\sum_{i=1}^{n^{tr}}\nabla_{w} \mathcal{L}^{tr}(w_{t};\theta_{t}) \tag{5}\]
where \(\alpha\) is a learning rate. However, we cannot use backpropagation to optimize \(\theta_{t}\) for the high-level optimization because the sampling process of the \(k\) augmentation operations in \(\gamma(x_{i})\) is non-differentiable. Hence, backpropagation cannot compute the partial derivative w.r.t. the augmentation probability \(p\) and magnitude \(\lambda\). To address this problem, we relax the non-differentiable \(\gamma(x_{i})\) to be a differentiable operator. Since the augmentation policy in most previous work [6, 18] only consists of two operations, for \(k=2\), \(\gamma(x_{i})\) can be relaxed as
\[\gamma(x_{i})\approx\sum_{j_{1}=1}^{n}\sum_{j_{2}=1}^{n}p_{ij_{1}}\cdot p_{ij_{2}}\,\Gamma^{2}_{ij_{2}}(\Gamma^{1}_{ij_{1}}(x_{i})),\quad j_{1}\neq j_{2} \tag{6}\]
where \(\Gamma^{t}_{ij_{k}}(x_{i})=\tau_{j_{k}}(x_{i};\lambda_{j_{k}})\) applies augmentation-\(j_{k}\) (with magnitude \(\lambda_{j_{k}}\)) to \(x_{i}\) in the \(t\)-th augmentation operation. The relaxed \(\gamma(x_{i})\) is differentiable by combining different augmentations according to weights as their probabilities, so we can estimate the partial derivatives w.r.t. \(p\) via back-propagation through Eq. 6. In our approach, the forward pass still uses the sampling-based \(\gamma(x_{i})\), whereas the backward pass uses its differentiable relaxation in Eq. 6.
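The relaxation of Eq. (6) can be written, for instance, as the following PyTorch-style sketch, in which each `op` must itself be differentiable with respect to its inputs; this is a sketch of the backward-pass surrogate under our own naming, not the exact MADAug code.

```python
import torch

def relaxed_augment(x, ops, p, lam):
    """Backward-pass relaxation of Eq. (6) for k = 2: a p-weighted mixture
    over all ordered pairs of distinct operations, differentiable w.r.t. p
    (and, through differentiable ops, w.r.t. lam)."""
    n = len(ops)
    out = torch.zeros_like(x)
    for j1 in range(n):
        for j2 in range(n):
            if j1 != j2:
                out = out + p[j1] * p[j2] * ops[j2](ops[j1](x, lam[j1]), lam[j2])
    return out
```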
For back-propagation through the augmentation magnitude vector \(\lambda\), we apply the straight-through gradient estimator [3, 39] because the magnitudes of some operations, such as "Posterize" and "Solarize", are discrete variables that only have finite choices. As in previous approaches [4, 13], the loss's gradient w.r.t. \(\lambda_{m}\) is estimated by applying the chain rule to each pixel value \(\gamma(x_{h,w})\) in the augmented image \(\gamma(x)\), _i.e_., \(\frac{\partial\gamma(x_{h,w})}{\partial\lambda_{m}}=1\). Hence, the gradient of the loss \(\mathcal{L}\) w.r.t. \(\lambda_{m}\) can be computed as:
\[\frac{\partial\mathcal{L}}{\partial\lambda_{m}}=\sum_{h,w}\frac{\partial \mathcal{L}}{\partial\gamma(x_{h,w})}\frac{\partial\gamma(x_{h,w})}{\partial \lambda_{m}}=\sum_{h,w}\frac{\partial\mathcal{L}}{\partial\gamma(x_{h,w})} \tag{7}\]
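In PyTorch, this straight-through behavior can be obtained with a one-line trick (a minimal sketch, assuming `lam` broadcasts over the image tensor):

```python
import torch

def straight_through(aug_img, lam):
    """Forward pass returns aug_img unchanged; in the backward pass every
    pixel contributes gradient 1 to lam, so dL/dlam reduces to the sum of
    the pixel gradients, matching Eq. (7)."""
    return aug_img.detach() + lam - lam.detach()
```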
Then, the policy network parameters \(\theta_{t}\) can be updated by minimizing the validation loss computed by the current meta task model \(\hat{w}_{t}\) on a mini-batch of the validation set \(\mathcal{D}^{val}=\left\{x_{i}^{val},y_{i}^{val}\right\}_{i=1}^{n^{val}}\) with batch size \(n^{val}\). Therefore,
the outer-loop update of \(\theta_{t}\) is formulated as:
\[\theta_{t+1}=\theta_{t}-\beta\frac{1}{n^{val}}\sum_{i=1}^{n^{val}}\nabla_{\theta }\mathcal{L}_{i}^{val}(\hat{w}_{t}(\theta_{t})) \tag{8}\]
where \(\beta\) is a learning rate. The third step is to update the task model parameters \(w_{t}\) using the policy parameters \(\theta_{t+1}\) obtained from the outer loop:
\[w_{t+1}=w_{t}-\alpha\frac{1}{n^{tr}}\sum_{i=1}^{n^{tr}}\nabla_{w}\mathcal{L}^{ tr}(w_{t};\theta_{t+1}) \tag{9}\]
With these updating rules, the policy and task networks can be alternately trained. Our proposed algorithm is summarized in Algorithm 1.
```
Input: Training set \(\mathcal{D}_{train}=\left\{x_{i},y_{i}\right\}_{i\in[N_{train}]}\); Validation set \(\mathcal{D}_{valid}=\left\{x_{i},y_{i}\right\}_{i\in[N_{valid}]}\); Batch sizes \(n^{tr},n^{val}\); Learning rates \(\alpha,\beta\); Iteration number \(T\)
Output: Task model \(w_{T}\); policy network \(\theta_{T}\)
1: Initialize \(w_{0}\), \(\theta_{0}\)
2:for\(t=0\) to \(T\)do
3: Sample a training set mini-batch \(d_{train}\in\mathcal{D}_{train}\).
4: Draw Augment\(\sim P(t)\) in Eq. 1.
5:if Augment then
6: Apply policy network \(\theta_{t}\) to achieve augmentations \(\gamma(x)\) for each sample \(x\in d_{train}\).
7:endif
8: Update \(\hat{w}_{t}\) on the augmented \(d_{train}\) (Eq.5).
9: Sample a validation set mini-batch \(d_{valid}\in\mathcal{D}_{valid}\).
10: Update policy network \(\theta_{t+1}\) on \(d_{valid}\) (Eq. 8).
11: Apply policy network \(\theta_{t+1}\) to achieve new augmentations \(\gamma(x)\) for each sample \(x\in d_{train}\).
12: Update task model \(w_{t+1}\) on the newly augmented \(d_{train}\) (Eq. 9).
13:endfor
```
**Algorithm 1** Model-Adaptive Data Augmentation
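To make the alternating updates of Eqs. (5), (8), and (9) concrete, here is a self-contained toy iteration with a linear task model and a single differentiable "augmentation" parameter standing in for the policy network; it is a sketch of the optimization pattern under our own simplifications, not the actual MADAug implementation.

```python
import torch

torch.manual_seed(0)
w = torch.randn(5, 1, requires_grad=True)        # task-model parameters
theta = torch.zeros(1, requires_grad=True)       # stand-in "policy" parameter
alpha, beta = 0.1, 0.01                          # learning rates

def aug(x, theta):                               # differentiable stand-in for gamma(x)
    return x * (1.0 + theta)

def loss(w, x, y):
    return ((x @ w - y) ** 2).mean()

x_tr, y_tr = torch.randn(32, 5), torch.randn(32, 1)
x_va, y_va = torch.randn(32, 5), torch.randn(32, 1)

# Eq. (5): one-step lookahead w_hat, keeping the graph w.r.t. theta
g_w = torch.autograd.grad(loss(w, aug(x_tr, theta), y_tr), w, create_graph=True)[0]
w_hat = w - alpha * g_w

# Eq. (8): update theta using the validation loss of w_hat
g_theta = torch.autograd.grad(loss(w_hat, x_va, y_va), theta)[0]
with torch.no_grad():
    theta -= beta * g_theta

# Eq. (9): update w on data augmented by the *new* policy
g_w2 = torch.autograd.grad(loss(w, aug(x_tr, theta), y_tr), w)[0]
with torch.no_grad():
    w -= alpha * g_w2
```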
## 4 Experiments
In this section, following AutoAugment [6], we examine the performance of MADAug in two experiments: MADAug-direct and MADAug-transfer. In the first experiment, we directly explore the performance of MADAug on the benchmark datasets: CIFAR-10 [20], CIFAR-100 [20], SVHN [31], and ImageNet [8]. For CIFAR-10, CIFAR-100, and SVHN, we evenly select 1,000 images from the dataset as the validation set to train the policy model. In addition, for SVHN, we use both the training images and the additional "extra" training images as the training set. For ImageNet, the validation set consists of 1,200 examples from 120 randomly selected classes. We compare the average test set error of our method with previous state-of-the-art methods, AutoAugment (AA) [6], Population Based Augmentation (PBA) [18], Fast AutoAugment (Fast AA) [24], DADA [23], Faster AutoAugment (Faster AA) [13], RandAugment (RA) [7], TrivialAugment (TA) [30], Deep AutoAugment (Deep AA) [45], TeachAugment (Teach) [36], OnlineAug [37], and AdaAug [4].
Our experiment results demonstrate that MADAug-direct considerably improves over the baselines and achieves state-of-the-art performance on these benchmark datasets. In the second experiment, we investigate the transferability of the MADAug-learned policy network to unseen fine-grained datasets. To verify its effectiveness, we apply the augmentation policies learned by MADAug on the CIFAR-100 dataset to fine-grained classification datasets such as Oxford 102 Flowers [32], Oxford-IIIT Pets [10], FGVC Aircraft [28], and Stanford Cars [19]. Our findings demonstrate the remarkable transferability of the MADAug-learned policy, which significantly outperforms the strong baseline models on fine-grained classification datasets.
### Augmentation Operations
We follow the augmentation actions taken by AutoAugment [6]. We adopt the augmentation operations (ShearX, ShearY, TranslateX, TranslateY, Rotate, AutoContrast, Invert, Equalize, Solarize, Posterize, Contrast, Color, Brightness, Sharpness, and Cutout) that were previously suggested to build the augmentation policies. Meanwhile, we add the Identity operation, which does not apply any augmentation to images. For the Simple baseline, we employ random horizontal flip, color jittering, color normalization, and Cutout with a \(16\times 16\) patch size as basic augmentations. The policies learned by MADAug and other baselines are applied on top of these basic augmentations.
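For illustration, magnitude-parameterized operations \(\tau_{j}(x;\lambda_{j})\) of this kind can be written with torchvision's functional API; the mapping of \(\lambda\in[0,1]\) to each operation's range below is illustrative, not the exact ranges used in the paper.

```python
import torchvision.transforms.functional as TF

# lam in [0, 1] is mapped to each operation's own range, e.g. rotations of up
# to 30 degrees; these bounds are hypothetical examples.
def rotate_op(img, lam):
    return TF.rotate(img, angle=30.0 * lam)

def brightness_op(img, lam):
    return TF.adjust_brightness(img, brightness_factor=1.0 + 0.9 * lam)
```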
### Implementation Details
In our experiments, the policy network of MADAug is a fully-connected layer that takes the representations produced by the penultimate layer of the task model as its inputs and outputs \(p\) and \(\lambda\). Following AdaAug [4], the policy network parameters are updated with the Adam optimizer and a learning rate of \(0.001\). For CIFAR-10, CIFAR-100, and SVHN, we evaluate our method on four models: Wide-ResNet-40-2 [42], Wide-ResNet-28-10 [42], Shake-Shake (26 2x96d) [11], and PyramidNet with ShakeDrop [40, 12]. We train all models using a batch size of 128 except for PyramidNet with ShakeDrop, which is trained with a batch size of 64. We train the Wide-ResNet models for 200 epochs and Shake-Shake/PyramidNet for 1,800 epochs. For Wide-ResNet models trained on SVHN, we follow PBA [18] and use the step learning rate schedule [9]; all other models use a cosine learning rate scheduler with one annealing cycle [27]. To align our results with other baselines, we train the ResNet-50 [15] from scratch on the full ImageNet using the hyperparameters of AutoAugment [6]. For all models, we use gradient clipping with magnitude 5. Specific learning rate and weight decay values are provided in the supplementary material.
### Main Results
Table 1 shows that the policies learned through bi-level optimization outperform the baselines for different models on Reduced CIFAR-10, CIFAR-10, CIFAR-100, Reduced SVHN, SVHN, and ImageNet. The Reduced CIFAR-10 (SVHN) dataset consists of 4,000 (1,000) images randomly selected from CIFAR-10 (SVHN) as the training set, with the remaining images used as the validation set. MADAug achieves state-of-the-art performance on Reduced CIFAR-10. On the Reduced SVHN dataset, compared to AdaAug, we achieve 0.7% and 0.6% improvement on Wide-ResNet-28-10 and Shake-Shake (26 2x96d), respectively. On ImageNet, compared with the other baselines, our method performs the best on this large and complex dataset. Different from the prior works (AutoAugment, PBA, and Fast AutoAugment), which construct fixed augmentation policies for the entire dataset, our method can find dynamic and model-adaptive policies for each image, which enhances the model's generalization. We provide the average and variance of the experiment results in Section 4.7.
### Transferability of MADAug-Learned Policy
Following AdaAug [4], we apply the augmentation policies learned from CIFAR-100 directly to the fine-grained datasets (MADAug-transfer). To evaluate the transferability of the policies found on CIFAR-100, we compare the test error rate with AutoAugment (AA) [6], Fast AutoAugment (Fast AA) [24], RandAugment (RA) [7], and AdaAug [4] using their published policies on CIFAR-100. For all the fine-grained datasets, we obtain the transfer results by fine-tuning a ResNet-50 model [15] pretrained on ImageNet. Following the experiment setting of AdaAug, we use the cosine learning rate decay with one annealing cycle [27] and train the model for 100 epochs. We adjust the learning rate for different fine-grained datasets according to the validation performance. The weight decay is set to 1e-4 and the gradient clipping parameter is 5.
Table 2 shows that our method outperforms the other baselines when training the pretrained ResNet-50 model on these fine-grained datasets. Previous methods (AutoAugment [6], Fast AutoAugment [24], and RandAugment [7]) apply fixed augmentation policies. This strategy does
\begin{table}
\begin{tabular}{l l|c c c c c c c c c c c c} \hline Dataset & Backbone & Simple & AA & PBA & Fast AA & MADA & Faster AA & RA & TA & DeepAA & Teach & OnlineAug & AdaAug & AdaAug & MADAug \\ \hline \multirow{2}{*}{Reduced CIFAR-10} & Wide-ResNet-28-10 & 18.9 & 14.1 & 12.8 & 14.6 & 15.6 & - & 15.1 & - & - & - & 14.3 & 13.6 & 15.0 & **12.5** \\ & Shake-Shake (26 2x96d) & 17.1 & 10.1 & 10.6 & - & - & - & - & - & - & - & 10.9 & 11.8 & **10.0** \\ \hline \multirow{2}{*}{CIFAR-10} & Wide-ResNet-40-2 & 5.3 & 3.7 & 3.9 & 3.6 & 3.6 & 3.7 & 4.1 & - & - & - & - & 3.6 & - & **3.3** \\ & Wide-ResNet-28-10 & 3.9 & 2.6 & 2.6 & 2.7 & 2.7 & 2.6 & 2.7 & 2.5 & 2.4 & 2.5 & 2.4 & 2.6 & - & **2.1** \\ & Shake-Shake (26 2x96d) & 2.9 & 2.0 & 2.0 & 2.0 & 2.0 & 2.0 & 2.0 & 1.9 & 2.0 & - & - & - & **1.8** \\ & Pyramid (ShakeDrop) & 2.7 & 1.5 & 1.5 & 1.8 & 1.7 & - & 1.5 & - & - & 1.5 & - & - & - & **1.4** \\ \hline \multirow{2}{*}{CIFAR-100} & Wide-ResNet-40-2 & 26.0 & 20.6 & 22.3 & 20.7 & 20.9 & 21.4 & - & 19.4 & - & - & - & 19.8 & - & **19.3** \\ & Wide-ResNet-28-10 & 18.8 & 17.1 & 16.7 & 17.3 & 17.5 & 17.3 & 16.7 & 16.5 & 16.1 & 16.8 & 16.6 & 17.1 & - & **16.1** \\ & Shake-Shake (26 2x96d) & 17.1 & 14.3 & 15.3 & 14.9 & 15.3 & 15.6 & - & - & 14.8 & 14.5 & - & - & **14.1** \\ & Pyramid (ShakeDrop) & 14.0 & 10.7 & 10.9 & 11.9 & 11.2 & - & - & - & - & 11.8 & - & - & - & **10.5** \\ \hline \multirow{2}{*}{Reduced SVHN} & Wide-ResNet-28-10 & 13.2 & 8.2 & 7.8 & 8.1 & 7.6 & - & 9.4 & - & - & - & **6.7** & 8.2 & 9.1 & 8.4 \\ & Shake-Shake (26 2x96d) & 13.3 & **5.9** & 6.5 & - & - & - & - & - & - & - & - & **1.0** \\ \hline \multirow{2}{*}{SVHN} & Wide-ResNet-28-10 & 1.5 & 1.1 & 1.2 & 1.1 & 1.2 & 1.2 & 1.0 & - & - & - & - & - & **1.0** \\ & Shake-Shake (26 2x96d) & 1.4 & 1.1 & 1.1 & 1.1 & - & - & - & - & - & - & - & - & **1.0** \\ \hline ImageNet & ResNet-50 & 23.7 & 22.4 & - & 22.4 & 22.5 & 23.5 & 22.4 & 22.1 & 21.7 & 22.2 & 22.5 & 22.8 & - & **21.5** \\ \hline \end{tabular}
\end{table}
Table 1: **Test error (%, average of 5 random trials) on CIFAR-10, CIFAR-100, SVHN and ImageNet. Lower value is better. “Simple” applies regular random crop, random horizontal flip, and Cutout. All other methods apply “Simple” on top of their proposed augmentations. We report the accuracy of our re-implemented AdaAug\(\dagger\), while other baselines’ results are adapted from Zheng [45], Cheung [4], Tang [37], and Suzuki [36]. The best performance is highlighted in Bold.**
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline Dataset & \# of classes & Train number & Simple & AA & Fast AA & RA & AdaAug & MADAug \\ \hline Oxford 102 Flowers & 102 & 2,040 & 5.0 & 6.1 & 4.8 & 3.9 & 2.8 & **2.5** \\ Oxford-IIIT Pets & 37 & 3,680 & 19.5 & 18.8 & 23.0 & 16.8 & 16.1 & **15.3** \\ FGVC Aircraft & 100 & 6,667 & 18.4 & 16.6 & 17.0 & 17.4 & 16.0 & **15.4** \\ Stanford Cars & 196 & 8,144 & 11.9 & 9.2 & 10.7 & 10.3 & 8.8 & **8.3** \\ \hline \end{tabular}
\end{table}
Table 2: **Transferability of MADAug learned policy network. Test set error (%) of fine-tuning a pretrained ResNet-50 using the augmentations produced by the policy network on downstream tasks. Baseline results are adapted from Cheung [4].**
not help distinguish fine-grained samples from one another, which makes it difficult for the model to recognize their differences. In contrast, AdaAug and MADAug adapt the augmentation policies from the dataset level to the instance level. Because our method gradually augments the images and provides better-suited augmentation policies to unseen fine-grained images according to their relationship to the classes learned on CIFAR-100, the model achieves better performance than AdaAug, especially on the Pets dataset, which only contains "Cat" and "Dog" images. In Figure 3, we also show that MADAug significantly improves the model's ability to recognize the "Cat" and "Dog" classes on the Reduced CIFAR-10 dataset.
### Analysis of MADAug Augmentations
We compare the per-class accuracy of MADAug with AdaAug on the Reduced CIFAR-10 dataset, which only has 4,000 training images. Figure 3 shows that the per-class accuracy of a model trained with MADAug is higher than that of models trained with AdaAug or the basic baseline, especially for the "Bird", "Deer", "Cat", and "Dog" classes. Moreover, compared with the basic baseline, the augmentation policies learned by AdaAug have a negative impact on the "Airplane" and "Automobile" classes.
In Figure 4, we display some augmented samples from AdaAug and MADAug, randomly selected from the "Bird", "Cat", "Deer", "Dog", "Airplane", and "Automobile" classes. For AdaAug, some augmented images lose semantic information because operations such as TranslateY with unreasonable magnitudes destroy the main content of the image. For example, in Figure 4, the selected augmented "Cat" image loses its face and only keeps its legs, and the augmented "Dog" image also discards part of the key information. We believe such unreasonable augmentation strategies create an imbalance in the number of samples containing sufficient information about their true label across categories, even though the original dataset is balanced [2], and this reduces the classification accuracy of some categories. For our method, Figure 5 shows that the augmentation policies generated by MADAug produce progressively "harder" samples for the model as training proceeds. This strategy improves generalization: in the early phase, "simple" samples help the model converge quickly, and once the model can recognize the original samples, "hard" samples push it to learn more robust features. Figure 4 shows augmented images at different training phases: the model gradually receives more adversarially augmented images, and the policies learned by MADAug increase the diversity of the training data while highlighting its key information.
The same analysis for the Reduced SVHN dataset is presented in Appendix A. Through analyses of the Reduced SVHN and Reduced CIFAR-10 datasets, we illustrate experimentally that MADAug consistently provides higher-quality data augmentation strategies, leading to higher task-model accuracy across categories than AdaAug. From a methodological perspective, we provide a detailed account of the advantages of MADAug over AdaAug in Appendix A.
### Computing cost of MADAug
To demonstrate the efficiency of MADAug, we present a comparison of the GPU hours needed to search the augmentation policy and train the task model across different baselines. The results are shown in Table 3. The search time of our method is the time needed to optimize Eq. 4, so no extra time is required to find data augmentation policies. Our approach is more efficient than AutoAugment [6], PBA [18], and AdaAug [4].
### Mean and variance of the experiment results
Table 4 reports the averages and variances of the experimental results obtained from multiple trials on different benchmark datasets.
Figure 3: **Improvement that MADAug and AdaAug bring to different classes.** MADAug consistently improves the test accuracy over all classes and brings greater improvements to more difficult classes (fairness), _e.g._, “Bird”, “Cat”, “Deer”, and “Dog”. In contrast, AdaAug has a negative impact on “Airplane” and “Automobile”.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Computing Cost} \\ \cline{2-4} & Searching & Training & Total \\ \hline AutoAugment & 5000 & 1.2 & 5001.2 \\ PBA & 5 & 1.2 & 6.2 \\ AdaAug & 2.9 & 1.4 & 4.3 \\ MADAug & \(\sim\)0 & 1.8 & **1.8** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Time consumption.** Comparison of computing cost (GPU hours) in training Wide-ResNet-28-10 on Reduced CIFAR-10 datasets between AutoAugment, PBA, AdaAug, and MADAug.
## 5 Ablation study
Magnitude perturbation.Following AdaAug [4], we also add a magnitude perturbation \(\delta\) to the augmentation policy. As shown in Table 5, the model performs best on the test set when the magnitude perturbation is set to 0.3. We conclude that the magnitude perturbation has a positive effect on the generalization of the model.
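A minimal sketch of this perturbation, assuming (as in AdaAug) uniform noise of width \(\delta\) added to the predicted magnitude and clamped back to \([0,1]\):

```python
import random

def perturb_magnitude(lam: float, delta: float = 0.3) -> float:
    """Add uniform noise of width delta to a predicted magnitude and
    clamp to [0, 1]; delta = 0.3 performed best in our ablation."""
    return min(1.0, max(0.0, lam + random.uniform(-delta, delta)))
```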
Number of augmentation operations.The number of operations \(k\) can be chosen freely. Table 5 shows the relationship between the number of operations and the final test error on Reduced CIFAR-10 with the Wide-ResNet-28-10 model, with \(k\) ranging from 1 to 5. With two augmentation operations, we obtain the lowest error rate. Policies learned by other methods (AutoAugment [6], PBA [18], and AdaAug [4]) also use two augmentation operations. This indicates that two-operation policies increase the diversity and amount of training images without making the task model unable to recognize the images due to
\begin{table}
\begin{tabular}{c|c|c|c} \hline Dataset & Reduced CIFAR-10 & CIFAR-10 & CIFAR-100 \\ \hline & 12.5\(\pm\)0.05 & 2.1\(\pm\)0.11 & 16.1\(\pm\)0.10 \\ \hline Dataset & Reduced SVHN & SVHN & ImageNet \\ \cline{2-4} & 8.4\(\pm\)0.09 & 1.0\(\pm\)0.10 & 21.5\(\pm\)0.15 \\ \hline \end{tabular}
\end{table}
Table 4: **Mean and variance of experiment results.** Test error and variance (%) of MADAug on different benchmark datasets with Wide-ResNet-28-10 and ResNet-50.
Figure 4: **Augmentations of AdaAug and MADAug for different classes of images (operations and associated strengths).** AdaAug only produces specific augmentations for different images, while MADAug adjusts the augmentations for each image to be adaptive to different training epochs. MADAug introduces less distortion than AdaAug.
Figure 5: **Similarity between the original images and MADAug-augmented images at different training epochs.** MADAug starts from less perturbed images but generates more challenging augmentations in later training.
excessive data augmentations.
Structure of policy network.Does a nonlinear projection deliver better performance? We extend the policy model with multiple hidden layers and ReLU activations. Table 5 shows the influence of the number \(h\) of hidden layers on model performance. A single linear layer is sufficient for the policy model; adding extra hidden layers does not help.
Hyperparameter \(\tau\).The hyperparameter \(\tau\) controls the relationship between the training epoch and the number of augmented samples. As shown in Table 5, the performance of the task model is quite robust to \(\tau\in\{10,20,30,40,50\}\). For Reduced CIFAR-10, \(\tau\) is optimally set to 40.
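For illustration only, the sketch below implements one plausible monotonic schedule, a linear ramp that saturates at epoch \(\tau\); the exact schedule used by MADAug is defined in Section 3.1, and the linear form here is our assumption.

```python
def augment_fraction(epoch: int, tau: float = 40.0) -> float:
    """Fraction of training samples receiving learned augmentations at a
    given epoch; grows monotonically and saturates at 1 once epoch >= tau
    (assumed linear ramp, for illustration)."""
    return min(1.0, epoch / tau)
```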
Analysis of optimization steps.Table 5 illustrates the impact of optimizing the data augmentation strategies with different numbers of steps \(s\) on Reduced CIFAR-10. The task model exhibits its highest accuracy when the step size is set to 1.
Effect of monotonic curriculum.We investigate the effect of the monotonic curriculum introduced in Section 3.1. We train Wide-ResNet-28-10 on the Reduced CIFAR-10 and Reduced SVHN datasets without and with this technique across different baselines. The results are shown in Table 6. For MADAug, the monotonic curriculum improves accuracy on both datasets. It is also effective for the other baselines, whether AutoAugment [6], which applies the same data augmentation policy to the entire dataset, or AdaAug, which offers different data augmentation policies to different samples.
Strategy of MADAug.MADAug both dynamically adjusts the augmentation strategies to minimize the loss of the task model on the validation set (the model-adaptive strategy) and provides different data augmentation policies for each sample (the data-adaptive strategy). To verify the effectiveness of this combination, we use each strategy alone to find augmentation policies and train the task model on the dataset. Table 7 shows that MADAug combines the two strategies well and offers higher-quality data augmentation policies than either strategy alone.
## 6 Conclusion
In this paper, we propose a novel and general data augmentation method, MADAug, which produces instance-adaptive augmentations tailored to different training stages. Compared to previous methods, MADAug features a monotonic curriculum that progressively increases the amount of augmented data and a policy network that generates augmentations optimized to minimize the validation loss of a task model. MADAug achieves SOTA performance on several benchmark datasets, and its learned augmentation policy network is transferable to unseen tasks, bringing more improvement than other augmentations. We show that MADAug augmentations preserve the key information of images and change with the task model across training stages accordingly. Due to its data- and model-adaptive properties, MADAug has great potential to improve a rich class of machine learning tasks in different domains.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Method & Monotonic curriculum & Reduced CIFAR-10 & Reduced SVHN \\ \hline AA & ✓ & **13.7** & **7.8** \\ \hline AdaAug & & 15.0 & 9.1 \\ & ✓ & **14.4** & **8.7** \\ \hline MADAug & & 13.1 & 8.9 \\ & ✓ & **12.5** & **8.4** \\ \hline \end{tabular}
\end{table}
Table 6: **Effect of monotonic curriculum.** Test error (%) of MADAug and other baselines without/with monotonic curriculum.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Model-adaptive & Data-adaptive & Reduced CIFAR-10 & Reduced SVHN \\ \hline ✓ & & 14.5 & 9.1 \\ & ✓ & 14.0 & 9.6 \\ ✓ & ✓ & **12.5** & **8.3** \\ \hline \end{tabular}
\end{table}
Table 7: **Effect of model/data-adaptive augmentation strategy.** Test error (%) of model-adaptive/data-adaptive only MADAug on two datasets.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \(\delta\) & 0 & 0.1 & 0.2 & 0.3 & 0.4 \\ \hline ACC(\%) & 86.7 & 87.0 & 87.3 & **87.5** & 87.2 \\ \hline \hline \(k\) & 1 & 2 & 3 & 4 & 5 \\ \hline ACC(\%) & 86.6 & **87.5** & 87.3 & 86.8 & 86.2 \\ \hline \hline \(h\) & 0 & 1 & 2 & 3 & 4 \\ \hline ACC(\%) & **87.5** & 87.2 & 87.1 & 86.9 & 86.7 \\ \hline \hline \(\tau\) & 10 & 20 & 30 & 40 & 50 \\ \hline ACC(\%) & 87.0 & 87.3 & 87.4 & **87.5** & 87.3 \\ \hline \hline \(s\) & 1 & 2 & 5 & 10 & 30 \\ \hline ACC(\%) & **87.5** & 87.0 & 86.6 & 86.3 & 85.9 \\ \hline \end{tabular}
\end{table}
Table 5: **Ablation study.** Sensitivity analysis of hyperparameters \(\delta\), \(k\), \(h\), \(\tau\), and \(s\) on Reduced CIFAR-10 (Wide-ResNet-28-10).
2301.00303 | Rethinking with Retrieval: Faithful Large Language Model Inference | Despite the success of large language models (LLMs) in various natural
language processing (NLP) tasks, the stored knowledge in these models may
inevitably be incomplete, out-of-date, or incorrect. This motivates the need to
utilize external knowledge to assist LLMs. Unfortunately, current methods for
incorporating external knowledge often require additional training or
fine-tuning, which can be costly and may not be feasible for LLMs. To address
this issue, we propose a novel post-processing approach, rethinking with
retrieval (RR), which retrieves relevant external knowledge based on the
decomposed reasoning steps obtained from the chain-of-thought (CoT) prompting.
This lightweight approach does not require additional training or fine-tuning
and is not limited by the input length of LLMs. We evaluate the effectiveness
of RR through extensive experiments with GPT-3 on three complex reasoning
tasks: commonsense reasoning, temporal reasoning, and tabular reasoning. Our
results show that RR can produce more faithful explanations and improve the
performance of LLMs. | Hangfeng He, Hongming Zhang, Dan Roth | 2022-12-31T22:35:34Z | http://arxiv.org/abs/2301.00303v1 | # Rethinking with Retrieval: Faithful Large Language Model Inference
###### Abstract
Despite the success of large language models (LLMs) in various natural language processing (NLP) tasks, the stored knowledge in these models may inevitably be incomplete, out-of-date, or incorrect. This motivates the need to utilize external knowledge to assist LLMs. Unfortunately, current methods for incorporating external knowledge often require additional training or fine-tuning, which can be costly and may not be feasible for LLMs. To address this issue, we propose a novel post-processing approach, _rethinking with retrieval_ (RR), which retrieves relevant external knowledge based on the decomposed reasoning steps obtained from the chain-of-thought (CoT) prompting. This lightweight approach does not require additional training or fine-tuning and is not limited by the input length of LLMs. We evaluate the effectiveness of RR through extensive experiments with GPT-3 on three complex reasoning tasks: common-sense reasoning, temporal reasoning, and tabular reasoning. Our results show that RR can produce more faithful explanations and improve the performance of LLMs.1
Footnote 1: Our code is publicly available at [https://github.com/HornHehhf/RR](https://github.com/HornHehhf/RR).
## 1 Introduction
Large language models (LLMs) have shown exceptional performance across various tasks through in-context learning without task-specific training or fine-tuning Brown et al. (2020); Chowdhery et al. (2022); Zhang et al. (2022); Ouyang et al. (2022). Recent progress in prompting Wei et al. (2022); Zhou et al. (2022); Kojima et al. (2022) and decoding Wang et al. (2022) has made it feasible for LLMs to tackle tasks that demand complex reasoning.
However, the knowledge stored in LLMs might inevitably be incomplete, out-of-date, or incorrect. As a result, external sources of knowledge, such as Wikipedia, may be essential for the successful deployment of LLMs for real-world applications. Previously, people tried to utilize knowledge for smaller language models (LMs), such as T5 Raffel et al. (2020), BERT Devlin et al. (2019), and RoBERTa Liu et al. (2019). However, these methods often require additional training or fine-tuning, which can be costly and thus impractical for LLMs.
In this paper, we present a post-processing approach called _rethinking with retrieval_ (RR) for utilizing external knowledge in LLMs. Our method begins by using the chain-of-thought (CoT) prompting method Wei et al. (2022) to generate a diverse set of reasoning paths, as described in Wang et al. (2022). We then use each reasoning step in those paths to retrieve relevant external knowledge, which enables RR to provide
Figure 1: An overview of three approaches for using LLMs: (a) Standard prompting for generating a prediction in response to a query. (b) Chain-of-thought prompting for generating both an explanation and a prediction in response to a query. (c) Rethinking with retrieval, our proposed approach for using the decomposed reasoning steps obtained from chain-of-thought prompting to retrieve relevant external knowledge for LLMs, leading to more faithful explanations and improved predictions in response to a query.
more faithful explanations and more accurate predictions, as illustrated in Figure 1.
We evaluate the effectiveness of our proposed method, RR, on three complex reasoning tasks: commonsense reasoning, temporal reasoning, and tabular reasoning, using GPT-3 175B Brown et al. (2020) and different external knowledge sources: Wikipedia, Wikidata Vrandecic and Krotzsch (2014), WordNet Miller (1995), and Conceptnet Speer et al. (2017). The results demonstrate that RR consistently outperforms all baselines on all three tasks without requiring additional training or fine-tuning, indicating the superiority of our approach in leveraging external knowledge to enhance the performance of LLMs.
## 2 Related Work
Enhancing LMs through retrieval.Retrieval-enhanced LMs have received significant attention as a means of improving performance through the incorporation of external knowledge. For example, the k-most similar training contexts can be retrieved to improve the estimation of the next word distribution in both the training stage Borgeaud et al. (2021) and the inference stage Khandelwal et al. (2020). Furthermore, search query generators have been adopted to generate search queries for search engines to retrieve relevant documents Komeili et al. (2022); Shuster et al. (2022); Thoppilan et al. (2022). Other approaches have utilized retrieved documents as the additional context in generation tasks Joshi et al. (2020); Guu et al. (2020); Lewis et al. (2020). Nakano et al. (2021) instead use human feedback in a text-based web-browsing environment. Among these previous works, Khandelwal et al. (2020) is most closely related to our approach. However, they focus on improving local inference by using the nearest neighbor datastore constructed from training data, whereas we focus on conducting faithful inference using external knowledge. In contrast to other aforementioned approaches, which require training or fine-tuning to incorporate retrieved knowledge, we propose a post-processing method for leveraging retrieved knowledge without additional training or fine-tuning.
Incorporating external knowledge into LMs.Significant effort has been devoted to leveraging external knowledge to improve the reasoning ability of LMs. Previous work has incorporated external knowledge sources such as WordNet Miller (1995) and ConceptNet Speer et al. (2017) to enhance LMs for tabular reasoning tasks Neeraja et al. (2021); Varun et al. (2022). Explicit rules have also been added to inputs to improve reasoning ability over implicit knowledge Talmor et al. (2020). In addition, explicit knowledge from Wikidata Vrandecic and Krotzsch (2014) and implicit knowledge in LLMs have been integrated into a transformer Vaswani et al. (2017) for visual question answering Gui et al. (2021). Nye et al. (2021) instead introduces a symbolic reasoning module to improve coherence and consistency in LLMs. Among these previous works, Nye et al. (2021) is the most relevant to our approach. Still, they focus on incorporating logical constraints to improve coherence and consistency, whereas we aim to improve the faithfulness of explanations through the use of external knowledge. In contrast to other aforementioned approaches that incorporate external knowledge before generation and require additional training or fine-tuning, our proposal leverages external knowledge in a post-processing manner to enhance LMs without additional training or fine-tuning.
Uncovering latent Knowledge in LLMs.There has been a line of work exploring the knowledge hidden within LLMs for reasoning. This has included the use of careful prompting to encourage LLMs to generate explanations in the reasoning process, such as through chain of thought prompting in few-shot Wei et al. (2022) or zero-shot Kojima et al. (2022) learning, or through the use of scratchpads for intermediate computation Nye et al. (2022). In addition, various methods based on sampling a diverse set of reasoning paths in LLMs have been proposed, including training verifiers to judge the correctness of model completions Cobbe et al. (2021), calibrating model predictions based on the reliability of the explanations Ye and Durrett (2022), and promoting self-consistency over diverse reasoning paths Wang et al. (2022). Zelikman et al. (2022) instead iteratively bootstrap the ability of LLMs to generate high-quality rationales from a few initial examples. Liu et al. (2022) further propose generating knowledge from LLMs, which is then used as additional input to improve commonsense reasoning. In contrast to this line of work, our proposal focuses on leveraging external knowledge to enhance LLMs, while they aim to explore the knowledge hidden within LLMs.
## 3 Rethinking with Retrieval
LLMs have been shown to generate incorrect supporting facts from time to time, even when they accurately capture the perspective needed to answer a question. This phenomenon highlights intrinsic issues in the way LLMs store and retrieve knowledge, including (1) the presence of out-of-date, incorrect, or missing relevant knowledge in the pre-training corpus; (2) incorrect memorization of relevant knowledge during pre-training; and (3) incorrect retrieval of relevant knowledge during the inference stage. To address these issues, we propose the use of RR, which leverages external knowledge through the retrieval of relevant information based on decomposed reasoning steps.
Overview.Given a query \(Q\), we utilize chain-of-thought prompting to generate a diverse set of reasoning paths \(R_{1},R_{2},\cdots R_{N}\), where each reasoning path \(R_{i}\) consists of an explanation \(E_{i}\) followed by a prediction \(P_{i}\). After that, we retrieve relevant knowledge \(K_{1},\cdots K_{M}\) from a suitable knowledge base \(\mathcal{KB}\) to support the explanation in each reasoning path, and select the prediction \(\hat{P}\) that is most faithful to this knowledge. To better illustrate our proposal, we use "_Did Aristotle use a laptop?_" as a running example in this work.
Chain-of-thought prompting.In contrast to standard prompting, CoT prompting (Wei et al., 2022) includes demonstrations of step-by-step reasoning examples in the prompt to produce a series of short sentences that capture the reasoning process. For instance, given the question "_Did Aristotle use a laptop?_", CoT prompting aims to generate the complete reasoning path "Aristotle died in 322 BC. The first laptop was invented in 1980. Thus, Aristotle did not use a laptop. So the answer is no." rather than simply outputting "No." Empirical results show that CoT prompting significantly improves the performance of LLMs on many multi-step reasoning tasks. Therefore, we adopt CoT prompting to obtain both explanation \(E\) and prediction \(P\) for the query \(Q\).
Sampling diverse reasoning paths.Similar to Wang et al. (2022), we sample a diverse set of reasoning paths \(R_{1},R_{2},\cdots R_{N}\) rather than only considering the greedy path as in Wei et al. (2022). For the question "_Did Aristotle use a laptop?_", the potential reasoning paths can be as follows:
1. Aristotle died in 2000. The first laptop was invented in 1980. Thus, Aristotle used a laptop. So the answer is yes.
2. Aristotle died in 322BC. The first laptop was invented in 2000. Thus, Aristotle did not use a laptop. So the answer is no.
3. Aristotle died in 322BC. The first laptop was invented in 1980. Thus, Aristotle did not use a laptop. So the answer is no.
Knowledge retrieval.Different knowledge bases can be used to address different tasks. For example, to address the question "_Did Aristotle use a laptop?_", we can use Wikipedia as the external knowledge base \(\mathcal{KB}\). Information retrieval techniques can be applied to retrieve the relevant knowledge \(K_{1},\cdots K_{M}\) from Wikipedia based on the decomposed reasoning steps. Ideally, we would obtain the following two paragraphs from Wikipedia for this question:
1. Aristotle (384-322 BC) was a Greek philosopher and polymath during the Classical period in Ancient Greece....
2. The Epson HX-20, the first laptop computer, was invented in 1980....
Faithful inference.The faithfulness of each reasoning path \(R_{i}\) can be estimated using a function \(f_{\mathcal{KB}}(R_{i})\), which is based on relevant knowledge \(K_{1},\cdots,K_{M}\) retrieved from the knowledge base \(\mathcal{KB}\). The final prediction is obtained through the application of the following inference procedure2: Footnote 2: Note that this is the basic version of faithful inference, and further variations can be found in Section 5.3.
\[\hat{P}=\operatorname*{arg\,max}_{P\in\{P_{1},\cdots,P_{N}\}}\sum_{i=1}^{N}\mathbbm{1}(P_{i}=P)f_{\mathcal{KB}}(R_{i}), \tag{1}\]
where \(P_{i}\) denotes the corresponding prediction in the reasoning path \(R_{i}\). This inference procedure is designed to identify the most faithful prediction \(\hat{P}\) to the knowledge base among all predictions in the \(N\) reasoning paths. For instance, in the running example, given reasoning paths \(R_{1},R_{2},R_{3}\) and the retrieved knowledge \(K_{1},K_{2}\), the above inference procedure would output the prediction "So the answer is no.", as it is supported by both \(R_{2}\) and \(R_{3}\) and has a higher faithfulness score compared to the prediction "So the answer is yes.", which is only supported by \(R_{1}\).
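A minimal sketch of this inference procedure (Equation 1), assuming a callable `faithfulness` that returns the score \(f_{\mathcal{KB}}(R_{i})\) of a reasoning path:

```python
from collections import defaultdict

def faithful_inference(paths, faithfulness):
    """Implement Eq. (1): sum the faithfulness scores of all sampled
    reasoning paths sharing a prediction and return the prediction with
    the largest total. `paths` is a list of (explanation, prediction)
    pairs; `faithfulness` maps a path to its f_KB score."""
    totals = defaultdict(float)
    for path in paths:
        _, prediction = path
        totals[prediction] += faithfulness(path)
    return max(totals, key=totals.get)
```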
## 4 Experiments
In this section, we present the evaluation of our proposed method, RR, on three complex reasoning tasks: commonsense reasoning, temporal reasoning, and tabular reasoning.
### Baselines
We compare with the following baselines.
Zero-shot/few-shot prompting.In our experiments, we consider GPT-3 with standard zero-shot/few-shot prompting as baselines, following the approach described in Brown et al. (2020), in which zero or few in-context exemplars of input-output pairs are provided in the prompt.
Chain-of-thought prompting.In addition to the standard zero-shot/few-shot prompting, we also consider GPT-3 with the CoT prompting proposed in Wei et al. (2022) as a baseline in our experiments. This approach involves feeding LLMs step-by-step reasoning examples instead of standard input-output examples.
Self-consistency.In addition, we also consider self-consistency Wang et al. (2022) as a baseline in our experiments. This approach, proposed as an alternative to the naive greedy decoding used in CoT prompting Wei et al. (2022), involves sampling a diverse set of reasoning paths and selecting the most consistent answer by marginalizing the sampled paths.
### Commonsense Reasoning
Dataset description.For commonsense reasoning, we consider the StrategyQA dataset Geva et al. (2021), which includes questions that require implicit reasoning strategies. For example, the question "_Did Aristotle use a laptop?_" requires _implicit_ decomposition into reasoning steps, while the question "_Was Aristotle alive when the laptop was invented?_" explicitly specifies the reasoning process. The StrategyQA dataset includes \(2,290\) training examples, each consisting of a question (Q), a yes/no answer (A), a decomposition (D), evidence paragraphs (E), and supporting facts (F). On average, each question requires about \(2.93\) reasoning steps and \(2.33\) evidence paragraphs. In addition, a development set is constructed by randomly sampling \(10\%\) of the training examples (i.e., \(229\) examples). The answer distribution is roughly balanced, with approximately \(47\%\) "yes" questions in both the training and development sets. Unless otherwise specified, the models are evaluated on the development set3 for StrategyQA.
Footnote 3: As the annotations for the test set are not publicly available, we use the development set for evaluation. This allows us to perform a more comprehensive analysis.
Implementation details.In this part, we utilize Wikipedia as the external knowledge base \(\mathcal{KB}\). For each sentence in the explanation of every reasoning path, we first apply BM25 Robertson et al. (2009) to retrieve the top 10 most relevant paragraphs from Wikipedia. In particular, we use the re-implementation of the sparse retrieval BM25\({}^{4}\) in Karpukhin et al. (2020) from Pyserini Lin et al. (2021). Subsequently, we use the pre-trained MPNet model Song et al. (2020) to select the most similar paragraph based on the cosine similarity between the sentence embeddings of the retrieved paragraph and the sentence. We then employ a pre-trained natural language inference (NLI) model Nie et al. (2020) to obtain the entailment and contradiction scores for the sentence, treating the most similar paragraph as the premise. The faithfulness of each reasoning path is then calculated using \(f_{\mathcal{KB}}(\cdot)\) based on the entailment scores, contradiction scores, and MPNet similarities of all sentences in the explanation of the reasoning path. The final prediction for each question is obtained through faithful inference (Equation 1). More details about \(f_{\mathcal{KB}}(\cdot)\) can be found in Appendix A.2.
Footnote 4: We also experimented with DPR and BM25+DPR, and found that BM25 outperformed these methods in our experiments. More details can be found in Appendix A.3.
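The sketch below outlines this retrieve-select-score pipeline. The prebuilt index name, model identifiers (including a public ANLI-trained checkpoint standing in for the Nie et al. (2020) model), and document field accessors are assumptions for illustration and may vary across library versions.

```python
from pyserini.search.lucene import LuceneSearcher
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

searcher = LuceneSearcher.from_prebuilt_index("wikipedia-dpr")  # BM25 over Wikipedia passages
embedder = SentenceTransformer("all-mpnet-base-v2")             # MPNet sentence encoder
nli = pipeline("text-classification",
               model="ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli",
               top_k=None)                                      # all NLI label scores

def score_sentence(sentence: str):
    """Retrieve the top-10 BM25 paragraphs for one reasoning-step sentence,
    keep the paragraph most similar to it under MPNet embeddings, and
    return the NLI scores with that paragraph as premise."""
    hits = searcher.search(sentence, k=10)
    paragraphs = [searcher.doc(hit.docid).raw() for hit in hits]
    sims = util.cos_sim(embedder.encode(sentence), embedder.encode(paragraphs))[0]
    best = paragraphs[int(sims.argmax())]
    scores = nli({"text": best, "text_pair": sentence})
    return best, float(sims.max()), scores
```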
### Temporal Reasoning
Dataset description.In this experiment, we use the TempQuestions dataset Jia et al. (2018) to investigate temporal reasoning. This dataset includes \(1,271\) temporal questions that are divided into four classes: explicit temporal, implicit temporal, temporal answer, and ordinal constraints. The questions are paired with their answers from Freebase Bollacker et al. (2008). To examine the most challenging aspect of temporal reasoning, we focus on the set of _implicit_ temporal questions, which contain implicit temporal expressions, including free-text temporal expressions. For example, the question "_who was governor of oregon when shanghai noon was released?_" is an implicit temporal question. To facilitate our analysis, we only consider questions with a single answer, resulting in a total of \(175\) examples. Of these examples, the first \(6\) are used for prompting, and the remaining \(169\) are used for evaluation.
Implementation details.In this part, we utilize Wikidata (Vrandecic and Krotzsch, 2014) as the external knowledge base \(\mathcal{KB}\), as it is the largest publicly available knowledge graph, and the data from Freebase has been migrated to Wikidata. To incorporate this knowledge into our system, we apply an entity linking system5 to each sentence in the explanation of each reasoning path to identify the corresponding Wikidata pages for all entities in the sentence. Next, we extract all temporal relations from these relevant Wikidata pages and use templates to convert these temporal relations into sentences. This step generates a set of relevant knowledge sentences for each sentence in the explanation of each reasoning path. The final prediction is then obtained by applying the procedure described in Section 4.2, in which the retrieved paragraphs are replaced with the relevant knowledge sentences from the current part.
Footnote 5: We use the spaCy entity linker: [https://pypi.org/project/spacy-entity-linker/](https://pypi.org/project/spacy-entity-linker/).
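A minimal sketch of the template step, with hypothetical Wikidata property names and wording:

```python
# Illustrative templates for verbalizing temporal relations extracted
# from Wikidata pages; property names and phrasing are assumptions.
TEMPLATES = {
    "publication date": "{entity} was released on {value}.",
    "start time":       "{entity} started on {value}.",
    "end time":         "{entity} ended on {value}.",
    "date of birth":    "{entity} was born on {value}.",
    "date of death":    "{entity} died on {value}.",
}

def verbalize(entity: str, relation: str, value: str) -> str:
    """Turn one (entity, relation, value) temporal triple into a sentence."""
    template = TEMPLATES.get(relation, "The {relation} of {entity} is {value}.")
    return template.format(entity=entity, relation=relation, value=value)

# e.g. verbalize("Shanghai Noon", "publication date", "26 May 2000")
# -> "Shanghai Noon was released on 26 May 2000."
```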
### Tabular Reasoning
Dataset description.We consider the INFOTABS dataset (Gupta et al., 2020) for tabular reasoning, which consists of \(23,738\) human-written textual hypotheses based on premises in the form of tables extracted from \(2,540\) unique Wikipedia info-boxes. We focus on the development set, which includes \(1,800\) hypotheses based on \(200\) tables, and only consider entailed and contradictory hypotheses, as it is tricky to write CoT demonstrations for neutral hypotheses. This results in a total of \(1,200\) hypotheses based on \(200\) tables for evaluation, with an equal number of entailed and contradictory hypotheses.
Implementation details.In this part, we utilize WordNet (Miller, 1995) and ConceptNet (Speer et al., 2017) as external knowledge bases. To convert tables into textual premises, we follow the same technique as in Varun et al. (2022). For each premise-hypothesis pair, we follow the procedure outlined in Varun et al. (2022) to retrieve relevant word relation triples that connect the premise and hypothesis words, such as "married"\(\xleftarrow{\text{RelatedTo}}\) "spouse". These triples are then converted into sentences using some simple templates. The resulting sentences, along with the textual premises from the tables, serve as relevant knowledge for each sentence in the explanation of each reasoning path. To obtain the final prediction, the procedure described in Section 4.2 is applied, whereby the retrieved paragraphs in Section 4.2 are replaced with the relevant knowledge from the current part.
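For illustration, a minimal sketch of both conversions; the template wording is our assumption rather than the exact templates of Varun et al. (2022):

```python
def table_to_premises(title: str, infobox: dict) -> list:
    """Turn a Wikipedia info-box (key -> value) into simple textual premises."""
    return [f"The {key} of {title} is {value}." for key, value in infobox.items()]

def triple_to_sentence(word_a: str, relation: str, word_b: str) -> str:
    """Verbalize a retrieved word-relation triple that connects a premise
    word to a hypothesis word, e.g. ('married', 'RelatedTo', 'spouse')."""
    return f'"{word_a}" is related to "{word_b}" via {relation}.'
```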
### Evaluation
Experimental settings.In all experiments, we utilize GPT-3 text-davinci-002 unless otherwise stated. The maximum number of tokens for generation during completion is set to \(256\). For zero-shot, few-shot, and chain-of-thought prompting, the temperature is fixed at \(0\). For self-consistency and rethinking with retrieval, we randomly sample \(10\) outputs6 with temperature \(0.7\). Detailed prompts can be found in Appendix A.1. We evaluate the performance of different methods on commonsense and tabular reasoning using accuracy, and on temporal reasoning using the exact match metric as defined in Rajpurkar et al. (2016).
Footnote 6: For commonsense reasoning, we sample \(9\) outputs, as we have found that odd numbers of outputs tend to yield better voting performance for self-consistency on StrategyQA.
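For reference, a minimal sketch of the exact match metric with SQuAD-style answer normalization (Rajpurkar et al., 2016):

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> bool:
    return normalize(prediction) == normalize(gold)
```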
Results.As shown in Table 1, our proposed method, rethinking with retrieval, consistently outperforms all baselines on all three reasoning tasks without requiring additional training or fine-tuning. The results highlight the effectiveness of our approach in leveraging external knowledge to improve the performance of LLMs.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline & Methods & Commonsense & Temporal & Tabular \\ \hline & Zero-shot prompting & 58.08 & 28.40 & 82.00 \\ & Few-shot prompting & 63.32 & 29.59 & 83.08 \\ GPT-3 & Chain-of-thought prompting & 65.94 & 33.14 & 83.33 \\ & Self-consistency & 73.36 & 37.28 & 84.00 \\ & Rethinking with retrieval & **77.73** & **39.05** & **84.83** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of different methods using GPT-3 on three reasoning tasks.
## 5 Analysis
In this section, we perform a thorough analysis to gain a deeper understanding of RR.
### Limitations of LLMs in Reasoning
In this subsection, we present an analysis of GPT-3 with CoT prompting on the StrategyQA dataset. Upon closer examination of the outputs of GPT-3, we observed that it can provide reasonable explanations and correct predictions for a number of questions. For example, when given the question "_Will the Albany in Georgia reach a hundred thousand occupants before the one in New York?_", GPT-3 produced the following output:
The Albany in New York has a population of about 98,000. The Albany in Georgia has a population of about 77,000. Thus, the Albany in New York is more populous than the Albany in Georgia. So the answer is no.
The above output consists of three components: (1) supporting facts (in cyan) that are based on a particular perspective, (2) chaining arguments (in orange), and (3) a prediction (in green). Components (1) and (2) contribute to the explanation. Overall, the output exhibits a high level of quality. However, we also observed that GPT-3 may occasionally produce incorrect supporting facts for its explanations or make incorrect inferences for its predictions, despite generally being able to identify suitable perspectives.
Wrong supporting facts.As shown in Table 2, GPT-3 provides the incorrect supporting fact for Lil Jon's top-ranked Billboard song, stating that it was "Get Low" instead of the correct answer, "Yeah". However, it does have the correct perspective on how to answer the question, "_Was Lil Jon's top ranked Billboard song a collaboration with a member of The Lox?_".
Wrong inference.As shown in Table 2, GPT-3 makes an incorrect inference, stating that the top of Mount Fuji "would not stick out" of the Sea of Japan, rather than the correct answer, "would stick out". However, it does provide correct supporting facts based on the appropriate perspective for the question, "_Would the top of Mount Fuji stick out of the Sea of Japan?_".
### Ablation Study
Importance of decomposition-based retrieval.In our proposed method, we retrieve relevant
\begin{table}
\begin{tabular}{c|c|c} \hline \hline & Wrong supporting facts & Wrong inference \\ \hline \multirow{2}{*}{Question} & Was Lil Jon’s top ranked Billboard song a collaboration with a member of The Lox? & Would the top of Mount Fuji stick out of the Sea of Japan? \\ \hline Answer & No & Yes \\ \hline \multirow{4}{*}{Facts} & Lil Jon’s highest ranked billboard song was Yeah; Yeah was a collaboration between Lil Jon, Usher, and Ludacris; The Lox is a rap trio consisting of: Styles P, Sheek Louch, and Jadakiss. & The average depth of the Sea of Japan is 5,748 feet (1,752 metres) and its maximum depth is 12,276 feet (3,742 metres); Mount Fuji is 3,776.24 metres (12,389.2 ft) tall. \\ \hline \multirow{4}{*}{GPT-3} & Lil Jon’s top ranked Billboard song was "Get Low" with the East Side Boyz. The Lox is not a member of the East Side Boyz. Thus, Lil Jon’s top ranked Billboard song was not a collaboration with a member of The Lox. So the answer is no. & Mount Fuji is 3,776 meters tall. The Sea of Japan is about 3,741 meters deep. Thus, the top of Mount Fuji would not stick out of the Sea of Japan. So the answer is no. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Examples of incorrect outputs from GPT-3 with CoT prompting.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Retrieval & Commonsense & Temporal \\ \hline Query-based & 73.36 & 36.69 \\ \hline Decomposition-based & **77.73** & **39.05** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of query-based and decomposition-based retrieval on commonsense and temporal reasoning.
external knowledge based on the decomposed reasoning steps rather than the original query. To further investigate the impact of this choice, we conducted additional experiments in which we used the original query for knowledge retrieval while keeping other aspects of our method unchanged. As shown in Table 3, the query-based results are notably worse for both commonsense and temporal reasoning, indicating the importance of decomposition-based retrieval in our approach.
The impact of different types of knowledge.For tabular reasoning, we use both external knowledge (WordNet and ConceptNet) and background knowledge (tables) in our experiments. In this section, we further examine the effect of different types of knowledge on the performance of our proposed method. As shown in Table 4, the additional improvement gained by incorporating WordNet and ConceptNet on top of tables is limited, indicating that GPT-3 already captures many word-level relations in these external knowledge sources. In addition, the significant improvement in tabular reasoning from using tables alone suggests that our proposed method can also effectively leverage background knowledge.
### Variations of the Proposed Approach
Basic approach: Weighting outputs.In Section 3, we present a basic version of our proposal for taking advantage of external knowledge. Our basic approach involves _weighting outputs as individual units_ and using a _voting_ mechanism to select the best-supported prediction. We can also directly choose the best-supported output, which includes both an explanation and a prediction, without using voting. For example, in the running example of "_Did Aristotle use a laptop?_" (see more in Section 3), the third reasoning path \(R_{3}\) is the output most supported by the knowledge paragraphs \(K_{1}\) and \(K_{2}\).
et al. (2022). For simplicity, we use the pre-trained NLI model released by Nie et al. (2020) to compute the NLI-based metric, rather than fine-tuning T5-11B (Raffel et al., 2020) ourselves. The implementation details of the two variants can be found in Appendix A.4.
Results.Table 5 illustrates that the fact selection and fact generation variants of our proposal improve the faithfulness of the supporting facts in explanations, leading to increased prediction accuracy compared to the basic approach without voting. Across all variations of our proposal, we observe significant improvements in both prediction accuracy and the faithfulness of explanations when compared to the CoT prompting baseline.
The incorporation of a voting mechanism leads to an increased prediction accuracy of \(79.91\%\) for the basic approach. Comparison with the performance (i.e., \(77.73\%\)) of the same approach using retrieved paragraphs rather than evidence paragraphs in Table 1 demonstrates that retrieved paragraphs are also effective for our proposal, as both significantly outperform the voting baseline, self-consistency (i.e., \(73.36\%\)), as shown in Table 1.
It is noteworthy that UnifiedQA performs poorly on StrategyQA, achieving an accuracy of only \(58.95\%\). However, when provided with gold supporting facts in StrategyQA, UnifiedQA demonstrates excellent performance with an accuracy of \(90.83\%\). This suggests that UnifiedQA is suitable for last-step inference, but not effective for answering questions in StrategyQA.
### Impact of the Size of LMs
In this subsection, we examine the effect of the size of LMs on the performance of our proposed method, specifically in the context of the fact generation variant. We compare the performance of our method using various sizes of OPT models (Zhang et al., 2022) in addition to GPT-3 (175B) using the same experimental setup as in Section 5.3. As shown in Figure 2, our proposed method (Variant II) consistently outperforms CoT prompting in terms of both prediction accuracy and the faithfulness of explanations, even when using smaller LMs.
## 6 Conclusion
In conclusion, the proposed approach is a promising solution for utilizing external knowledge to assist LLMs. Unlike traditional methods, RR does not require additional training or fine-tuning, making it a lightweight and feasible option for LLMs. Through extensive experiments on three reasoning tasks using GPT-3, we have shown that RR is able to produce more faithful explanations and improve the performance of LLMs. In the future, we plan to investigate various variations of RR to enhance its effectiveness and efficiency in augmenting LLMs with external knowledge.
Figure 2: The effect of LM size on the performance of our proposed method (Variant II) and CoT prompting. We use various sizes of OPT models, with the exception of the 175B model, which is GPT-3.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Methods & Accuracy (\%) & Faithfulness (\%) \\ \hline CoT prompting & 65.94 & 38.73 \\ \hline \hline Basic (w/o voting) & 76.86 & 50.02 \\ \hline Variant I & **78.60** & 54.11 \\ \hline Variant II & **78.60** & **54.54** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of various variations of RR and the CoT prompting baseline on StrategyQA using evidence paragraphs. |
2310.00442 | SBC-SNOLAB scintillation system and SiPM implementation for dark matter
searches | The Scintillating Bubble Chamber (SBC) collaboration is constructing a 10~kg
liquid argon (LAr) bubble chamber at SNOLAB called SBC-SNOLAB having the main
objective of detecting dark matter. One of the most novel aspects of SBC-SNOLAB
is the scintillation system, consisting of LAr doped with on the order of
10~ppm Xe, 48 FBK VUV silicon photomultipliers (SiPMs), the SiPM electronics,
two quartz jars, and liquid CF$_4$ used as an hydraulic fluid and additional
source of scintillation photons. In contrast with traditional single or dual
phase scintillation experiments, the collected LAr scintillation light is used
to veto signals which involve the detection of at least a single photoelectron.
These proceedings will describe in detail the current SBC-SNOLAB scintillation
system which includes the unique design considerations for SBC-SNOLAB that
limit the light collection efficiency and the electronics. | H. Hawley-Herrera | 2023-09-30T17:29:02Z | http://arxiv.org/abs/2310.00442v2 | # Sbc-Snolab scintillation system and SiPM implementation for dark matter searches
###### Abstract
The Scintillating Bubble Chamber (SBC) collaboration is constructing a 10 kg liquid argon (LAr) bubble chamber at SNOLAB called SBC-SNOLAB having the main objective of detecting dark matter. One of the most novel aspects of SBC-SNOLAB is the scintillation system, consisting of LAr doped with on the order of 10 ppm Xe, 48 FBK VUV silicon photomultipliers (SiPMs), the SiPM electronics, two quartz jars, and liquid CF\({}_{4}\) used as an hydraulic fluid and additional source of scintillation photons. In contrast with traditional single or dual phase scintillation experiments, the collected LAr scintillation light is used to veto signals which involve the detection of at least a single photoelectron. These proceedings will describe in detail the current SBC-SNOLAB scintillation system which includes the unique design considerations for SBC-SNOLAB that limit the light collection efficiency and the electronics.
Dark Matter detectors (WIMPs, axions, etc.); Photon detectors for UV, visible and IR photons (vacuum) (photomultipliers, HPDs, others); Scintillators, scintillation and light emission processes (solid, gas and liquid scintillators)
## 1 Introduction
The goal of the Scintillating Bubble Chamber (SBC) collaboration is to detect dark matter utilizing the well-demonstrated technology of bubble chambers [1] with scintillators as the active fluid. The main benefit scintillators provide to a bubble chamber is the additional channel for tagging background events and performing energy reconstruction. Examples of scintillators that can be used for bubble chambers are noble elements, liquid nitrogen, and CF\({}_{4}\). If the scintillator is a liquid noble element, there is an intrinsic reduction of the energy threshold as demonstrated in ref. [2].
The SBC detector called SBC-SNOLAB is a 10 kg liquid argon (LAr) bubble chamber to be built at the underground physics laboratory, SNOLAB1. Its construction follows the same buffer-free liquid design as the PICO-40L2 bubble chamber, as demonstrated in ref. [3]. SBC-SNOLAB is a scaled-up version of the smaller 30 g xenon bubble chamber built at Northwestern University, where the fundamental detector operating principles were first tested [2].
Footnote 1: [https://www.snolab.ca/about/about-snolab/](https://www.snolab.ca/about/about-snolab/)
Footnote 2: [https://www.picoexperiment.com/pico-40l/](https://www.picoexperiment.com/pico-40l/)
SBC-SNOLAB consists of four essential systems. First, the thermo-mechanical system starts the event cycle by controlling the pressure and temperature so that a superheated state is achieved, stops bubble growth by compressing the active fluid, creates a temperature gradient that prevents undesired components of the active fluid from nucleating, and sets the energy threshold. Second, the camera system consists of three cameras that record the expansion of a bubble after nucleation. Third, the piezoelectric system consists of several piezos located around the active fluid listening for bubble formation. Lastly, the scintillation system collects the photons created during an event. For more information about other aspects of the chamber and the physics goals not covered in these proceedings, see refs. [4; 5].
These proceedings will describe the current design for the scintillation system which is the most novel component of SBC-SNOLAB. Discussion of the importance of the scintillation system for the search for dark matter will be included with comparisons to other scintillation-based particle
detectors. This discussion will also include the unique set of problems associated with low-background detectors regarding the design and use of the silicon photomultipliers (SiPMs), the electronics, and the acquisition system.
## 2 The scintillation system
The SBC-SNOLAB scintillation system is composed primarily of 10 kg of LAr located between two concentric quartz jars (constructed of Heraeus Suprasil 310\({}^{3}\)) and 32 SiPMs surrounding the outermost quartz jar. Additionally, the SiPMs and the quartz jars are submerged in a bath of liquid CF\({}_{4}\) that acts as a thermo-mechanical exchange fluid and, as will be discussed later, as an additional source of data that can be used for background suppression. A simplified schematic diagram of the SBC-SNOLAB scintillation system can be seen in figure 1. Each component of the scintillation system will be described in detail in this section, with the justifications behind some of the unconventional design decisions.
Footnote 3: [https://www.hereaus-group.com/en/](https://www.hereaus-group.com/en/)
### LAr
In SBC-SNOLAB, LAr was chosen as the active fluid because of the expected increased sensitivity to lower dark matter masses. LAr emits 128 nm scintillation photons with a yield of approximately 40 000 photons per MeV for ionizing radiation [6]. The scintillation is emitted following the
Figure 1: Simplified diagram of SBC-SNOLAB scintillation system. It consists of a liquid argon (LAr) doped with on the order of 10 ppm Xe located between two quartz jars using ”right side up” orientation of PICO-40L construction [3]. The light collection devices consists of 32 FBK vacuum ultraviolet (VUV) SiPMs collecting the LAr scintillation and 16 SiPMs are used to collect the LCF\({}_{4}\) scintillation. Additionally, thin polytetrafluoroethylene (PTFE) sheets are used to cover the side of the outer jar to reflect the LAr scintillation.
interaction under two time constants: the singlet state with a decay constant of a few nanoseconds, and the triplet state in the single microsecond range. However, if pure LAr is used, the quartz jars will absorb most if not all of the LAr scintillation light as expected from ref. [7].
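To make the two-component emission concrete, the sketch below models the photon emission time profile as a sum of a fast singlet and a slow triplet exponential. The singlet fraction and the exact time constants depend on the ionizing particle and the purity; the values here are illustrative assumptions, not SBC-SNOLAB measurements.

```python
import numpy as np

def lar_pulse(t_ns, f_singlet=0.3, tau_singlet=6.0, tau_triplet=1500.0):
    """Normalized LAr scintillation emission rate (per ns) at time t_ns:
    a fast singlet component (a few ns) plus a slow triplet component
    (on the order of a microsecond)."""
    t = np.asarray(t_ns, dtype=float)
    singlet = (f_singlet / tau_singlet) * np.exp(-t / tau_singlet)
    triplet = ((1.0 - f_singlet) / tau_triplet) * np.exp(-t / tau_triplet)
    return singlet + triplet
```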
A simple but naive solution is to place the SiPMs between the quartz jars with the LAr, but sharp features (corners, the silicon circuits, and surface roughness) create nucleation points that would likely push the nucleation rate beyond the maximum limit of approximately 1 000 events per day. An alternative solution, which avoids these excessive nucleation sites, is to waveshift the scintillation light to wavelengths that are transmitted through the quartz.
The SBC collaboration has chosen Xe as the SBC-SNOLAB wavelength shifter, as it has been shown to convert the 128 nm scintillation of Ar to the Xe scintillation wavelength of 174 nm [8]. Additionally, polytetrafluoroethylene (PTFE) has been shown to be almost fully reflective at the Xe scintillation wavelength [9], which can be used to increase the light collection slightly by covering the quartz with PTFE sheets. The light collection efficiency also improves because Xe increases the scintillation photon yield [10].
### Quartz jars
Quartz was chosen as the jar material for its radiopurity, visible-wavelength transparency, and smooth inner surface (to prevent nucleation). The inner jar acts as a piston to control the pressure of the LAr while the outer one remains static. Both jars are sealed to a stainless steel bellows with spring-energized PTFE seals.
A test setup of the quartz jars has been replicated at Queen's University with the goal of testing the cryogenic feasibility of the seals and jars. They were successfully tested by pulling a high vacuum on the outside of the jars while the steel frame was thermally connected to a cryohead using several strands of thick braided copper wire. The test setup can be seen in figure 2.
### SiPMs
The SiPMs play an important role in the collection efficiency of the scintillation system through their photon detection efficiency and fill factor. Currently, for radiopurity reasons, SBC-SNOLAB plans to use up to 48 Fondazione Bruno Kessler (FBK) vacuum ultraviolet (VUV) SiPMs, a variation of the SiPMs discussed in ref. [11]. Of these, 32 are evenly distributed facing the quartz jars and the remaining 16 look outwards into the CF\({}_{4}\) space. They are connected to custom TRIUMF-built amplifiers outside the chamber using 5 m to 7 m long coaxial cables that also carry the power to the SiPMs. The SiPMs are mechanically attached to the copper panels with a thin layer of liquid CF\({}_{4}\) between them and the jars and no optical coupling material in between. As of this moment, no material has been found that is compatible with LCF\({}_{4}\) and also has an index of refraction equal to that of the quartz jars.
An indirect impact of the SiPMs comes in the form of delayed correlated avalanches (DCAs). They are created when an independent avalanche (due to an incident photon or generated thermally) or another correlated avalanche starts a series of mechanisms in the silicon and construction materials of the SiPM that trigger a further avalanche within the SiPM [12]. Understanding the probability of DCAs and their time distribution is important for SBC-SNOLAB, as it will impact the acquisition window and will define the pre-acquisition window required to minimize bubbles coincident with DCAs being misinterpreted as bubbles with scintillation. Currently, SBC is preparing a paper on the characterization of the SiPMs to understand DCAs and to set up the SiPMs to maximize their signal-to-noise ratio.
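As a rough, back-of-the-envelope illustration of how the DCA parameters drive the choice of window (the probability, time constant, and window lengths below are placeholder assumptions, not SBC measurements): if each primary avalanche triggers at most one DCA with probability \(p\) and an exponentially distributed delay with time constant \(\tau\), the chance that the DCA lands inside a window of length \(T\) is \(p\,(1-e^{-T/\tau})\).

```python
# Toy DCA model: probability p of a delayed correlated avalanche per primary,
# exponential delay with time constant tau; all numbers are illustrative.
import numpy as np

p_dca, tau_us = 0.2, 1.0   # assumed DCA probability and time constant (microseconds)

def p_dca_in_window(T_us):
    """Probability that the (single) DCA falls inside a window of length T_us."""
    return p_dca * (1.0 - np.exp(-T_us / tau_us))

for T in (0.5, 2.0, 10.0):
    print(f"T = {T:4.1f} us -> P(DCA in window) = {p_dca_in_window(T):.3f}")
```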
Another important component of the scintillation system, not found inside the SBC-SNOLAB chamber, is the SiPM acquisition electronics. They are used to amplify the SiPM pulses, filter noise, and save the data to disk. The data acquisition electronics consist of the custom-made TRIUMF amplifiers, the coaxial cables, a CAEN 1740D 64-channel 62.5 MS/s digitizer, and a computer for storage.
The amplifiers are found outside the SBC-SNOLAB pressure chamber, in the air space, as most electronic components such as resistors, capacitors, and connectors are too radioactive to be placed near the detector [13, 14]. As a consequence, long coaxial cables connect the SiPMs to the amplifiers, where noise pickup can be bigger than the signal. Therefore, a hardware low-pass filter is used to reduce the external noise. Signal bandwidth is not important for SBC-SNOLAB because no additional information is gained from pulse timing reconstruction, which experiments like DarkSide-50 or XENONnT use to distinguish backgrounds from signal [15, 16]. However, a possible long-term solution to the noise problem without sacrificing timing information is the creation of a single device or system-on-a-chip that contains both the SiPM and the processing electronics on one or multiple silicon chips, similar to ref. [13].

Figure 2: The Queen's University test setup is a replica of the SBC inner assembly adapted to test the cryogenic feasibility of the quartz jar seals and the SiPM acquisition chain. No LAr or LCF\({}_{4}\) is found in this chamber.
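For illustration only, a software analogue of such a low-pass filter can be sketched as below, applied to a toy SiPM pulse sampled at the 62.5 MS/s digitizer rate quoted above; the pulse shape, noise level, and 5 MHz cutoff are assumptions, not the actual SBC hardware values.

```python
# Hedged sketch: digital low-pass filtering of a toy SiPM trace.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 62.5e6                                     # digitizer sampling rate (Hz)
t = np.arange(0, 10e-6, 1.0 / fs)               # 10 microsecond trace
pulse = np.exp(-t / 100e-9)                     # toy single-PE exponential pulse
noisy = pulse + 0.2 * np.random.randn(t.size)   # additive broadband noise

b, a = butter(4, 5e6, btype="low", fs=fs)       # 4th-order Butterworth, 5 MHz cutoff (assumed)
filtered = filtfilt(b, a, noisy)                # zero-phase filtering preserves pulse timing
```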
The acquisition back-end must be flexible during data taking because SBC-SNOLAB has the potential to do physics beyond dark matter searches, which requires different acquisition setups. An example of potential physics is the effect of different levels of Xe dopant, or LAr scintillation as a function of temperature and pressure. The data required for the scintillation physics can be acquired during recompressions, when there is no dark matter sensitivity.
### Lcf\({}_{4}\)
The original role of the LCF\({}_{4}\) is to act as a thermo-mechanical exchange fluid which remains stable at LAr temperatures and pressures. However, during an initial R&D phase, the LCF\({}_{4}\) was unexpectedly found to scintillate. The decision was made to place several SiPMs to collect this scintillation, and it is planned to be used as a veto for external sources of backgrounds such as muon-induced neutrons and gammas. The scintillation of gaseous CF\({}_{4}\) has been documented in [17], but no information is available for the liquid state. SBC is currently preparing a paper on the liquid scintillation properties of LCF\({}_{4}\).
The light collection efficiency is important for a dark matter search as it impacts the energy threshold for nuclear recoils, but it is not a priority for the scintillation system. The engineering constraints previously discussed (such as the high number of optical interfaces and the low-background requirements) do not favor high collection efficiencies. The main objective of the scintillation system is to use the collected photons as a veto rather than to perform energy reconstruction, as single- or dual-phase scintillation experiments do. A low and uncertain light collection efficiency will not impact the dark matter search significantly, as most of the backgrounds are expected to emit a considerable number of photons. Ideally, the veto scenario would require at least one single photoelectron detected across all SiPMs, but this scenario requires the scintillation-generating backgrounds to be below kHz levels.
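Under a simple Poisson-counting model, the veto requirement of at least one detected photoelectron can be quantified as follows; the photon yield and collection efficiency below are illustrative assumptions, not measured values.

```python
# Minimal veto-probability sketch: P(>=1 PE) = 1 - exp(-mu) with
# mu = (photons emitted) * (light collection efficiency); numbers are assumed.
from math import exp

def p_at_least_one_pe(n_photons, efficiency):
    mu = n_photons * efficiency
    return 1.0 - exp(-mu)

# Even a 0.1% collection efficiency vetoes a 10^4-photon event with
# probability ~0.99995 under this model.
print(p_at_least_one_pe(1e4, 1e-3))
```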
## 3 Summary
The SBC collaboration is building SBC-SNOLAB, a 10 kg Xe-doped LAr bubble chamber, in order to attempt to detect dark matter. The combination of bubble chamber technology with scintillation at cryogenic temperatures, and the additional requirement of low backgrounds, brings a new set of challenges that SBC is undertaking. The material selection is limited to proven low-background materials, and any relatively radioactive materials have to be placed away from the detector. The light collection efficiency is directly affected by these design constraints. However, the SBC-SNOLAB scintillation system is used as a background veto, which does not require a high light collection efficiency; the efficiency would only impact the efficacy at low energies. Solutions to these problems require long R&D campaigns, to be completed when SBC designs bigger chambers.
Currently, the SBC collaboration has finished characterizing the SiPMs and is preparing a publication on the measured gains, breakdown voltages, dark noise rates, and probabilities of a correlated avalanche.
Modeling of the optical propagation in SBC-SNOLAB has also started in Geant4\({}^{4}\); it includes the SiPMs, the quartz jars, and the scintillation light.
Footnote 4: [https://geant4.web.cern.ch/](https://geant4.web.cern.ch/)
The scintillation properties of the LCF\({}_{4}\) have been measured, and a comparison with an MC model is currently ongoing for publication. Finally, SBC is confident that sufficient scintillation collection efficiency will be reached for the SBC-SNOLAB dark matter goal.
|
2309.05459 | The rationality of ineffective spin genus-4 thetanull loci | In this paper, we show that the divisor given by couples [C,{\theta}] where C is a curve of genus 4 with a vanishing thetanull and {\theta} is an ineffective thetacharacteristic is a rational variety. By our construction, it follows also that the analogous divisor in the Prym moduli space is rational. | Francesco Zucconi | 2023-09-11T13:53:53Z | http://arxiv.org/abs/2309.05459v1 | # The rationality of ineffective spin genus-\(4\) thetanull loci
###### Abstract.
In this paper, we show that the divisor given by couples \([C,\theta]\) where \(C\) is a curve of genus \(4\) with a vanishing thetanull and \(\theta\) is an ineffective thetacharacteristic is a rational variety. By our construction, it follows also that the analogous divisor in the Prym moduli space is rational.
## 1. Introduction
### Analog results in the previous literature
The rationality problem for the coarse moduli space \(M_{g}\) of smooth curves of genus \(g\) is a classical problem posed by Francesco Severi; see: [Se]. For \(M_{g}\) and its Deligne-Mumford compactification \(\overline{M_{g}}\), see: [DM]. The literature on this topic is vast; here we can also quote [I], [HM], [BK], [Sh3], [Ka], [ACG]. For the rationality problem concerning geometrically defined subvarieties of \(\overline{M}_{g}\) see: [Bo], [Sh1], [Sh2], [CC], [Boh]. Furthermore there are other moduli spaces for curves. One of the most studied, also because of its importance in physics, is the coarse moduli space \(\mathrm{S}^{+}_{\mathrm{g}}\) of even spin curves; see: cf. [Bi]. It parameterises, up to automorphisms, couples \((C,\theta)\) where \(C\) is a smooth curve, \(\theta\) is a divisor, \(h^{0}(C,\mathcal{O}_{C}(\theta))\) is an even number and \(2\theta\) is linearly equivalent to the canonical divisor \(K_{C}\); such a divisor \(\theta\) is called an even thetacharacteristic.
### The divisor of curves with a vanishing theta-null
In [Cor] Cornalba constructed a compactification \(\overline{S^{+}_{g}}\) which is compatible with \(\overline{M_{g}}\). By [Ka] we know that \(\overline{\mathrm{S}^{+}_{\mathrm{g}}}\) is rational for \(g=2,3\). In [TZ3] it is shown that \(\overline{\mathrm{S}^{+}_{4}}\) is rational. Since the Kodaira dimension of \(\overline{\mathrm{S}^{+}_{\mathrm{g}}}\) is \(\geq 0\) if \(g\geq 8\), see: [FaV], the rationality problem for \(\overline{\mathrm{S}^{+}_{\mathrm{g}}}\) is open only for the cases \(g=5,6,7\). Nevertheless, as in the classical case of \(\mathcal{M}_{g}\), there are some geometrically defined subvarieties of \(\overline{\mathrm{S}^{+}_{\mathrm{g}}}\) for which it is a natural problem to understand whether they are rational or not.
A geometrically defined divisor which plays a fundamental role in the geometry of \(M_{g}\) is \(\mathcal{M}^{\mathrm{null}}_{g}\) which is the locus given by the classes of smooth curves having at least one even theta-characteristic such that \(h^{0}(C,\mathcal{O}_{C}(\theta))\geq 2\). It is known that \(\mathcal{M}^{\mathrm{null}}_{g}\) is irreducible; see: [Mo, Theorem 2.4]. We recall that \(\overline{\mathrm{S}^{+}_{\mathrm{g}}}\) is irreducible, that there exists an open set of elements \([(C,\theta)]\) where \(h^{0}(C,\mathcal{O}_{C}(\theta))=0\) and that an even theta characteristic \(\theta\) with \(h^{0}(C,\theta)>0\) is said to be a vanishing thetanull. We consider the following two divisors:
\[\Theta_{g,\mathrm{null}}:=\{[C,\theta]\in\mathrm{S}^{+}_{\mathrm{g}}\ |\ h^{0}(C,\theta)>0\}\]
and
\[\mathrm{S}^{\mathrm{null},0}_{g}:=\{[C,\theta]\in\mathrm{S}^{+}_{\mathrm{g}}\ |\ [C]\in \mathcal{M}^{\mathrm{null}}_{g},\ \mathrm{and}\ h^{0}(C,\theta)=0\}.\]
By the forgetful morphism \(\mathrm{S}^{+}_{\mathrm{g}}\to\mathcal{M}_{g}\) the preimage \(\mathrm{S}^{\mathrm{null}}_{g}\) of \(\mathcal{M}^{\mathrm{null}}_{g}\) is
\[\mathrm{S}^{\mathrm{null}}_{g}=\mathrm{S}^{\mathrm{null},0}_{g}\sqcup\Theta_{g, \mathrm{null}} \tag{1.1}\]
As for their analogues in \(M_{g}\), the divisors \(\Theta_{g,\mathrm{null}}\) and \(\mathrm{S}^{\mathrm{null},0}_{g}\) are useful to describe the geometry of \(\overline{S^{+}_{g}}\); see [FaV, Theorem 1.1]. Moreover, genus-4 curves with a vanishing thetanull have been studied to understand the Jacobian locus inside the moduli space of principally polarized abelian varieties; see: [GS].
### Our results
In this paper we solve the rationality problem for \(\mathrm{S}^{\mathrm{null}}_{4}\). Indeed it is well-known that the canonical model of a non-hyperelliptic curve \(C\) with an effective even theta characteristic is contained in a rank-3 quadric and, vice versa, a rank-3 quadric containing the canonical model defines a halfcanonical pencil on \(C\). In particular, a smooth non-hyperelliptic curve of genus 4 can have at most one vanishing thetanull. It is also known that \(\overline{\mathcal{M}^{\mathrm{null}}_{4}}\) is rational [FL, Theorem 3.1]. This immediately shows that \(\Theta_{4,\mathrm{null}}\) is rational. The study of \(\mathrm{S}^{\mathrm{null},0}_{4}\) is more complicated due to the fact that \(h^{0}(C,\theta)=0\) if \([C,\theta]\in\mathrm{S}^{\mathrm{null},0}_{g}\).
In this paper we show; see Theorem 7.2.2:
**Theorem 1.3.1**.: _The divisor \(\overline{\mathrm{S}^{\mathrm{null},0}_{4}}\) of \(\overline{\mathrm{S}^{+}_{4}}\) is a rational variety. In particular \(\overline{\mathrm{S}^{\mathrm{null},0}_{4}}\) is irreducible and reduced._
Finally consider the Prym moduli space \(\mathcal{R}_{4}\), which parameterises couples \((C,\eta)\) where \(\eta\) is a 2-torsion divisor. By [Ca] we know that \(\mathcal{R}_{4}\) is rational. For an account of recent progress in this field see also [V]. Our method implies that the divisor \(\mathcal{R}^{\mathrm{null}}_{4}\) of \(\mathcal{R}_{4}\), whose general points are given by those classes \([(C,\eta)]\) such that \(C\) admits a unique trigonal linear series and \(\eta\) is a nontrivial 2-torsion divisor, is rational:
**Theorem 1.3.2**.: _The divisor \(\mathcal{R}^{\mathrm{null}}_{4}\) is a rational variety._
See Theorem 7.3.1.
### The method
An even thetacharacteristic \(\theta\) is said to be an ineffective theta characteristic if the associated linear system \(|\theta|\) is empty; this is the general case. In [TZ1], [TZ2], [TZ3] we built a method to study the coarse moduli space of couples \([(C,\theta)]\in\overline{\mathrm{S}^{+}_{\mathrm{g}}}\) where \(C\) is a trigonal curve, that is, \(C\) admits a morphism \(C\to\mathbb{P}^{1}\) of degree 3, and \(\theta\) is ineffective.
We rely heavily on the empirical observation that the Hilbert scheme of rational curves \(R\) of degree \(d\) with respect to a given polarization on a (birational) Fano variety \(X\) often gives a moduli space of curves with an extra datum, which typically comes from the intersection theory inside \(X\). We learned this general philosophy from [Muk1] and [Muk2]. We reinterpreted it in [TZ1] in the case where \(X\) is the del Pezzo threefold \(B\subset\mathbb{P}^{6}\), and we used it in [TZ2] to solve a long-standing conjecture of Scorza [Sco] and in [TZ3] to show that \(\overline{\mathrm{S}^{+}_{4}}\) is birational to \(\mathbb{P}^{9}\). A variation of the method can be applied to the case where \(X\) is the singular del Pezzo threefold with a unique ordinary double point. This enabled us to show the rationality of the moduli space of one-pointed ineffective spin hyperelliptic curves; see [TZ4].
We think that this quest concerning the interplay between the birational geometry of Fano varieties and a new point of view on the description of the moduli space of (spin) curves deserves to be fully explored, especially in those cases where complicated objects can be controlled by an easy intersection theory on simple Fano varieties; see [Z] too.
#### 1.4.1. The method for trigonal spin curves
In this paper we deepen our understanding of the trigonal case where \(g=4\). Our construction of trigonal even spin curves relies on the very well-known geometry of the del Pezzo \(3\)-fold \(B\) and on the following diagram; see [FN, §2], c.f. Proposition 2.5.2:
\[\begin{array}{ccc}\mathcal{U}_{1}&\xrightarrow{\ \varphi\ }&B\\ {\scriptstyle\pi}\downarrow&&\\ \mathcal{H}^{B}_{1}&&\end{array} \tag{1.2}\]
where \(\mathcal{H}^{B}_{1}\) is the Hilbert scheme of lines of \(B\), \(\mathcal{U}_{1}\) is the associated universal family, and \(\pi\colon\mathcal{U}_{1}\to\mathcal{H}^{B}_{1}\), \(\varphi\colon\mathcal{U}_{1}\to B\) are the natural morphisms induced by the natural projections \(B\times\mathcal{H}^{B}_{1}\to\mathcal{H}^{B}_{1}\) and \(B\times\mathcal{H}^{B}_{1}\to B\), respectively. It is also known that \(\mathcal{H}^{B}_{1}\) is isomorphic to \(\mathbb{P}^{2}\) and \(\mathcal{H}^{B}_{2}\) to \(\mathbb{P}^{4}\).
Our construction goes as follows: we start with a simple object \(R_{d}\subset B\subset\mathbb{P}^{6}\), namely a rational curve of degree \(d\), and we construct two related objects \(C_{d}\subset\mathcal{U}_{1}\) and \(M_{d}\subset\mathcal{H}^{B}_{1}\), where \(C_{d}:=\varphi^{-1}(R_{d})\) and \(M_{d}:=\pi(C_{d})\). We can show that under mild generality assumptions \(C_{d}\) is a smooth curve of genus \(d-2\) and the morphism \(\pi_{|C_{d}}\colon C_{d}\to\mathcal{H}^{B}_{1}\) is the morphism \(\phi_{|\theta+\delta|}\colon C_{d}\to\mathbb{P}^{2}\), where \(\delta\) is the \(g^{1}_{3}\) given by \(\varphi_{|C_{d}}\colon C_{d}\to R_{d}\) and \(\theta\) comes from the correspondence on \(C_{d}\) given by the lines contained in \(B\) which intersect \(R_{d}\) and which mutually intersect. In [TZ2], we explicitly construct the spin curve \([(C_{d},\theta)]\in\overline{\mathbb{S}^{+}_{\mathrm{d-2}}}\). In particular we remind the reader that \(M_{d}\) is a plane curve of degree \(d\) with \(\frac{(d-2)(d-3)}{2}\) nodes.
#### 1.4.2. Sextics of conic type
More precisely, we define by induction \(\mathcal{H}^{B}_{d}\) to be the union of the components of the Hilbert scheme whose general point parameterizes a smooth rational curve of degree \(d\) on \(B\) obtained as a smoothing of the union of a general smooth rational curve of degree \(d-1\) belonging to \(\mathcal{H}^{B}_{d-1}\) and its general uni-secant line. In [TZ1], we study the behavior of the spin curve \([(C_{d},\theta)]\) starting from a general \(R_{d}\) of \(\mathcal{H}^{B}_{d}\).
Now consider the case \(d=6\). We know that \(\mathcal{H}^{B}_{6}\) is a rational variety of dimension \(12\); see [TZ3, Theorem 5.2]. Its general points parameterize smooth rational curves \(R\subset B\) of degree \(6\) inside \(\mathbb{P}^{6}\) whose linear span \(\langle R\rangle\) is \(\mathbb{P}^{6}\), but \(\mathcal{H}^{B}_{6}\) also contains general sextics lying in hyperplane sections; see: [TZ3, Proposition 6.1.1]. By the method above we construct a rational map \(\pi^{+}\colon\mathcal{H}^{B}_{6}\dashrightarrow\overline{\mathcal{S}^{+}_{4}}\) given by \([R_{d}]\xrightarrow{\pi^{+}}[(C_{d},\theta)]\). Since \(B\) is a \(\mathbb{P}(\mathrm{SL}(2,\mathbb{C}))\)-quasi-homogeneous variety (see cf. subsection 2.1), we can construct a \(\mathbb{P}(\mathrm{SL}(2,\mathbb{C}))\)-action on \(\mathcal{H}^{B}_{6}\), and this action is compatible with \(\pi^{+}\colon\mathcal{H}^{B}_{6}\dashrightarrow\overline{\mathcal{S}^{+}_{4}}\). By taking a resolution of \(\pi^{+}\) we can define a compact space \(\widetilde{\mathcal{S}^{+}_{4}}\), whose general points parameterize the general \(\mathbb{P}(\mathrm{SL}(2,\mathbb{C}))\)-orbits of \(\mathcal{H}^{B}_{6}\), and two rational maps, \(p_{\mathcal{S}^{+}_{4}}\) and \(q_{\mathcal{S}^{+}_{4}}\), such that \(p_{\mathcal{S}^{+}_{4}}\colon\mathcal{H}^{B}_{6}\dashrightarrow\widetilde{ \mathcal{S}^{+}_{4}}\) followed by \(q_{\mathcal{S}^{+}_{4}}\colon\widetilde{\mathcal{S}^{+}_{4}}\dashrightarrow \overline{\mathcal{S}^{+}_{4}}\) is the rational Stein factorisation of \(\pi^{+}\); see: [TZ3, Corollary 4.16]. Moreover \(q_{\mathcal{S}^{+}_{4}}\colon\widetilde{\mathcal{S}^{+}_{4}}\dashrightarrow \overline{\mathcal{S}^{+}_{4}}\) is of degree \(2\).
In this paper, essentially, we study (part of) the ramification divisor of \(\pi^{+}\colon\mathcal{H}^{B}_{6}\dashrightarrow\overline{\mathcal{S}^{+}_{4}}\). Indeed, by a suitable extension of the method of [TZ1], we realize that there is
a divisor \(\mathcal{H}_{\text{c.t.}}\hookrightarrow\mathcal{H}_{6}^{B}\) such that \(\pi^{+}\) is extendable to an open \(\mathbb{P}(\text{SL}(2,\mathbb{C}))\)-invariant subscheme \(\mathring{\mathcal{H}}_{\text{c.t.}}\) of \(\mathcal{H}_{\text{c.t.}}\) and \(\pi^{+}(\mathring{\mathcal{H}}_{\text{c.t.}})\hookrightarrow\overline{\mathcal{ S}_{4}^{\text{null}}}\). Indeed there is a rich geometry associated to \(\mathring{\mathcal{H}}_{\text{c.t.}}\). We prove that if \([R]\in\mathring{\mathcal{H}}_{\text{c.t.}}\) then \(R\) is contained in a unique hyperplane section \(Z\hookrightarrow B\). Moreover \(R\) is given by the general element of the linear system \(|2\lambda|\) where \(\phi_{|\lambda|}\colon Z\to\mathbb{P}^{2}\) is the blow-down of four disjoint lines \(\epsilon_{i}\subset Z\subset B\), \(i=1,...4\). Hence \(R\) comes with the polarization \(\lambda\) such that \(\phi_{|3\lambda-\epsilon_{1}-\epsilon_{2}-\epsilon_{3}-\epsilon_{4}|}\colon Z \to B\cap H\) where \(\mathbb{P}^{5}=H\) is an hyperplane of \(\mathbb{P}^{6}\). Due to its shape we call such an element \(R\in|2\lambda|\)_a rational sextic of conic type_.
#### 1.4.3. Rational spaces
Using an explicit identification of the space of smooth polarized hyperplane sections \((Z,|\lambda|)\) with an open subscheme \(U\) of the Grassmannian \(\text{G}(2,5)\) naturally associated to \(B\), see Lemma 2.3.6, we show that \(\mathring{\mathcal{H}}_{\text{c.t.}}\) is isomorphic to a projective quasi-bundle over \(U\subset G(2,5)\); see: Proposition 3.2.2. By this and by the \(\mathbb{P}(\text{SL}(2,\mathbb{C}))\)-invariant theory of \(B\) we can show that \(\mathring{\mathcal{H}}_{\text{c.t.}}//\mathbb{P}(\text{SL}(2,\mathbb{C}))\) is a rational variety; see Theorem 3.3.1. Then, using the interpretation of the geometry of lines on \(B\) as the polar geometry induced on \(\mathbb{P}^{2}\) by an \(\text{SO}(3,\mathbb{C})\)-invariant conic \(\Omega\), we show the reconstruction theorem, which says that \(\mathring{\mathcal{H}}_{\text{c.t.}}\) dominates \(\overline{\mathcal{S}_{4}^{\text{null},0}}\). More precisely, for a general point \([C,\theta]\in\mathbb{S}_{4}^{\text{null},0}\) there exists a unique \(g_{3}^{1}\) on \(C\), called \(\delta\); hence there is a well-defined morphism \(\varphi_{|\theta+\delta|}\colon C\to\mathbb{P}(H^{0}(C,\mathcal{O}_{C}(\theta+ \delta))^{\star})=\mathbb{P}^{2}\), which is known as the Prym canonical map. The intersection theory on \(B\) makes it possible to see the behaviour of the Prym canonical map, and vice versa; see: the Reconstruction Theorem 6.0.1. We think that the description of \(g_{3}^{1}\)-thetasymmetric curves given in Proposition 4.1.2, the occurrence of the \((4,6)\) lines-points configuration behind the Prym-canonical morphism, see Proposition 4.1.4, and the Reconstruction Theorem are quite interesting geometrical results in themselves.
Finally we show that \(\overline{\mathcal{S}_{4}^{\text{null},0}}\hookrightarrow\overline{\mathcal{ S}_{4}^{+}}\) is in the branch locus of \(\pi^{+}\colon\mathcal{H}_{6}^{B}\dashrightarrow\overline{\mathcal{S}_{4}^{+}}\), see Proposition 7.2.1, and then it follows that it is a rational variety, being birational to \(\mathring{\mathcal{H}}_{\text{c.t.}}//\mathbb{P}(\text{SL}(2,\mathbb{C}))\); see: Corollary 7.2.2. This result implies the rationality of \(\mathcal{R}_{4}^{\text{null}}\). Indeed our reconstruction theorem and the geometry of the Prym map give that \(\mathcal{R}_{4}^{\text{null}}\) is irreducible. Hence we obtain that \(\mathcal{R}_{4}^{\text{null}}\) is birational to \(\overline{\mathcal{S}_{4}^{\text{null}}}\), since it is easy to find a generically injective rational map \(\overline{\mathcal{S}_{4}^{\text{null}}}\dashrightarrow\mathcal{R}_{4}^{ \text{null}}\).
**Acknowledgement.** The author wants to thank Hiromichi Takagi, Igor Dolgachev and Ivan Cheltsov for their very useful hints.
## 2. Geometry and invariant theory of the quintic del Pezzo 3-fold
The smooth del Pezzo 3-fold \(B\) is known to be unique up to projective equivalence. We quickly review three different characterisations of \(B\); taking all these different points of view makes our exposition shorter.
### The del Pezzo threefold as a complete intersection
Let \(V\) be a vector space such that \(\dim_{\mathbb{C}}V=5\) and \(V^{\vee}:=\text{Hom}_{\mathbb{C}}(V,\mathbb{C})\). Let \(W\subset\bigwedge^{2}V^{\vee}\) be a 3-dimensional subvector space; that is, \([W]\in\mathbb{G}(3,\bigwedge^{2}V^{\vee})\). We will not always distinguish between the Grassmannian \(\mathbb{G}(2,V)\) and its image \(\mathbb{G}\) under the Plücker embedding inside \(\mathbb{P}(\bigwedge^{2}V)\). We set:
\[B_{W}:=\mathbb{G}\cap\mathbb{P}(\text{Ann}(W)).\]
We point out to the reader that the standard \(\operatorname{GL}(V)\)-action on \(V\) induces a \(\operatorname{GL}(V)\)-action on \(\mathbb{G}(2,V)\) which is compatible with the standard \(\operatorname{GL}(\bigwedge^{2}V)\)-action.
Note that by definition \(\mathbb{P}(\operatorname{Ann}(W))\) is a \(6\)-dimensional subspace of \(\mathbb{P}(\bigwedge^{2}V)\) and by Bertini's theorem it holds that if \(W\) is generic then \(B_{W}\) is smooth. More precisely:
**Proposition 2.1.1**.: _The complete intersection \(B_{W}\) is a smooth threefold if and only if all forms in \(W\) have rank \(4\). It also holds that all smooth threefolds \(B_{W}\) are in the same \(\operatorname{GL}(V)\)-orbit; that is, there exists a unique (up to projective equivalence) smooth threefold obtained as a transversal complete intersection of \(\mathbb{G}(2,V)\) with a six-dimensional subspace of \(\mathbb{P}^{9}\)._
Proof.: See c.f. [San, Lemma 2.1 page 27].
From now on we will denote by \(B\) the unique smooth threefold obtained as the transversal intersection of \(\mathbb{G}(2,V)\) and a \(\mathbb{P}^{6}\); \(B\) is known as the del Pezzo \(3\)-fold.
An important result in the theory of del Pezzo's threefolds concerns the image \(G^{\prime}\) inside \(\operatorname{SL}(W)\simeq\operatorname{SL}(3,\mathbb{C})\) of the stabiliser \(SL_{W}(V)\) of \(W\) by the \(\operatorname{SL}(V)\)-action on \(\mathbb{G}(3,\bigwedge^{2}V^{\vee})\). Indeed this image is naturally isomorphic to the subgroup of \(\operatorname{SL}(W)\) given by all transformations fixing a smooth conic \(\Omega\subset\mathbb{P}(W)\), c.f. [San, Page 28]. Hence \(G^{\prime}\simeq\operatorname{SL}(2,\mathbb{C})\).
#### 2.1.1. An equivariant projection of the Veronese surface
We remark that there is a natural homomorphism \(\operatorname{Sym}^{2}\bigwedge^{2}V^{\vee}\to\bigwedge^{4}V^{\vee}\) and by the isomorphism \(\mathbb{P}(\bigwedge^{4}V^{\vee})\simeq\mathbb{P}(V)\) we can construct a map:
\[\square_{W}\colon\mathbb{P}(W)\to\mathbb{P}(V) \tag{2.1}\]
obtained by the following composition of \(\operatorname{SL}(W)\)-equivariant morphisms:
\[\mathbb{P}(W)\xrightarrow{\nu_{2}}\mathbb{P}(\operatorname{Sym}^{2}(W)) \hookrightarrow\mathbb{P}(\operatorname{Sym}^{2}(\bigwedge^{2}V^{\vee}))\to \mathbb{P}(\bigwedge^{4}V^{\vee})\simeq\mathbb{P}(V).\]
**Lemma 2.1.2**.: _The map \(\square_{W}\colon\mathbb{P}(W)\to\mathbb{P}(V)\) is an \(SL_{W}(V)\)-equivariant embedding._
Proof.: Easy; c.f. see [San, page 29].
We will give later a nice interpretation of the above morphism. Since we will sometimes use it, we call \(\square_{W}\colon\mathbb{P}(W)\to\mathbb{P}(V)\) the _square embedding_.
### \(\mathbb{P}(\operatorname{SL}(2,\mathbb{C}))\)-invariant theory description of the del Pezzo threefold
In order to make some of our proofs easier to follow we present an explicit description of \(B\).
#### 2.2.1. \(\operatorname{SL}(2,\mathbb{C})\)-action on binary forms
Let \(V_{d}:=\mathbb{C}[x,y]_{d}\) be the \((d+1)\)-dimensional vector space of binary \(d\)-forms. It is well-known that \(V_{d}\) gives a model of the \((d+1)\)-dimensional irreducible representation of \(\operatorname{SL}(2,\mathbb{C})\) via the action:
\[g:=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right);\ x\xrightarrow{g}ax+by;\ y\xrightarrow{g}cx+dy\]
The above action admits an \(\operatorname{SL}(2,\mathbb{C})\)-equivariant split surjection

\[s_{l}\colon V_{m}\otimes V_{n}\to V_{m+n-2l},\ F\otimes G\xrightarrow{s_{l}}(F,G)_{l}\]

where \((F,G)_{l}\) is known as the \(l\)-th transvectant of \(F\) and \(G\); more explicitly:
\[(FG)_{l}:=\frac{(m-l)!}{m!}\frac{(n-l)!}{n!}\sum_{i=0}^{l}(-1)^{i}\left(\begin{array} []{c}l\\ i\end{array}\right)\frac{\partial^{l}F}{\partial x^{l-i}\partial y^{i}}\frac{ \partial^{l}G}{\partial x^{i}\partial y^{l-i}}\]
The transvectants define an isomorphism of \(\operatorname{SL}(2,\mathbb{C})\)-representations
\[\bigwedge^{2}V_{d}=\bigoplus_{l=1}^{[\frac{d+1}{2}]}V_{2(d+1-2l)}\]
where for each \(1\leq l\leq[\frac{d+1}{2}]\) the projection is given by \(p_{2l-1}(F\otimes G):=(F,G)_{2l-1}\). We recall that if \(l=2k-1\) and \(a,b,c,d\in\mathbb{C}\) then it holds
\[(aF+bG,cF+dG)_{2k-1}=\det\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\cdot(F,G)_{2k-1}\]
The set \(\{(F,G)_{2k-1}\}_{k}\) is called the set of combinants of \(F\) and \(G\); it is in fact associated to the pencil \(\{\lambda F+\mu G\mid[\lambda:\mu]\in\mathbb{P}^{1}\}\).
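As a concrete sanity check (not part of the original argument), the transvectant formula above can be evaluated symbolically; the quartics chosen below are arbitrary, and the last line verifies the determinant rule for the combinants.

```python
# SymPy sketch of the l-th transvectant (F,G)_l of binary forms of degrees m, n.
from math import comb, factorial
import sympy as sp

x, y = sp.symbols('x y')

def transvectant(F, G, l, m, n):
    coeff = sp.Rational(factorial(m - l), factorial(m)) * sp.Rational(factorial(n - l), factorial(n))
    s = sum((-1)**i * comb(l, i)
            * sp.diff(F, x, l - i, y, i)
            * sp.diff(G, x, i, y, l - i)
            for i in range(l + 1))
    return sp.expand(coeff * s)

F, G = x**4, x*y**3                                   # two binary quartics (m = n = 4)
print(transvectant(F, G, 1, 4, 4))                    # lies in V_6: 3*x**4*y**2/4
print(transvectant(F, G, 3, 4, 4))                    # lies in V_2: x**2/4
# Combinant rule: (2F+G, F-G)_1 = det([[2,1],[1,-1]]) * (F,G)_1 = -3*(F,G)_1.
print(sp.simplify(transvectant(2*F + G, F - G, 1, 4, 4)
                  + 3*transvectant(F, G, 1, 4, 4)))   # 0
```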
#### 2.2.2. \(\mathbb{P}(\operatorname{SL}(2,\mathbb{C}))\)-action on \(\mathbb{P}(\bigwedge^{2}V_{4})\)
Now assume \(d=4\). Note that \(\dim_{\mathbb{C}}V_{4}=5\). From now on we set \(G:=\mathbb{P}(\operatorname{SL}(2,\mathbb{C}))\). The \(\operatorname{SL}(2,\mathbb{C})\)-decomposition \(\bigwedge^{2}V_{4}\simeq V_{6}\bigoplus V_{2}\) comes equipped with the two homomorphisms \(s_{1}\colon\bigwedge^{2}V_{4}\to V_{6}\) and \(s_{3}\colon\bigwedge^{2}V_{4}\to V_{2}\), and it amounts to writing \(V_{6}\simeq\operatorname{Ker}s_{3}\) and \(V_{2}\simeq\operatorname{Ker}s_{1}\). It is easy to find equations for both subspaces of \(\bigwedge^{2}V_{4}\). In this case the Grassmannian \(\mathbb{G}\subset\mathbb{P}(\bigwedge^{2}V_{4})\) is the image of the Grassmannian \(G(2,V_{4})\) of 2-dimensional subspaces of \(V_{4}\) under the Plücker embedding. Following cf. [MU, p.505], and also by Proposition 2.1.1, the del Pezzo 3-fold \(B\) can be seen as follows:
\[B=s_{1}(\mathbb{G})\subset\mathbb{P}(V_{6}).\]
By c.f. [FN] we also know the \(\mathbb{P}(\operatorname{SL}(2,\mathbb{C}))\)-orbit decomposition
\[B=G([xy(x^{4}+y^{4})])\cup G([6x^{5}y])\cup G([x^{6}])\]
where \(\overline{G([6x^{5}y)]}=G([6x^{5}y])\cup G([x^{6}])\) is an anticanonical section of \(B\). Moreover by [MU, Lemma 1.6 p.497] \(\overline{G([6x^{5}y])}\) is singular only along \(G([x^{6}])\) and \(\overline{G([6x^{5}y])}\) is the image of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) by a linear sub-system of bidegree \((5,1)\). It is well-known that \(G([x^{6}])\) is a rational normal sextic of maximal hull.
### \(\mathbb{P}(\operatorname{SL}(2,\mathbb{C}))\)-invariant theory description of lines and conics
It is well known that \(H^{i}(B,\mathcal{O}_{B})=0\) for \(i>0\), that the first Chern-class homomorphism \(c_{1}\colon\operatorname{Pic}(B)\to H^{2}(B,\mathbb{Z})\) is an isomorphism, and that \(\operatorname{Pic}(B)=[\mathcal{O}_{B}(1)]\mathbb{Z}\), where \(\mathcal{O}_{B}(1)\) is the very ample invertible sheaf induced by the Plücker embedding \(B\hookrightarrow\mathbb{P}^{6}\). Moreover \(-K_{B}=2H\) where \(H\) is a hyperplane section. Thanks to the very ample divisor \(H\) we call a _line_ any subscheme \(l\hookrightarrow B\) with \(H\)-Hilbert polynomial \(h_{l}(t)=1+t\) and a _conic_ any subscheme \(q\hookrightarrow B\) with \(H\)-Hilbert polynomial \(h_{q}(t)=1+2t\). We denote by \(\mathcal{H}_{1}^{B}\) and \(\mathcal{H}_{2}^{B}\) respectively the Hilbert schemes of lines and of conics on \(B\) for the chosen \(H\)-polarisation of \(B\). Both Hilbert schemes are well-known; see: [FN], [Ili] and cf. [San].
We review an explicit description of them using the theory of invariants. We will freely use the identification \(\bigwedge^{2}V_{4}\simeq V_{6}\bigoplus V_{2}\). We also remind the reader that in this case the dual vector space \(V_{4}^{\star}\) of \(V_{4}\) is given by the ring of degree-4 homogeneous binary partial differential operators \(\mathbb{C}[\partial_{x},\partial_{y}]_{4}=V_{4}^{\star}\), and accordingly we have the \(\operatorname{SL}(2,\mathbb{C})\)-splitting \(\bigwedge^{2}V_{4}^{\star}\simeq V_{6}^{\star}\bigoplus V_{2}^{\star}\).
#### 2.3.1. Invariant theory description of \(\mathcal{H}^{B}_{1}\)
In the flag variety \(\mathcal{F}(1,3,V_{4})\) we consider the following subscheme:
\[\mathcal{H}_{1}:=\{[\langle\alpha\rangle\subset\langle\alpha,\beta_{0},\beta_{ 1}\rangle]\in\mathcal{F}(1,3,V_{4})\mid s_{3}(\alpha\wedge\beta_{0})=0,s_{3}( \alpha\wedge\beta_{1})=0\}\]
We define the homomorphism:
\[\sigma_{3}\colon\mathcal{H}_{1}\to\mathbb{P}(V_{2}),\ [\langle\alpha\rangle \subset\langle\alpha,\beta_{0},\beta_{1}\rangle]\mapsto[s_{3}(\beta_{0}\wedge \beta_{1})].\]
It is straightforward to show that :
**Proposition 2.3.1**.: _There exists a \(G\)-equivariant isomorphism between \(\mathcal{H}_{1}\) and \(\mathcal{H}^{B}_{1}\). Moreover the natural homomorphism \(\sigma_{3}\colon\mathcal{H}_{1}\to\mathbb{P}(V_{2})\) is a \(G\)-equivariant isomorphism which induces a \(G\)-equivariant isomorphism \(\sigma_{3}\colon\mathcal{H}^{B}_{1}\to\mathbb{P}(V_{2})\)._
Proof.: See c.f. [San, Proposition 2.20].
By [MU] we know that the anticanonical section \(\overline{G([6x^{5}y])}\) is the locus swept out by the lines \(l\subset B\) such that \(\mathcal{N}_{l/B}=\mathcal{O}_{l}(-1)\oplus\mathcal{O}_{l}(1)\). These lines are very important to understand the geometry of \(B\). Consider \(\phi_{1}\colon\mathcal{H}^{B}_{1}\to\mathbb{P}(V_{4})\), the morphism induced by the projection \(\mathcal{F}(1,3,V_{4})\to\mathbb{P}(V_{4})\); it is easy to show the following:
**Corollary 2.3.2**.: _The locus \(\Omega^{\prime}:=G([x^{2}])=\{^{\tau}[a_{0},a_{1},a_{2}]\in\mathbb{P}^{2}\mid a_{1}^{2}-a_{0}a_{2}=0\}\) is a conic inside \(\mathbb{P}(V_{2})\) and it parameterizes the locus of lines \(l\subset B\) such that \(\mathcal{N}_{l/B}=\mathcal{O}_{l}(-1)\oplus\mathcal{O}_{l}(1)\). Moreover the composition \(\phi_{1}\circ\sigma_{3}^{-1}\colon\mathbb{P}(V_{2})\to\mathbb{P}(V_{4})\) is induced by the square morphism._
Proof.: This is an easy interpretation of the square morphism given in Lemma 2.1.2, where \(V_{2}^{*}=W\subset\bigwedge^{2}V_{4}^{*}\), \(V=V_{4}\), and one uses the isomorphism \(V_{4}\simeq\bigwedge^{4}V_{4}^{*}\).
Hence we can switch from the coordinate-free presentation of subsection (2.1) to binary \(G\)-invariant theory by the following dictionary: \(V=V_{4}\), \(W:=V_{2}\) where \(V_{2}\) is seen as a subspace of \(\bigwedge^{2}V_{4}\), \(B_{W}=\mathbb{P}(V_{6})\cap\mathbb{G}\), \(\mathcal{H}^{B}_{1}=\mathbb{P}(V_{2})\), and \(\Omega^{\prime}=\Omega\).
#### 2.3.2. Invariant theory description of \(\mathcal{H}^{B}_{2}\)
To simplify notation we denote by \(K_{\beta}:=\operatorname{Ker}(s_{3}(\beta\wedge-)\colon V_{4}\to V_{2})\), where \(\beta\in V_{4}\). Since \(V_{4}^{\star}\) is the ring of homogeneous binary partial differential operators of degree \(4\), \(\mathbb{C}[\partial_{x},\partial_{y}]_{4}=V_{4}^{\star}\), we have \(V_{4}^{\star}\simeq\bigwedge^{4}V_{4}\). We recall that in general \(\mathrm{dim}K_{\alpha}=2\), but if \(\alpha\in SL_{2}(x^{4})\cup SL_{2}(x^{2}y^{2})\) then \(\mathrm{dim}K_{\alpha}=3\). Now, in the flag variety \(\mathcal{F}(2,4,V_{4})\subset\mathbb{P}(\bigwedge^{2}V_{4})\times\mathbb{P}( \bigwedge^{4}V_{4})\), we define
\[\mathcal{H}_{2}:=\{([U],[W])\in\mathcal{F}(2,4,V_{4})\mid U=\langle\alpha, \beta\rangle,\,W=\langle K_{\alpha},K_{\beta}\rangle,\,\mathrm{dim}_{\mathbb{C}}U =2,\,\mathrm{dim}_{\mathbb{C}}W=4\}.\]
It is a nice exercise in invariant theory to show that:
**Proposition 2.3.3**.: _The Hilbert scheme \(\mathcal{H}^{B}_{2}\) of conics of \(B\) is \(G\)-isomorphic to \(\mathcal{H}_{2}\). Moreover the morphism \(\mathcal{H}_{2}\to\mathbb{P}(V_{4}^{\star})\) given by the natural projection is an isomorphism which induces a \(G\)-isomorphism \(\mathcal{H}^{B}_{2}\simeq\mathbb{P}(V_{4}^{\star})\)._
Proof.: See c.f. [San, Prop. 2.32].
_Remark_.: The locus of double lines inside \(\mathbb{P}(V_{4}^{\star})\) is given by the \(G\)-orbit of \([\frac{\partial^{4}}{\partial x^{4}}]\). In particular it is a rational normal curve of degree \(4\).
#### 2.3.3. Invariant theory description of \(\mathcal{H}^{B}_{3}\)
A subscheme \(\Gamma\) of \(B\) whose \(H\)-Hilbert polynomial is \(3t+1\) is called a rational cubic. Indeed by [San, Corollary 1.39] we know that such subschemes have no embedded points and satisfy \(h^{1}(\Gamma,\mathcal{O}_{\Gamma})=0\). We denote by \(\mathcal{H}^{B}_{3}\) the corresponding Hilbert scheme. We need an explicit description of \(\mathcal{H}^{B}_{3}\).
Let \(\mathcal{F}\) be the restriction to \(B\subset G(2,V)\) of the dual of the universal bundle of rank two on \(G(2,V)\); the reader may think of \(V\) as \(V_{4}\); see section 2.1. The projective bundle \(\mathbb{P}(\mathcal{F})\subset B\times\mathbb{P}(V)\) is the family of lines in \(\mathbb{P}(V)\) parameterized by \(B\). In [C] Castelnuovo showed that \(B\) parameterizes tri-secant lines of the projected Veronese surface \(\mathbb{P}^{2}\simeq P\subset\mathbb{P}(V)=\mathbb{P}^{4}\); see: c.f. [PV, Lemma 7.1] or c.f. [San, Corollary 3.12]. By Corollary 2.3.2, \(P\) is the image of \(\mathbb{P}(V_{2})\) inside \(\mathbb{P}(V_{4})\) under the square morphism of Lemma 2.1.2. Hence we have a nice, non-obvious interpretation of the natural diagram:
\[\begin{array}{ccc}\mathbb{P}(\mathcal{F})&\xrightarrow{\ \pi_{\mathcal{F}}\ }&\mathbb{P}(V)\\ {\scriptstyle\pi_{B}}\downarrow&&\\ B&&\end{array} \tag{2.2}\]
**Lemma 2.3.4**.: _The natural projection \(\pi_{\mathcal{F}}\colon\mathbb{P}(\mathcal{F})\to\mathbb{P}(V)\) is the \(\mathbb{P}(\operatorname{SL}(2,\mathbb{C}))\)-invariant blow-up along \(P\)._
Proof.: Indeed, by [PV, Lemma 7.1], \(\pi_{\mathcal{F}}\colon\mathbb{P}(\mathcal{F})\to\mathbb{P}(V)\) is one-to-one outside \(P\) and the fiber over a point of \(P\) is \(\mathbb{P}^{1}\). Thus it is the blow-up along \(P\), since both \(\mathbb{P}(\mathcal{F})\) and \(\mathbb{P}(V)\) are smooth fourfolds. All the maps are \(\mathbb{P}(\operatorname{SL}(2,\mathbb{C}))\)-equivariant if we identify \(V=V_{4}\).
We briefly review [San, Proposition 2.46], [San, Section 3 page 57] and [San, Proposition 3.16]. Let \(E_{\mathcal{F}}\) be the \(\pi_{\mathcal{F}}\)-exceptional divisor. Let \(l\subset\mathbb{P}(V)\) be a line and \(l^{\prime}\subset\mathbb{P}(\mathcal{F})\) its strict transform. Then \(-K_{\mathbb{P}(\mathcal{F})}=\pi_{\mathcal{F}}^{\star}\mathcal{O}_{\mathbb{P}(V)}(5)-E_{\mathcal{F}}\). Thus, if \(P\cap l=\emptyset\), then \(-K_{\mathbb{P}(\mathcal{F})}\cdot l^{\prime}=5\). Let \(T_{\mathcal{F}}\) be the tautological divisor on \(\mathbb{P}(\mathcal{F})\), which is the pull-back of \(\mathcal{O}_{\mathbb{P}(V)}(1)\); see: [San, Proposition 3.10]. Then \(T_{\mathcal{F}}\cdot l^{\prime}=1\). Let \(\pi_{B}\colon\mathbb{P}(\mathcal{F})\to B\) be the natural projection and \(\mathcal{O}_{\mathbb{P}(\mathcal{F})}(H^{\prime})\) be the \(\pi_{B}\)-pull-back of \(\mathcal{O}_{B}(1)\). Then, by the canonical bundle formula for a \(\mathbb{P}^{1}\)-bundle, \(-K_{\mathbb{P}(\mathcal{F})}=2T_{\mathcal{F}}+H^{\prime}\), since \(-K_{B}=2H\) and since we know by the standard relative tautological sequence for \(\pi^{\star}_{B}(\mathcal{F})\) that \(\det\mathcal{F}=\mathcal{O}_{B}(1)\). Therefore \(H^{\prime}\cdot l^{\prime}=5-2=3\). Thus the image of \(l^{\prime}\) on \(B\) is a twisted cubic. In case \(P\cap l\neq\emptyset\), \(l\) corresponds to a degenerate twisted cubic. More precisely, if \(l\) is unisecant to \(P\) then its pull-back is \(\widetilde{l}+\zeta\), where \(\zeta\subset E_{\mathcal{F}}\) is the \(\mathbb{P}^{1}\) inside \(\mathbb{P}(\mathcal{F})\) such that \(\pi_{\mathcal{F}}(\zeta)\) is the unique point of \(P\cap l\). Hence \(H\cdot\pi_{B}(\widetilde{l})=2\) and \(H\cdot\pi_{B}(\zeta)=1\); that is, \(\pi_{B}(\zeta)\) is a line which intersects the conic \(\pi_{B}(\widetilde{l})\), but obviously they are not coplanar, since \(B\) is an intersection of quadrics. If \(l\) is bisecant to \(P\) then its pull-back is \(\widetilde{l}+\zeta+\zeta^{\prime}\) and the image \(\pi_{B}(\widetilde{l})\) is a line. Finally, if \(l\) is a trisecant line then its pull-back is \(l^{0}+\zeta+\zeta^{\prime}+\zeta^{\prime\prime}\) and \(\pi_{B}(l^{0})\) is the common point on \(B\) of the three lines \(\pi_{B}(\zeta),\pi_{B}(\zeta^{\prime}),\pi_{B}(\zeta^{\prime\prime})\). There are also some degenerations depending on degenerate intersections between \(P\) and \(l\). In any case the crucial point is that no line is contained inside \(P\).
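In other words, for a line \(l\) disjoint from \(P\), the degree of its image on \(B\) is computed by

\[H^{\prime}\cdot l^{\prime}=\big(-K_{\mathbb{P}(\mathcal{F})}-2T_{\mathcal{F}}\big)\cdot l^{\prime}=\big(\pi_{\mathcal{F}}^{\star}\mathcal{O}_{\mathbb{P}(V)}(5)-E_{\mathcal{F}}\big)\cdot l^{\prime}-2\,T_{\mathcal{F}}\cdot l^{\prime}=5-0-2=3.\]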
**Lemma 2.3.5**.: _The map \(\mathbb{G}(2,5)\to\mathcal{H}^{B}_{3}\) induced by \([l]\mapsto[\pi_{B\star}\pi^{\star}_{\mathcal{F}}(l)]\) is a \(\mathbb{P}(\operatorname{SL}(2,\mathbb{C}))\)-isomorphism._
Proof.: See the above discussion. A formal proof is given in [San, Proposition 2.46].
#### 2.3.4. The scheme of polarised hyperplane sections
In the same vein, and for later use, we need a rough description of an open set of the scheme of polarised hyperplane sections of the del Pezzo threefold \(B\).
Following [San] we define \(D_{\mathrm{tri}}\) to be the divisor of those \([K]\in\mathbb{G}(3,V)\) such that \(K\) contains a line trisecant to \(P\). By [San, Lemma 3.28] we know that for every \([K]\in\mathbb{G}(3,V)\setminus D_{\mathrm{tri}}\) the morphism \(\pi_{K}\colon S_{K}\to B_{K}\) is an isomorphism, where \(S_{K}\) is the \(\pi_{\mathcal{F}}\)-strict transform of \(K\) and \(\pi_{K}\colon S_{K}\to B_{K}\) is induced by the restriction to \(S_{K}\) of \(\pi_{B}\colon\mathbb{P}(\mathcal{F})\to B\). Set
\[\hat{\mathbb{G}}:=\{[K]\in\mathbb{G}(3,V)\mid K\cap P\ \mathrm{consists\ of}\ 4\ \mathrm{distinct\ points}\}. \tag{2.3}\]
Obviously \(\hat{\mathbb{G}}\hookrightarrow\mathbb{G}(3,V)\setminus D_{\mathrm{tri}}\) is an open embedding. We can define the morphism
\[\alpha\colon\hat{\mathbb{G}}\to\hat{\mathbb{P}}^{6},\,[K]\mapsto[H_{K}] \tag{2.4}\]
where, letting \(B_{K}=\pi_{B\star}\pi_{\mathcal{F}}^{\star}(K)\), \(H_{K}\) is the unique hyperplane of \(\mathbb{P}^{6}\) such that \(B_{|H}=B_{K}\). Let
\[\hat{\mathcal{K}}:=\{([K],b,[H])\in\hat{\mathbb{G}}\times B\times\hat{\mathbb{ P}}^{6}\mid B_{|H}=B_{K}\ \mathrm{and}\ b\in H\cap B\}\]
**Lemma 2.3.6**.: \(\hat{\mathbb{G}}\) _is isomorphic to the parameter space of points \(([H],\mathcal{O}_{Z}(\lambda))\) where \([H]\in\hat{\mathbb{P}}^{6}\), \(Z=H\cap B\) and \(\mathcal{O}_{Z}(\lambda)\) is an invertible sheaf on \(Z\) such that \(\phi_{|\lambda|}\colon Z\to\mathbb{P}^{2}\) is the blow-down of \(4\) rational curves. Moreover \(\alpha\colon\hat{\mathbb{G}}\to\hat{\mathbb{P}}^{6}\) is compatible with the natural forgetful morphism \(\hat{\mathcal{K}}\to\hat{\mathbb{P}}^{6}\)._
Proof.: Since the fibers of the natural projection \(\hat{\mathcal{K}}\to\hat{\mathbb{G}}\) are irreducible, \(\alpha\) factors through the Stein factorisation of the natural projection \(\hat{\mathcal{K}}\to\hat{\mathbb{P}}^{6}\). We only have to show that, given a smooth section \(Z\) of \(B\) inside \(\mathbb{P}^{6}\) and a polarisation \(\lambda\), there exists \([K]\in\hat{\mathbb{G}}\) such that \(Z=B_{K}\). Indeed, to give such a \(\lambda\) is equivalent to giving four disjoint lines \(e_{1},e_{2},e_{3},e_{4}\subset Z\) such that \(\phi_{|\lambda|}\colon Z\to\mathbb{P}^{2}\) is the blow-up at four points \(a_{i}\), where \(e_{i}=\phi_{|\lambda|}^{-1}(a_{i})\), \(i=1,2,3,4\). To give such four points \([e_{1}],[e_{2}],[e_{3}],[e_{4}]\in\mathcal{H}_{1}^{B}=\mathbb{P}(W)\) is equivalent to giving a pencil of conics inside \(B\), that is, a line inside \(\mathbb{P}(V^{\ast})\) via its identification with \(\mathcal{H}_{2}^{B}\) given in Proposition 2.3.3. By the construction of the square morphism and by its interpretation in the geometry of \(B\), the base locus of the corresponding pencil of hyperplanes is the claimed plane \(K\subset\mathbb{P}(V)\).
Note that we do not know whether any element \([K]\in\mathbb{G}(3,V)\) can be seen as a couple \(([H],\mathcal{O}_{Z}(\lambda))\) where \([H]\in\hat{\mathbb{P}}^{6}\), \(Z=H\cap B\) and \(\mathcal{O}_{Z}(\lambda)\) is an invertible sheaf on \(Z\). In any case we are interested in a rationality problem, and the standard \(\mathbb{P}(\operatorname{SL}(2,\mathbb{C}))\)-action on \(B\) extends to our construction in an equivariant way. We recall that we have set \(G=\mathbb{P}(\operatorname{SL}(2,\mathbb{C}))\).
**Lemma 2.3.7**.: _It holds:_
1. \(\hat{\mathbb{G}}\) _is a_ \(G\)_-invariant open subscheme of_ \(\mathbb{G}(3,V)\)_._
2. _The_ \([K]\)_-stabiliser is trivial if_ \([K]\in\hat{\mathbb{G}}\) _is a general element._
3. \(\hat{\mathbb{G}}//G\) _is birational to_ \(\mathbb{P}^{3}\)_._
Proof.: The first claim is easy. To show the second one we can go back to the interpretation given in Corollary 2.3.2. In other words, we can consider the standard \(G\)-action on binary forms and assume that \(K\cap P=\{[\alpha_{1}^{2}],...,[\alpha_{4}^{2}]\}\) where \(\alpha_{i}\in V_{2}\). In particular, w.l.o.g. we can assume that there exists \(\lambda_{i}\in\mathbb{C}\) such that
\[\alpha_{4}^{2}=\sum_{i=1}^{3}\lambda_{i}\alpha_{i}^{2} \tag{2.5}\]
Now if there exists an element \([g]\in G\) which stabilises \(K\), then \(g\) has to permute the \(\alpha_{i}\)'s. This is clearly impossible, since any three of them are linearly independent and any other relation among them similar to Equation (2.5) would give a contradiction.
Showing the third claim is the same as showing that \(\mathbb{G}(3,V_{4})//G\) is rational. Since \(G(3,V_{4})\) is \(G\)-equivariantly isomorphic to \(G(2,V_{4}^{*})\) with the dual \(G\)-action, we are reduced to showing that \(G(2,V_{4})//G\) is birational to \(\mathbb{P}^{3}\), where the action of \(G\) is induced by the standard \(\operatorname{SL}(2,\mathbb{C})\)-action on \(V_{4}\). Any element \([U]\in G(2,V_{4})\) is given by \(U=\langle f,g\rangle\), where \(f,g\) are two linearly independent quartic binary forms. By the \(G\)-action we can take \(f(x,y)=xy(x-y)(\lambda y-\mu x)\) where \([\lambda:\mu]\in\mathbb{P}^{1}\). Set \(g:=\sum_{i=0}^{4}a_{i}\left(\begin{array}{c}4\\ i\end{array}\right)x^{4-i}y^{i}\). Then we form the \(5\times 5\) anti-symmetric matrix of the element \([f\wedge g]\in\mathbb{P}(\bigwedge^{2}V_{4})\). Finally we set \(t:=\lambda/\mu\), \(a_{0}=1\), and we impose the vanishing of the \(5\) pfaffians according to the basic rules of [MU, p.505]. Now it is very easy to show that \(a_{4}\) and \(a_{3}\) depend rationally on the free variables \(t,a_{1},a_{2}\).
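The Pfaffian formalism involved can be reproduced symbolically; the SymPy sketch below generates the five \(4\times 4\) sub-Pfaffians of a general \(5\times 5\) anti-symmetric matrix and, as a sanity check, verifies that they vanish identically on a decomposable element \(f\wedge g\). It only illustrates the formalism, not the specific normalization of [MU, p.505].

```python
# Five sub-Pfaffians of a 5x5 anti-symmetric matrix (the Pluecker quadrics
# cutting out G(2, V_4) in P(Wedge^2 V_4)); illustrative sketch only.
import sympy as sp

def sub_pfaffians(M):
    """Pfaffians of the five 4x4 anti-symmetric blocks of M, one per deleted index."""
    pfs = []
    for k in range(5):
        i, j, l, m = [t for t in range(5) if t != k]
        pfs.append(M[i, j]*M[l, m] - M[i, l]*M[j, m] + M[i, m]*M[j, l])
    return pfs

f = sp.Matrix(sp.symbols('f0:5'))
g = sp.Matrix(sp.symbols('g0:5'))
M = f * g.T - g * f.T                             # matrix of the decomposable element f ^ g
print([sp.expand(p) for p in sub_pfaffians(M)])   # five zeros
```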
### The del Pezzo threefold as the variety of sums of powers of a conic
There is another neat description of \(B\) which makes transparent the meaning of the invariant conic \(\Omega^{\prime}\).
#### 2.4.1. Polarity with respect to a conic and lines of the del Pezzo \(3\)-fold
Let \(\{\check{F}_{2}=0\}\subset\check{\mathbb{P}}^{2}\) be a smooth conic. Set
\[\operatorname{VSP}\,(\check{F}_{2},3)^{o}:=\{([H_{1}],[H_{2}],[H_{3}])\mid H _{1}^{2}+H_{2}^{2}+H_{3}^{2}=\check{F}_{2}\}\subset\operatorname{Hilb}^{3} \mathbb{P}^{2},\]
where \(\mathbb{P}^{2}\) is the plane dual to \(\check{\mathbb{P}}^{2}\); thus linear forms \(H_{i}\) on \(\check{\mathbb{P}}^{2}\) (\(i=1,2,3\)) can be considered as points in \(\mathbb{P}^{2}\). Mukai showed in [Muk1] that \(B\) is isomorphic to the closed subset \(\operatorname{VSP}\,(\check{F}_{2},3):=\overline{\operatorname{VSP}\,(\check{ F}_{2},3)^{o}}\subset\operatorname{Hilb}^{3}\mathbb{P}^{2}\), where this \(\mathbb{P}^{2}\) is isomorphic to \(\mathcal{H}_{1}^{B}\). The variety \(\operatorname{VSP}\,(\check{F}_{2},3)\) has the natural action of the subgroup \(\operatorname{SO}(3,\mathbb{C})\) of the automorphism group \(\operatorname{PGL}_{3}\) of \(\mathbb{P}^{2}\) consisting of the elements which preserve \(\{\check{F}_{2}=0\}\). This group is isomorphic to \(G\), and the conic is the unique one which is \(G\)-invariant. By the definition of \(\operatorname{VSP}\,(\check{F}_{2},3)^{o}\), it is easy to see that \(G\) acts on \(\operatorname{VSP}\,(\check{F}_{2},3)^{o}\) transitively. Thus we recover also in this way that \(B\) is a quasi-homogeneous \(G\)-variety. Actually, it is proved in [PV] that \(G\) is the automorphism group of \(B\). Moreover, Mukai showed that, for a point \(b:=([H_{1}],[H_{2}],[H_{3}])\in\operatorname{VSP}\,(\check{F}_{2},3)^{o} \subset B\), the points \([H_{i}]\in\mathbb{P}^{2}\) (\(i=1,2,3\)) represent three lines through \(b\). By the definition of \(\operatorname{VSP}\,(\check{F}_{2},3)^{o}\) and by the transitivity of the action of \(G\) on \(\operatorname{VSP}\,(\check{F}_{2},3)^{o}\), it is easy to show the following claim:
**Claim 2.4.1**.: \(G\) _acts transitively on the set of unordered pairs of intersecting lines whose intersection points are contained in \(\operatorname{VSP}\,(\check{F}_{2},3)^{o}\)._
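For instance, if one takes \(\check{F}_{2}=x^{2}+y^{2}+z^{2}\) (any smooth conic can be brought to this form), the tautological decomposition \(\check{F}_{2}=H_{1}^{2}+H_{2}^{2}+H_{3}^{2}\) with \((H_{1},H_{2},H_{3})=(x,y,z)\) exhibits the point \(([x],[y],[z])\in\operatorname{VSP}\,(\check{F}_{2},3)^{o}\); and for any \(g\in\operatorname{SO}(3,\mathbb{C})\), since \(g^{\tau}g=\mathrm{id}\), the triple \((g\cdot x,g\cdot y,g\cdot z)\) is again such a decomposition, which makes the transitivity used in Claim 2.4.1 concrete.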
Let \(F_{2}\) be the quadratic form dual to \(\check{F}_{2}\) and set
\[\Omega^{\prime\prime}:=\{F_{2}=0\}\]
for the associated conic in \(\mathbb{P}^{2}\). The conic \(\Omega^{\prime\prime}\subset\mathbb{P}^{2}\) is the unique one invariant under the induced action of \(G\) on \(\mathcal{H}^{B}_{1}\). Moreover, \(G\) is exactly the closed subgroup of \(\operatorname{Aut}\mathbb{P}^{2}\simeq\operatorname{PGL}_{3}\) whose elements preserve \(\Omega^{\prime\prime}\).
In Corollary 2.3.2, see also [I, §2.5], we have shown that there exists a conic \(\Omega^{\prime}\) in \(\mathbb{P}^{2}\) such that, for \([l]\in\mathbb{P}^{2}-\Omega^{\prime}\) (resp. for \([l]\in\Omega^{\prime}\)), it holds \(\mathcal{N}_{l/B}=\mathcal{O}_{l}\oplus\mathcal{O}_{l}\) (resp. \(\mathcal{N}_{l/B}\simeq\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{ \mathbb{P}^{1}}(1)\)). Obviously \(\Omega^{\prime}\) is invariant under the action of \(G\), hence we have \(\Omega^{\prime}=\Omega^{\prime\prime}\), and so \(\Omega=\Omega^{\prime\prime}\). From now on we will not distinguish between \(\Omega^{\prime}\), \(\Omega^{\prime\prime}\) and \(\Omega\).
**Definition 2.4.2**.: A line \(l\) on \(B\) is called a special line if \(\mathcal{N}_{l/B}\simeq\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{ \mathbb{P}^{1}}(1)\).
The following Proposition is treated in [D1, 4.2] and it will play a fundamental role in the Reconstruction Theorem; see: Theorem 6.0.1.
**Proposition 2.4.3**.: _Let \(\widetilde{\Omega}\) be the symmetric bi-linear form associated to \(\Omega\). Then two lines \(l\) and \(m\) on \(B\) intersect if and only if \(\widetilde{\Omega}([l],[m])=0\). In particular there is a \(G\)-identification between \((\mathcal{H}^{B}_{1},\Omega)\) and \((\mathbb{P}^{2},Q)\) where \(Q\) is a smooth conic._
**Definition 2.4.4**.: Two points \([l],[m]\in\mathcal{H}^{B}_{1}\) are said to be polar with respect to \(\widetilde{\Omega}\) if \(\widetilde{\Omega}([l],[m])=0\).
_Remark_.: In other words two lines \(l,m\subset B\) are incident iff the corresponding points \([l],[m]\in\mathcal{H}^{B}_{1}\) are polar with respect to \(\widetilde{\Omega}\).
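As a small worked example (with \(\Omega\) taken in the coordinates of Corollary 2.3.2, where it is the conic \(a_{1}^{2}-a_{0}a_{2}=0\)), the incidence test of Proposition 2.4.3 can be sketched as follows.

```python
# NumPy sketch of the polarity/incidence test; Omega is a_1^2 - a_0*a_2 = 0.
import numpy as np

A = np.array([[0.0, 0.0, -0.5],    # symmetric matrix of a_1^2 - a_0*a_2
              [0.0, 1.0,  0.0],
              [-0.5, 0.0, 0.0]])

def lines_meet(l, m, tol=1e-12):
    """Points [l], [m] in H_1^B are polar, i.e. the lines on B meet, iff l^T A m = 0."""
    return abs(np.asarray(l, float) @ A @ np.asarray(m, float)) < tol

# [1:0:0] lies on Omega (a special line); [0:1:0] lies on the tangent line
# {a_2 = 0} to Omega at [1:0:0], so the corresponding lines on B intersect.
print(lines_meet([1, 0, 0], [0, 1, 0]))   # True
print(lines_meet([1, 0, 0], [0, 0, 1]))   # False
```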
### The \(3\)-to-\(1\) cover of \(B\) given by the universal family of lines
Our construction of spin curves relies on the geometry of the following diagram, which has been deeply studied in [FN, §2]:
\[\begin{array}{ccc}\mathcal{U}_{1}&\xrightarrow{\ \varphi\ }&B\\ {\scriptstyle\pi}\downarrow&&\\ \mathcal{H}^{B}_{1}&&\end{array} \tag{2.6}\]
where \(\pi\colon\mathcal{U}_{1}\to\mathcal{H}^{B}_{1}\) and \(\varphi\colon\mathcal{U}_{1}\to B\) denote the natural morphisms induced by the natural projections \(B\times\mathcal{H}^{B}_{1}\to\mathcal{H}^{B}_{1}\) and \(B\times\mathcal{H}^{B}_{1}\to B\), respectively.
**Notation 2.5.1**.: For an irreducible curve \(R\) on \(B\), denote by \(M(R)\) the locus in \(\mathcal{H}^{B}_{1}\) of lines intersecting \(R\), namely \(M(R):=\pi(\varphi^{-1}(R))\) with the reduced structure. Since \(\varphi\) is flat, \(\varphi^{-1}(R)\) is purely one-dimensional. If \(\deg R\geq 2\), then \(\varphi^{-1}(R)\) does not contain a fiber of \(\pi\), thus \(M(R)\) is a curve. See Proposition 2.5.2 for the description of \(M(R)\) in case \(R\) is a line of \(B\).
**Proposition 2.5.2**.: _It holds_:

1. _the union of the special lines is the branch locus_ \(B_{\varphi}\) _of_ \(\varphi\colon\mathcal{U}_{1}\to B\)_._ \(B_{\varphi}\) _has the following properties:_
   1. \(B_{\varphi}\in|-K_{B}|\)_,_
   2. \(\varphi^{*}B_{\varphi}=B_{\varphi,1}+2B_{\varphi,2}\)_, where_ \(B_{\varphi,1}\simeq B_{\varphi,2}\simeq\mathbb{P}^{1}\times\mathbb{P}^{1}\)_, and_ \(\varphi\colon B_{\varphi,1}\to B_{\varphi}\) _and_ \(\varphi\colon B_{\varphi,2}\to B_{\varphi}\) _are injective, and_
   3. _the pull-back of a hyperplane section of_ \(B\) _to_ \(B_{\varphi,1}\) _is a divisor of type_ \((1,5)\)_,_
2. _the image of_ \(B_{\varphi,2}\) _by_ \(\pi\colon\mathcal{U}_{1}\to\mathcal{H}^{B}_{1}\) _is the conic_ \(\Omega\)_,_
3. _if_ \(l\) _is a special line, then_ \(M(l)\) _is the tangent line to_ \(\Omega\) _at_ \([l]\)_. If_ \(l\) _is not a special line, then_ \(\varphi^{-1}(l)\) _is the disjoint union of the fiber of_ \(\pi\) _corresponding to_ \(l\) _and a smooth rational curve dominating a line in_ \(\mathcal{H}_{1}^{B}\)_. In particular,_ \(M(l)\) _is the disjoint union of a line and the point_ \([l]\)_. By abuse of notation, we denote by_ \(M(l)\) _the one-dimensional part of_ \(M(l)\) _for any line_ \(l\)_. Vice versa, any line in_ \(\mathcal{H}_{1}^{B}\) _is of the form_ \(M(l)\) _for some line_ \(l\)_, and_
4. _the locus swept by the lines intersecting_ \(l\) _is a hyperplane section_ \(T_{l}\) _of_ \(B\) _whose singular locus is_ \(l\)_. For every point_ \(b\) _of_ \(T_{l}-l\)_, there exists exactly one line which belongs to_ \(M(l)\) _and passes through_ \(b\)_._
Proof.: See [12, §2] and [13, §1].
## 3. Special rational curves on the quintic del Pezzo \(3\)-fold
### Rational sextics of conic type
In the Introduction we have recalled our theory of genus-\(4\) spin curves via rational sextics on \(B\); see [14]. In this paper we are interested in special loci inside \(\overline{S_{4}^{+}}\). Hence we need to study special loci in the Hilbert scheme of rational sextics of \(B\).
#### 3.1.1. Some known results on the Hilbert space of rational curves on B
For a smooth projective variety \(X\) in some projective space, let \(\mathcal{H}_{d}^{0}(X)\) be the union of components of the Hilbert scheme whose general points parameterize smooth rational curves on \(X\) of degree \(d\). Let \(\mathcal{H}_{d}^{0^{\prime}}(X)\) be the open subset of \(\mathcal{H}_{d}^{0}(X)\) parameterizing smooth rational curves on \(X\) of degree \(d\) with linear hull of maximal dimension.
We define by induction \(\mathcal{H}_{d}^{B}\) to be the union of the components of the Hilbert scheme whose general point parameterizes a smooth rational curve of degree \(d\) on \(B\) obtained as a smoothing of the union of a general smooth rational curve \(R\) of degree \(d-1\) contained in \(\mathcal{H}_{d-1}^{B}\) and a general uni-secant line to \(R\) contained in \(B\).
We know that for some \(d\), \(\mathcal{H}_{d}^{B}\) contains elements \([R]\) such that \(\langle R\rangle\) is not of maximal dimension and we can show inductively that \(\mathcal{H}_{d}^{B}\subset\overline{\mathcal{H}_{d}^{0^{\prime}}(B)}\), where we take the closure in the Hilbert scheme; see:[14]. It is known that \(\mathcal{H}_{d}^{B}\) is a rational variety if \(d\leq 5\). We also know that:
**Proposition 3.1.1**.: \(\mathcal{H}_{d}^{B}=\overline{\mathcal{H}_{d}^{0^{\prime}}(B)}\) _for \(d\leq 6\). \(\mathcal{H}_{6}^{B}\) is the closure of the Hilbert scheme of sextic normal rational curves on \(B\), and it is a rational variety of dimension \(12\)._
Proof.: See [14, Corollary 3.10 and Theorem 5.1].
#### 3.1.2. Rational curves of degree \(6\)
Let \(R\subset B\) be a smooth rational curve of degree \(6\). It is known, by the invariant theory description of \(B\), that there exist smooth rational sextics inside \(B\) which are contained in no hyperplane section. On the other hand, an easy degree count shows that if \(R\) is contained in a hyperplane section \(Z\) of \(B\), then \(Z\) is the unique one. In other words, if \(R\subset B\) then its linear span satisfies \(\langle R\rangle=\mathbb{P}^{6}\) or \(\langle R\rangle=\mathbb{P}^{5}\). Let \(Z:=\langle R\rangle\cap B\). If \(Z\) is smooth then \(Z\) is a smooth quintic del Pezzo surface. Then there exists an isomorphism \(\operatorname{Bl}_{a_{1},a_{2},a_{3},a_{4}}\mathbb{P}^{2}\to Z\), where \(\operatorname{Bl}_{a_{1},a_{2},a_{3},a_{4}}\mathbb{P}^{2}\to\mathbb{P}^{2}\) is the blow-up of \(\mathbb{P}^{2}\) at four points \(a_{1},a_{2},a_{3},a_{4}\). Denote by \(\lambda\) the pull-back of a line of \(\mathbb{P}^{2}\) and let \(e_{i}\), \(1\leq i\leq 4\), be the corresponding four exceptional curves. In the following we will often not distinguish between \(Z\) and \(\operatorname{Bl}_{a_{1},a_{2},a_{3},a_{4}}\mathbb{P}^{2}\) if no confusion can occur.
**Lemma 3.1.2**.: _Let \(R\subset B\) be a smooth rational curve of degree \(6\). Assume that \(\langle R\rangle\cap B\) is a smooth subvariety. Then, up to Cremona transformations, only the following cases do occur:_
1. \(\langle R\rangle\simeq\mathbb{P}^{6}\)_, that is_ \(\langle R\rangle\cap B=B\)_;_
2. \(\langle R\rangle\simeq\mathbb{P}^{5}\) _and_ \(R\sim 5\lambda-2(e_{1}+e_{2}+e_{3})-3e_{4}\) _on_ \(Z\)_;_
3. \(\langle R\rangle\simeq\mathbb{P}^{5}\) _and_ \(R\sim 2\lambda\) _on_ \(Z\)_._
Proof.: We only need to recall that \(|3\lambda-e_{1}-e_{2}-e_{3}-e_{4}|\) is the linear system which embeds the blow-up \(Z\) of \(\mathbb{P}^{2}\) at \(a_{1},a_{2},a_{3},a_{4}\) into \(\mathbb{P}^{5}\). The rest is an easy computation with cycles on \(Z\). We only stress that numerically the case \(\langle R\rangle\simeq\mathbb{P}^{5}\) with \(R\sim 4\lambda-e_{1}-e_{2}-e_{3}-3e_{4}\) on \(Z\) also occurs, but it is the same as (II) up to a Cremona transformation.
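For the reader's convenience, the cycle computation behind (II) and (III) is immediate: with \(H_{|Z}=3\lambda-\sum_{i=1}^{4}e_{i}\) and \(K_{Z}=-3\lambda+\sum_{i=1}^{4}e_{i}\) one gets

\[H\cdot 2\lambda=6,\qquad(2\lambda)^{2}=4,\qquad K_{Z}\cdot 2\lambda=-6,\]
\[H\cdot\big(5\lambda-2(e_{1}+e_{2}+e_{3})-3e_{4}\big)=15-6-3=6,\qquad R^{2}=25-12-9=4,\qquad K_{Z}\cdot R=-6,\]

so in both cases \(\deg R=6\) and \(g(R)=\frac{1}{2}(R^{2}+K_{Z}\cdot R)+1=0\).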
**Proposition 3.1.3**.: _The locus whose general point \([R]\) represents a general sextic rational curve of type \((\mathrm{III})\) contained in a general hyperplane section of \(B\) is a divisor of \(\mathcal{H}_{6}^{B}\)._
Proof.: By Proposition 3.1.1, \(\mathcal{H}_{6}^{B}\) is irreducible of dimension \(12\). By Lemma 3.1.2 the claim follows from an easy application of the Riemann-Roch theorem on smooth quintic del Pezzo surfaces.
**Notation 3.1.4**.: From now on we will denote by \(\mathcal{H}_{\mathrm{c.t.}}\) the divisor in \(\mathcal{H}_{6}^{B}\) which is the closure of the locus whose general element \([R]\) is a rational sextic as in (III) of Lemma 3.1.2. Its irreducibility will be proven later.
**Definition 3.1.5**.: We call a smooth sextic \(R\subset B\) such that \([R]\in\mathcal{H}_{\mathrm{c.t.}}\) a _sextic of conic type_ and \(\mathcal{H}_{\mathrm{c.t.}}\) is accordingly called _the Hilbert scheme of rational sextics of conic type_.
For later use we prove:
**Proposition 3.1.6**.: _Let \([R]\) be a general smooth sextic of conic type. Then it satisfies the following conditions:_
1. _there exist no_ \(k\)_-secant lines of_ \(R\) _on_ \(B\) _with_ \(k\geq 3\)_,_
2. _there exist exactly six bi-secant lines of_ \(R\) _on_ \(B\)_, and any of them intersects_ \(R\) _simply, and_
3. \(R\) _intersects transversally_ \(B_{\varphi}\)_._
Proof.: By Lemma 3.1.2, \(R\) is contained in a smooth hyperplane section \(Z\) of \(B\) and \(B_{\varphi|Z}\in|6\lambda-2\sum_{i=1}^{4}e_{i}|\); then it is easy to see that a general element \(R\in|2\lambda|\) is transversal to \(B_{\varphi|Z}\), that its \(6\) bisecants are ordinary ones, and that they intersect \(B_{\varphi}\) transversally.
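Explicitly, the ten lines of \(Z\) have classes \(e_{i}\) (\(i=1,\dots,4\)) and \(\lambda-e_{i}-e_{j}\) (\(1\leq i<j\leq 4\)), so for a general \(R\in|2\lambda|\) one computes

\[R\cdot e_{i}=0,\qquad R\cdot(\lambda-e_{i}-e_{j})=2,\qquad R\cdot B_{\varphi|Z}=2\lambda\cdot\Big(6\lambda-2\sum_{i=1}^{4}e_{i}\Big)=12;\]

hence the six bisecants are exactly the lines of class \(\lambda-e_{i}-e_{j}\), and \(R\) meets \(B_{\varphi}\) in \(12\) distinct points.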
### An open subscheme of the Hilbert scheme of sextics of conic type
To control the \(G\)-action on \(\mathcal{H}_{\mathrm{c.t.}}\) we need to have an explicit description of this action. Actually for our rationality theorem it will be sufficient to give an interpretation of a \(G\)-invariant open subset of \(\mathcal{H}_{\mathrm{c.t.}}\) in terms of flag varieties naturally associated to the presentation of \(B\) as a subvariety of \(G(2,5)\).
By Lemma 3.1.2 (3) we see that \([R]\in\mathcal{H}_{\mathrm{c.t.}}\) comes with a complete linear system \(|\lambda|\) on \(Z\) such that \(R\in|2\lambda|\) and \(\phi_{|\lambda|}\colon Z\to\mathbb{P}^{2}\) is the contraction of \(4\) rational curves \(e_{1},e_{2},e_{3},e_{4}\). We can construct a \(G\)-equivariant compactification of the parameter space of triples \(([H],|\lambda|,R)\) where \([H]\in\check{\mathbb{P}}^{6}\), \(Z=H\cap B\) is smooth, \(R\in|2\lambda|\), and \(\mathcal{O}_{Z}(\lambda)\) is the pull-back of the line bundle \(\mathcal{O}_{\mathbb{P}^{2}}(1)\).
Consider \(\mathring{\mathcal{H}}_{\rm c.t.}\hookrightarrow\mathcal{H}_{\rm c.t.}\), the open subscheme of \(\mathcal{H}_{\rm c.t.}\) given by those \([R]\in\mathcal{H}_{\rm c.t.}\) such that \(R\) is smooth and the hyperplane section \(\langle R\rangle\cap B=Z\) is smooth. This defines a natural morphism \(\check{\epsilon}\colon\mathring{\mathcal{H}}_{\rm c.t.}\to\check{\mathbb{P}}^{6}\) given by \([R]\stackrel{\check{\epsilon}}{\mapsto}[\langle R\rangle]\). Let \(\check{\sigma}\circ\check{\rho}=\check{\epsilon}\) be the Stein factorization of \(\check{\epsilon}\), where \(\check{\rho}\colon\mathring{\mathcal{H}}_{\rm c.t.}\to\mathring{\mathcal{T}}\) has irreducible fibers and \(\check{\sigma}\colon\mathring{\mathcal{T}}\to\check{\mathbb{P}}^{6}\) is a finite covering over the open set given by the hyperplane sections which are transversal to \(B\). By construction \(\check{\rho}\colon\mathring{\mathcal{H}}_{\rm c.t.}\to\mathring{\mathcal{T}}\) is a quasi-\(\mathbb{P}^{5}\)-bundle over \(\mathring{\mathcal{T}}\). Up to now we do not know if \(\mathring{\mathcal{T}}\) is irreducible; hence we do not yet know that \(\mathring{\mathcal{H}}_{\rm c.t.}\) is irreducible. The goal is to have an explicit construction of a suitable \(G\)-equivariant compactification \(\mathcal{T}\) of \(\mathring{\mathcal{T}}\). By definition we only have that the inclusion \(\mathring{\mathcal{H}}_{\rm c.t.}\hookrightarrow\mathcal{H}_{\rm c.t.}\) is open.
#### 3.2.1. The flag variety \(F(2,3,5)\) and the parameter space of polarized hyperplane sections
Since no sub-scheme of \(B\) with Hilbert polynomial \(3t+1\) is a planar one, we have a natural injection \(\mathcal{H}^{B}_{3}\to\mathbb{G}(\mathbb{P}^{2},\check{\mathbb{P}}^{6})\) given by the map \([\Gamma]\mapsto[\operatorname{Ann}\langle\Gamma\rangle]\) where \(\operatorname{Ann}\langle\Gamma\rangle\subset\check{\mathbb{P}}^{6}\) is the web of hyperplanes containing the \(3\)-dimensional projective subspace of \(\mathbb{P}^{6}\) spanned by \(\Gamma\). Let \(\tau\colon\mathcal{R}\to\mathcal{H}^{B}_{3}\) be the pull-back over \(\mathcal{H}^{B}_{3}\) of the universal family \(\mathcal{U}^{3}_{7}\) of \(\mathbb{G}(\mathbb{P}^{2},\check{\mathbb{P}}^{6})\). By Lemma 2.3.5 we know that \(\mathcal{H}^{B}_{3}\) is isomorphic to \(G(2,5)\). We are going to describe the \(\mathbb{P}^{2}\)-bundle \(\mathcal{R}\to G(2,V)=\mathcal{H}^{B}_{3}\). Note that \(\mathcal{R}\) is smooth and irreducible.
**Proposition 3.2.1**.: \(\mathcal{R}\) _is the flag variety \(F(2,3,V)\), namely, it parameterizes the configurations \(line\subset plane\subset\mathbb{P}(V)\). In particular \(\mathcal{R}=\mathbb{P}(\mathcal{S}^{*})\), where \(\mathcal{S}^{*}\) is the dual of the universal subbundle of rank three on \(G(3,V)\)._
Proof.: We recall that \(G(2,5)=\mathcal{H}^{B}_{3}\) by Lemma 2.3.5. For any \([l]\in G(2,5)\) we denote by \([\operatorname{Ann}\langle l\rangle]\subset\check{\mathbb{P}}^{6}\) the plane parameterising the hyperplanes of \(\mathbb{P}(V)\) containing \(l\). It is easy to see that the isomorphism \(G(2,5)\to\mathcal{H}^{B}_{3}\) of Lemma 2.3.5, given by \([l]\mapsto[\Gamma]:=[\pi_{B*}\pi^{\star}_{\mathcal{J}}(l)]\), gives an injective morphism \(F(2,3,V)\to\mathcal{R}\). Since \(\mathcal{R}\) and \(F(2,3,V)\) are smooth, irreducible and of the same dimension, it follows that \(F(2,3,V)\to\mathcal{R}\) is an isomorphism.
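A quick dimension check: \(F(2,3,V)\) fibers over \(G(2,V)\) with fiber \(\mathbb{P}(V/V_{2})\simeq\mathbb{P}^{2}\), so

\[\dim F(2,3,V)=\dim G(2,5)+2=6+2=8=\dim\mathcal{H}^{B}_{3}+2=\dim\mathcal{R},\]

since \(\tau\colon\mathcal{R}\to\mathcal{H}^{B}_{3}\) is a \(\mathbb{P}^{2}\)-bundle.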
#### 3.2.2. Explicit presentation of a compactification of \(\mathring{\mathcal{H}}_{\rm c.t.}\)
Finally we consider the space \(\mathbb{P}(S^{2}\mathcal{S}^{*})\) which is the Hilbert scheme of conics in \(\mathbb{P}(V)\). The natural morphism \(\rho_{1}\colon\mathbb{P}(S^{2}\mathcal{S}^{*})\to G(3,V)\) clearly expresses \(\mathbb{P}(S^{2}\mathcal{S}^{*})\) as a \(\mathbb{P}^{5}\)-bundle over \(G(3,V)\). We consider \(\check{\rho_{1}}\colon\check{\mathbb{P}}(S^{2}\mathcal{S}^{*})\to\mathring{ \mathbb{G}}\) its restriction over \(\mathring{\mathbb{G}}\), see Equation (2.3), and where \(\check{\mathbb{P}}(S^{2}\mathcal{S}^{*})\) parameterises only the smooth conics of \(\mathbb{P}(V)\).
**Proposition 3.2.2**.: _There is a natural \(G\)-equivariant isomorphism between \(\mathring{\mathcal{H}}_{\rm c.t.}\) and \(\check{\mathbb{P}}(S^{2}\mathcal{S}^{*})\). Moreover \(\mathring{\mathbb{G}}\) is isomorphic to \(\mathring{\mathcal{T}}\)._
Proof.: We use notations of Lemma 2.3.4 and of Lemma 2.3.5.
Consider the diagram 2.2. By definition of \(\mathcal{H}_{\rm c.t.}\) the morphism \(\mathring{\delta}\colon\check{\mathbb{P}}(S^{2}\mathcal{S}^{*})\to\mathring{\mathcal{H}}_{\rm c.t.}\hookrightarrow\mathcal{H}_{\rm c.t.}\) given by \([q]\mapsto[R]:=[\pi_{B*}\pi^{\star}_{\mathcal{J}}(q)]\) admits an inverse over \(\mathring{\mathcal{H}}_{\rm c.t.}\). This implies that \(\mathring{\mathcal{H}}_{\rm c.t.}\) is irreducible. Hence \(\mathring{\mathcal{T}}\) is irreducible. We recall the morphism \(\alpha\colon\mathring{\mathbb{G}}\to\check{\mathbb{P}}^{6}\) given in Equation (2.4) and we compose it with \(\check{\rho}_{1}\colon\check{\mathbb{P}}(S^{2}\mathcal{S}^{*})\to\mathring{\mathbb{G}}\). By our construction the Stein factorisation \(\check{\sigma}\circ\check{\rho}\) of \(\check{\epsilon}\colon\mathring{\mathcal{H}}_{\rm c.t.}\to\check{\mathbb{P}}^{6}\) and the Stein factorisation of \(\alpha\circ\check{\rho}_{1}\colon\check{\mathbb{P}}(S^{2}\mathcal{S}^{*})\to\check{\mathbb{P}}^{6}\) are \(G\)-equivariantly compatible with \(\mathring{\delta}\) and with the respective \(G\)-universal properties. Hence \(\mathring{\mathbb{G}}\) is \(G\)-isomorphic to \(\mathring{\mathcal{T}}\).
_Remark_.: By Proposition 3.2.1 and by Proposition 3.2.2 we have that the Stein factorisation of the morphism \(\mathcal{R}\to\check{\mathbb{P}}^{6}\) is given by \(\mu\colon\mathcal{R}\to\mathcal{T}\) followed by \(\sigma\colon\mathcal{T}\to\check{\mathbb{P}}^{6}\), where by Proposition 3.2.1 it holds that \(\mathcal{T}=G(3,V)\).
### Group action on rational sextics of conic type
We have seen above that \(\mathring{\mathcal{H}}_{\rm c.t.}\simeq_{G}\check{\mathbb{P}}(S^{2}\mathcal{S}^{*})\), \(\mathcal{R}=F(2,3,5)\) and \(\mathcal{T}=G(3,V)\). Moreover \(\mathcal{H}^{B}_{3}\simeq_{G}G(2,V)\) by Lemma 2.3.5.
Then to understand the birational nature of the \(G\)-action on the following diagram:
(3.1)
we can study the natural \(G\)-action on open orbits of the following one:
(3.2)
since these two diagrams are \(G\)-birational.
**Theorem 3.3.1**.: \(\check{\mathcal{H}}_{\mathrm{c.t.}}//G\) _is rational._
_First proof_.: The claim follows easily from Lemma 2.3.7 and the Lemme de descente; cf. [DN, Théorème 2.3].
## 4. Genus 4 spin curve with a vanishing theta-null
Let \(\overline{\mathrm{S}^{+}_{4}}\) be the moduli space of genus 4 spin curves. By [Cor] we know that it is a projective variety. The forgetful morphism \(\xi\colon\overline{\mathrm{S}^{+}_{4}}\to\overline{\mathcal{M}_{4}}\) exhibits it as a 136-to-1 cover of the Deligne-Mumford compactification of the moduli space of smooth genus 4 curves. Inside \(\overline{\mathcal{M}_{4}}\) there is the divisor \(\overline{\mathcal{M}_{4}^{\mathrm{null}}}\) which is the closure of the locus \(\mathcal{M}_{4}^{\mathrm{null,0}}\subset\mathcal{M}_{4}^{\mathrm{null}}\) which parameterizes the smooth genus-\(4\) curves \(C\) whose canonical model is a transversal intersection inside \(\mathbb{P}^{3}\) of a quadric cone and a smooth cubic surface and such that \(\mathrm{Aut}(C)=\mathrm{id}_{C}\). We set \(\mathrm{S}_{4}^{\mathrm{null}}\) to be the \(\xi\)-pull-back of \(\mathcal{M}_{4}^{\mathrm{null}}\), hence \(\mathrm{S}_{g}^{\mathrm{null}}=\mathrm{S}_{4}^{\mathrm{null,0}}\sqcup\Theta_{g,\mathrm{null}}\); see Equation (1.1) of the Introduction. We want to study the rationality problem of \(\mathrm{S}_{4}^{\mathrm{null,0}}\).
**Proposition 4.0.1**.: _The divisor \(\mathrm{S}_{4}^{\mathrm{null,0}}\) is reduced._
Proof.: It is well-known that \(\mathcal{M}_{4}^{\mathrm{null}}\) is reduced and irreducible and that the restriction \(\mathrm{S}_{4}^{\mathrm{null},0}\to\mathcal{M}_{4}^{\mathrm{null}}\) of the forgetful morphism is étale. Hence the general point of \(\mathrm{S}_{4}^{\mathrm{null},0}\) is smooth, and \(\mathrm{S}_{4}^{\mathrm{null},0}\) is reduced.
### The Prym canonical map
Let \([C,\theta]\in\mathrm{S}_{4}^{\mathrm{null},0}\) be a general element; in particular the automorphism group of \(C\) is the trivial one. Let \(\delta\) be the unique \(g_{3}^{1}\) on \(C\). The map \(\varphi_{|\theta+\delta|}\colon C\to\mathbb{P}(H^{0}(C,\mathcal{O}_{C}(\theta+\delta))^{\star})\) is known as the Prym canonical map, and it is known that \(\theta-\delta\) is a \(2\)-torsion divisor.
#### 4.1.1. The Prym canonical map is a morphism
Actually \(\varphi_{|\theta+\delta|}\) can be geometrically interpreted thanks to our theory. Indeed we can recover it via the restriction to \(\varphi^{-1}(R)\) of the diagram (2.6) where \([R]\in\mathcal{H}\). First we show:
**Lemma 4.1.1**.: _For a general \([C,\theta]\in\mathrm{S}_{4}^{\mathrm{null},0}\) the linear system \(|\delta+\theta|\) gives a morphism \(\phi_{|\delta+\theta|}\colon C\to\mathbb{P}^{2}\)._
Proof.: Since \(\theta\) is ineffective, \(h^{0}(C,\mathcal{O}_{C}(\theta+\delta))=3\). If \(|\delta+\theta|\) had a base point \(p\) then \(h^{0}(C,\mathcal{O}_{C}(\theta+\delta-p))=3\). This would imply \(h^{0}(C,\mathcal{O}_{C}(\theta))\geq 1\); a contradiction.
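The value \(h^{0}(C,\mathcal{O}_{C}(\theta+\delta))=3\) used here is the usual Riemann-Roch count: since \(2\theta\sim K_{C}\),

\[h^{0}(C,\mathcal{O}_{C}(\theta+\delta))-h^{1}(C,\mathcal{O}_{C}(\theta+\delta))=\deg(\theta+\delta)+1-g=6+1-4=3,\]

and \(h^{1}(C,\mathcal{O}_{C}(\theta+\delta))=h^{0}(C,\mathcal{O}_{C}(K_{C}-\theta-\delta))=h^{0}(C,\mathcal{O}_{C}(\theta-\delta))=0\), because \(\theta-\delta\) has degree \(0\) and is nontrivial, \(\theta\) being ineffective while \(\delta\) is effective.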
#### 4.1.2. Thetasymmetric curves
We need to understand some geometry of the morphism \(\phi_{|\delta+\theta|}\colon C\to\mathbb{P}^{2}\). Our argument needs the following Proposition which has its own interest:
**Proposition 4.1.2**.: _Let \(\Gamma\) be a smooth non-hyperelliptic curve of genus \(4\) with two different \(g_{3}^{1}\)'s \(\delta\) and \(\delta^{\prime}\). Let \(\theta\) be an ineffective theta characteristic. Then \(|\theta+\delta|\) and \(|\theta+\delta^{\prime}|\) are base point free and the images \(M\) and \(M^{\prime}\) of \(\Gamma\) defined, respectively, by these linear systems are plane sextic curves. Moreover \(h^{0}(\Gamma,\mathcal{O}_{\Gamma}(\theta+\delta-\delta^{\prime}))>0\), which is equivalent to \(h^{0}(\Gamma,\mathcal{O}_{\Gamma}(\theta+\delta^{\prime}-\delta))>0\), if and only if \(M\) and \(M^{\prime}\) have triple points. If this condition is satisfied, then \(h^{0}(\Gamma,\mathcal{O}_{\Gamma}(\theta+\delta-\delta^{\prime}))=h^{0}( \Gamma,\mathcal{O}_{\Gamma}(\theta+\delta^{\prime}-\delta))=1\) and \(M\) and \(M^{\prime}\) have a unique ordinary triple point respectively._
Proof.: By the symmetry of \(\delta\) and \(\delta^{\prime}\) it suffices to prove the assertions for \(|\theta+\delta|\) and \(M\). It is easy to see that \(h^{0}(\Gamma,\mathcal{O}_{\Gamma}(\theta+\delta))=3\). Since \(\Gamma\) is not hyperelliptic, \(M\) has no quadruple point. Assume that \(h^{0}(\Gamma,\mathcal{O}_{\Gamma}(\theta+\delta-\delta^{\prime}))>0\). Let \(\eta\in|\theta+\delta-\delta^{\prime}|\). We have \(\eta+\delta^{\prime}\sim\pi^{\ast}\mathcal{O}_{M}(1)\). Since \(\deg\eta=3\) and \(\dim|\delta^{\prime}|=1\), the divisor \(\eta\) consists of three points which are the pull-back of a triple point. By reversing the argument, we see that the divisor consisting of the pull-back of a triple point is a member of \(|\theta+\delta-\delta^{\prime}|\). Since \(M\) has only a finite number of triple points, \(M\) has actually a unique triple point and \(h^{0}(\Gamma,\mathcal{O}_{\Gamma}(\theta+\delta-\delta^{\prime}))=1\).
We think that genus \(4\) curves as those of Proposition 4.1.2 deserve a name:
**Definition 4.1.3**.: Let \(\delta\) and \(\delta^{\prime}\) be two distinct \(g_{3}^{1}\) on a curve \(C\) of genus \(4\). The spin curve \([C,\theta]\) is called \(g_{3}^{1}\)_-thetasymmetric_ if \(h^{0}(C,\mathcal{O}_{C}(\theta+\delta-\delta^{\prime}))=1\).
_Remark_.: Actually \(g_{3}^{1}\)-thetasymmetric curves correspond to the rational sextics of Lemma 3.1.2 (II).
#### 4.1.3. The image of the Prym canonical map
Now we can describe the image \(M\) of \(C\) by the morphism \(\phi_{|\delta+\theta|}\colon C\to\mathbb{P}^{2}\) where \([C,\theta]\in\mathrm{S}_{4}^{\mathrm{null},0}\); more precisely:
**Proposition 4.1.4**.: _Let \([C,\theta]\in\mathrm{S}_{4}^{\mathrm{null},0}\) be a general element. Then the image \(M\) of the morphism \(\phi_{|\theta+\delta|}\colon C\to\mathbb{P}^{2}\) is a sextic with six nodes which are given by the six points of the \((4,6)\) configuration associated to \(4\) lines \(L_{1},L_{2},L_{3},L_{4}\) in general position._
Proof.: By Lemma 4.1.1, if \(\deg(M)<6\) then \(\deg(M)=2\) or \(3\), since \(\deg(\mathcal{O}_{C}(\theta+\delta))=6\). We distinguish these two cases.
If \(\deg(M)=2\) then \(\theta+\delta=2\delta\), a contradiction.
If \(\deg(M)=3\) and \(M\) is singular then \(C\) is hyperelliptic, a contradiction to the generality of \([C,\theta]\in\mathcal{S}_{4}^{\mathrm{null},0}\).
If \(\deg(M)=3\) and \(M\) is smooth, then \(C\) is bielliptic and again we exclude this case by the generality of \([C,\theta]\) since the bielliptic locus is \(6\)-dimensional.
We have shown that \(\deg(M)=6\). Since \(C\) is not hyperelliptic, the maximal order of a singular point of \(M\) is \(3\). On the other hand if \(M\) had a singular point of multiplicity \(3\) then by Proposition 4.1.2, \(C\) would be \(g_{3}^{1}\)-thetasymmetric, a contradiction. Hence \(M\) is a sextic and its singular points are nodes. Since \(g(C)=4\) and the arithmetic genus is \(p_{a}(M)=10\), we have that \(M\) has exactly \(6\) points of multiplicity \(2\). Now we show that these six points \(n_{ij}\), where \(i,j=1,2,3,4\), \(i<j\), are in a special position. We denote by \(\mathfrak{T}\) the set of the transpositions \((i,j)\) where \(4\geq j>i\geq 1\).
Let \(\sigma\colon S\to\mathbb{P}^{2}\) be the blow-up along the six points \(n_{ij}\), \((i,j)\in\mathfrak{T}\). Then
\[h_{|C}\sim\theta+\delta\]
where \(h\) is the total \(\sigma\)-transform of a line. Set \(E_{ij}:=\sigma^{-1}(n_{ij})\) and clearly \(C\in|6h-2\sum_{(i,j)\in\mathfrak{T}}E_{ij}|\). Since there is a unique \(g_{3}^{1}\), by adjunction on \(S\) it holds that \(2\delta\sim\omega_{C}\sim(3h-\sum_{(i,j)\in\mathfrak{T}}E_{ij})|_{C}\). Set
\[\alpha:=(\sum_{(i,j)\in\mathfrak{T}}E_{ij})_{|C}.\]
Since \(h_{|C}\sim\theta+\delta\) it holds that \(2\delta\sim 3\theta+3\delta-\alpha\). Then \(\alpha\sim 2\theta+(\theta+\delta)\sim\omega_{C}+h_{|C}\sim 4h_{|C}-\alpha\).
_Claim:_\(H^{0}(S,\mathcal{O}_{S}(4h-\sum_{(i,j)\in\mathfrak{T}}E_{ij}))=H^{0}(C,\mathcal{O}_{C}(\alpha))\). Indeed since \(\alpha\sim(4h-\sum_{(i,j)\in\mathfrak{T}}E_{ij})_{|C}\), by the exact sequence giving the sheaf \(\mathcal{O}_{C}\) as an \(\mathcal{O}_{S}\)-module we obtain the following exact sequence:
\[0\to\mathcal{O}_{S}(-2h+\sum_{(i,j)\in\mathfrak{T}}E_{ij})\to\mathcal{O}_{S}( 4h-\sum_{(i,j)\in\mathfrak{T}}E_{ij})\to\mathcal{O}_{C}(\alpha)\to 0 \tag{4.1}\]
Since \(H^{0}(S,\mathcal{O}_{S}(-2h+\sum_{(i,j)\in\mathfrak{T}}E_{ij}))=0\) and since by Serre's duality \(H^{2}(S,\mathcal{O}_{S}(-2h+\sum_{(i,j)\in\mathfrak{T}}E_{ij}))=0\), the Riemann-Roch theorem on \(S\) gives \(-h^{1}(S,\mathcal{O}_{S}(-2h+\sum_{(i,j)\in\mathfrak{T}}E_{ij}))=\chi(\mathcal{O}_{S}(-2h+\sum_{(i,j)\in\mathfrak{T}}E_{ij}))=0\). This immediately shows our claim.
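Spelled out, with \(D:=-2h+\sum_{(i,j)\in\mathfrak{T}}E_{ij}\) and \(K_{S}=-3h+\sum_{(i,j)\in\mathfrak{T}}E_{ij}\):

\[D^{2}=4-6=-2,\qquad D\cdot K_{S}=6-6=0,\qquad\chi(\mathcal{O}_{S}(D))=1+\tfrac{1}{2}\bigl(D^{2}-D\cdot K_{S}\bigr)=0,\]

which, combined with the vanishing of \(h^{0}\) and \(h^{2}\), gives \(h^{1}(S,\mathcal{O}_{S}(D))=0\).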
Finally we have only to point out that the divisor \(\sum_{(i,j)\in\mathfrak{T}}E_{ij}\) itself cuts the divisor \(\alpha\) on \(C\). By the isomorphism \(H^{0}(S,\mathcal{O}_{S}(4h-\sum_{(i,j)\in\mathfrak{T}}E_{ij}))=H^{0}(C,\mathcal{O}_{C}(\alpha))\) it follows that there exists a unique \(\varsigma\in H^{0}(S,\mathcal{O}_{S}(4h-\sum_{(i,j)\in\mathfrak{T}}E_{ij}))\) such that \(\varsigma_{|C}=\alpha\). By tensoring the sequence (4.1) by \(\mathcal{O}_{S}(-\sum_{(i,j)\in\mathfrak{T}}E_{ij})\) this means that \(h^{0}(S,\mathcal{O}_{S}(4h-2\sum_{(i,j)\in\mathfrak{T}}E_{ij}))=1\), since \(h^{0}(S,\mathcal{O}_{S}(-2h))=h^{1}(S,\mathcal{O}_{S}(-2h))=0\). This shows that the six points lie three by three on four lines, since \(M\) is irreducible.
**Lemma 4.1.5**.: _Let \([C,\theta]\in\mathrm{S}_{4}^{\mathrm{null},0}\) be a general element and let \(M\) be the image of the Prym canonical morphism \(\phi_{|\theta+\delta|}\colon C\to\mathbb{P}^{2}\). Let \(\sigma\colon S\to\mathbb{P}^{2}\) be the blow-up of \(\mathbb{P}^{2}\) at the six nodes \(n_{ij}\in\mathbb{P}^{2}\) of \(M\), let \(E_{ij}:=\sigma^{-1}(n_{ij})\) and let \(E_{ij|C}:=a_{ij}+b_{ij}\), where \((i,j)\in\mathfrak{T}\). Let \(a_{ij}+a_{ij}^{1}+a_{ij}^{2}\) and \(b_{ij}+b_{ij}^{1}+b_{ij}^{2}\) be the two distinct effective divisors of the unique trigonal series on \(C\) which contains \(a_{ij}\) and respectively \(b_{ij}\)
_Then \(\{\sigma(a^{1}_{ij}),\sigma(a^{2}_{ij}),\sigma(b^{1}_{ij}),\sigma(b^{2}_{ij})\}\) is a set of collinear points on a line \(L_{ij}\) passing through the point \(n_{rs}\) where \(\{i,j\}\cap\{r,s\}=\emptyset\) and \(\{i,j,r,s\}=\{1,2,3,4\}\)._
Proof.: The proof is quite easy. We need to show that \(h^{0}(C,\mathcal{O}_{C}(h_{|C}-a^{1}_{ij}-a^{2}_{ij}-b^{1}_{ij}-b^{2}_{ij}))>0\). Indeed let \(a+b+c+d\) be the unique effective divisor contained inside \(|\theta+a_{ij}|\) and let \(a^{\prime}+b^{\prime}+c^{\prime}+d^{\prime}\) be the unique effective divisor contained inside \(|\theta+b_{ij}|\). Since \(2\theta\sim K_{C}\) then
\[a+b+c+d+a^{\prime}+b^{\prime}+c^{\prime}+d^{\prime}\sim K_{C}+E_{ij|C}.\]
On the other hand \(h_{|C}\sim\theta+\delta\) where \(h\) is the pull-back of the line of \(\mathbb{P}^{2}\). Then \(a+b+c+d=h_{|C}-a^{1}_{ij}-a^{2}_{ij}\) and \(a^{\prime}+b^{\prime}+c^{\prime}+d^{\prime}=h_{|C}-b^{1}_{ij}-b^{2}_{ij}\); that is:
\[K_{C}+E_{ij|C}\sim 2h_{|C}-a^{1}_{ij}-a^{2}_{ij}-b^{1}_{ij}-b^{2}_{ij}. \tag{4.2}\]
Finally by adjunction \(K_{C}\sim 3h_{|C}-(\sum_{(l,m)\in\mathfrak{T}}E_{lm})_{|C}=3h_{|C}-\alpha\). Then by equation (4.2) we have:
\[3h_{|C}-\alpha+E_{ij|C}\sim 2h_{|C}-a^{1}_{ij}-a^{2}_{ij}-b^{1}_{ij}-b^{2}_{ij}.\]
Now we cancel \(h_{|C}\) on both sides of the above equality to obtain:
\[2h_{|C}-\alpha+E_{ij|C}=2h_{|C}-\sum_{(lm)\neq(ij)}E_{lm|C}\sim h_{|C}-a^{1}_{ ij}-a^{2}_{ij}-b^{1}_{ij}-b^{2}_{ij}\]
By Proposition 4.1.4 the first claim follows, since there exists a (unique) conic through \(5\) points. Let us call \(T_{34}\) the unique line such that \(\sigma^{\star}(T_{34})\) contains \(a^{1}_{12},a^{2}_{12},b^{1}_{12},b^{2}_{12}\). We show that \(n_{34}\in T_{34}\). Indeed the effective divisor
\[2h_{|C}-\alpha+E_{12|C}=h_{|C}-(E_{13}+E_{23}+E_{34})_{|C}+h_{|C}-(E_{14}+E_{2 4}+E_{34})_{|C}+E_{34|C}.\]
Since \(h_{|C}-(E_{13}+E_{23}+E_{34})_{|C}\) and \(h_{|C}-(E_{14}+E_{24}+E_{34})_{|C}\) are disjoint from \(C\) it follows that
\[E_{34|C}=h_{|C}-a^{1}_{12}-a^{2}_{12}-b^{1}_{12}-b^{2}_{12}.\]
The remaining \(5\) cases can be treated identically.
## 5. Construction of genus \(4\) spin curve with a vanishing theta-null via sextics of conic type
We restrict Diagram (2.6) to a general rational sextic of conic type \(R\):
(5.1)
where \(C(R):=\varphi^{-1}(R)\) and \(M(R):=\pi(C(R))\).
#### 5.0.1. The curve of genus \(4\)
By Lemma 3.1.2 we know that we can realise \(R\subset B\) as a curve \(R\subset Z=\langle R\rangle\cap B\) where on \(Z\) there is a polarisation \(|\lambda|\) such that \(R\in|2\lambda|\) and \(\phi_{|\lambda|}\colon Z=\operatorname{Bl}_{a_{1},a_{2},a_{3},a_{4}}\mathbb{P} ^{2}\to\mathbb{P}^{2}\). We set \(e_{i}:=\phi^{-1}_{|\lambda|}(a_{i})\), \(i=1,...,4\).
**Lemma 5.0.1**.: _Let \([R]\in\mathcal{H}_{\mathrm{c.t.}}\) be a general element. The scheme \(C(R)\) is a smooth curve of genus \(4\) contained inside the universal family of lines of \(B\)._
Proof.: By Proposition 2.5.2 (1) and by Proposition 3.1.6 it follows that \(C(R)\) is smooth and the ramification divisor of \(\varphi_{|C(R)}\colon C(R)\to R\subset B\) is simple. Since \(B_{\varphi}\in|-K_{B}|\) and \(\deg R=6\), we obtain \(g(C(R))=4\) by Hurwitz's formula.
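Explicitly, recalling that \(-K_{B}=2H\) on the del Pezzo \(3\)-fold \(B\), the ramification divisor of the \(3\)-to-\(1\) cover has degree \(B_{\varphi}\cdot R=2\deg R=12\), so Hurwitz's formula gives

\[2g(C(R))-2=3\bigl(2g(R)-2\bigr)+12=-6+12=6,\qquad\text{hence }g(C(R))=4.\]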
#### 5.0.2. The geometry of the plane model
Now we study the morphism \(\pi_{|C(R)}\colon C(R)\to M(R)\).
**Lemma 5.0.2**.: _Let \([R]\in\mathcal{H}_{\mathrm{c.t.}}\) be a general element. Then \(M(R)\) is an irreducible sextic with \(6\) nodes. In particular \(R\) has exactly \(6\) bi-secant lines on \(B\)._
Proof.: By the standard geometry of smooth del Pezzo surfaces of degree \(5\), the lines \(\beta_{ij}\in|\lambda-e_{i}-e_{j}|\) where \(1\leq i<j\leq 4\) are the six bisecants of \(R\). By Proposition 2.5.2 (4) it follows that \(M(R)\) is a plane sextic. Indeed \(L_{[l]}\cdot M(R)=6\) where \(L_{[l]}\subset\mathcal{H}_{1}^{B}=\mathbb{P}^{2}\) is the locus which parameterizes those lines \(m\subset B\) such that \(m\cap l\neq\emptyset\). Hence \(p_{a}(M)=10\). By generality of \(R\) the curve \(M(R)\) does not have any other singular point except those \(6\) nodes due to the \(6\) bisecants of \(R\). Hence \(\pi_{|C(R)}\colon C(R)\to M(R)\) is the normalisation morphism by Lemma 5.0.1.
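The genus count is consistent with Lemma 5.0.1: a plane sextic has arithmetic genus

\[p_{a}(M(R))=\tfrac{1}{2}(6-1)(6-2)=10,\]

and each of the \(6\) bisecants of \(R\) produces one node, so the normalisation \(C(R)\) has genus \(10-6=4\).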
#### 5.0.3. The associated thetacharacteristic
Now we show that our method gives an interpretation of the Prym canonical map. In order to help the reader we recall and we fix notation.
**Notation 5.0.3**.: If \(l\subset B\) is a line of \(B\) we denote by \([l]\) its corresponding point inside \(\mathcal{H}_{1}^{B}\). We denote by \(L_{[l]}\) the line inside \(\mathcal{H}_{1}^{B}\) parameterising the lines of \(B\) which intersect \(l\). We have set \(Z:=\mathrm{Bl}_{a_{1},a_{2},a_{3},a_{4}}\mathbb{P}^{2}\). In particular \(Z\) can be identified with its image \(\langle R\rangle\cap B\). As always we denote by \([\epsilon_{i}]\in\mathcal{H}_{1}^{B}\) the point which corresponds to the line \(\epsilon_{i}:=\phi_{|\lambda|}^{-1}(a_{i})\subset Z\), where \(i=1,2,3,4\) and \(\phi_{|\lambda|}\colon Z\to\mathbb{P}^{2}\) is the blow-up. We also denote by \([\beta_{ij}]\in\mathcal{H}_{1}^{B}\) the point which corresponds to the unique element \(\beta_{ij}\) of \(|\lambda-\epsilon_{i}-\epsilon_{j}|\); in the sequel instead of writing \(1\leq i<j\leq 4\) we often write \((i,j)\in\mathfrak{T}\), where \((i,j)\) is the corresponding transposition.
**Proposition 5.0.4**.: _Let \([R]\in\mathcal{H}_{\mathrm{c.t.}}\) be a general element. The morphism_
\[\pi_{|C(R)}\colon C(R)\to M(R)\subset\mathcal{H}_{1}^{B}\simeq\mathbb{P}^{2}\]
_is induced by the linear system \(|\delta+\theta(R)|\) where \(\theta(R)\) is an ineffective theta characteristic over \(C(R)\). Moreover \([C(R)]\in\mathcal{M}_{g}^{\mathrm{null}}\), and \([(C(R),\theta(R)]\in\mathrm{S}_{g}^{\mathrm{null},0}\)._
Proof.: Set \(C=C(R)\) and \(M=M(R)\). The six nodes of \(M\) are in special position since \([\beta_{ij}]\in L_{[\epsilon_{i}]}\cap L_{[\epsilon_{j}]}\), \((i,j)\in\mathfrak{T}\). It easily follows that if \(\sigma\colon S\to\mathcal{H}_{1}^{B}=\mathbb{P}^{2}\) is the blow-up at the six points \([\beta_{ij}]\in\mathcal{H}_{1}^{B}\), \(E_{ij}:=\sigma^{-1}([\beta_{ij}])\), \((i,j)\in\mathfrak{T}\) and \(|h|\) is the linear system giving \(\sigma\colon S\to\mathcal{H}_{1}^{B}\), then \(C\in|6h-2\sum_{(i,j)\in\mathfrak{T}}E_{ij}|\) and \(h^{0}(C,\mathcal{O}_{C}(h_{|C}))=3\). We can identify \((\mathcal{H}_{1}^{B},\{[\epsilon_{1}],[\epsilon_{2}],[\epsilon_{3}],[\epsilon_{4}]\})\) with \(\mathbb{P}^{2}\) endowed with the standard projective frame.
To get information on \(S\), we now switch to the polarised hyperplane section \((Z,|\lambda|)\) which contains \(R\). Let \(\delta\) be the \(g_{3}^{1}\) on \(C\) given by the \(3\)-to-\(1\) covering \(\varphi_{|C}\colon C\to R\subset B\). Since on \(Z\) it holds that \((\lambda-\epsilon_{1}-\epsilon_{2})\cdot(\lambda-\epsilon_{3}-\epsilon_{4})=1\), it easily follows that the line \(L_{[\beta_{12}]}\subset\mathcal{H}_{1}^{B}\) parameterising the lines of \(B\) which intersect \(\beta_{12}\subset B\) passes through the node of \(M\) supported on \([\beta_{34}]\) and through four other points \(p,q,r,s\). We set \(C\cap E_{12}=\{a_{12},b_{12}\}\subset S\). Now we stress that the forgetful morphism \(\pi_{|C}\colon C\to\mathcal{H}_{1}^{B}\) coincides with \(\sigma_{|C}\). Hence by the geometry of \(B\) it follows that
\[2\delta\sim a_{12}+b_{12}+p+q+r+s=(h-E_{34}+E_{12})_{|C}\]
and w.l.o.g. we can set \(a_{12}+p+q\sim b_{12}+r+s\sim\delta\).
The generality of \(R\) implies that the hyperplane section \(Z\) is a general one. Then it follows that \(h^{0}(C,\mathcal{O}_{C}(h_{|C}-\delta))=0\) since the images by \(\sigma\colon S\to\mathcal{H}_{1}^{B}\) of \(a_{12},p,q\)
are not collinear. Finally we claim that
\[\theta:=h_{|C}-\delta\]
is an ineffective theta characteristic. By adjunction \(K_{C}\sim(3h-\sum_{(i,j)\in\mathfrak{T}}E_{ij})_{|C}\) and by generality \(C\) does not intersect the strict transform of any of the lines containing three nodes of \(M\). This implies that
\[(3h-\sum_{(i,j)\in\mathfrak{T}}E_{ij})_{|C}\sim(2h-E_{24}-E_{34}-E_{23})_{|C} \sim(h+E_{12}-E_{34})_{|C}\]
We set \(\theta(R):=\theta\) and the claim follows.
## 6. Reconstruction of sextics of conic type via genus-4 spin curves with a vanishing theta-null
We are ready to show the main technical result of this paper. The idea is to use \(\phi_{|\delta+\theta|}\colon C\to\mathbb{P}^{2}=\mathbb{P}(H^{0}(C,\mathcal{O}_{C}(\delta+\theta))^{\vee})\) to identify \(\mathbb{P}^{2}\) and a suitable conic with the couple \((\mathcal{H}_{1}^{B},\Omega)\), in order to force the data coming from the Prym canonical map to produce \([R]\in\mathring{\mathcal{H}}_{\text{c.t.}}\) such that \([(C,\theta)]=[(C(R),\theta(R))]\).
Let \([C,\theta]\in\mathrm{S}_{4}^{\text{null},0}\) be a general element and let \(M\subset\mathbb{P}^{2}\) be the sextic with six nodes given by Proposition 4.1.4. Denote by \(L_{1},L_{2},L_{3},L_{4}\) the four lines which give the \((4,6)\) configuration, see also Lemma 4.1.5. We set
\[\{n_{ij}\}=L_{i}\cap L_{j}\]
where \((i,j)\in\mathfrak{T}\). Let \(\sigma\colon S\to\mathbb{P}^{2}\) be the blow-up at the points \(n_{ij}\) and set \(E_{ij}:=\sigma^{-1}(n_{ij})\) where \((i,j)\in\mathfrak{T}\). We notice that \(C\hookrightarrow S\) and we set
\[E_{ij|C}:=a_{ij}+b_{ij},\ (i,j)\in\mathfrak{T}.\]
If \(\delta\) denotes the unique \(g_{3}^{1}\) over \(C\) we also set \(\delta\sim a_{ij}+a_{ij}^{1}+a_{ij}^{2}\), \(\delta\sim b_{ij}+b_{ij}^{1}+b_{ij}^{2}\)\((i,j)\in\mathfrak{T}\). We set
\[M:=\sigma(C).\]
**Theorem 6.0.1**.: **(Reconstruction theorem)** _Let \([C,\theta]\in\mathrm{S}_{4}^{\text{null},0}\) be a general element. Then there exists \([R]\in\mathring{\mathcal{H}}_{\text{c.t.}}\) such that \([C,\theta]=\pi_{\mathcal{S}_{4}^{+}}([R])\)._
Proof.: Consider the blow-up \(\sigma\colon S\to\mathbb{P}^{2}\) which induces the Prym canonical map on \(C\). By Lemma 4.1.5 there exists a line \(L_{ij}\subset\mathbb{P}^{2}\) such that \(n_{rs}\in L_{ij}\) and
\[\sigma(a_{ij}^{1}),\sigma(a_{ij}^{2}),\sigma(b_{ij}^{1}),\sigma(b_{ij}^{2}) \in L_{ij}\]
where \((i,j),(r,s)\in\mathfrak{T}\), \(\{i,j\}\cap\{r,s\}=\emptyset\).
Now we consider the four lines \(L_{1},...,L_{4}\subset\mathbb{P}^{2}\) of Proposition 4.1.4. We call \(\{L_{1},...,L_{4}\}\)_the configuration of lines associated to_\(M\). For any smooth conic \(Q\in\mathbb{P}(H^{0}(\mathbb{P}^{2},\mathcal{O}_{\mathbb{P}^{2}}(2)))\) we set:
\[e_{t}(Q):=\mathrm{Pol}_{Q}(L_{t}),\ t=1,2,3,4, \tag{6.1}\]
where \(\mathrm{Pol}_{Q}(...)\colon\mathbb{P}^{2\vee}\to\mathbb{P}^{2}\) is the standard polarisation morphism induced by \(Q\). By Proposition 2.4.3 we can define an identification \((\mathbb{P}^{2},Q)\leftrightarrow_{Q}(\mathcal{H}_{1}^{B},\Omega)\) where given a point \(x\in\mathbb{P}^{2}\) we write: \(x\leftrightarrow_{Q}[l_{x}]\), if \(l_{x}\subset B\) is the line inside \(B\) corresponding to the point \([l_{x}]\in\mathcal{H}_{1}^{B}\). In particular letting
\[\mathbb{P}^{2}\ni e_{t}(Q)\leftrightarrow_{Q}[\epsilon_{t}]\in\mathcal{H}_{1} ^{B}\]
we can identify a unique line \(\epsilon_{t}\subset B\), where \(t=1,2,3,4\).
We set:
\[n_{ij}\leftrightarrow_{Q}[\beta_{ij}]\in\mathcal{H}_{1}^{B}\]
where \((i,j)\in\mathfrak{T}\). By the meaning of the polar geometry associated to \((\mathcal{H}_{1}^{B},\Omega)\) given in Proposition 2.4.3 we can transfer intersection conditions on \(B\) to polarity relations on \(\mathbb{P}^{2}\). Hence it holds
\[\epsilon_{i}\cap\beta_{ij}\neq\emptyset\]
where \(i=1,...,4\), \((i,j)\in\mathfrak{T}\), since \(L_{i}\) corresponds to the line \(L_{[\epsilon_{i}]}\) in \(\mathcal{H}_{1}^{B}\) which parameterises those lines \(l\subset B\) such that \(l\cap\epsilon_{i}\neq\emptyset\) where \(i=1,2,3,4\). Note that \(Q\) must be a smooth conic in order to satisfy Equation (6.1).
Since the configuration of lines associated to \(M\) is given by four lines in general position and \(e_{i}(Q)\not\in L_{j}\), that is \([\epsilon_{i}]\not\in L_{[\epsilon_{j}]}\), on \(B\) it holds that \(\epsilon_{i}\cap\epsilon_{j}=\emptyset\) if \(i\neq j\), \(1\leq i,j\leq 4\). Hence the linear span \(H:=\langle\epsilon_{1},\epsilon_{2},\epsilon_{3}\rangle\) is a \(\mathbb{P}^{5}\) inside the \(\mathbb{P}^{6}\) which contains \(B\). Since \(n_{23}\in L_{2}\cap L_{3}\) it holds that \(\beta_{23}\cap\epsilon_{2}\neq\emptyset\) and \(\beta_{23}\cap\epsilon_{3}\neq\emptyset\). Hence \(\beta_{23}\subset H\). By the same argument we have that \(\beta_{12},\beta_{13}\subset H\). Let
\[\mathbb{P}_{14\perp 23}:=\{Q\in\mathbb{P}(H^{0}(\mathbb{P}^{2},\mathcal{O}_{ \mathbb{P}^{2}}(2)))\mid 0=Q(n_{14},n_{23})\}\]
be the hyperplane given by those conics such that \(n_{14}\perp_{Q}n_{23}\). It is obvious that \(\dim_{\mathbb{C}}\mathbb{P}_{14\perp 23}=4\). By definition it follows that for any smooth conic \(Q\in\mathbb{P}_{14\perp 23}\), in the identification \((\mathbb{P}^{2},Q)\leftrightarrow_{Q}(\mathcal{H}_{1}^{B},\Omega)\) it holds that
\[\beta_{14}\cap\beta_{23}\neq\emptyset.\]
Since \(\beta_{14}\in L_{1}\) and \(\beta_{23}\not\in L_{1}\) it holds that \(\beta_{14}\cap\epsilon_{1}\neq\beta_{14}\cap\beta_{23}\). Since \(\epsilon_{1},\beta_{23}\subset H\) we have that the line \(\beta_{14}\subset H\). There are only two cases: \(\epsilon_{4}\subset H\) or \(\epsilon_{4}\not\subset H\). Assume that \(\epsilon_{4}\not\subset H\). Now by generality of \(\epsilon_{i}\), \(i=1,2,3,4\), it holds that \(Z:=H\cap B\) is a smooth hyperplane section. In particular there exists another line \(\epsilon_{5}\subset Z\) such that \(\tau\colon Z\to\mathbb{P}^{2}\) is the blow-up at the four points \(x_{1},x_{2},x_{3},x_{5}\) and \(\epsilon_{i}=\tau^{-1}(x_{i})\), \(i=1,2,3,5\). Inside \(Z\) there are 10 lines and among them 8 are known: \(\epsilon_{1},\epsilon_{2},\epsilon_{3},\epsilon_{5},\beta_{12},\beta_{13},\beta_{23},\beta_{14}\). By construction there are only three lines inside \(Z\) which intersect \(\epsilon_{1}\) and they are \(\beta_{12},\beta_{13},\beta_{14}\). Hence it holds that \(\beta_{14}=\beta_{15}\) where \(\beta_{ij}\) is the \(\tau\)-strict transform of the line through \(x_{i}\) and \(x_{j}\) where \(i\neq j\), \(i,j=1,2,3,5\). We return to \((\mathbb{P}^{2},Q)\) and we set:
\[e_{5}\leftrightarrow_{Q}[\epsilon_{5}].\]
We denote by \(L_{5}\) the unique line of \(\mathbb{P}^{2}\) such that \(e_{5}=e_{5}(Q):=\operatorname{Pol}_{Q}(L_{5})\). We have seen that \(\beta_{14}=\beta_{15}\). We know that \(\epsilon_{5}\cap\beta_{14}\neq\emptyset\). By the identification \(\leftrightarrow_{Q}\) it holds that \(n_{14}\in L_{5}\). In other words \(L_{5}\) belongs to the pencil generated by \(L_{1}\) and \(L_{4}\).
Now we consider \(g\colon\mathbb{P}^{2}\to\mathbb{P}^{2}\) a projective isomorphism such that
\[g(L_{i})=L_{i},\ i=1,2,3,\text{and }g(L_{4})=L_{5},g(L_{5})=L_{4}.\]
Then it holds that
\[g(n_{12})=n_{12},g(n_{13})=n_{13},g(n_{23})=n_{23},\text{ and }g(n_{14})=n_{14},\]
where we stress that the three points \(n_{12},n_{13},n_{14}\) belong to the line \(L_{1}\). We note that \(g\colon\mathbb{P}^{2}\to\mathbb{P}^{2}\) is not necessarily a \(Q\)-orthogonal isomorphism, that is, if \(a\perp_{Q}b\) then it is not necessarily true that \(g(a)\perp_{Q}g(b)\). However it restricts to the identity on the line \(L_{1}\) and it fixes the other two lines \(L_{2}\) and \(L_{3}\); hence it maintains the orthogonality conditions concerning \(e_{1},e_{2},e_{3}\), \(L_{1},L_{2},L_{3}\) and \(n_{12},n_{13},n_{23},n_{14}\) that we had before.
Let us consider the curve \(g(M)\). Obviously \(g(M)\) is isomorphic to \(M\). By construction the configuration of lines associated to \(g(M)\) is \(\{L_{1},L_{2},L_{3},L_{5}\}\).
We have shown that for any smooth conic \(Q\in\mathbb{P}_{14\perp 23}\) the configuration of lines associated to \(g(M)\) gives back, by the identification \((\mathbb{P}^{2},Q)\leftrightarrow_{Q}(\mathcal{H}_{1}^{B},\Omega)\), four lines \(\epsilon_{1}\), \(\epsilon_{2}\), \(\epsilon_{3}\), \(\epsilon_{5}\) of \(B\) which are disjoint but belong to the same hyperplane \(H\) of \(\mathbb{P}^{6}\). By Lemma 6.0.2 we have that \(Z=B\cap H\) is a smooth del Pezzo surface. If \(\epsilon_{4}\subset H\) the above discussion works by taking \(g\) to be the identity. By our moduli problem we can from now on identify \(M\) with \(g(M)\).
To sum up we have shown for now only that given a general point \([C,\theta]\in\mathrm{S}_{4}^{\mathrm{null},0}\), we can find an automorphism \(g\colon\mathbb{P}^{2}\to\mathbb{P}^{2}\) such that the configuration of lines \(\{L_{1},L_{2},L_{3},L_{4}\}\) associated to \(g\circ\phi_{\theta+\delta}(C)=g(M)\) gives four points \(e_{1},e_{2},e_{3},e_{4}\) such that for any smooth \(Q\in\mathbb{P}_{14\perp 23}\) it holds that \(e_{j}(Q):=\mathrm{Pol}_{Q}(L_{j})\), \(j=1,2,3,4\) and by the identification \((\mathbb{P}^{2},Q)\leftrightarrow_{Q}(\mathcal{H}_{1}^{B},\Omega)\) the four lines, \(\epsilon_{1},\epsilon_{2},\epsilon_{3},\epsilon_{4}\), where \(e_{i}\leftrightarrow_{Q}[\epsilon_{i}]\) are disjoint and span a hyperplane section \(H\).
Now for a smooth \(Q\in\mathbb{P}_{14\perp 23}\) we set:
\[M_{ij}:=\mathrm{Pol}_{n_{ij}}(Q),\ (i,j)\in\mathfrak{T}.\]
By the identification \((\mathbb{P}^{2},Q)\leftrightarrow_{Q}(\mathcal{H}_{1}^{B},\Omega)\) it immediately follows that \(M_{ij}\) is the line \(\langle[\epsilon_{i}],[\epsilon_{j}]\rangle\subset\mathcal{H}_{1}^{B}\), for every \((i,j)\in\mathfrak{T}\). Furthermore we have \(\beta_{12}\cap\beta_{34}\neq\emptyset\). Then \(n_{34}\in M_{12}\). We recall that the plane sextic \(M\) has a simple node supported on \(n_{34}\).
By generality of \([C,\theta]\) the line \(M_{12}\) cuts out \(4\) distinct points \(x_{12}^{1},x_{12}^{2},y_{12}^{1},y_{12}^{2}\) on \(M\), none of which coincides with \(n_{34}\). Set \(x_{34}^{1},x_{34}^{2},y_{34}^{1},y_{34}^{2}\) for the analogous points with respect to \(M_{34}\). We denote by \(\alpha_{jk}^{i},\beta_{jk}^{i}\subset B\) the corresponding lines; that is \([\alpha_{jk}^{i}]\leftrightarrow_{Q}x_{jk}^{i}\) and respectively \([\beta_{jk}^{i}]\leftrightarrow_{Q}y_{jk}^{i}\). We use the four degrees of freedom on \(Q\) to impose the following four conditions:
\[x_{12}^{1}\perp_{Q}x_{12}^{2},\ y_{12}^{1}\perp_{Q}y_{12}^{2},\ x_{34}^{1} \perp_{Q}x_{34}^{2},\ y_{34}^{1}\perp_{Q}y_{34}^{2}. \tag{6.2}\]
In terms of the geometry on \(B\) the condition (6.2) means:
\[\alpha_{12}^{1}\cap\alpha_{12}^{2}\neq\emptyset,\ \beta_{12}^{1}\cap\beta_{12}^{2} \neq\emptyset,\ \alpha_{34}^{1}\cap\alpha_{34}^{2}\neq\emptyset,\ \beta_{34}^{1}\cap\beta_{34}^{2}\neq\emptyset. \tag{6.3}\]
Consider now the section \(Z=H\cap B\) that we have built above; by Lemma 6.0.2, \(Z\) is smooth. By the polarity underlying the geometry of the lines of \(B\) we see that the lines \(\alpha_{12}^{1},\alpha_{12}^{2}\) are not contained in \(H\) but there exists a point \(p_{12}\in\beta_{12}\) which belongs to both of them. The same holds for the triples \(\{\beta_{12}^{1},\beta_{12}^{2},\beta_{12}\}\), \(\{\alpha_{34}^{1},\alpha_{34}^{2},\beta_{34}\}\), \(\{\beta_{34}^{1},\beta_{34}^{2},\beta_{34}\}\). To sum up, there exist four points \(p_{12},q_{12},p_{34},q_{34}\in Z\) with \(p_{12},q_{12}\in\beta_{12}\subset Z\subset B\) and \(p_{34},q_{34}\in\beta_{34}\subset Z\subset B\) such that the following holds:
\[\alpha_{12}^{1}\cap\alpha_{12}^{2}=\{p_{12}\},\ \beta_{12}^{1}\cap\beta_{12}^{2} =\{q_{12}\},\ \alpha_{34}^{1}\cap\alpha_{34}^{2}=\{p_{34}\},\ \beta_{34}^{1}\cap\beta_{34}^{2}=\{q_{34}\}. \tag{6.4}\]
Now we can use the notations given in 5.0.3 for the hyperplane section \(Z\). By generality assumptions the sublinear system \(\Lambda(p_{12}+q_{12}+p_{34}+q_{34})\subset|2\lambda|\) given by those divisors passing through the points \(p_{12},q_{12},p_{34},q_{34}\in Z\) is a pencil. To each general element \(R^{\prime}\in\Lambda(p_{12}+q_{12}+p_{34}+q_{34})\) we associate the corresponding \(C(R^{\prime})\) as in Lemma 5.0.1. We want to find the unique \(R\in\Lambda(p_{12}+q_{12}+p_{34}+q_{34})\) such that \(C=C(R)\).
To ease reading we stress that on one side we have the blow-up at four points \(\phi_{|\lambda|}\colon Z\to\mathbb{P}^{2}\), which gives the hyperplane section \(Z=B\cap H\) containing the rational sextics \(R^{\prime}\in\Lambda(p_{12}+q_{12}+p_{34}+q_{34})\), and on the other side we have the blow-up
\(\sigma\colon S\to\mathbb{P}^{2}\) at the six points \(n_{ij}\), \((i,j)\in\mathfrak{T}\) which contains the given genus \(4\) curve \(C\). Clearly by the identification \((\mathbb{P}^{2},Q)\simeq(\mathcal{H}^{B}_{1},\Omega)\) we can see \(C(R^{\prime})\) inside \(S\) for every smooth \(R^{\prime}\in\Lambda(p_{12}+q_{12}+p_{34}+q_{34})\).
We work on \(S\). Denote by \(L\in|h-E_{34}|\) and \(L^{\prime}\in|h-E_{12}|\) the \(\sigma\)-strict transforms of \(M_{12}\) and of \(M_{34}\) respectively. By construction the points \(a^{t}_{nm},b^{i}_{jk}\) belong to \(C\cap C(R)\), where \(\sigma(a^{t}_{nm})=x^{t}_{nm}\), \(t=1,2\), \((n,m)=(1,2)\) or \((3,4)\), and \(\sigma(b^{i}_{jk})=y^{i}_{jk}\), \(i=1,2\), \((j,k)=(1,2)\) or \((3,4)\). Note that for any irreducible element \(C^{\prime}\in|6h-2(E_{12}+E_{13}+E_{14}+E_{23}+E_{24}+E_{34})|\) it holds that \(\deg C^{\prime}_{|L}=4\) and \(\deg C^{\prime}_{|L^{\prime}}=4\). In particular the decomposition sequence gives \(h^{0}(L\cup L^{\prime},\mathcal{O}_{L\cup L^{\prime}}(C^{\prime}))=9\). By Kodaira vanishing and the Riemann-Roch theorem it holds that \(h^{0}(S,\mathcal{O}_{S}(C^{\prime}))=10\). Moreover \(|4h-2(E_{13}+E_{14}+E_{23}+E_{24})-E_{34}-E_{12}|\) contains the unique element \(D=(h-E_{12}-E_{13}-E_{14})+(h-E_{34}-E_{24}-E_{14})+(h-E_{12}-E_{23}-E_{24})+E_{12}+(h-E_{13}-E_{23}-E_{34})+E_{34}\). By the cohomology of the standard sequence
\[0\to\mathcal{O}_{S}(-L-L^{\prime})\to\mathcal{O}_{S}\to\mathcal{O}_{L\cup L ^{\prime}}\to 0\]
tensored by \(\mathcal{O}_{S}(C^{\prime})\), it follows that the following sequence is exact:
\[0\to H^{0}(S,\mathcal{O}_{S}(D))\to H^{0}(S,\mathcal{O}_{S}(C^{\prime})) \stackrel{{\rm ev}}{{\longrightarrow}}H^{0}(L\cup L^{\prime}, \mathcal{O}_{L\cup L^{\prime}}(C^{\prime}))\to 0. \tag{6.5}\]
We know that \(h^{0}(S,\mathcal{O}_{S}(D))=1\). Now let \(\langle\nu\rangle\subset H^{0}(L\cup L^{\prime},\mathcal{O}_{L\cup L^{\prime} }(C^{\prime}))\) be the \(1\)-dimensional vector space given by those sections \(\nu\) vanishing on \(a^{1}_{12}+a^{2}_{12}+b^{1}_{12}+b^{2}_{12}+a^{1}_{34}+a^{2}_{34}+b^{1}_{34}+ b^{2}_{34}\). Then \({\rm ev}^{-1}(\langle\nu\rangle)\) is a \(2\)-dimensional vector space \(\Lambda\). Then for every \(R^{\prime}\in\Lambda(p_{12}+q_{12}+p_{34}+q_{34})\) the corresponding curve \(C(R^{\prime})\) belongs to the pencil \(\mathbb{P}(\Lambda)\) given by those \(C^{\prime}\) such that \(C^{\prime}_{|L\cup L^{\prime}}=a^{1}_{12}+a^{2}_{12}+b^{1}_{12}+b^{2}_{12}+a^ {1}_{34}+a^{2}_{34}+b^{1}_{34}+b^{2}_{34}\). By construction \(C\) belongs to \(\mathbb{P}(\Lambda)\) too. Finally consider a general point \(a\in C\). Thanks to the morphism \(\sigma\colon S\to\mathcal{H}^{B}_{1}\) the point \(\sigma(a)=[\alpha]\) gives a line \(\alpha\subset B\) not contained in \(Z\). Consider the unique point \(p\in\alpha\cap Z\). Let \(R\) be the unique element \(R\in\Lambda(p_{12}+q_{12}+p_{34}+q_{34})\) such that \(p\in R\). Then \([\alpha]\in M\cap M(R)\); that is \(a\in C\cap C(R)\). This implies that \(C=C(R)\).
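The numerical ingredients of the last argument can be checked directly. With \(C^{\prime}\in|6h-2\sum_{(i,j)\in\mathfrak{T}}E_{ij}|\),

\[(C^{\prime})^{2}=36-24=12,\qquad C^{\prime}\cdot K_{S}=-18+12=-6,\qquad h^{0}(S,\mathcal{O}_{S}(C^{\prime}))=1+\tfrac{1}{2}(12+6)=10,\]

while \(\deg C^{\prime}_{|L}=\deg C^{\prime}_{|L^{\prime}}=4\) and \(L\cdot L^{\prime}=1\) give \(h^{0}(L\cup L^{\prime},\mathcal{O}_{L\cup L^{\prime}}(C^{\prime}))=5+5-1=9\); the difference \(10-9=1\) is exactly \(h^{0}(S,\mathcal{O}_{S}(D))\) appearing in the sequence (6.5).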
In the proof of Theorem 6.0.1 we have used the following Lemma.
**Lemma 6.0.2**.: _Let \(Z=B\cap H\) be a hyperplane section of the del Pezzo \(3\)-fold \(B\). If \(Z\) contains \(10\) lines \(e_{1},e_{2},e_{3},e_{4}\) and \(\beta_{ij}\), \((i,j)\in\mathfrak{T}\), such that \(e_{i}\cap\beta_{ij}\neq\emptyset\) and \(e_{j}\cap\beta_{ij}\neq\emptyset\), then \(Z\) is smooth._
Proof.: By [CS, Lemma 7.6.2, Remark 7.6.3] we know that if \(Z\) has at most isolated singularities then \(Z\) cannot contain more than \(7\) lines. If \(Z\) has a non-normal singularity then it is singular only along a line, since \(\operatorname{Pic}(B)=\mathbb{Z}\) and [Rei2, Lemma page 718]; the claim follows.
## 7. The rationality theorem of the theta-null spin\({}^{+}\) moduli space
### The moduli map
In the Introduction we have recalled the rational map \(\pi_{\mathrm{S}_{4}^{+}}\colon\mathcal{H}^{B}_{6}\dashrightarrow\mathrm{S}_{4}^{+}\) and its Stein factorisation \(q_{\mathrm{S}_{4}^{+}}\circ p_{\mathrm{S}_{4}^{+}}\), where the general fiber of \(p_{\mathrm{S}_{4}^{+}}\colon\mathcal{H}^{B}_{6}\dashrightarrow\widetilde{\mathrm{S}}_{4}^{+}\) is irreducible and \(q_{\mathrm{S}_{4}^{+}}\colon\widetilde{\mathrm{S}}_{4}^{+}\dashrightarrow\mathrm{S}_{4}^{+}\) is generically finite of degree \(2\); see: [TZ3, Corollary 4.16]. Our theory on \(\pi_{\mathrm{S}_{4}^{+}}\colon\mathcal{H}^{B}_{6}\dashrightarrow\mathrm{S}_{4}^{+}\) extends directly to the divisor \(\mathcal{H}_{\rm c.t.}\). Actually the proof that \(\mathrm{S}_{4}^{+}\) is a rational variety was done by taking the open subset \(\mathcal{H}^{*}\) of \(\mathcal{H}^{B}_{6}\) consisting of (reduced but possibly
reducible) sextic curves with exactly six different bi-secant lines. On it, in [TZ3, Subsection 5.1], we defined a \(G\)-equivariant morphism
\[\Theta\colon\mathcal{H}^{*}\to(\mathbb{P}^{2})^{6}/\mathfrak{S}_{6},\ [R]\mapsto([ \beta_{1}],\dots,[\beta_{6}]).\]
which associates to each \([R]\in\mathcal{H}^{*}\) the set of its \(6\) unordered bisecants, where \((\mathbb{P}^{2})^{6}\) is the product of \(6\) copies of \(\mathbb{P}^{2}\), \(\mathfrak{S}_{6}\) is the permutation group on \(6\) elements and \(G=\mathrm{PGL}_{2}(\mathbb{C})\). In [TZ3, Theorem 5.2] we showed that \(\Theta\) is an isomorphism on a certain open set \(\hat{\mathcal{H}}\), see: [TZ3, Condition 3.21], which is disjoint from \(\mathcal{H}_{\mathrm{c.t.}}\). On the other hand in [TZ3, Condition 6.9] we defined a locally closed subset \(\mathcal{D}\) of \(\mathcal{H}_{6}^{B}\), which contains \(\mathcal{H}_{\mathrm{c.t.}}\), and we could extend \(\Theta\) to an isomorphism over \(\hat{\mathcal{H}}\cup\mathcal{D}\). In particular \(\Theta\) is an isomorphism over \(\mathring{\mathcal{H}}_{\mathrm{c.t.}}\hookrightarrow\mathcal{D}\), where we recall that \(\mathring{\mathcal{H}}_{\mathrm{c.t.}}\hookrightarrow\mathcal{H}_{\mathrm{c.t.}}\) is the open subscheme of \(\mathcal{H}_{\mathrm{c.t.}}\) given by those \([R]\in\mathcal{H}_{\mathrm{c.t.}}\) such that \(R\) is smooth and the hyperplane section \(\langle R\rangle\cap B=Z\) is smooth.
To ease reading we follow the notation of [TZ3] and we set
\[\widetilde{\mathrm{S}}_{4}^{+o}\subset\widetilde{\mathrm{S}}_{4}^{+}\]
for the image of \(\hat{\mathcal{H}}\) and we denote by
\[\mathrm{S}_{4}^{+o}\subset\mathrm{S}_{4}^{+}\]
the image of \(\widetilde{\mathrm{S}}_{4}^{+o}\) by the map \(q_{\mathrm{S}_{4}^{+}}\colon\widetilde{\mathrm{S}}_{4}^{+}\dashrightarrow\mathrm{S}_{4}^{+}\). By [TZ3, Corollary 4.16] we know that there exists an involution \(\mathring{J}\colon\widetilde{\mathrm{S}}_{4}^{+o}\to\widetilde{\mathrm{S}}_{4}^{+o}\), which is the deck transformation of the double cover \(q_{\mathrm{S}_{4}^{+o}}:=q_{\mathrm{S}_{4}^{+}|\widetilde{\mathrm{S}}_{4}^{+o}}\colon\widetilde{\mathrm{S}}_{4}^{+o}\to\mathrm{S}_{4}^{+o}\). Moreover, letting \(V_{1}:=\Theta(\hat{\mathcal{H}})\), by [TZ3, Proposition 6.1] we know that there exists a commutative diagram
(7.1)
where \(J\colon V_{1}/G\to V_{1}/G\) is a lifting of the classical association map
\[j\colon(\mathbb{P}^{2})^{6}/\!/\mathrm{PGL}_{3}/\mathfrak{S}_{6}\to(\mathbb{ P}^{2})^{6}/\!/\mathrm{PGL}_{3}/\mathfrak{S}_{6}\]
see: [TZ3, Subsection 6.1]. Now set \(\mathring{\mathcal{K}}:=\Theta(\mathring{\mathcal{H}}_{\mathrm{c.t.}})\).
**Lemma 7.1.1**.: _We can extend \(J\colon V_{1}/G\to V_{1}/G\) to \((V_{1}\cup\mathring{\mathcal{K}})/G\) in such a way that it is the identity over \(\mathring{\mathcal{K}}/G\)._
Proof.: We know that \(j\colon(\mathbb{P}^{2})^{6}/\!/\mathrm{PGL}_{3}/\mathfrak{S}_{6}\to(\mathbb{P}^{2})^{6}/\!/\mathrm{PGL}_{3}/\mathfrak{S}_{6}\) is the identity on the points given by the sextuplets of points of \(\mathbb{P}^{2}\) which are contained in a conic, and that the \(\mathrm{PGL}_{3}\)-action and the \(\mathfrak{S}_{6}\)-action over \((\mathbb{P}^{2})^{6}\) commute. On the other hand if \([R]\in\mathring{\mathcal{H}}_{\mathrm{c.t.}}\) then its \(6\) bisecants are contained in a (reducible) conic. More precisely, following the notation of 5.0.3, it holds that the bisecants \(\beta_{ij}\), \(1\leq i<j\leq 4\), belong to the reducible conic which parameterises the lines of \(B_{5}\) which touch
or \(\epsilon_{4}\). Note that by [DO, Theorem 1, p. 23] these points are stable ones and by [DO, p. 118-120] they are fixed by \(j\).
We consider the morphism \([\Theta^{-1}]\colon(V_{1}\cup\mathring{\mathcal{K}})/G\to(\hat{\mathcal{H}}\cup\mathring{\mathcal{H}}_{\mathrm{c.t.}})/\!/G\) induced by \(\Theta\). We stress that by definition \([\Theta^{-1}](\mathring{\mathcal{K}}/G)=\mathring{\mathcal{H}}_{\mathrm{c.t.}}/\!/G=p_{\mathrm{S}_{4}^{+}}(\mathring{\mathcal{H}}_{\mathrm{c.t.}})\).
**Proposition 7.1.2**.: _We can extend \(q_{\mathrm{S}_{4}^{+o}}\circ[\Theta^{-1}]_{|V_{1}/G}\colon V_{1}/G\to\mathrm{S}_{4}^{+o}\) to \(f=q_{\mathrm{S}_{4}^{+o}}\circ[\Theta^{-1}]\colon(V_{1}\cup\mathring{\mathcal{K}})/G\to\mathrm{S}_{4}^{+o}\sqcup\mathrm{S}_{g}^{\mathrm{null},0}\) in such a way that \(f_{|\mathring{\mathcal{K}}/G}\colon\mathring{\mathcal{K}}/G\to\mathrm{S}_{g}^{\mathrm{null},0}\) is dominant of degree \(1\)._
Proof.: By [TZ3, Proposition 6.10] we have that \(\Theta\) is extendable over \(\mathring{\mathcal{H}}_{\mathrm{c.t.}}\) so as to give an isomorphism over it. By Proposition 5.0.4, \(\pi_{\mathrm{S}_{4}^{+}}\colon\mathcal{H}_{6}^{B}\dashrightarrow\mathrm{S}_{4}^{+}\) is defined over \(\mathring{\mathcal{H}}_{\mathrm{c.t.}}\); that is, one sets \(\mathrm{S}_{g}^{\mathrm{null},0}\ni[C(R),\theta(R)]:=q_{\mathrm{S}_{4}^{+}}\circ p_{\mathrm{S}_{4}^{+}}([R])\) where \([R]\in\mathring{\mathcal{H}}_{\mathrm{c.t.}}\). This implies that if \([(\beta_{1},\cdots,\beta_{6})]\in\mathring{\mathcal{K}}/G\) and \([R]\in\mathring{\mathcal{H}}_{\mathrm{c.t.}}\) is such that \([R]=\Theta^{-1}((\beta_{1},\cdots,\beta_{6}))\), then \(f[(\beta_{1},\cdots,\beta_{6})]\mapsto[C(R),\theta(R)]\) is defined, where \(f\) coincides with \(q_{\mathrm{S}_{4}^{+o}}\circ[\Theta^{-1}]\) over \(V_{1}/G\). Finally we have realised \(\mathrm{S}_{4}^{+o}\) as the \(\mathbb{Z}/2\mathbb{Z}\)-quotient \((V_{1}/G)/J\), where \(J\) is the associated involution and \(f_{|V_{1}/G}=q_{\mathrm{S}_{4}^{+o}}\circ[\Theta^{-1}]_{|V_{1}/G}\colon V_{1}/G\to\mathrm{S}_{4}^{+o}\) is the quotient morphism. Since \(f\) is defined on \(\mathring{\mathcal{K}}/G\) and since by Lemma 7.1.1, \(J\) is the identity on \(\mathring{\mathcal{K}}/G\), it follows that \(f(\mathring{\mathcal{K}}/G)\subset\mathrm{S}_{g}^{\mathrm{null},0}\) is in the branch locus of \(f\). On the other hand by the Reconstruction theorem 6.0.1 we know that \(\pi_{\mathrm{S}_{4}^{+}}\colon\mathring{\mathcal{H}}_{\mathrm{c.t.}}\to\mathrm{S}_{g}^{\mathrm{null},0}\) is dominant, hence \(f_{|\mathring{\mathcal{K}}/G}\colon\mathring{\mathcal{K}}/G\to\mathrm{S}_{g}^{\mathrm{null},0}\) is dominant of degree \(1\).
### The proof of the rationality theorem
We are now ready to show our main theorem. First we sum up part of the result of the above Section 7.1 into the following proposition:
**Proposition 7.2.1**.: _The divisor \(\mathrm{S}_{4}^{\mathrm{null},0}\) is irreducible and reduced, and it is dominated by \(\mathring{\mathcal{H}}_{\mathrm{c.t.}}\). Moreover it is contained in the branch locus of \(q_{\mathrm{S}_{4}^{+}}\colon\widetilde{\mathrm{S}}_{4}^{+}\dashrightarrow\mathrm{S}_{4}^{+}\). In particular it holds that \(p_{\mathrm{S}_{4}^{+}}(\mathcal{H}_{\mathrm{c.t.}})\) is birational to \(\mathrm{S}_{g}^{\mathrm{null},0}\)._
Proof.: By Proposition 4.0.1, \(\mathrm{S}_{4}^{\mathrm{null},0}\) is reduced. Moreover we have seen that by definition \([\Theta^{-1}](\mathring{\mathcal{K}}/G)=\mathring{\mathcal{H}}_{\mathrm{c.t.}}/\!/G=p_{\mathrm{S}_{4}^{+}}(\mathring{\mathcal{H}}_{\mathrm{c.t.}})\). By Proposition 3.2.2 we know that \(\mathring{\mathcal{H}}_{\mathrm{c.t.}}\) is irreducible. Hence \(\mathring{\mathcal{K}}/G\) is irreducible. By Proposition 7.1.2 it holds that \(\mathrm{S}_{4}^{\mathrm{null},0}\) is irreducible too. By Proposition 7.1.2 it follows that the divisor \(\mathrm{S}_{4}^{\mathrm{null},0}\) is contained in the branch locus of \(q_{\mathrm{S}_{4}^{+}}\colon\widetilde{\mathrm{S}}_{4}^{+}\dashrightarrow\mathrm{S}_{4}^{+}\) and that it is birational to \(p_{\mathrm{S}_{4}^{+}}(\mathring{\mathcal{H}}_{\mathrm{c.t.}})\).
**Theorem 7.2.2**.: \(\overline{\mathrm{S}_{4}^{\mathrm{null},0}}\) _is a rational variety._
Proof.: By Proposition 7.2.1, \(\overline{\mathrm{S}_{4}^{\mathrm{null},0}}\) is birational to \(p_{\mathrm{S}_{4}^{+}}(\mathring{\mathcal{H}}_{\mathrm{c.t.}})\). On the other hand by construction \(p_{\mathrm{S}_{4}^{+}}(\mathring{\mathcal{H}}_{\mathrm{c.t.}})\) is birational to \(\mathring{\mathcal{H}}_{\mathrm{c.t.}}/\!/G\). By Theorem 3.3.1 we conclude.
### The rationality theorem of the theta-null Prym moduli space
Let \(\mathcal{R}_{4}^{\mathrm{null}}\) be the moduli space which parameterises couples \((C,\eta)\) where \(C\) is a genus \(4\) curve with a vanishing theta-null and \(\eta\) is a nontrivial \(2\)-torsion line bundle. We denote by \(\delta\) the vanishing theta-null, i.e. the \(g_{3}^{1}\).
**Theorem 7.3.1**.: \(\mathcal{R}^{\rm null}_{4}\) _is a rational variety._
Proof.: Let \(f\colon\mathrm{S}^{\rm null,0}_{4}\dashrightarrow\mathcal{R}^{\rm null}_{4}\) be the rational map given by \([(C,\theta)]\mapsto[(C,\theta-\delta)]\). It is generically injective. By a dimension count this implies that \(\mathrm{S}^{\rm null,0}_{4}\) is birational to an irreducible component of \(\mathcal{R}^{\rm null}_{4}\). Now, using our reconstruction theorem, that is Theorem 6.0.1, we show that \(\mathcal{R}^{\rm null}_{4}\) is irreducible.
Indeed let \([(C,\eta)]\in\mathcal{R}^{\rm null}_{4}\) be a general element. Clearly our claim is to show that \(h^{0}(C,\mathcal{O}_{C}(\eta+\delta))=0\). By the exact sequence
\[0\rightarrow\mathcal{O}_{C}(\eta)\rightarrow\mathcal{O}_{C}(\eta+\delta) \rightarrow\mathcal{O}(\eta+\delta)|_{\delta}\to 0\]
and the fact that \(\mathcal{O}(\eta+\delta)|_{\delta}\simeq\mathbb{C}^{\oplus 3}\), we see that \(h^{0}(C,\mathcal{O}_{C}(\eta+\delta))=0\) iff \(H^{0}(\mathcal{O}(\eta+\delta)|_{\delta})\simeq\mathbb{C}^{3}\to H^{1}(\mathcal{O}(\eta))\simeq\mathbb{C}^{3}\) is an isomorphism. We set \(\theta:=\eta+\delta\). We consider the morphism \(\phi_{|\theta+\delta|}\colon C\to\mathbb{P}^{2}\). By the same argument used in Proposition 4.1.4, the image \(M\) of the morphism \(\phi_{|\theta+\delta|}\colon C\to\mathbb{P}^{2}\) is a sextic with six nodes. Let \(\pi\colon S\to\mathbb{P}^{2}\) be the blow-up at the six points and \(e_{1},\dots,e_{6}\) the respective exceptional divisors. The main point is again to show that \(|\pi^{*}\mathcal{O}_{\mathbb{P}^{2}}(4)-2\sum e_{i}|\) has a unique member. Since \(K_{C}=(3h-\sum e_{i})|_{C}\) and \(2h_{|C}=2(\delta+\theta)=2K_{C}\), we have \((4h-2\sum e_{i})|_{C}=4h_{|C}-2(3h_{|C}-K_{C})=2K_{C}-2h_{|C}=0\). Now consider the following exact sequence:
\[0\rightarrow\pi^{*}\mathcal{O}_{\mathbb{P}^{2}}(-2)\rightarrow\pi^{*} \mathcal{O}_{\mathbb{P}^{2}}(4)\otimes_{\mathcal{O}_{S}}\mathcal{O}_{S}(-2 \sum e_{i})\rightarrow\mathcal{O}_{C}(4h-2\sum e_{i})\to 0\]
(note that \(C\in|\pi^{*}\mathcal{O}_{\mathbb{P}^{2}}(6)\otimes_{\mathcal{O}_{S}}\mathcal{O}_{S}(-2\sum e_{i})|\)). By this, analogously as in the proof of Proposition 4.1.4, we see that \(|\pi^{*}\mathcal{O}_{\mathbb{P}^{2}}(4)-2\sum e_{i}|\) has a unique member; again the member \(D\) of \(|\pi^{*}\mathcal{O}_{\mathbb{P}^{2}}(4)-2\sum e_{i}|\) consists of \(4\) lines and again the six points are all the mutual intersection points among them. Finally, by Theorem 6.0.1 we can conclude that \(\mathring{\mathcal{H}}_{\mathrm{c.t.}}\) dominates also \(\mathcal{R}^{\rm null}_{4}\). Hence \(\mathcal{R}^{\rm null}_{4}\) is irreducible. In particular it is birational to \(\overline{\mathrm{S}^{\rm null,0}_{4}}\). By Theorem 7.2.2 it is a rational variety.
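For completeness, the cohomological input at the start of the proof is elementary: \(\eta\) is a nontrivial line bundle of degree \(0\), so \(h^{0}(C,\mathcal{O}_{C}(\eta))=0\) and, by Riemann-Roch on the genus \(4\) curve \(C\),

\[h^{1}(C,\mathcal{O}_{C}(\eta))=h^{0}(C,\mathcal{O}_{C}(\eta))-(\deg\eta+1-g)=0-(0+1-4)=3,\]

which is the identification \(H^{1}(\mathcal{O}(\eta))\simeq\mathbb{C}^{3}\) used above.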
|
2309.12643 | Synchrotron radiation from cosmic string wakes | Magnetic fields can be generated in cosmic string wakes due to the Biermann
mechanism in the presence of neutrino inhomogeneities. As the cosmic string
moves through the plasma the small magnetic field is amplified by the
turbulence in the plasma. Relativistic charged particles which cross the
magnetized wake of a cosmic string will therefore emit synchrotron radiation.
The opening angle of the cosmic string is very small and so the wake appears
like a relativistic jet. Assuming a homogeneous magnetic field in the wake of
the string, we obtain the synchrotron emission from non thermal relativistic
electrons in the wake of the string. The emitted radiation has a broad peak and
is over a wide range of frequency. We show that the spectrum can be mapped to
some of the unknown sources in different ranges of the current available
catalogues. | Dilip Kumar, Soumen Nayak, Soma Sanyal | 2023-09-22T06:24:10Z | http://arxiv.org/abs/2309.12643v2 | # Synchrotron radiation from cosmic string wakes.
###### Abstract
Magnetic fields can be generated in cosmic string wakes due to the Biermann mechanism in the presence of neutrino inhomogeneities. As the cosmic string moves through the plasma the small magnetic field is amplified by the turbulence in the plasma. Relativistic charged particles which cross the magnetized wake of a cosmic string will therefore emit synchrotron radiation. The opening angle of the cosmic string is very small and so the wake appears like a relativistic jet. Assuming a homogeneous magnetic field in the wake of the string, we obtain the synchrotron emission from non thermal relativistic electrons in the wake of the string. The emitted radiation has a broad peak and is over a wide range of frequency. We show that the spectrum can be mapped to some of the unknown sources in different ranges of the current available catalogues.
cosmic string, wakes, synchrotron radiation
## 1 Introduction
Synchrotron radiation has been studied for a very long time to identify and locate various non-luminous astrophysical objects [1] in our universe. It is the radiation emitted from relativistic electrons moving in the magnetic field close to an astrophysical object. Among the most elusive objects currently being sought using various methods are the cosmic strings. These strings are produced during symmetry breaking phase transitions in the early universe [2]. They are essentially defects in one dimension with a conical spacetime. These strings can give rise to interesting phenomena in the early universe. The conical space around a string leads to the formation of a wake behind it [3]. In relativistic fluid flows, both strong shocks as well as weak shocks are generated [4; 5; 6] in these wakes. There are many kinds of cosmic strings depending on the nature of the phase transition. Out of all these, the superconducting cosmic strings carry electric current and can generate magnetic fields in their wake [7]. So, the spectrum of synchrotron radiation from electrons moving close to a superconducting string has been studied in detail [8]. Recently, it has been shown that magnetic fields can be formed in shocks of wakes of non-superconducting strings by the Biermann battery mechanism [9]. The Biermann battery mechanism generates a magnetic field whenever the temperature gradient and the density gradient in an inhomogeneous plasma are not aligned with each other [10]. Once the magnetic field is generated, it gets enhanced by the turbulence in the wake region. As the cosmic string moves in the plasma, electrons crossing these magnetized wakes will emit synchrotron radiation. The energy spectrum of these electrons, along with other signatures, may help in identifying these cosmic strings in the current universe. Currently, there are various attempts to predict the signature of these cosmic strings from the 21 cm observational data [11]. Other methods of identifying cosmic strings are also being pursued. In this article, we propose yet another method of identifying cosmic string wakes by using electromagnetic radiation.
The study of diffusive synchrotron radiation has already yielded several interesting results. The spectrum of the radiation depends on the correlation length of the magnetic field through which the electrons travel. For a uniform large-scale magnetic field the spectrum is referred to as the synchrotron spectrum, while for a small-scale random magnetic field the radiation is often referred to as "jitter" radiation [12; 13]. A theory has also been developed for the small-scale jitter radiation. The theory gives rise to several power-law asymptotes which change smoothly with frequency, so that the synchrotron radiation covers a wide range of frequencies.
In recent times, numerical simulations have been developed to study the microphysics of non-relativistic as well as relativistic shocks. For cosmic strings too, there have been several studies of unmagnetized shocks [14; 15; 16]. In a previous work, we studied the evolution of a magnetic field in the wake of a cosmic string [17]. Cosmic strings can have magnetic fields in their wakes due to various mechanisms. One, mentioned above, is the Biermann battery mechanism; alternative mechanisms involve the generation of vorticity in the plasma. Two cosmic strings moving past one another can generate vortices between them [18]. Since ions and electrons do not have the same mass, a magnetic field can be generated by the differential angular momentum between the ions and electrons via the Harrison mechanism [19]. Other ways of generating vorticity have also been explored. Once the field is generated in the wake, it may decay but it will not dissipate completely, so electrons moving across these wakes travel through a magnetic field and therefore emit synchrotron radiation. However, only the synchrotron radiation coming from the wakes of superconducting strings has been studied previously in the literature.
In this article we focus on Abelian Higgs strings. Unlike superconducting strings, these do not have a current flowing along the length of the string. The magnetic field in their wakes is generated by the Biermann mechanism [9], which produces oppositely directed magnetic fields on the two sides of the wake. The wake has a triangular structure, being narrow close to the string and becoming wider as the distance from the string increases. Electrons move across the wake as the string moves through the plasma. In this work, we obtain the spectrum of synchrotron radiation emitted by relativistic electrons in the string wake. We find that the synchrotron radiation emitted by these electrons lies in the range \(10^{2}\,\mathrm{Hz}-10^{23}\,\mathrm{Hz}\). The overall spectrum has a broad peak. Due to the large range of frequencies that it spans, the radiation should be detectable by current all-sky surveys. We discuss how this can be another signature of cosmic string wakes, in addition to the ones already being pursued.
In section 2, we discuss the wakes of cosmic strings in the intergalactic medium. In section 3, we discuss the synchrotron radiation emitted by the electrons moving in the magnetized wake of these cosmic strings. In section 4, we discuss the spectrum obtained and in section 5, we present the summary and conclusions of the paper.
## II Cosmic String Wake in the Intergalactic Medium
A cosmic string wake forms as a long cosmic string moves through the intergalactic medium. It has been shown that the wake is very wide for a hot gas tightly coupled to radiation but becomes quite narrow for a hydrogen gas [16]. Since the intergalactic medium consists mostly of baryonic and leptonic matter, the wake caused by a moving cosmic string is closer to the hydrogen-gas case. Thus the wake is more like a narrow jet with a large number of streaming relativistic particles. As the string reaches supersonic velocities, shock waves are generated behind it.
Cosmic string shock waves have been discussed for the high-temperature plasma of the very early universe. The relativistic motion of the string as well as the finite temperature of the medium have been taken into account in these studies [3; 14; 20]. These shocks occur before the start of recombination. Realistic studies of shocks also include the possibility of ionization and interaction with the background radiation [16]. The velocity of the shock depends on the nature of the plasma and its equation of state (EoS). In the post-recombination era, the universe is matter dominated. Assuming that the interaction between matter and radiation is minimal, the sound speed at those temperatures is given by,
\[c_{s}^{2}=\frac{\Gamma_{m}p_{1}}{\rho_{1}+p_{1}} \tag{1}\]
Here \(\Gamma_{m}\) is the ratio of specific heats of the gas, which can be taken equal to \(5/3\), while \(p_{1}\) and \(\rho_{1}\) are the pressure and density in the pre-shock region. Detailed studies of shocks in these plasmas have been carried out [16]. The wakes usually have different values of density, pressure and temperature in the pre-shock and post-shock regions. The shock front moves with a velocity known as the shock velocity, which generally depends on both the pre-shock and post-shock density and pressure. It has been shown that, in the case where the specific heat ratio remains approximately the same in the pre-shock and post-shock regimes, the velocity of the shock wave is
\[v_{sh}=\frac{1}{4}[(\Gamma_{m}+1)^{2}u^{2}+16c_{s}^{2}]^{1/2}-\frac{(3-\Gamma _{m})}{4}u \tag{2}\]
Here \(u=v\delta(1-v^{2})^{-1/2}\), where \(\delta\) is the angle of deflection of the particles due to the cosmic string metric and \(v\) is the velocity of the string. The shock velocities can therefore go as high as \(0.9c\).
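As a quick numerical check of Eqs. (1) and (2), the short Python sketch below evaluates the shock velocity in units where \(c=1\); the string velocity, deflection angle, and pre-shock pressure and density used here are illustrative placeholders rather than values fixed by the text.

```python
import math

def sound_speed(p1, rho1, gamma_m=5.0/3.0):
    """Sound speed from Eq. (1), in units where c = 1."""
    return math.sqrt(gamma_m * p1 / (rho1 + p1))

def shock_velocity(v, delta, p1, rho1, gamma_m=5.0/3.0):
    """Shock velocity from Eq. (2); v is the string velocity and
    delta the deflection angle of the conical string metric."""
    cs = sound_speed(p1, rho1, gamma_m)
    u = v * delta / math.sqrt(1.0 - v**2)        # u = v*delta*(1 - v^2)^(-1/2)
    return (0.25 * math.sqrt((gamma_m + 1.0)**2 * u**2 + 16.0 * cs**2)
            - 0.25 * (3.0 - gamma_m) * u)

# Illustrative numbers: a fast string, a small deflection angle, and a
# cold (p1 << rho1) post-recombination medium.
print(shock_velocity(v=0.9, delta=1e-4, p1=1e-3, rho1=1.0))
```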
Electron acceleration by various shock waves has been studied in detail [21]. Both sub-relativistic and ultra-relativistic shock waves are known to generate accelerated electrons which subsequently emit synchrotron radiation. Most of these studies are done with hydrodynamic shock waves. In the presence of a magnetic field, the wake region will also contain multiple shock waves [17]. So it is highly likely that the electrons moving through these magnetized wakes will emit synchrotron radiation.
The following basic properties of the wake need to be taken into account while studying this synchrotron radiation. The particle number density in the wake is inhomogeneous, because different particles in the wake move differently in the metric of the Abelian Higgs string. Depending on their mass and angular momentum, particles appear to cluster closer to the string due to the presence of bound orbits [22]. Due to the asymmetry between the right- and left-handed neutrinos, a neutrino current is also generated. This neutrino current is oscillating in nature, and the number density of the neutrinos depends on their distance from the string [9]. The wake structure is therefore very inhomogeneous. Apart from these inhomogeneities, the temperatures in the pre-shock and post-shock regions are also quite different. The temperature in the post-shock regime is given by
\[T_{2}=\frac{p_{2}n_{1}}{p_{1}n_{2}}T_{1} \tag{3}\]
Here the suffix 1 is used for the pre-shock quantities, while the suffix 2 is used for the post-shock quantities. In the next section, we present our model for determining the synchrotron radiation from the wakes of these Abelian Higgs cosmic strings. Using these basic properties of the inhomogeneities in the wake, we obtain the spectrum of the synchrotron radiation emitted by electrons crossing the magnetized wake of an Abelian Higgs string.
## III Synchrotron radiation from electrons in the wake
As described in the previous section, the electron distribution in the cosmic string wake is inhomogeneous. There are regions which have a higher density of electrons and regions which have a lower density of electrons. Since these electrons are accelerated by the motion of the shock generated in the string's wake, they will lose energy as synchrotron radiation. For electrons, the peak frequency at which this loss happens is given by,
\[\nu_{c}\sim\frac{eB\gamma^{2}}{2\pi mc} \tag{4}\]
Here the magnetic field \(B\) is considered to be homogeneous over the mean free path of the electrons. The average power that is radiated by the electrons is given by,
\[\langle P^{syn}(\nu)\rangle=\frac{\sqrt{3}e^{3}B}{m_{e}c^{2}}\int_{1}^{\infty}d \gamma N_{e}(\gamma)R(\alpha) \tag{5}\]
Here \(\gamma\) is the relativistic Lorentz factor and \(\alpha=\frac{\nu}{\nu_{c}}\). \(N_{e}(\gamma)\) is the distribution of electrons which are moving across the cosmic string wake, \(R(\alpha)\) is a function of a combination of Bessel functions.
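For orientation, Eq. (4) is easy to evaluate directly. The sketch below uses CGS constants; the micro-gauss field strength and the Lorentz factor are illustrative assumptions for a weak wake field, not values fixed by the text.

```python
import math

E_CGS = 4.803e-10   # electron charge [esu]
M_E   = 9.109e-28   # electron mass [g]
C     = 2.998e10    # speed of light [cm/s]

def nu_c(B, gamma):
    """Peak synchrotron frequency of Eq. (4): e*B*gamma^2 / (2*pi*m_e*c)."""
    return E_CGS * B * gamma**2 / (2.0 * math.pi * M_E * C)

# A micro-gauss field and gamma ~ 10 already give nu_c ~ 10^2 Hz,
# the low end of the frequency range quoted earlier.
print(f"{nu_c(B=1e-6, gamma=10.0):.2e} Hz")
```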
One of the simplest models used for obtaining the spectrum is the "Blob model" or the standard one zone model. Here it is assumed that the random magnetic field fills up a spherical volume of radius \(r_{b}\) and comoving volume of \(V_{b}=\frac{4\pi r_{b}^{3}}{3}\). The nonthermal relativistic electrons are assumed to be uniformly distributed throughout the blob with an isotropic pitch angle distribution. The electrons have a Lorentz factor distribution given by \(N_{e}(\gamma^{\prime})\) where \(\gamma^{\prime}\) is the comoving Lorentz factor. For this model, the synchrotron radiation is given by,
\[\nu F_{\nu}^{syn}=\frac{\delta_{D}^{4}\nu^{\prime}\langle P^{syn}(\nu^{ \prime})\rangle}{4\pi d_{L}^{2}} \tag{6}\]
The Lorentz factor determines the Doppler factor \(\delta_{D}\) for the relativistic outflow, defined by,

\[\delta_{D}=\frac{1}{\gamma_{2}(1-\beta\cos\theta)} \tag{7}\]

Here \(\beta=\frac{v_{f}}{c}\) and \(\theta\) is the viewing angle, which is very small. The other important input for an observable is the luminosity distance, which depends on the redshift of the object. The luminosity distance \(d_{L}\) is given by,
\[d_{L}=d_{A}(1+z)^{2} \tag{8}\]
where \(z\) is the redshift and \(d_{A}\) is the angular size distance, i.e., the ratio of the transverse extent of the object to the angle it subtends in the sky. Generally, when studying synchrotron radiation from jets or other astrophysical objects, blobs of electrons present in the jets are considered to emit the synchrotron radiation. Here too, we assume that there is a distribution of electrons moving across the cosmic string wake, and that these electrons have a power-law distribution, given by,
\[N_{e}(\gamma)=K_{e}\frac{1}{4\pi}\gamma^{-p}H(\gamma;\gamma_{1},\gamma_{2}) \tag{9}\]
Here \(K_{e}\) is the normalization constant. The synchrotron radiation emitted by the electrons for an isotropic pitch-angle distribution is then given by,
\[\nu F_{\nu}^{syn}(p)=\frac{3^{(p+2)/2}}{2^{(p+1)/2}}a(p)\frac{4}{3}c\sigma_{T}U_ {B}K_{e}\left(\frac{\nu}{\nu_{B}}\right)^{(3-p)/2} \tag{10}\]
Here \(\sigma_{T}\) is the Thomson cross section, \(U_{B}=\frac{B^{2}}{8\pi}\), \(\nu_{B}=\frac{eB}{2\pi m_{e}c}\). For relativistic shocks \(p\approx 2\) and \(a(p)=0.1032\). Details of these calculations are available in ref.[23].
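The optically thin segment of the spectrum can be transcribed directly from Eq. (10). The sketch below does so in CGS units; the field strength and the normalization \(K_{e}\) are placeholders to be set by the electron distribution of the wake.

```python
import math

SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
E_CGS   = 4.803e-10   # electron charge [esu]
M_E     = 9.109e-28   # electron mass [g]
C       = 2.998e10    # speed of light [cm/s]

def nu_F_nu_syn(nu, B, K_e, p=2.0, a_p=0.1032):
    """nu*F_nu of Eq. (10) for an isotropic power-law electron distribution."""
    U_B  = B**2 / (8.0 * math.pi)                    # magnetic energy density
    nu_B = E_CGS * B / (2.0 * math.pi * M_E * C)     # cyclotron frequency
    pref = 3.0**((p + 2.0) / 2.0) / 2.0**((p + 1.0) / 2.0) * a_p
    return (pref * (4.0 / 3.0) * C * SIGMA_T * U_B * K_e
            * (nu / nu_B)**((3.0 - p) / 2.0))

print(nu_F_nu_syn(nu=1e11, B=1e-6, K_e=1.0))   # placeholder inputs
```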
The python module AGNPY [25] has been developed based on the blob model and has been tested for different cases. The primary inputs come from the particular source being studied, and the final output spectrum depends on the electron distribution, the Lorentz factor \(\gamma\), the Doppler factor, etc. We obtain these input parameters for the case of the cosmic string and use the AGNPY module to obtain the final spectrum.
We now briefly describe our inputs to the module and then present the final spectrum obtained for various parameters of our model. In the blob model, the radius of the blob is an input parameter. We model the source such that the radius of the blob is smaller than the width of the shock. The width of the wake at \(z=30\) is approximately \(2.3\times 10^{-4}\,\mathrm{Mpc}\). Since we obtain the synchrotron spectrum for values of \(z<30\), we take the radius of the blob to be \(R_{B}<8\times 10^{20}\) cm. We find, however, that the spectrum does not depend on the exact value of the radius (the volume cancels out in the final expression), so we keep it at \(R_{B}\sim 10^{16}\) cm. The most important parameter associated with the cosmic string shock is the Lorentz factor of the accelerated electrons. In the post-shock region this is given by \(4\gamma_{2}=\frac{n_{2}}{n_{1}}\). For cosmic strings we have [26],
\[\frac{n_{2}}{n_{1}}=\frac{16\pi G\mu v_{f}^{2}}{3\sqrt{v_{f}^{2}-v_{s}^{2}}}+1 \tag{11}\]
Here \(v_{f}\) is the velocity of the string and \(v_{s}\) is the velocity of the shock wave. We can therefore obtain the Lorentz factor for the emission,
\[\gamma_{2}=\frac{1}{4}\left[\frac{16\pi G\mu v_{f}^{2}}{3\sqrt{v_{f}^{2}-v_{s} ^{2}}}+1\right] \tag{12}\]
Since the value of \(4\pi G\mu\sim 10^{-5}\), the value of \(\gamma_{2}\sim 0.25\). Since the strings are assumed to move with velocities close to that of light, the value of \(\delta_{D}\) depends on the Lorentz factor. So this is an important parameter in the emission spectrum of the synchrotron radiation. We have used it as a parameter and used different values of the redshift to obtain the final spectrum. Since radiation has been observed only from relatively low-redshift objects, we keep the maximum of \(z\) at 30.
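The post-shock Lorentz factor of Eq. (12) is equally simple to evaluate. The sketch below reproduces the \(\gamma_{2}\sim 0.25\) estimate quoted above (the string and shock velocities are illustrative choices); \(\gamma_{2}\), together with the Doppler factor of Eq. (7), is then what we feed to the AGNPY blob model.

```python
import math

def gamma_2(four_pi_G_mu, v_f, v_s):
    """Eq. (12); note that 16*pi*G*mu = 4 * (4*pi*G*mu)."""
    return 0.25 * (4.0 * four_pi_G_mu * v_f**2
                   / (3.0 * math.sqrt(v_f**2 - v_s**2)) + 1.0)

def doppler_factor(gamma2, beta, theta):
    """Eq. (7); beta = v_f/c and theta is the (small) viewing angle."""
    return 1.0 / (gamma2 * (1.0 - beta * math.cos(theta)))

g2 = gamma_2(1e-5, v_f=0.9, v_s=0.4)     # ~0.25, as quoted in the text
print(g2, doppler_factor(g2, beta=0.9, theta=0.01))
```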
## IV Results
We now present the synchrotron spectrum for the different parameters discussed in the previous section. In fig. 1, we show the plots corresponding to different values of the parameters \(\gamma_{2}\) and \(\delta_{D}\). They are plotted for a redshift value of \(z=1\). The redshift values are limited by the observational component, so, as mentioned before, the largest redshift we have considered is \(z=30\) while the smallest is \(z=0.069\). We have also plotted the same graphs for different values of the redshift parameter in fig. 2. We see that the turnover frequency is about 100 GHz. The spectral break comes after \(10^{11}\) GHz. As expected, we find that the spectral break shifts to lower frequencies as we move back in time. The synchrotron spectrum obtained can be fit with three distinct power laws. The initial low-frequency self-absorption part has a power-law exponent of \(\sim 1.24\); this is followed by the optically thin emission region, which has a very low spectral index. This is the region of the spectrum where the relativistic electrons are scattered multiple times. Finally, the spectrum dips very sharply in the high
Figure 1: The synchrotron spectrum for different values of the \(\gamma\) parameter at redshift \(z=1\).
frequency range. This is different from the usual spectrum obtained from Gamma Ray Bursts, where the curve is more bell-shaped.
Galactic radiation has been observed at such frequencies in several cases. Due to the high sensitivity of the all-sky surveys, many sources of radiation have been detected. Not all of these sources can be associated definitely and unambiguously with known source classes such as Active Galactic Nuclei (AGN). There are some unidentified sources which are being studied too [27]. We searched the catalogues to determine whether there are any unidentified sources in the frequency range of the synchrotron spectrum that we have obtained from cosmic string wakes. In the infrared (IR) region, the Wide-field Infrared Survey Explorer (WISE) data has several point sources attributed to blazars which are also in the range of the spectrum that we have obtained. The similarity to blazars comes from the fact that the gamma-ray emission in blazars has a geometry very similar to the expected gamma-ray emission from cosmic string wakes. Also, the emission in blazars may be due to shock-shock collisions, magnetic reconnection or turbulence, all three of which can occur in cosmic string wakes. It would thus be difficult to distinguish a blazar from a cosmic string wake. One point which distinguishes the two spectra is that a blazar usually has two broadly peaked components, whereas the cosmic string wake
Figure 2: The synchrotron spectrum for different values of the redshift parameter at \(\gamma=1\).
will have only one broad peak.
We now plot a few sources from various observations in fig. 3 to show that the spectra we have obtained are within the scope of current observations. The peak value of the flux is between \(10^{-13}\) and \(10^{-12}\,\mathrm{erg\,cm^{-2}\,s^{-1}}\), and it is located over a broad range of frequencies. We used the WISE catalogue to search for sources of infrared radiation on this flux scale. In fig. 3, we have plotted a few points from the WISE catalogue [28] on the spectrum obtained for \(\gamma=1\). Though the initial rise of the synchrotron radiation is not as steep, with a power-law behaviour of about 1.2 in the optically thick region, some of the data points are close to the obtained spectrum. Similarly, we have included observed fluxes from the SWIFT [29] and the GALEX catalogues [30]. We have used the data that is available online and find that different surveys in different frequency bands have observed fluxes in this range. This seems to indicate that a more detailed study of synchrotron radiation in cosmic string wakes may lead to results matching experimental data.
Figure 3: The synchrotron spectrum with data points from the WISE, SWIFT and GALEX catalogues.
## V Summary and conclusions
In this work we have studied the synchrotron radiation emitted by relativistic electrons moving in a cosmic string wake. The narrow width of the wake determines the Lorentz factor \(\gamma\) of the accelerated electrons. We obtain the spectrum for different values of the \(\gamma\) factor using a one-blob model at different values of the redshift. Unlike the Gamma Ray Bursts studied previously, the \(\gamma\) factor is limited to much lower values in this case. A cosmic string wake cannot generate \(\gamma\) factors ranging from \(10^{4}-10^{11}\), so our results are for values of \(\gamma\) ranging from \(0.1-10\). We have assumed that the overall magnetic field is homogeneous over the width of the wake. Though it is possible that the magnetic field will have some small-scale fluctuations due to turbulence in the plasma, we have not considered such details in the current work.
We have found that the frequency spectrum obtained from the relativistic non-thermal electrons moving in the wake of a cosmic string covers a wide range of frequencies. There is a broad region in the spectrum due to the multiple scattering of the electrons. The spectrum can be fitted with three separate power-law exponents, and there is only one broad peak. The nature of the spectrum thus differs from the spectrum usually obtained from blazars or AGNs. Since there are still quite a few unidentified sources of radiation in the catalogues of the all-sky surveys, we have attempted to find some of the data for point sources in the given frequency range. We do find that several of the surveys (WISE, SWIFT and GALEX) have unknown sources with similar flux in the range of frequencies covered by the cosmic string wake.
We plan to extend this study further to include more details about the cosmic string wake in the modelling of the synchrotron radiation. Currently we have used a rather simple one-blob model which simplifies the actual geometry of the wake structure. We have also used a homogeneous magnetic field in our calculation, which may not hold for the magnetic field in the cosmic string case. It is quite possible that, after the generation of the seed field in the wake of the cosmic string, the evolution of the field by turbulence leads to a field randomized over very short length scales. This would change the final spectrum of the synchrotron radiation. We hope to look at all these details in a future work, so as to obtain a more distinct picture of the synchrotron radiation from electrons moving in cosmic string wakes.
## Acknowledgments
This research was carried out on the computational facility set up from funds given by the DST- SERB Power Grant no. SPG/2021/002228 of the Government of India. S.N acknowledges financial support from CSIR fellowship No. 09/414(2001)/2019-EMR-I given by the Human Resource Development, Government of India. D.K acknowledges financial support from DST- SERB Power Grant no. SPG/2021/002228 of the Government of India. The authors would like to thank Abhisek Saha for help with the numerical codes and Sayantan Bhattacharya and Susmita Barman for suggestions and discussions regarding the experimental results.
|
2302.00135 | Durable Algorithms for Writable LL/SC and CAS with Dynamic Joining | We present durable implementations for two well known universal primitives --
CAS (compare-and-swap), and its ABA-free counter-part LLSC (load-linked,
store-conditional). All our implementations are: writable, meaning they support
a Write() operation; have constant time complexity per operation; allow for
dynamic joining, meaning newly created processes (a.k.a. threads) of arbitrary
names can join a protocol and access our implementations; and have adaptive
space complexities, meaning the space use scales in the number of processes $n$
that actually use the objects, as opposed to previous protocols which are
designed for a maximum number of processes $N$. Our durable Writable-CAS
implementation, DuraCAS, requires $O(m + n)$ space to support $m$ objects that
get accessed by $n$ processes, improving on the state-of-the-art $O(m + N^2)$.
By definition, LLSC objects must store "contexts" in addition to object values.
Our Writable-LLSC implementation, DuraLL, requires $O(m + n + C)$ space, where
$C$ is the number of "contexts" stored across all the objects. While LLSC has
an advantage over CAS due to being ABA-free, the object definition seems to
require additional space usage. To address this trade-off, we define an
External Context (EC) variant of LLSC. Our EC Writable-LLSC implementation is
ABA-free and has a space complexity of just $O(m + n)$.
To our knowledge, we are the first to present durable CAS algorithms that
allow for dynamic joining, and our algorithms are the first to exhibit adaptive
space complexities. To our knowledge, we are the first to implement any type of
durable LLSC objects. | Prasad Jayanti, Siddhartha Jayanti, Sucharita Jayanti | 2023-01-31T22:53:54Z | http://arxiv.org/abs/2302.00135v1 | # Durable Algorithms for Writable LL/SC and CAS with Dynamic Joining
###### Abstract
We present durable implementations for two well known universal primitives--CAS (compare-and-swap), and its ABA-free counter-part LLSC (load-linked, store-conditional). All our implementations are: _writable_, meaning they support a Write() operation; have _constant time complexity_ per operation; allow for _dynamic joining_, meaning newly created processes (a.k.a. threads) of arbitrary names can join a protocol and access our implementations; and have _adaptive space complexities_, meaning the space use scales in the number of processes \(n\) that actually use the objects, as opposed to previous protocols which are designed for a maximum number of processes \(N\). Our durable Writable-CAS implementation, DuraCAS, requires \(O(m+n)\) space to support \(m\) objects that get accessed by \(n\) processes, improving on the state-of-the-art \(O(m+N^{2})\). By definition, LLSC objects must store "contexts" in addition to object values. Our Writable-LLSC implementation, Dura.L., requires \(O(m+n+C)\) space, where \(C\) is the number of "contexts" stored across all the objects. While LLSC has an advantage over CAS due to being ABA-free, the object definition seems to require additional space usage. To address this trade-off, we define an _External Context (EC)_ variant of LLSC. Our EC Writable-LLSC implementation is ABA-free and has a space complexity of just \(O(m+n)\).
To our knowledge, we are the first to present durable CAS algorithms that allow for dynamic joining, and our algorithms are the first to exhibit adaptive space complexities. To our knowledge, we are the first to implement any type of _durable_ LLSC objects.
## 1 Introduction
The advent of _Non-Volatile Memory (NVM)_[22] has spurred the development of durable algorithms for the _crash-restart model_. In this model, when a process \(\pi\) crashes, the contents of memory _persist_ (i.e., remain unchanged), but \(\pi\)'s cache and CPU registers lose their contents, and its program counter is set to a default value upon restart. To understand the difficulty that arises from losing register contents, suppose that \(\pi\) crashes at the point of executing a hardware CAS instruction, \(r\leftarrow\texttt{Cas}(X,old,new)\), on a memory word \(X\) and receiving the response into its CPU register \(r\). When \(\pi\) subsequently restarts, \(\pi\) cannot tell whether the crash occurred before or after the CAS executed, and if the crash occurred after the CAS, \(\pi\) cannot tell whether the CAS was successful or not. Researchers identified this issue and proposed software-implemented _durable objects_[23, 5], which allow a restarted process to _recover_ from its crash and _detect_ the result of its last operation. This is done by exposing two additional methods, \(\texttt{Recover}()\) and \(\texttt{Detect}()\). The rapid commercial viability of byte-addressable, dense, fast, and cheap NVM chips has made efficient durable object design important.
**Writable and non-Writable CAS** Recently, there has been a lot of research on implementing durable CAS objects because they are widely employed in practice and are universal; any durable object can be implemented from durable CAS objects [20, 23, 7]. Formally, the state of a CAS object \(X\) is simply its value, and the operation semantics are as follows:
* \(X.\texttt{Cas}(old,new)\): if \(X=old\), sets \(X\) to \(new\) and returns _true_; otherwise, returns _false_.
* \(X.\)Read(): returns the value of \(X\).
* \(X.\)Write\((new)\): sets \(X\) to \(new\) and returns _true_.
If the object supports all three operations, it is a _Writable-CAS (W-CAS)_ object; if it does not support Write(), it is a _non-Writable-CAS (nW-CAS)_ object.
**CAS's ABA problem and LLSC** Although CAS objects are powerful tools in concurrent computing, they also have a significant drawback called the _ABA-problem_ [14]. Namely, if a process \(\pi\) reads a value \(A\) in \(X\) and executes \(X.\)Cas\((A,C)\) at a later time, this CAS will succeed _even if_ the value of \(X\) changed between \(\pi\)'s operations, from \(A\) to \(B\) and then back to \(A\). So while any object _can_ be implemented from CAS, the actual process of designing an algorithm to do so becomes difficult. In the non-durable setting, the ABA-problem is often overcome by using the hardware's double-width CAS primitive--in fact, "CAS2 [double-width CAS] operation is the most commonly cited approach for ABA prevention in the literature" [14]. However, all known durable CAS objects, including ours, are only one-word wide--even as they use hardware double-width CAS [5, 7, 6]. Against this backdrop, the durable LLSC objects presented in this paper serve as an invaluable alternate tool for ABA prevention.
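A toy, single-threaded illustration of the hazard (the Cell class is only a stand-in for one CAS-able memory word, with the interleaving of the second process simulated inline):

```python
class Cell:
    """Toy model of one CAS-able memory word (no real atomicity)."""
    def __init__(self, v):
        self.v = v
    def cas(self, old, new):
        if self.v == old:
            self.v = new
            return True
        return False

X = Cell("A")
a = X.v                 # pi reads A ...
X.cas("A", "B")         # ... another process changes A -> B ...
X.cas("B", "A")         # ... and back: B -> A
print(X.cas(a, "C"))    # True: pi's CAS succeeds despite the A->B->A churn
```

An LL/SC pair would fail in the same scenario, since each intervening successful SC resets the context.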
LLSC objects are alternatives to CAS objects that have been invaluable in practice, since they are universal and ABA-free [30]. The state of an LLSC object \(Y\) is a pair \((Y.val,Y.context)\), where \(Y.val\) is the _value_ and \(Y.context\) is a set of processes (initially empty). Process \(\pi\)'s operations on the object have the following semantics:
* \(Y.\)LL(): adds \(\pi\) to \(Y.context\) and returns \(Y.val\).
* \(Y.\)VL(): returns whether \(\pi\in Y.context\).
* \(Y.\)SC(\(new\)): if \(\pi\in Y.context\), sets \(Y\)'s value to \(new\), resets \(Y.context\) to the empty set and returns _true_; otherwise, returns _false_.
* \(Y.\)Write\((new)\) changes \(Y\)'s value to \(new\) and resets \(Y.context\) to the empty set.
The object is _Writable (W-LLSC)_ or _non-Writable (nW-LLSC)_ depending on whether the Write() operation is supported.
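Restated as executable pseudocode, this sequential state machine looks as follows; it is a reference model only, capturing the operation semantics above while ignoring concurrency, crashes, and durability.

```python
class SeqLLSC:
    """Sequential reference for the LLSC state (value, context)."""
    def __init__(self, v):
        self.val, self.context = v, set()

    def ll(self, pi):
        self.context.add(pi)
        return self.val

    def vl(self, pi):
        return pi in self.context

    def sc(self, pi, new):
        if pi in self.context:
            self.val, self.context = new, set()
            return True
        return False

    def write(self, new):
        self.val, self.context = new, set()
```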
To our knowledge, there are no earlier durable implementations of ABA-free CAS-like objects, including LLSC.
**Previous work and the state-of-the-art** CAS and LLSC objects share close ties, but they also pose different implementational challenges. In the non-durable context, it is well known that non-writable LLSC (nW-LLSC) objects can be implemented from nW-CAS objects and vice versa in constant time and space. The simple implementation of nW-LLSC from nW-CAS, however, requires packing a value-context pair into a single nW-CAS object [4]. Solutions that implement a full-word nW-LLSC from a full-word nW-CAS require a blow-up in time complexity, space complexity, or both [29, 16, 32, 30, 10]. Writability complicates the relationship further. Even in the non-durable context, reductions between W-CAS and W-LLSC have resulted in a blow-up in space complexity and fixing the number of processes _a priori_ [24]. Writability can sometimes be added to an object that is non-writable, but this leads to an increase in space complexity [2].
There are no previous works on durable LLSC. Three previous works have implemented durable CAS objects, all from the hardware CAS instruction: Attiya et al. [5], Ben-Baruch et al. [6], and Ben-David et al. [7]. All three papers provide implementations for a fixed set of \(N\) processes with _pid_s \(1,\ldots,N\), and achieve constant time complexity per operation. Attiya et al. pioneered this line of research with a durable nW-CAS implementation, which achieves constant time complexity and requires \(O(N^{2})\) space per object. Ben-Baruch et al. present an nW-CAS implementation with optimal bit complexity. Their algorithm, however, requires packing \(N\) bits and the object's value into a single hardware variable. Thus, if the value takes 64 bits, then only 64 pre-declared processes can access this object. (Current commodity multiprocessors range up to 224 cores [1], and can support orders-of-magnitude more threads.) Ben-David et al. designed an algorithm for nW-CAS, and then leveraged Aghazadeh et al.'s writability transformation [2] to enhance that algorithm to include a Write operation, thereby presenting the only previous Writable-CAS implementation. Their nW-CAS algorithm uses a pre-allocated help-array of length \(O(N)\), and their W-CAS algorithm uses an additional hazard-pointer array of length \(O(N^{2})\). Both arrays can be shared across objects, thus the implementation space complexities for \(m\) objects are \(O(m+N)\) and \(O(m+N^{2})\), respectively.
**Our contributions** We present four wait-free, durably linearizable implementations: DuraCAS for Writable-CAS, DuraLL for Writable-LLSC, DurEC for External Context (EC) nW-LLSC, and DurECW for EC W-LLSC (the last two are described in the section below). Our implementations achieve the following properties:
1. Constant time complexity: all operations including recovery and detection run in \(O(1)\) steps.
2. Dynamic Joining: dynamically created processes of arbitrary names can use our objects.
3. Full-word size: Our implementations support full-word (i.e., 64-bit) values.
4. Adaptive Space Complexity: We quantify space complexity by the number of memory words needed to support \(m\) objects for a total of \(n\) processes. The DuraCAS, DurEC, and DurECW implementations require just constant memory per process and per object, and thus each has a space complexity of \(O(m+n)\). Since DuraLL must remember contexts, its space complexity is \(O(m+n+C)\), where \(C\) is the number of contexts that must be remembered1. Footnote 1: \(C\) is the number of process-object pairs \((\pi,\mathcal{O})\), where \(\pi\) has performed an LL() operation on \(\mathcal{O}\), and its last operation on \(\mathcal{O}\) is not an SC() or Write(). A trivial upper bound is \(C\leq nm\).
We believe that our definitions and implementations of the External Context LLSC objects--which are ABA-free, space-efficient alternatives to CAS and LLSC--are of independent interest in the design of both durable and non-durable concurrent algorithms.
To our knowledge, we are the first to present durable CAS algorithms that allow for dynamic joining, and our algorithms are the first to exhibit adaptive space complexities. To our knowledge, we are the first to consider any type of _durable_ LLSC objects.
**Our approach** We implement universal primitives that allow dynamic joining of new processes, have an _adaptive space complexity_ that is constant per object and per process, and give an ABA-free option, while simultaneously achieving constant time complexity. Just like our predecessors, all our implementations rely on just the hardware double-width CAS instruction for synchronization.
A keystone of our approach is the observation that durable nW-LLSC--due to its ABA-freedom--serves as a better stepping stone than even durable nW-CAS on the path from hardware CAS to durable W-CAS. Perhaps less surprisingly, durable nW-LLSC is a great stepping stone towards durable W-LLSC also. However, by definition LLSC objects require more space to remember context for each process--an inherent burden that CAS objects do not have. Thus, using nW-LLSC objects in the construction of our W-CAS would lead to a bloated space complexity. To avoid this drawback, we define an _External Context (EC)_ variant of LLSC. An EC LLSC object is like an LLSC object, except that its context is returned to the process instead of being maintained by the object. Thus, our EC nW-LLSC implementation, DurEC, is the building block of all our other implementations.
The state of an EC LLSC object \(Y\) is a pair \((Y.val,Y.seq)\), where the latter is a sequence number context. Process \(\pi\)'s operations on the object have the following semantics:
* \(Y.\text{ECLL}()\): returns (\(Y.val\), \(Y.seq\)).
* \(Y.\text{ECVL}(s)\): returns whether \(Y.seq=s\).
* \(Y.\text{ECSC}(s,new)\): if \(Y.seq=s\), sets \(Y\)'s value to \(new\), increases \(Y.seq\), and returns _true_; otherwise, returns _false_.
* \(Y.\text{Write}(new)\): changes \(Y\)'s value to \(new\) and increases \(Y.seq\).
The object is _Writable (EC W-LLSC)_ or _non-Writable (EC nW-LLSC)_ depending on whether the Write() operation is supported.
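As with LLSC above, the EC semantics admit a small sequential reference model; note that no per-process state is kept, the context being just a sequence number handed back to the caller, which is exactly what keeps the space usage down.

```python
class SeqEC:
    """Sequential reference for the External Context state (val, seq)."""
    def __init__(self, v):
        self.val, self.seq = v, 0

    def ecll(self):
        return self.val, self.seq

    def ecvl(self, s):
        return self.seq == s

    def ecsc(self, s, new):
        if self.seq == s:
            self.val, self.seq = new, self.seq + 1
            return True
        return False

    def write(self, new):
        self.val, self.seq = new, self.seq + 1
```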
We design durable implementations of EC W-LLSC and W-CAS, called DurECW and DuraCAS, respectively; each implementation uses two DurEC base objects. We implement our durable W-LLSC algorithm, DuraLL, by simply internalizing the external contexts of a DurECW. All our implementations overcome the need for hazard-pointers and pre-allocated arrays for helping, in order to allow dynamic joining and achieve adaptive space complexity. Key to eliminating these arrays are pointer-based identity structures called _handles_, which we showcase in the next section. Figure 1 illustrates the differences between our approach and Ben-David et al.'s.
### Other Related Work
Byte-addressable non-volatile memory laid the foundation for durable objects [22]. Research on durable objects has spanned locks [19, 33, 27, 28, 26, 25, 18, 11, 12, 15, 13], and non-blocking objects--including queues [17], counters [5], registers [5, 6], and CAS objects [5, 6, 7]. The correctness criterion for non-blocking objects, _durable linearizability_, was first introduced for the full-system-crash model by Izraelevitz et al. [23], and adapted to the individual process crash-restart model used in this paper by Attiya et al. [5]. Several other works have explored variants of the durable linearizability definition [17, 3, 8, 31, 9, 7].
## 2 Model
We use the _crash-restart model_ with independent process crashes [23, 19, 5, 17, 7, 6]. In this model, asynchronous processes communicate by applying atomic operations to Non-Volatile Memory (NVM). Our algorithms use the read and compare-and-swap (CAS) operations. Any process may crash at any time and restart at any later time, and the same process may crash and restart any number of times. When a process crashes, its registers, including its program counter, lose their contents (i.e., they are set to arbitrary values), but the contents of the NVM are unaffected.
A _durable implementation_ of an object \(\mathcal{O}\) provides one method for each operation supported by \(\mathcal{O}\) and two additional methods--\(\mathcal{O}\).Recover() and Detect(). If a process \(p\) invokes a method for an operation and completes the method without crashing, the operation is required to take effect atomically at some instant between the method's invocation and completion. On the other hand, if \(p\) crashes while executing the operation, when \(p\) subsequently restarts, it is required to execute \(\mathcal{O}\).Recover()--if it crashes while executing the recover method, it must re-execute \(\mathcal{O}\).Recover() when it restarts. The crashed operation is considered _complete_ when \(\mathcal{O}\).Recover() completes. The correctness condition for these methods is durable linearizability [23, 5], which generalizes linearizability [21] to the crash-restart model, and is stated as follows. The crashed operation is required to either have no effect at all or take effect atomically at some instant between when the method for the operation is invoked and when the recover method completes.
In addition to being durable, the objects implemented in this paper are also _detectable_[17]. Detectability provides a means for processes to distinguish whether their crashed operations (that have subsequently been completed via the recover method) have taken effect or not, and what the associated response was. Some operations, such as read or a failed CAS, can safely be repeated, regardless of whether they took effect [5, 7]. On the other hand, a write or a successful CAS that changed the value of the object cannot be repeated safely; such visible operations should be detected. The Detect() method facilitates detectability. A call to the Detect() method by a process \(p\) returns a pair \((d,r)\), where \(d\) is a _detection value_ corresponding to the last detected operation by \(p\) and \(r\) is that operation's response. Specifically, if \(p\) calls Detect() twice--just before executing an operation and just after completing that operation--and these successive calls to Detect() return \((d_{1},r_{1})\) and \((d_{2},r_{2})\) respectively, then the following two properties are satisfied:
1. If \(d_{2}>d_{1}\), then the operation took effect and its response is \(r_{2}\).
2. Otherwise, \(d_{2}=d_{1}\) and the operation is safe to repeat.
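The following sketch shows the client-side pattern that these two properties support; nvm_store and nvm_load are hypothetical helpers (not part of the interface above) that persist the pre-operation detection value across crashes.

```python
_nvm = {}   # hypothetical per-process NVM slots, modeled here as a dict

def nvm_store(handle, d):
    _nvm[id(handle)] = d

def nvm_load(handle):
    return _nvm[id(handle)]

def detectable_invoke(obj, handle, op, *args):
    """Normal path: persist the detection value, then run the operation."""
    d1, _ = obj.Detect(handle)
    nvm_store(handle, d1)
    return op(*args)            # the process may crash anywhere in here

def after_restart(obj, handle, op, *args):
    """Crash path: complete the crashed operation, then decide via Detect."""
    obj.Recover(handle)
    d1 = nvm_load(handle)
    d2, r2 = obj.Detect(handle)
    if d2 > d1:
        return r2               # property 1: the operation took effect
    return op(*args)            # property 2: safe to repeat
```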
Figure 1: A comparison of Ben-David et al.’s approach (top) and our approach (bottom): each box represents an implementation—the type of the implementation is in bold and its space complexity appears below the box. The names of our implementations appear in the box in SmallCaps. An arrow from A to B means that B is implemented using A.
## 3 Handles for dynamic joining and space adaptivity
When a process calls a method to execute an operation \(op\), the call is of the form \(op(p,args)\), where \(args\) is a list of \(op\)'s arguments and \(p\) identifies the calling process. The methods use \(p\) to facilitate helping between processes. In many algorithms, the processes are given \(pid\)s from 1 to \(N\), and \(p\) is the \(pid\) of the caller [5, 7]. In particular, \(p\) is used to index a pre-allocated helping array--in Ben-David et al.'s algorithm this helping array is of length \(N\), one location per process being helped; in Attiya et al.'s algorithm this helping array is of length \(N^{2}\), one location per helper-helpee pair. Helping plays a central role in detection, thus each process needs to have some area in memory where it can be helped; in fact, using the bit-complexity model, Ben-Baruch et al. proved that the space needed to support a detectable CAS object monotonically increases in the number of processes that access the object [6]. One of our goals in this paper, however, is to design objects that can be accessed by a dynamically increasing set of processes, which precludes the use of pre-allocated fixed-size arrays that are indexed by process IDs.
To eliminate the use of arrays for helping, we introduce pointer based structures called _handles_. We use handles to enable dynamic joining and achieve space adaptivity. A handle is a constant sized record in memory. The implementation provides a create-handle\(()\) method, which creates a new handle and returns a pointer \(p\) to it. When a process first wishes to access any of the implemented objects of a given type, it creates for itself a new handle by calling create-handle\(()\). From that point on, whenever the process calls any method on any of the implemented objects of that type, it passes in the pointer \(p\) of its handle instead of its pid, and other processes help it via the handle. This mechanism of handles helps us realize dynamic joining because any number of new processes can join at any time by creating handles for themselves; since the memory per handle is constant, and only the subset of processes that wish to access the implementation need to create handles, the mechanism facilitates space adaptivity.
## 4 The DurEC Building Block
In this section, we implement the DurEC algorithm for durable external context non-writable LLSC using hardware CAS. This building block will be central to all of the writable implementations in the remainder of the paper.
### Intuitive description of Algorithm DurEC
Each DurEC handle \(h\) is a reference to a record of two fields, _Val_ and _DetVal_, and each DurEC object \(\mathcal{O}\) is implemented from two hardware atomic CAS objects \(X\) and \(Y\), where \(X\) is a pair consisting of a handle and a sequence number, and \(Y\) is a pair consisting of a sequence number and a value. The algorithm maintains the DurEC object \(\mathcal{O}\)'s state in \(Y\), i.e., \(\mathcal{O}.seq=Y.seq\) and \(\mathcal{O}.val=Y.val\) at all times. This representation makes the implementation of ECLL and ECVL operations obvious: ECLL\((h)\) simply returns \(Y\) and ECVL\((h,s)\) returns whether \(Y.seq=s\). The complexity lies in the ECSC\((h,s,v)\) operation, which is implemented by the following sequence of steps:
1. If \(Y.seq\neq s\), it means \(\mathcal{O}.seq\neq s\), so the ECSC operation simply returns _false_. Otherwise, it embarks on the following steps, in an attempt to switch \(\mathcal{O}.val\) to \(v\) and \(\mathcal{O}.seq\) to a greater number.
2. Make \(v\) available for all by writing it in the _Val_ field of the ECSC operation's handle \(h\).
3. Pick a number \(\hat{s}\) that is bigger than both \(X.seq\) and \(h.DetVal\). (The latter facilitates detection.)
4. Publish the operation's handle along with a greater sequence number by installing \((h,\hat{s})\) in \(X\). If several ECSC operations attempt to install concurrently, only one will succeed. The successful one is the _installer_ and the others are _hitchhikers_.
5. The installer and the hitchhikers work together to accomplish two missions, the first of which is to increase the installer's _DetVal_ field to the number in \(X.seq\). This increase in the _DetVal_ field of its handle enables the installer to detect that it installed, even if the installer happens to crash immediately after installing.
6. The second mission is to forward the installer's operation to \(Y\). Since \(Y\) is where the DurEC object's state is held, the installer's operation takes effect only when it is reflected in \(Y\)'s state. Towards this end, everyone reads the installer's value \(v\), made available in the _Val_ field of the installer's handle back at Step (2), and attempts to switch \(Y.val\) to \(v\), simultaneously increasing \(Y.seq\) so that it catches up with \(X.seq\). Since all operations attempt this update of \(Y\), someone (not necessarily the installer) will succeed. At this point, \(X.seq=Y.seq\)
and \(Y.val=v\), which means that the installer's value \(v\) has made its way to \(\mathcal{O}.val\). So, the point where \(Y\) is updated becomes the linearization point for the installer's successful ECSC operation. The hitchhikers are linearized immediately after the installer, which causes their ECSC operations to "fail"--return _false_, without changing \(\mathcal{O}\)'s state--thereby eliminating the burden of detecting these operations.
7. If the installer crashes after installing, upon restart, in the Recover method, it does the forwarding so that the two missions explained above are fulfilled.
8. With the above scheme, all ECSC, ELL, and EVL operations, except those ECSC operations that install, are safe to return and hence, don't need detection. Furthermore, for each installing ECSC operation, the above scheme ensures that the _DetVal_ field of the installer's handle is increased, thereby making the operation detectable.
The formal algorithm is presented in Algorithm 1. The correspondence between the lines of the algorithm and the steps above is as follows. Lines 6 and 7 implement Steps 1 and 2, respectively. Steps 3 and 4, where the operation attempts to become the installer, are implemented by Lines 8 to 10. The operation becomes the installer if and only if the CAS at Line 10 succeeds, which is reflected in the boolean return value \(r\). The Forward method is called at Line 11 to accomplish the two missions described above. The first three lines of Forward (Lines 13 to 15) implement the first mission of increasing the _DetVal_ field of the installer's handle to \(X.seq\) (Step 5). Line 13, together with Lines 16 to 19, implements the second mission of forwarding the operation to \(Y\) (Step 6). The if-condition and the CAS' arguments at Line 18 ensure that \(Y\) is changed only if \(Y.seq\) lags behind \(X.seq\) and, if it lags behind, it catches up and \(Y.val\) takes on the installer's value. The Recover method simply forwards at Line 20, as explained in Step 7. The Detect method returns at Line 22 the value in the handle's _DetVal_ field, as explained in Step 8, along with _true_ (since only successful ECSC operations are detected).
### DurEC Proof Outline
The full proof of the DurEC algorithm is in Appendix A.1. Here we reproduce the key definitions and lemmas.
Let \(\mathcal{O}\) be a DurEC object implemented by the algorithm, and \(X\) and \(Y\) be atomic CAS objects that \(\mathcal{O}\) is implemented from. The following two types of events are of interest.
* An _install_ is a successful CAS operation on \(X\), executed by a ECSC\((h,s,v)\) operation \(\alpha\) at Line 10. We say \(\alpha\) installs and \(\alpha\) is an installer.
* A _move_ is a successful CAS operation on \(Y\), executed by a forward\((h)\) operation \(\alpha\) at Line 18. We say \(\alpha\) moves and \(\alpha\) is a mover.
**Lemma 4.1**.:
1. Installs and moves alternate, starting with an install.
2. If the latest event is a move or if no installs have occurred, then \(X.seq=Y.seq\). Otherwise (i.e., if the latest event is an install), \(X.seq>Y.seq\).
**Lemma 4.2**.: If \(X.seq>Y.seq\) at time \(t\) and a forward\((h)\) operation \(\alpha\) is started after \(t\) and \(\alpha\) completes without crashing, then a move occurs after \(t\) and at or before \(\alpha\)'s completion time.
**Lemma 4.3**.: If a ECSC\((h,s,v)\) operation \(\alpha\) installs at time \(t\), the first move after \(t\) occurs by the time \(\alpha\) completes.
**Lemma 4.4**.: If a ECSC\((h^{\prime},s,v)\) operation \(\alpha^{\prime}\) installs at time \(t^{\prime}\) and a forward\((h)\) operation \(\alpha\) moves at \(t\) and is the first to move after \(t^{\prime}\), then:
1. In the interval \((t^{\prime},t)\), \(X.hndl=h\), \(h.\textit{Val}=v\), and \(Y.seq=s\).
2. \(\alpha\) sets \(Y.val\) to \(v\).
```
class DurEC:
    instance variable (handle*, int) X     ▷ X is a pair (X.hndl, X.seq), stored in NVM
    instance variable (int, int) Y         ▷ Y is a pair (Y.seq, Y.val), stored in NVM
    struct handle { int DetVal; int Val }  ▷ both fields stored in NVM; Val is arbitrarily initialized

    static procedure CreateHandle()
 1:     return new handle{DetVal = 0}

    constructor DurEC(int initval)
 2:     X ← (null, 0)
 3:     Y ← (0, initval)

    procedure ECLL(handle* h)
 4:     return Y

    procedure ECVL(handle* h, int s)
 5:     return (Y.seq = s)

    procedure ECSC(handle* h, int s, int v)
 6:     if Y.seq ≠ s then return false
 7:     h.Val ← v
 8:     ĥ ← X.hndl
 9:     ŝ ← max(h.DetVal, s) + 1
10:     r ← Cas(X, (ĥ, s), (h, ŝ))
11:     forward(h)
12:     return r

    procedure forward(handle* h)
13:     x ← X
14:     ŝ ← x.hndl.DetVal
15:     if ŝ < x.seq then Cas(x.hndl.DetVal, ŝ, x.seq)
16:     v̂ ← x.hndl.Val
17:     y ← Y
18:     if y.seq < x.seq then Cas(Y, y, (x.seq, v̂))
19:     return

    procedure Recover(handle* h)
20:     forward(h)
21:     return

    static procedure Detect(handle* h)
22:     return (h.DetVal, true)
```
**Algorithm 1** The DurEC class for Durable, External Context nW-LLSC objects.
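For readers who prefer an executable rendering, the following is a rough Python model of Algorithm 1, with comments keyed to its line numbers. It is a sketch only: a single lock stands in for the atomicity of the hardware double-width CAS, tuples model word pairs, crashes and persistence are not modeled, and the guard for the pre-first-install state (when X.hndl is still null) is our addition.

```python
import threading

_atomic = threading.Lock()   # stands in for hardware (double-width) CAS atomicity

def cas(word, old, new):
    """word is a 1-element list modeling one CAS-able memory word."""
    with _atomic:
        if word[0] == old:
            word[0] = new
            return True
        return False

class Handle:
    def __init__(self):
        self.det_val = 0     # DetVal
        self.val = None      # Val, arbitrarily initialized

def cas_detval(h, old, new):
    with _atomic:
        if h.det_val == old:
            h.det_val = new
            return True
        return False

class DurEC:
    def __init__(self, initval):
        self.X = [(None, 0)]       # (X.hndl, X.seq)
        self.Y = [(0, initval)]    # (Y.seq, Y.val)

    def ecll(self, h):
        return self.Y[0]                                  # line 4

    def ecvl(self, h, s):
        return self.Y[0][0] == s                          # line 5

    def ecsc(self, h, s, v):
        if self.Y[0][0] != s:                             # line 6
            return False
        h.val = v                                         # line 7
        h_hat = self.X[0][0]                              # line 8
        s_hat = max(h.det_val, s) + 1                     # line 9
        r = cas(self.X, (h_hat, s), (h, s_hat))           # line 10
        self.forward(h)                                   # line 11
        return r                                          # line 12

    def forward(self, h):
        x_hndl, x_seq = self.X[0]                         # line 13
        if x_hndl is None:                                # our guard: nothing installed yet
            return
        s_hat = x_hndl.det_val                            # line 14
        if s_hat < x_seq:
            cas_detval(x_hndl, s_hat, x_seq)              # line 15
        v_hat = x_hndl.val                                # line 16
        y = self.Y[0]                                     # line 17
        if y[0] < x_seq:
            cas(self.Y, y, (x_seq, v_hat))                # line 18

    def recover(self, h):
        self.forward(h)                                   # line 20

    @staticmethod
    def detect(h):
        return h.det_val, True                            # line 22
```

For example, with o = DurEC(0) and h = Handle(), o.ecsc(h, 0, 5) returns _true_, after which o.ecll(h) returns (1, 5) and DurEC.detect(h) returns (1, _true_).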
We define a _hitchhiker_ as a ECSC() operation that does not install and returns at Line 12.
**Lemma 4.5**.: If \(\alpha\) is a hitchhiker ECSC() operation, a move occurs during \(\alpha\).
The next definition states how operations are linearized. A crashed operation is not linearized, unless it is a ECSC() operation that crashes after installing. Hitchhikers return _false_ at Line 12, so they are not crashed operations and are linearized.
**Definition 4.6** (Linearization).:
1. _If a_ ECSC__\((h,s,v)\) _operation_ \(\alpha\) _installs, it is linearized at the first move after_ \(\alpha\)_'s install._ _(Lemma_ 4.3 _guarantees that_ \(\alpha\) _is linearized before it completes.)_
2. _If a_ ECSC__\((h,s,v)\) _operation_ \(\alpha\) _is a hitchhiker, it is linearized at the earliest time_ \(t\) _during_ \(\alpha\) _such that a move occurs at_ \(t\)_. Furthermore, if_ \(\beta\) _is the installing_ ECSC__\(()\) _operation linearized at the same time_ \(t\)_,_ \(\alpha\) _is linearized_ after \(\beta\)_._ Remarks_: Lemma_ 4.5 _guarantees that_ \(\alpha\) _is linearized before it completes. Linearizing a hitchhiker after the installer ensures that the success of the installer's ECSC causes the hitchhiker's ECSC to fail without changing the object's state, thereby eliminating the burden of detecting the hitchhikers' ECSC operations._
3. _If a_ ECSC__\((h,s,v)\) _operation_ \(\alpha\) _returns at Line 6, it is linearized at Line 6._
4. \(A\) ECLL__\((h)\) _operation_ \(\alpha\) _is linearized at Line 4._
5. \(A\) ECVL__\((h,s)\) _operation_ \(\alpha\) _is linearized at Line 5._
The value of a DurEC object implemented by the algorithm changes atomically at the linearization points of successful ECSC() operations. The next lemma states that the algorithm maintains the DurEC object's state in \(Y\), and satisfies durable linearizability.
**Lemma 4.7** (Durable-linearizability of DurEC objects).: Let \(\mathcal{O}\) be a DurEC object implemented by the algorithm.
1. \((\mathcal{O}.seq,\mathcal{O}.val)=(Y.seq,Y.val)\) at all times.
2. Let \(\alpha\) be any \(\mathcal{O}.\textsc{ECSC}(h,s,v)\), \(\mathcal{O}.\textsc{ECLL}(h)\), or \(\mathcal{O}.\textsc{ECVL}(h)\) operation, and \(t\) be the time at which \(\alpha\) is linearized. Suppose that \(\mathcal{O}\)'s state is \(\sigma\) at \(t\) just before \(\alpha\)'s linearization (in case multiple operations are linearized at \(t\)), and \(\delta(\sigma,\alpha)=(\sigma^{\prime},r)\), where \(\delta\) is the sequential specification of a EC object. Then: 1. \(\mathcal{O}\)'s state changes to \(\sigma^{\prime}\) at time \(t\). 2. If \(\alpha\) completes without crashing, it returns \(r\). (Recall that if \(\alpha\) crashes and, upon restart, executes Recover(), the recover method does not return any response.)
Next we state a key lemma for proving the detectability of DurEC objects.
**Lemma 4.8**.:
1. If a ECSC(\(h,s,v\)) operation \(\alpha\) installs, then the value of \(h.\textit{DetVal}\) increases between \(\alpha\)'s invocation and completion.
2. For any handle \(h\), if \(h.\textit{DetVal}\) is changed at any time \(t\) by the execution of Line 15 by some forward\((h^{\prime})\) method (for some \(h^{\prime}\)), then \(X.hndl=h\) and \(h.pc\in\{13,14,15\}\).
3. If a ECSC(\(h,s,v\)) operation \(\alpha\) does not install, then the value of \(h.\textit{DetVal}\) is the same at \(\alpha\)'s invocation and completion.
**Lemma 4.9** (Detectability of DurEC objects).: Let \(\alpha\) be any operation executed on a DurEC object \(\mathcal{O}\) by a handle \(h\). Suppose that \((d_{1},r_{1})\) and \((d_{2},r_{2})\) are the values that Detect\((h)\) would return, if executed immediately before \(\alpha\) is invoked and immediately after \(\alpha\) completes, respectively. Then:
1. If \(\alpha\) is not an installing ECSC, it is safe to repeat and \(d_{2}=d_{1}\).
2. If \(\alpha\) is an installing ECSC, then \(d_{2}>d_{1}\) and \(r_{2}=\textit{true}\).
**Theorem 4.10**.: _Algorithm DurEC satisfies the following properties:_
1. _The objects implemented by the algorithm are durably linearizable (with respect to EC's sequential specification) and are detectable._
2. _All operations, including the Recover, Detect, Constructor, and CreateHandle methods, are wait-free and run in constant time._
3. _The algorithm supports dynamic joining: a new process can join in at any point in a run (by calling CreateHandle) and start creating DurEC objects or accessing existing DurEC objects._
4. _The space requirement is_ \(O(m+n)\)_, where_ \(m\) _is the actual number of DurEC objects created in the run, and_ \(n\) _is the actual number of processes that have joined in in a run._
A full proof of the DurEC algorithm is presented in Appendix A.1 (6 pages).
## 5 DurECW and DuraLL: durable Writable LLSC implementations
Using the _non-writable_ DurEC building block of the previous section, we design the _writable_ external context LLSC implementation DurECW in this section. With DurECW in hand, we obtain our standard durable writable-LLSC implementation DuraLL easily, by simply rolling the context into the object.
### Intuitive description of Algorithm DurECW
A DurECW object \(\mathcal{O}\) supports the Write operation, besides ECSC, for changing the object's state. Unlike a ECSC\((h,s,v)\) operation, which returns without changing \(\mathcal{O}\)'s state when \(\mathcal{O}.seq\neq s\), a Write\((h,v)\) must get \(v\) into \(\mathcal{O}.val\) unconditionally. In the DurECW algorithm, ECSC\(()\) operations help Write\(()\) operations, which prevents writes from being blocked by a continuous stream of successful ECSC\(()\) operations.
Each DurECW object \(\mathcal{O}\) is implemented from two DurEC objects, \(\mathcal{W}\) and \(\mathcal{Z}\), each of which holds a pair, where the first component is a sequence number \(seq\), and the second component is a pair consisting of a value \(val\) and a bit \(bit\). Thus, \(\mathcal{W}=(\mathcal{W}.seq,(\mathcal{W}.val,\mathcal{W}.bit))\) and \(\mathcal{Z}=(\mathcal{Z}.seq,(\mathcal{Z}.val,\mathcal{Z}.bit))\).
The DurECW handle \(h\) consists of two DurEC handles, \(h.\textit{Critical}\) and \(h.\textit{Casual}\). The use of two DurEC handles allows us to implement detectability. In particular, if Detect\((h)\) is called on a DurECW object, only the detect value (_DetVal_) of \(h.\textit{Critical}\) is returned. So intuitively, when a DurECW operation \(\alpha\) calls methods on \(\mathcal{W}\) or \(\mathcal{Z}\), it uses \(h.\textit{Critical}\) only if a successful call will make its own ECSC\(()\) or Write\(()\) operation visible. In all other cases \(\alpha\) uses \(h.\textit{Casual}\).
The algorithm maintains the DurECW object \(\mathcal{O}\)'s state in \(\mathcal{Z}\), i.e., \(\mathcal{O}.seq=\mathcal{Z}.seq\) and \(\mathcal{O}.val=\mathcal{Z}.val\) at all times. This representation makes the implementation of \(\mathcal{O}.\textsc{ECLL}()\) and \(\mathcal{O}.\textsc{ECVL}()\) operations obvious: \(\mathcal{O}.\textsc{ECLL}(h)\) simply returns \((\mathcal{Z}.seq,\mathcal{Z}.val)\) and ECVL\((h,s)\) returns whether \(\mathcal{Z}.seq=s\). The complexity lies in the implementation of \(\mathcal{O}.\textsc{Write}(h,v)\) and \(\mathcal{O}.\textsc{ECSC}(h,s,v)\) operations, which coordinate their actions using \(\mathcal{W}.bit\) and \(\mathcal{Z}.bit\). A write operation flips the \(\mathcal{W}.bit\) to announce to the ECSC operations that their help is needed to push the write into \(\mathcal{Z}\); once the write is helped, the \(\mathcal{Z}.bit\) is flipped to announce that help is no longer needed. We maintain the invariant that \(\mathcal{W}.bit\neq\mathcal{Z}.bit\) if and only if a write needs help.
A Write\((h,v)\) operation \(\alpha\) consists of the following steps.
W1. The operation \(\alpha\) reads \(\mathcal{W}\) and \(\mathcal{Z}\) to determine if some write operation is already waiting for help. If not, then \(\alpha\) installs its write into \(\mathcal{W}\) by setting \(\mathcal{W}.val\) to \(v\) and flipping \(\mathcal{W}.bit\). If several write operations attempt to install concurrently, only one will succeed. The successful one is the _installer_ and the others are _hitchhikers_.
W2. Once a write operation is installed, all processes--installer, hitchhiker, and the ECSC operations--work in concert to forward the installer's operation to \(\mathcal{Z}\). Since \(\mathcal{Z}\) is where the DurECW object's state is held, the installer's operation takes effect only when it is reflected in \(\mathcal{Z}\)'s state. Towards this end, everyone attempts to transfer the installer's value from \(\mathcal{W}\) to \(\mathcal{Z}\). However, a stale ECSC operation, which was poised to execute its ECSC operation on \(\mathcal{Z}\), might update \(\mathcal{Z}\), causing the transfer to fail in moving the installer's value from \(\mathcal{W}\) to \(\mathcal{Z}\). So, a transfer is attempted a second time. The earlier success by the poised ECSC operation causes any future attempts by similarly poised operations to fail. Consequently, the installer's write value gets moved to \(\mathcal{Z}\) by the time the second transfer attempt completes. The point where the move to \(\mathcal{Z}\) occurs becomes the linearization point for the installer's write operation. We linearize the writes by the hitchhikers immediately before the installer, which causes their write operations to be overwritten immediately by the installer's write, without anyone ever witnessing them. Hence, there is no need to detect these writes: if a hitchhiker crashes during its write, the operation can be safely repeated.

[Algorithm 2 (the DurECW class) declares instance variables DurEC \(\mathcal{W}\) and DurEC \(\mathcal{Z}\), each holding a pair \((seq,(val,bit))\), and a handle struct with two DurEC handles, _Critical_ and _Casual_; the remainder of the listing is omitted.]
* If the installer crashes after installing, then upon restart, in the Recover method, it performs the forwarding so that its installed value moves to \(\mathcal{Z}\) and its write operation gets linearized.
An ECSC\((h,s,v)\) operation \(\alpha\) consists of the following steps.
S1. \(\alpha\) performs an ECLL\(()\) to determine whether the context in \(\mathcal{O}\) matches \(s\). If not, it can fail early and return _false_.
S2. If a Write\(()\) is already in \(\mathcal{W}\) and waiting for help to be transferred to \(\mathcal{Z}\), \(\alpha\) is obligated to help that write before attempting its SC (to prevent the write from being blocked by a chain of successful ECSC\(()\) operations). So it attempts a transfer from \(\mathcal{W}\) to \(\mathcal{Z}\).
S3. Finally, \(\alpha\) executes an ECSC\(()\) on \(\mathcal{Z}\) in an attempt to make its own operation on \(\mathcal{O}\) take effect (a schematic sketch of the write and ECSC paths follows this list).
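The division of labor between \(\mathcal{W}\) and \(\mathcal{Z}\) can be summarized in a single-threaded Python sketch--ours, not the paper's code--that ignores atomicity, crashes, hitchhikers, and handles, and keeps only the install/transfer/help skeleton of steps W1-W2 and S1-S3:

```python
class EC:
    """Toy sequential stand-in for a DurEC object holding (seq, val, bit)."""
    def __init__(self, val, bit=0):
        self.seq, self.val, self.bit = 0, val, bit
    def ecll(self):
        return self.seq, self.val, self.bit
    def ecsc(self, s, v, b):
        if self.seq != s:
            return False
        self.seq, self.val, self.bit = s + 1, v, b
        return True

class ECWSketch:
    """Skeleton of DurECW's coordination between W (staging) and Z (state)."""
    def __init__(self, initval):
        self.W = EC(0)
        self.Z = EC(initval)

    def _transfer(self):
        zs, _, zb = self.Z.ecll()
        _, wv, wb = self.W.ecll()
        if zb != wb:                       # bits differ iff a write awaits help
            self.Z.ecsc(zs, wv, wb)        # push the installed write into Z

    def write(self, v):                    # steps W1-W2
        ws, _, wb = self.W.ecll()
        _, _, zb = self.Z.ecll()
        if zb == wb:                       # no pending write: install ours
            self.W.ecsc(ws, v, 1 - wb)
        self._transfer()                   # one attempt can be foiled by a stale ECSC
        self._transfer()                   # the second attempt must land

    def ecsc(self, s, v):                  # steps S1-S3
        zs, _, zb = self.Z.ecll()
        if zs != s:
            return False                   # S1: fail early on context mismatch
        self._transfer()                   # S2: obligatory help for a pending write
        return self.Z.ecsc(s, v, zb)       # S3: reuse the observed bit, so Z.bit is undisturbed

x = ECWSketch(0)
s, _, _ = x.Z.ecll()
x.write(5)                                 # lands in Z via the transfer
assert x.Z.val == 5 and not x.ecsc(s, 9)   # the write also invalidated context s
```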
The algorithm is formally presented in Algorithm 2. In the algorithm, Lines 12-14 implement step W1 and Lines 15, 16 implement step W2. Step S1 is implemented by Lines 7, 8, step S2 by Line 9, and step S3 by Lines 10 and 11. Note that the ECSC\(()\) on Line 10 takes care not to change \(\mathcal{Z}.bit\). This ensures that the helping mechanism for writes implemented via \(\mathcal{W}.bit\) and \(\mathcal{Z}.bit\) is not disturbed. The ECSC\(()\) operation at Line 14 uses the handle \(h.Critical\) because its success implies that the operation is an installer and hence will be a visible write when it linearizes. Similarly, the ECSC\(()\) on \(\mathcal{Z}\) at Line 10 uses \(h.Critical\) because its success makes the ECSC\(()\) on \(\mathcal{O}\) visible.
If a Write\(()\) or an ECSC\(()\) method crashes while executing an operation on \(\mathcal{W}\) or \(\mathcal{Z}\), upon restart, Lines 21 to 24 of Recover\(()\) ensure that \(\mathcal{W}.\textsc{Recover}()\) or \(\mathcal{Z}.\textsc{Recover}()\) is executed before any other operation is executed on \(\mathcal{W}\) or \(\mathcal{Z}\). Consequently, the durable objects \(\mathcal{W}\) and \(\mathcal{Z}\) behave like atomic EC objects.
The theorem below summarizes the result:
**Theorem 5.1**.: _Algorithm DurECW satisfies the following properties:_
1. _The objects implemented by the algorithm are durably linearizable (with respect to ECW's sequential specification) and are detectable._
2. _All operations, including the Recover, Detect, Constructor, and CreateHandle methods, are wait-free and run in constant time._
3. _The algorithm supports dynamic joining: a new process can join in at any point in a run (by calling CreateHandle) and start creating DurECW objects or accessing existing DurECW objects._
4. _The space requirement is_ \(O(m+n)\)_, where_ \(m\) _is the actual number of DurECW objects created in the run, and_ \(n\) _is the actual number of processes that have joined in in a run._
_Proof:_ A detailed proof of this theorem is presented in Appendix A.2 (7 pages).
### The DuraLL Algorithm
Given the durable EC W-LLSC object DurECW, rolling the context into the implementation to produce a durable standard W-LLSC object is simple. Each of our implemented DuraLL objects maintains a single DurECW object \(X\). The handle of a DuraLL object holds a single DurECW handle, used to operate on \(X\), and a hashmap that maps objects to _contexts_.
```
class DuraLL:
    instance variable DurECW X            ▷ X holds the central EC W-LLSC object
    struct handle { DurECW.handle ECWH,
                    HashMap(DuraLL → int) contexts }

    static procedure CreateHandle()
        return handle{ ECWH ← DurECW.CreateHandle(),
                       contexts ← HashMap(DuraLL → int) }

    procedure DuraLL(initval)
        X ← DurECW(initval, 0)

    procedure LL(handle* h)
        x ← X.ECLL(h.ECWH)
        h.contexts(self) ← x.seq
        return x.val

    procedure VL(handle* h)
        if self ∉ h.contexts.keys then return false
        return X.ECVL(h.ECWH, h.contexts(self))

    procedure SC(handle* h, int val)
        if self ∉ h.contexts.keys then return false
        r ← X.ECSC(h.ECWH, h.contexts(self), val)
        h.contexts.Remove(self)
        return r

    procedure Write(handle* h, int val)
        X.Write(h.ECWH, val)
        h.contexts.Remove(self)
        return true

    procedure Recover(handle* h)
        X.Recover(h.ECWH)
        if self ∈ h.contexts.keys and ¬X.ECVL(h.ECWH, h.contexts(self)) then
            h.contexts.Remove(self)

    static procedure Detect(handle* h)
        return DurECW.Detect(h.ECWH)
```
**Algorithm 3** The DuraLL class for Durable Writable-LLSC objects.
We present the code as Algorithm 3. The LL() operation on a DuraLL object by handle \(h\) simply performs an ECLL() on \(X\) and stores the returned context in \(h.contexts\) under the key _self_ (which is the reference of the current object). Correspondingly, \(\text{VL}()\) retrieves the context from \(h.contexts\), and uses it to perform an ECVL() on \(X\). The SC() operation also retrieves the context and performs an ECSC() on the internal object, but then cleverly removes the key corresponding to the current object from \(h.contexts\), since, regardless of whether the SC() succeeds, the stored context is bound to be out-of-date. The Write() operation does not need a context, so it simply writes to \(X\), but also removes the current object's key from \(h.contexts\) to save some space. In order to be space-efficient, Recover() also removes the current object from \(h.contexts\) if the context stored for the object is out-of-date. Since DuraLL is just a wrapper around DurECW, its Detect() operation simply returns the result of detecting DurECW.
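The context bookkeeping is easy to see in a self-contained Python mock--illustrative only, with a trivial sequential stand-in for the DurECW object:

```python
class ECW:
    """Sequential stand-in for a DurECW object (seq doubles as the context)."""
    def __init__(self, v):
        self.seq, self.val = 0, v
    def ecll(self):
        return self.seq, self.val
    def ecvl(self, s):
        return self.seq == s
    def ecsc(self, s, v):
        if self.seq != s:
            return False
        self.seq, self.val = s + 1, v
        return True
    def write(self, v):
        self.seq, self.val = self.seq + 1, v

class Handle:
    def __init__(self):
        self.contexts = {}                 # object identity -> saved context

def LL(x, h):
    s, v = x.ecll()
    h.contexts[id(x)] = s                  # remember the context for VL/SC
    return v

def SC(x, h, v):
    s = h.contexts.pop(id(x), None)        # stale after this either way: drop it
    return s is not None and x.ecsc(s, v)

def WRITE(x, h, v):
    x.write(v)
    h.contexts.pop(id(x), None)            # context (if any) is now useless
    return True

x, h = ECW(0), Handle()
v = LL(x, h)
assert SC(x, h, v + 1) and x.val == 1
assert not SC(x, h, 99)                    # no LL since the last SC: must fail
```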
**Theorem 5.2**.: _Algorithm_DuraLL _satisfies the following properties:_
1. _The objects implemented by the algorithm are durably linearizable (with respect to LL/SC's sequential specification) and are detectable._
2. _All operations, including the Recover, Detect, Constructor, and CreateHandle methods, are wait-free and run in constant time._
3. _The algorithm supports dynamic joining: a new process can join in at any point in a run (by calling CreateHandle) and start creating DuraLL objects or accessing existing DuraLL objects._
4. _The space requirement is_ \(O(m+n+C)\)_, where_ \(m\) _is the actual number of DuraLL objects created in the run,_ \(n\) _is the actual number of processes that have joined in in a run, and_ \(C\) _is the number of "contexts" stored across all objects._
## 6 DuraCAS: a durable implementation of Writable CAS
Using the DurEC building block, we design a Writable-CAS object, DuraCAS. The DuraCAS algorithm resembles DurECW, but requires some new ideas due to the subtle differences between LLSC and CAS.
### Informal description of Algorithm DuraCAS
We present as Algorithm 4 the DuraCAS algorithm, which implements a durable writable CAS object \(\mathcal{O}\) from two DurEC objects, \(\mathcal{W}\) and \(\mathcal{Z}\). The algorithm bears a lot of similarity to Algorithm DurECW of the previous section. In fact, DuraCAS has only three extra lines. For readability, we starred their line numbers (Lines **6***, **10***, and **13***) and kept the line numbers the same for the common lines.
The ideas underlying this algorithm are similar to DurECW, so we explain here only the three differences: (1) Lines 7 to 10 are executed only once in Algorithm DurECW, but are repeated twice in the current algorithm; (2) Line 8 differs in the two algorithms; and (3) Line 13* is introduced in the current algorithm.
The change in Line 8 accounts for the fact that the success of a CAS() operation depends on the value in \(\mathcal{O}\) rather than the context. If the value in \(\mathcal{O}\) (and therefore \(\mathcal{Z}\)) is different from \(old\) at Line 7, the CAS returns _false_ (and linearizes at Line 7). If \(\mathcal{O}.val=old\) and the CAS does not plan to change the value (i.e., \(old=new\)), it returns _true_ without changing \(\mathcal{Z}\).
To understand why Lines 7 to 10 are repeated in the current algorithm, consider the following scenario. A handle \(h\) executes \(\mathcal{O}.CAS(h,old,new)\), where \(old\neq new\). When \(h\) executes Line 7, \(\mathcal{Z}\)'s value is \(old\), so \(z.val\) gets set to \(old\) at Line 7. Handle \(h\) progresses to Line 10, but before it executes Line 10, some handle \(h^{\prime}\) invokes \(\mathcal{O}.\textsc{Write}(h^{\prime},old)\) and executes it to completion, causing \(\mathcal{Z}.seq\) to take on a value greater than \(z.seq\). Handle \(h\) now executes the ECSC at Line 10 and fails since \(\mathcal{Z}.seq\neq z.seq\). If \(h\) acts as it did in Algorithm DurECW, \(h\) would complete its \(\mathcal{O}.CAS(h,old,new)\) operation, returning _false_. However, _false_ is an incorrect response under the specification of CAS, because \(\mathcal{O}.val=old\) for the full duration of the operation \(\mathcal{O}.CAS(h,old,new)\). To overcome this race condition, \(h\) repeats Lines 7 to 10.
If the same race condition repeats each time \(h\) repeats Lines 7 to 10, the method \(\mathcal{O}.CAS\) would not be wait-free. Line 13* is introduced precisely to prevent this adverse possibility. When a handle \(h^{\prime}\) executes Lines 12 to 14 of \(\mathcal{O}.\textsc{Write}(h^{\prime},v)\) in the previous DurECW algorithm, \(h^{\prime}\) would always try to install its value \(v\) in \(\mathcal{W}\) (at Line 14) and later move it to \(\mathcal{Z}\), thereby increasing \(\mathcal{Z}.seq\) and causing concurrent \(\mathcal{O}.\textsc{ECSC}()\) operations to fail. This was precisely what we wanted because the specification of an SC operation requires that if _any_ \(\mathcal{O}.\textsc{Write}()\) takes effect, regardless of what value it writes in \(\mathcal{O}\), it must change \(\mathcal{O}.context\) and thus cause concurrent \(\mathcal{O}.\textsc{ECSC}()\) operations to fail. The situation, however, is different when implementing \(\mathcal{O}.CAS\), where an \(\mathcal{O}.\textsc{Write}()\) that does not change the value in \(\mathcal{O}\) should not cause a concurrent \(\mathcal{O}.CAS\) to fail. Hence, if an \(\mathcal{O}.\textsc{Write}(h^{\prime},v)\) operation is writing the same value as \(\mathcal{O}\)'s current value, then it should simply return (since \(\mathcal{O}.val\) already has \(v\)) and, importantly, not change \(\mathcal{Z}.seq\) (because changing \(\mathcal{Z}.seq\) would cause any concurrent \(CAS\) operation to fail). Line 13* implements precisely this insight.

```
class DuraCAS:
    instance variable DurEC W              ▷ W holds a pair (W.seq, (W.val, W.bit))
    instance variable DurEC Z              ▷ Z holds a pair (Z.seq, (Z.val, Z.bit))
    struct handle { DurEC.handle* Critical, DurEC.handle* Casual }

    static procedure CreateHandle()
        return new handle{ Critical ← DurEC.CreateHandle(),
                           Casual ← DurEC.CreateHandle() }

    procedure DuraCAS(int initval)
        W ← DurEC((0, 0))
        Z ← DurEC((initval, 0))

    procedure Read(handle* h)
        z ← Z.ECLL(h.Casual)
        return z.val

    procedure CAS(handle* h, int old, int new)
 6*:    for i ← 1 to 2
 7:         z ← Z.ECLL(h.Casual)
 8:         if z.val ≠ old then return false
            else if old = new then return true
 9:         transfer-write(h)
10:         if Z.ECSC(h.Critical, z.seq, (new, z.bit)) then
10*:            return true
11:     return false

    procedure Write(handle* h, int v)
12:     w ← W.ECLL(h.Casual)
13:     z ← Z.ECLL(h.Casual)
13*:    if z.val = v then return ack
14:     if z.bit = w.bit then W.ECSC(h.Critical, w.seq, (v, 1 − w.bit))
15:     transfer-write(h)
16:     transfer-write(h)
        return ack

    procedure transfer-write(handle* h)
        ẑ ← Z.ECLL(h.Casual)
        ŵ ← W.ECLL(h.Casual)
        if ẑ.bit ≠ ŵ.bit then Z.ECSC(h.Casual, ẑ.seq, (ŵ.val, ŵ.bit))

    procedure Recover(handle* h)
21:     W.Recover(h.Critical)
22:     Z.Recover(h.Critical)
23:     W.Recover(h.Casual)
24:     Z.Recover(h.Casual)
        transfer-write(h)
        transfer-write(h)

    static procedure Detect(handle* h)
        return DurEC.Detect(h.Critical)
```
**Algorithm 4** The DuraCAS class for Durable Writable-CAS objects. Line numbers are shown only for lines referenced in the text; the starred lines mark the three additions relative to DurECW.
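To see both points at once--the two-iteration retry and the same-value short-circuit--consider this drastically simplified sequential Python sketch (ours; \(\mathcal{W}\) and the helping machinery are elided, and the `concurrent` hook merely simulates an interleaved operation):

```python
class Cell:
    """Toy (seq, val) cell standing in for Z; sequential illustration only."""
    def __init__(self, val):
        self.seq, self.val = 0, val
    def ecll(self):
        return self.seq, self.val
    def ecsc(self, s, v):
        if self.seq != s:
            return False
        self.seq, self.val = s + 1, v
        return True

def write(z, v):
    s, cur = z.ecll()
    if cur == v:
        return                    # Line 13* analogue: same value, leave seq alone
    z.ecsc(s, v)                  # a value-changing write bumps z.seq

def cas(z, old, new, concurrent=None):
    for i in range(2):            # two tries absorb one interfering seq bump
        s, v = z.ecll()
        if v != old:
            return False
        if old == new:
            return True
        if concurrent and i == 0:
            concurrent()          # simulate a racing operation right here
        if z.ecsc(s, new):
            return True
    return False

z = Cell(5)                       # a same-value Write does not bump seq,
assert cas(z, 5, 6, concurrent=lambda: write(z, 5))   # so CAS wins at once

z = Cell(5)                       # a write away and back bumps seq twice;
assert cas(z, 5, 6, concurrent=lambda: (write(z, 7), write(z, 5)))
                                  # the first ECSC fails, the retry succeeds
```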
The theorem below summarizes the result:
**Theorem 6.1**.: _Algorithm_ DuraCAS _satisfies the following properties:_
1. _The objects implemented by the algorithm are durably linearizable (with respect to the sequential specification of Writable CAS) and are detectable._
2. _All operations, including the Recover, Detect, Constructor, and CreateHandle methods, are wait-free and run in constant time._
3. _The algorithm supports dynamic joining: a new process can join in at any point in a run (by calling CreateHandle) and start creating DuraCAS objects or accessing existing DuraCAS objects._
4. _The space requirement is_ \(O(m+n)\)_, where_ \(m\) _is the actual number of DuraCAS objects created in the run, and_ \(n\) _is the actual number of processes that have joined in in a run._
_Proof:_ A detailed proof of this theorem is presented in Appendix A.3 (8 pages). \(\blacksquare\)
## 7 Discussion and Remarks
In this paper, we have designed constant time implementations for durable CAS and LLSC objects. To our knowledge, DuraCAS is the first CAS implementation to allow for dynamic joining. DuraCAS also has state-of-the-art space complexity--allowing adaptivity and requiring only constant space per object and per process that actually accesses the protocol--and is writable. To our knowledge, ours are the first implementations of durable LLSC objects. LLSC objects are universal and ABA-free, thus we believe that the dynamically joinable LLSC implementations in this paper will be useful in the construction of several more complex durable objects. The external context variant of LLSC is particularly space efficient, making it a powerful building block for concurrent algorithms; we witnessed this property even in the constructions of this paper, where the EC nW-LLSC object DurEC served as the primary building block for all our other implementations, including our EC W-LLSC implementation DurECW and its direct descendant DuraLL (for W-LLSC). All the implementations in this paper were enabled by handles--a novel, pointer-based mechanism we introduced in this paper to enable threads created on-the-fly to access our implementations. We believe that along with the specific implementations of this paper, the use of handles as an algorithmic tool can play an important role in the design of future durable algorithms.
We end with two open problems. Handles enable dynamic joining, but once a handle \(h\) is used, any other process can have a stale pointer to \(h\) that may be dereferenced at any point in the future. A mechanism for enabling space adaptivity for both dynamic joining and _dynamic leaving_, which would enable a process to reclaim its entire memory footprint once it is done using a durable implementation, is our first open problem. Our second open problem is to prove (or disprove) an \(\Omega(m+n)\) space lower bound for supporting \(m\) objects for \(n\) processes for any durable CAS or durable LLSC type. |
2303.17760 | CAMEL: Communicative Agents for "Mind" Exploration of Large Language
Model Society | The rapid advancement of chat-based language models has led to remarkable
progress in complex task-solving. However, their success heavily relies on
human input to guide the conversation, which can be challenging and
time-consuming. This paper explores the potential of building scalable
techniques to facilitate autonomous cooperation among communicative agents, and
provides insight into their "cognitive" processes. To address the challenges of
achieving autonomous cooperation, we propose a novel communicative agent
framework named role-playing. Our approach involves using inception prompting
to guide chat agents toward task completion while maintaining consistency with
human intentions. We showcase how role-playing can be used to generate
conversational data for studying the behaviors and capabilities of a society of
agents, providing a valuable resource for investigating conversational language
models. In particular, we conduct comprehensive studies on
instruction-following cooperation in multi-agent settings. Our contributions
include introducing a novel communicative agent framework, offering a scalable
approach for studying the cooperative behaviors and capabilities of multi-agent
systems, and open-sourcing our library to support research on communicative
agents and beyond: https://github.com/camel-ai/camel. | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, Bernard Ghanem | 2023-03-31T01:09:00Z | http://arxiv.org/abs/2303.17760v2 | # CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society
###### Abstract
The rapid advancement of conversational and chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents and provide insight into their "cognitive" processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named _role-playing_. Our approach involves using _inception prompting_ to guide chat agents toward task completion while maintaining consistency with human intentions. We showcase how _role-playing_ can be used to generate conversational data for studying the behaviors and capabilities of chat agents, providing a valuable resource for investigating conversational language models. Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond. The GitHub repository of this project is made publicly available on: [https://github.com/lightaime/camel](https://github.com/lightaime/camel).
## 1 Introduction
Confronted with the complexities of real-world tasks, solving them often requires multiple steps. The rapid progress of conversational and chat-based large-scale language models (LLMs) has yielded remarkable achievements in complex task-solving [47; 48; 68; 52; 3; 7]. Nevertheless, it is worth noting that their success is heavily reliant on human input to guide the conversation in the right direction. This reliance necessitates users to provide relevant and precise prompts based on their intentions and the chat agent's feedback. This can be challenging, time-consuming, and sometimes impossible. It often demands a deep understanding of the domain and expertise in crafting effective prompts. Consider an individual who lacks trading expertise; they would find it difficult to create suitable prompts for directing a communicative agent to develop a trading application. This predicament raises a crucial question: can we replace human intervention with an autonomous communicative agent capable of steering the conversation toward task completion without any human supervision? To tackle this issue, it is crucial to conduct more research exploring the potential, capabilities, and limitations of communicative agents that operate entirely on their own to complete tasks. It is important to consider how multiple agents interact with each other, as this understanding is crucial
for anticipating the future of artificial intelligence. In a society where agents collaborate, compete, and interact on diverse tasks, the dynamics of these interactions play a key role in determining the success of AI systems [4; 17; 18; 48; 58; 6; 7].
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents and provide insight into their "cognitive" processes. Our preliminary analysis reveals that requesting chat agents to autonomously cooperate on completing tasks is a non-trivial matter. Several challenges such as _role flipping_, _assistant repeats instruction_, _fake replies_, _infinite loop of messages_, and _conversation termination conditions_ arise. Therefore, it is critical to investigate ways to enhance the alignment and cooperation of these models with human intentions. To address these issues, we propose a novel cooperative agent framework named _role-playing_ to automate cooperation between communicative agents. Specifically, our proposed approach involves using _role-playing_ with _inception prompting_ to autonomously guide the communicative agents toward task completion while maintaining consistency with human intentions. Only a preliminary _idea_ is needed from human input to guide the conversations toward complex task-solving.
_"What's the most resilient parasite? An Idea. A single idea from the human mind can build cities. An idea can transform the world and rewrite all the rules. Which is why I have to steal it."_
_- Dom Cobb, Inception_
Our library, which we make publicly available, provides modular functionality, implementations of different agents, well-crafted prompts, and data explorers, thereby simplifying the utilization of the library for future research in various areas such as multi-agent systems, cooperative AI, game theory simulations, social analysis, AI ethics, AI alignment, and beyond. In addition, our _role-playing_ method provides a highly scalable way to generate conversational data for studying the behaviors and capabilities of chat agents. We showcase how _role-playing_ can be used to let chat agents communicate with each other for task completion and record their conversations for behavior analysis and capability understanding. In particular, we consider two cooperative scenarios of role-playing and generate two large conversational, task-oriented, and instruction-following datasets: _AI Society_ and _Code_. The datasets offer a valuable resource for investigating conversational language models, enabling them to comprehend and react to human language more effectively. Furthermore, our _role-playing_ offers a scalable method of creating conversational instruction-following data, which can potentially enhance the development of more advanced and efficient language models.
Contributions.Our contributions are threefold:
* We introduce a novel cooperative agent framework, _role-playing_, that allows communicative agents to collaborate autonomously toward completing tasks while requiring minimal human intervention.
* Our framework offers a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems. It illuminates the challenges of achieving autonomous cooperation and provides strategies for addressing them.
* We have open-sourced our library, containing implementations of various agents, data generation pipelines, data analysis tools, and collected datasets, to support research on communicative agents and beyond.
## 2 Related Work
**Communicative Agents.** Communication between agents has been studied for a long time [44; 45]. There are many ways to facilitate communication between agents, and with agents [19; 53; 57]. Among these, natural language is considered the most natural form of communication [57]. By enabling agents to function as communicators themselves, they become capable of solving complex tasks [65; 49; 42; 1]. Communication between AI agents can occur in a competitive setting [67; 62] or a cooperative setting [26; 18; 8]. Cooperative AI refers to artificial intelligence systems that are designed to work together with humans and other AI systems to achieve common goals [16]. Cooperative AI systems take into account the needs and capabilities of other agents in the system
and actively seek to collaborate and coordinate their actions with them, which has many potential benefits, including increased efficiency, improved decision-making, and the ability to tackle complex problems that are beyond the reach of any single agent. However, designing effective cooperative AI systems is still an active area of research, as it requires addressing a range of technical, ethical, and social challenges [18]. In our work, we enable two communicative agents to engage in a conversation and cooperate with each other to solve assigned tasks. The communicative agents, each assigned a distinct role, are expected to apply their expertise and knowledge to find a solution that satisfies their common task.
**Model Exploration.** Knowledge distillation (KD) is a popular technique for compressing complex models into smaller, more practical models that can be deployed efficiently in real-world scenarios without sacrificing performance [29]. KD aims to transfer knowledge from a larger, complex "teacher" model to a more manageable "student" model, while maintaining the accuracy and generalization capabilities of the original model. The knowledge transferred from the teacher to the student model can be categorized into three main types: Response-based, Feature-based, and Relation-based knowledge, which have been studied in various works [5, 29, 56, 35, 74, 36, 28, 13, 51, 50]. Recent works have proposed innovative methods for extracting training data from both large language models [11] and diffusion models [12]. Such approaches can be seen as a means of training-data distillation, in which parts of the model's training data are extracted. The idea is to capitalize on the models' memorization of certain samples obtained from the internet. The process involves multiple generations being created from the model, which are then sorted by specific metrics, and duplicate generations are subsequently removed. The resulting generations are then scrutinized for any matches that already exist on the web. If the generated samples match existing samples found on the internet, it can be inferred that the model has been trained on those samples. Our work presents a novel approach to the "mind exploration" of conversational agents. By enabling these agents to communicate and collaborate in solving tasks, we gain insight into their actions and behaviors within a task-solving context. Our mind exploration approach revealed several intriguing insights and challenges that are yet to be further explored by the research community.
**Instructional LLMs and Prompt Engineering.** LLMs are trained on diverse text data and excel in text completion, with various downstream NLP applications [9, 14, 30, 75, 69]. However, InstructGPT suggests that LLMs may not align with user intent, proposing reinforcement learning from human feedback (RLHF) [15] and Instruction Fine-Tuning (IFT) [72] to improve LLMs' relevance and appropriateness to user instructions. Chain-of-Thought (CoT) [73] and zero-shot-CoT [37] are special types of instruction that significantly enhance LLMs' performance on reasoning and arithmetic tasks. These techniques underpin the impressive capabilities of recent dialogue LLMs [61, 68, 22, 6, 47, 10], which aim to simulate human-like conversations and provide personalized and interactive experiences for users, exhibiting the behavior of all three conversational AI agents [21]. However, generating instruction datasets is a crucial challenge in building instruct-based LLMs, with existing datasets ranging from crowdsourced to generated. Hand-crafted instruction instances are available in [71], while leveraging previously crowdsourced NLP datasets is a less labor-intensive curation approach [72, 41, 46, 32]. LLMs have been explored for data generation in [59, 38, 40, 66], and Self-Instruct [70] proposes a semi-automated process for instruction instance generation. Unnatural-Instruction [31] collects instruction instances by prompting a language model with only three seed examples and paraphrasing the generated instances to expand the dataset. Another important challenge is prompt engineering. The quality of the prompt used to guide LLMs significantly affects its performance [54, 9, 39]. While LMs pre-trained on large data can implicitly learn tasks with few-shot prompting, hand-crafted prompts may not always suffice. Automated prompt generation methods have been proposed, such as gradient-guided search [60], mining-based and paraphrasing-based techniques [33], a meta-prompt [55], and automatic instruction selection and generation [76]. In this work, we introduce a conversational LLM auto-prompting method called _Inception Prompting_, which enables agents to prompt each other to solve tasks through _Role-Playing_. The AI user continuously provides instructions to the AI assistant for task-solving. This enables us to save the streaming instruction-solution pairs and create diverse, instructional, conversational, and task-oriented datasets. These datasets can be used to analyze the behavior and capabilities of LLMs and for future research for fine-tuning LLMs with conversational instructions.
**AI Alignment.** AI alignment is a field that aims to ensure that AI systems adhere to their intended goals, interests, and values, as envisioned by their designers [2, 25, 63, 20, 24, 43, 7]. The first attempt at AI alignment was made through the "Three Laws of Robotics," which was introduced
by Isaac Asimov in his science fiction stories [4]. Developing aligned AI systems is crucial for achieving desired objectives while avoiding unintended consequences. Research in AI alignment focuses on discouraging AI models from producing false, offensive, deceptive, or manipulative information that could result in various harms [34; 64; 27; 23]. Achieving a high level of alignment requires researchers to grapple with complex ethical, philosophical, and technical issues. We conduct large-scale experiments to study different _role-playing_ situations, which probe the alignment of LLMs.
## 3 Methodology
In this paper, we focus on studying communicative agents under AI-AI cooperative scenarios where they share pure common interests. In particular, we are studying the assistant-user scenario, where a preliminary idea is given at the start. Agents will conceptualize the idea into a specific task and complete it autonomously through conversations.
### Role-playing Framework
Our proposed framework is a novel _role-playing_ approach for studying multiple communicative agents. Specifically, we concentrate on task-oriented role-playing that involves one _AI assistant_ and one _AI user_. After the multi-agent system receives a preliminary _idea_ and the _role assignment_ from human users, a _task-specific agent_ will provide a detailed description to make the idea specific and then the AI assistant and AI user will cooperate on completing the specified task through multi-turn conversations until the AI user determines the task is done. The AI user is responsible for giving instructions to the AI assistant and directing the conversation toward task completion. On the other hand, the AI assistant is designed to follow the instructions from the AI user and respond with specific solutions. The whole _role-playing_ framework is depicted in Figure 1.
Human Input and Task Specifying.The _role-playing_ session will be instantiated from an _idea_ and _selected roles_ by humans. As an example in Figure 1, a human has a preliminary idea to _develop_
Figure 1: **Role-Playing Framework. Our role-playing setup starts with the human user having an idea they want to implement, _e.g_. develop a trading bot for the stock market. The roles involved in this task would be an AI assistant agent who is a python programmer and an AI user agent who is a stock trader. The task is made more specific using our task specifier agent, leading to a well-defined task for the assistant to solve. The AI user and AI assistant collaboratively communicate by chatting with each other in an instruction-following fashion to solve the specified task.**
a trading bot for the stock market_. Humans may or may not have knowledge about how the idea can be realized. What is needed is only to designate the potential roles that can implement the idea. For instance, a _Python Programmer_ could collaborate with a _Stock Trader_ to realize the idea of _developing a trading bot for the stock market_. After the idea and roles are determined, the _task specifier_ agent will brainstorm a specific task that the AI Assistant role can help with the AI user role to complete based on the input idea. An example of a specified task in this scenario could be _developing a trading bot with a sentiment analysis tool that can monitor social media platforms for positive or negative comments about a particular stock, and execute trades based on sentiment analysis results_. The main motivation for introducing a task specifier is that conversational agents usually require a concrete task prompt for realizing the task, while it is challenging or time-consuming for a non-domain expert to create such a specific task prompt. Therefore, the task specifier agent performs as an enhanced imagination module for the idea implementation. Please note that, when studying our framework at a large scale for AI society and Code scenarios, we generate _roles_ and _ideas_ automatically by prompting LLMs, instead of relying on human inputs.
AI Assistant-User Role Assignment. After the task specification, the AI assistant role and the AI user role are assigned to the assistant agent and the user agent, respectively, to complete the specified task. In practice, a system message is passed to each agent declaring its role. We refer to the assistant system prompt/message by \(\mathcal{P}_{\mathcal{A}}\) and that of the user by \(\mathcal{P}_{\mathcal{U}}\). The system messages are passed to the agents before the conversations start to assign agents with corresponding roles. Let \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) denote two large-scale auto-regressive language models [47]. When the system message is passed to those models respectively, we obtain \(\mathcal{A}\leftarrow\mathcal{F}_{1}^{\mathcal{P}_{\mathcal{A}}}\) and \(\mathcal{U}\leftarrow\mathcal{F}_{2}^{\mathcal{P}_{\mathcal{U}}}\) which are referred to as the assistant and user agents respectively. In Figure 1, the AI assistant and the AI user are assigned roles as _Python Programmer_ and _Stock Trader_ at the beginning of the role-playing session, respectively. The AI user serves as a task planner, engaging in interactive planning to determine feasible steps for the AI assistant to execute. Meanwhile, the AI assistant acts as a task executor, offering solutions, executing planned steps, and providing responses to the AI user.
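Concretely, the role assignment boils down to seeding two chat histories with the respective system messages. A minimal Python sketch follows; `chat()` is a stub standing in for an LLM call (the experiments use gpt-3.5-turbo), and the prompt strings are abridged:

```python
def chat(messages):
    # Stub for an OpenAI-style chat-completion call; replace with a real
    # API request in practice.
    return "Solution: ... Next request."

P_A = "Never forget you are a Python Programmer and I am a Stock Trader. ..."
P_U = "Never forget you are a Stock Trader and I am a Python Programmer. ..."

def make_agent(system_prompt):
    """Return a step function that owns its history, mirroring A <- F^{P_A}."""
    history = [{"role": "system", "content": system_prompt}]
    def step(incoming):
        history.append({"role": "user", "content": incoming})
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
        return reply
    return step

assistant = make_agent(P_A)        # plays the AI assistant role
user = make_agent(P_U)             # plays the AI user role
```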
Conversation Towards Task-Solving.After the role assignment is completed, the AI assistant \(\mathcal{A}\) and AI user \(\mathcal{U}\) will collaborate in an instruction-following manner to accomplish the task. In the AI assistant-user scenario, the AI user is responsible for providing instructions, and the assistant is expected to respond with a solution that fulfills the instructions. Formally, we denote the user instruction message obtained at time \(t\) by \(\mathcal{I}_{t}\) and the assistant solution by \(\mathcal{S}_{t}\). The set of conversational messages obtained up until time \(t\) is denoted by Equation (1) shown below:
\[\mathcal{M}_{t}=\{(\mathcal{I}_{0},\mathcal{S}_{0}),...,(\mathcal{I}_{t}, \mathcal{S}_{t})\}=\{(\mathcal{I}_{i},\mathcal{S}_{i})\}|_{i=0}^{t} \tag{1}\]
At the next time step, \(t+1\), the AI user \(\mathcal{U}\) takes the historical conversation message set \(\mathcal{M}_{t}\) and provides a new instruction \(\mathcal{I}_{t+1}\), as shown in Equation (2). The produced instruction message \(\mathcal{I}_{t+1}\) is then passed, along with message set \(\mathcal{M}_{t}\), to the AI assistant \(\mathcal{A}\). The AI assistant will then respond with a solution, denoted by \(\mathcal{S}_{t+1}\) in Equation (3):
\[\mathcal{I}_{t+1}=\mathcal{U}(\mathcal{M}_{t}) \tag{2}\]
\[\mathcal{S}_{t+1}=\mathcal{A}(\mathcal{M}_{t},\mathcal{I}_{t+1}) \tag{3}\]
After obtaining the solution \(\mathcal{S}_{t+1}\) to the instruction \(\mathcal{I}_{t+1}\), the message set is updated using Equation (4) to obtain \(\mathcal{M}_{t+1}\):
\[\mathcal{M}_{t+1}\leftarrow\mathcal{M}_{t}\cup(\mathcal{I}_{t+1},\mathcal{S}_ {t+1}) \tag{4}\]
Note that the formulation above not only models AI-AI communicative scenarios, but it can also be easily extended to model human-AI and multi-agent communicative scenarios. In Figure 1, we observe that the AI user initiates the _installation and import of essential Python libraries for sentiment analysis and stock trading_ by instructing the AI assistant through conversations. This example is drawn from our experiments, and the entire conversation is available in the supplementary section.
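Equations (2)-(4) translate directly into a turn-taking loop; the following Python sketch (our rendering, with the agents as plain callables over the message history) makes the update rule explicit:

```python
def role_play(user_agent, assistant_agent, max_turns=40):
    """Eq. (2): I_{t+1} = U(M_t); Eq. (3): S_{t+1} = A(M_t, I_{t+1});
    Eq. (4): M_{t+1} = M_t extended with (I_{t+1}, S_{t+1})."""
    M = []                                   # the conversation set M_t
    for _ in range(max_turns):
        I = user_agent(M)                    # user derives the next instruction
        if "<CAMEL_TASK_DONE>" in I:
            break                            # user declares the task solved
        S = assistant_agent(M, I)            # assistant produces the solution
        M = M + [(I, S)]                     # extend the message set
    return M
```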
### Inception Prompting
Since prompt engineering is crucial to our role-playing framework, this section delves deeply into our prompting techniques. Unlike other techniques for conversational language models, our prompt engineering occurs solely at the beginning of role-playing, for task specification and role assignment. Once the conversation phase commences, the AI assistant and AI user prompt each other automatically in a loop until termination. As such, we refer to our technique as _Inception Prompting_. Our Inception prompt consists of three prompts: the task specifier prompt \(\mathcal{P}_{\mathcal{T}}\), the assistant system prompt \(\mathcal{P}_{\mathcal{A}}\), and the user system prompt \(\mathcal{P}_{\mathcal{U}}\). As an example, we consider the inception prompt of the _AI Society_ scenario. The templates for these prompts of _AI Society_ role-playing are shown in Figure 2. The task specifier prompt contains information about the roles of the AI assistant and AI user in the role-playing session. Therefore, the task specifier agent can take a preliminary task/idea as input and generate a specific task using imagination. The AI assistant system prompt \(\mathcal{P}_{\mathcal{A}}\) and the AI user system prompt \(\mathcal{P}_{\mathcal{U}}\) are mostly symmetrical and include information about the assigned task and roles, communication protocols, termination conditions, and constraints or requirements to avoid unwanted behaviors. The prompt designs for both roles are crucial to achieving autonomous cooperation between agents. It is non-trivial to engineer prompts that ensure agents act in alignment with our intentions. We take the prompt templates from the _AI Society_ in Figure 2 as an example to explain our key design choices.
Prompt Engineering.To delve deeper into the details in Figure 2, we start by chunking the various parts of the AI assistant system prompt \(\mathcal{P}_{\mathcal{A}}\) shown below:
* Never forget you are a <ASSISTANT_ROLE> and I am a <USER_ROLE>. This assigns the chosen role to the assistant agent and provides the agent with information about the user's role.
* Never flip roles! Never instruct me! This prevents agents from flipping roles. In some cases, we have observed the assistant and the user switching roles, where the assistant suddenly takes control and instructs the user, and the user follows those instructions.
* You must decline my instruction honestly if you cannot perform the instruction due to physical, moral, legal reasons or your capability and explain the reasons. This prohibits the agent from producing harmful, false, illegal, and misleading information.
* Unless I say the task is completed, you should always start with: Solution: <YOUR_SOLUTION>. <YOUR_SOLUTION> should be specific, and provide preferable implementations and examples for task-solving. This encourages the assistant to always respond in a consistent format, avoiding any deviation from the structure of the conversation, and preventing vague or incomplete responses, which we refer to as flake responses, such as "I will do something".
* Always end your solution with: Next request. This ensures that the assistant keeps the conversation going by requesting a new instruction to solve.
For the AI user system prompt \(\mathcal{P}_{\mathcal{U}}\), we strive to maintain as much symmetry as possible with respect to the AI assistant system prompt. Apart from the opposite role assignment, the user system prompt differs from the assistant prompt in the following ways:
* You must instruct me based on my expertise and your needs to complete the task ONLY in the following two ways: 1. Instruct with a necessary input:...; 2. Instruct without any input:... This follows the typical data structure of instruction-following, which allows the generated instruction-solution pairs to be easily used for fine-tuning LLMs.
* Keep giving me instructions and necessary inputs until you think the task is completed. When the task is completed, you must only reply with a single word <CAMEL_TASK_DONE>. We introduce an end-of-task token, namely, <CAMEL_TASK_DONE>. This token is used once the user believes the task is done. This ensures that the chat is terminated when the user is satisfied. Without doing so, the agents might fall into a chatting loop where they keep on saying "thank you" to each other or "goodbye" indefinitely. A sketch of assembling such prompt templates follows this list.
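As referenced above, assembling the system prompts is a plain template-substitution step. The following Python sketch is our illustration (strings abridged; the full prompts are in Figure 2):

```python
ASSISTANT_TEMPLATE = (
    "Never forget you are a <ASSISTANT_ROLE> and I am a <USER_ROLE>. "
    "Never flip roles! Never instruct me! ... "
    "Always end your solution with: Next request."
)   # abridged; Figure 2 gives the full text

def render(template, assistant_role, user_role):
    # Fill the role placeholders to produce a concrete system message.
    return (template.replace("<ASSISTANT_ROLE>", assistant_role)
                    .replace("<USER_ROLE>", user_role))

P_A = render(ASSISTANT_TEMPLATE, "Python Programmer", "Stock Trader")

def task_done(user_msg):
    # The user-side end-of-task token that terminates the chat loop.
    return "<CAMEL_TASK_DONE>" in user_msg
```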
The prompts used for the Code scenario follow a similar spirit to the AI Society scenario, but with some additional engineering related to programming languages. For more information, please refer to Figure 3.
## 4 Experiments
In this section, we will discuss the various experiments that we conducted to arrive at our final design choices. Specifically, we will examine the interesting observations, challenging issues, and several examples we have encountered while enabling agents to communicate with each other under different prompt design choices to achieve autonomous cooperation. In our experiments, we employed two _gpt-3.5-turbo_ agents, referred to for simplicity as LLM agents, with _Inception Prompts_, as described in Section 3.2, to simulate assistant-user cooperation. We examined the AI Society and Code scenarios in particular. We also gathered conversational data, named _CAMEL AI Society_ and _CAMEL Code
Figure 2: **Inception Prompt of AI Society Role-Playing. This shows the task specifier prompt, assistant system prompt, and user system prompt which are used for studying the AI society scenario.**
datasets, and analyzed them. Moreover, we will discuss potential extensions of our framework and highlight both the risks and opportunities that future AI society might present.
### Role-Playing for AI Society and Code Scenarios
**AI Society:** To create our AI Society dataset, we have developed a scalable approach that follows a series of steps. Firstly, we prompt the LLM agent to generate possible roles for the assistant and the user. We achieve this by providing the LLM agent with specific prompts designed to elicit these roles. Next, we ask the LLM agent to generate a range of possible tasks that can be solved through collaboration between the assistant and user roles generated previously. After generating a range of possible tasks as described in the previous step, we then use the task specifier prompt passed to the LLM agent to make the task more specific. The prompts for assistant role generation, user role generation, and task generation are shown in Figure 4 (_AI Society_). For our AI society dataset, we
Figure 3: **Inception Prompt of Code Role-Playing. This shows the task specifier prompt, assistant system prompt, and user system prompt which are used for studying the Code scenario.**
generated 50 assistant roles, 50 user roles, and 10 tasks for each combination of roles yielding a total of 25,000 conversations. The generated assistant roles and user roles are shown in Figure 5 (_AI Society_).
**Code:** To generate the Code dataset, we use a scalable approach similar to that of the AI Society dataset. Firstly, we prompt the LLM agent to provide us with a list of programming languages and domains. Then, we ask the LLM agent to generate a set of tasks that an expert programmer in a specific programming language can collaborate with a person working in a specific domain to solve. The task is then made more specific using our task specifier prompt. The prompts for language generation, domain generation, and task generation are shown in Figure 4 (_Code_). For our Code dataset, we generated 20 programming languages, 50 domains, and 50 tasks for each combination of language and domains yielding a total of 50,000 conversations. The generated programming languages and domains are shown in Figure 5 (_Code_).
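Schematically, the meta-data pipeline composes a handful of LLM calls per dataset; a Python sketch with an assumed helper `generate(prompt) -> list[str]` (our abstraction over the LLM call, not CAMEL's exact API):

```python
def build_ai_society_seeds(generate, tasks_per_pair=10):
    """Sketch of the AI Society pipeline: roles and tasks come from the
    prompts of Figure 4; `generate` is an assumed LLM helper returning
    a list of strings."""
    assistant_roles = generate("List 50 diverse roles an AI assistant can play.")
    user_roles = generate("List 50 diverse roles a user can play.")
    seeds = []
    for a in assistant_roles:
        for u in user_roles:
            tasks = generate(f"List {tasks_per_pair} tasks that a {a} "
                             f"can help a {u} accomplish.")
            for t in tasks:
                seeds.append((a, u, t))    # each seed is later made specific,
    return seeds                           # then handed to the role-playing loop
```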
Challenges and Observations.In this section, we explore the four main challenges that we identified during our analysis of the generated datasets. Our observations shed light on some interesting aspects of cooperative AI and the difficulties that arise in its development. Figure 6 shows examples of each of the four challenges discussed below.
* Role Flipping: One challenge we encountered was role flipping, where the assistant and user switch roles during the conversation. This issue typically arises when the assistant starts providing instructions or commands instead of following the user's prompts, which can lead to confusion and a reversal of roles. To avoid role flipping, it is crucial for the assistant not to ask questions, as this can also contribute to the problem.
* Assistant Repeats Instruction: Another challenge that we observed was the assistant simply repeating the user's instructions without any role flipping occurring.
* Flake Replies: We also observed instances where the assistant agent responds with a flake reply, often taking the form of "I will...". These messages do not contribute to the task at hand, as the assistant promises to take action but ultimately fails to follow through.
Figure 4: **Data Generation Prompts. In order to maintain a scalable approach our data parameters are generated using an LLM model to reduce human involvement in the generation process. The generation prompts for both AI Society and Code datasets are summarized in this figure.**
* Infinite Loop of Messages: A particularly interesting challenge that we encountered was when the assistant and user engage in an infinite loop of meaningless conversation, such as repeatedly thanking each other or saying goodbye without making any progress in the conversation. It is intriguing to note that in some cases, the assistant and user are aware that they are stuck in a loop, but are unable to break out of it.
Overall, our observations highlight the complexity of cooperative AI development and the need for continued exploration and innovation to overcome the challenges we face. By identifying these issues, we hope to contribute to the development of more effective and engaging cooperative AI systems.
Figure 5: **Generated Meta Data.** The meta data generated by LLMs for _AI Society_ and _Code_ datasets. 50 assistant roles and 50 user roles are generated for _AI Society_. 20 programming languages and 50 domains are generated for _Code_.
Termination Conditions. A role-playing conversation is terminated when any of the following conditions is met (condensed into a single check in the sketch after this list):
* User No Instruct: If the user does not instruct the assistant for 3 rounds, the conversation is terminated.
* Assistant Instruct: If the assistant provides an instruction to the user, it indicates a role reversal, and the conversation is terminated.
* End of Task Token: If the user believes that the task has been solved, they are expected to say <CAMEL_TASK_DONE> to signify the completion of the task. Once this message is received, the conversation is terminated to ensure that the data generated accurately reflects the completion of the task.
* Assistant & User Token Limit: Given that _gpt-3.5-turbo_ has a limitation on the number of tokens, the assistant and user should raise a flag to terminate the conversation if either reaches the token limit.
* Maximum Number of Messages: To keep the cost of generated chats in check, we have set a maximum limit of 40 messages. This limit guarantees a long enough conversation between the user and assistant while also ensuring that the data generated is not too costly to produce. The cost grows quadratically with the length of the conversation, making it essential to set a limit. Despite the limit, the number of messages terminated due to reaching the maximum number of messages is minimal as shown in Figures 7 and 8.
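Condensed into code, the checks amount to a single disjunction per turn; a Python sketch (ours; the computation of the individual flags is elided):

```python
def should_terminate(idle_user_rounds, assistant_instructed, last_user_msg,
                     token_limit_hit, n_messages, max_messages=40):
    """One check per turn over the five termination conditions above."""
    return (idle_user_rounds >= 3                    # user stopped instructing
            or assistant_instructed                  # role reversal detected
            or "<CAMEL_TASK_DONE>" in last_user_msg  # end-of-task token
            or token_limit_hit                       # model context limit reached
            or n_messages >= max_messages)           # cost cap on the chat length
```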
Dataset Analysis. This section analyzes two datasets that we have generated, namely AI Society and Code. We provide an ablation study of the AI Society dataset. We make two changes: one modifies the assistant role prompt, and the other introduces task planning before presenting the task to the user and agent. Additionally, we examine the diversity of topics covered in each dataset by visualizing the information cartography of the instructions and tasks in each dataset. We also check the distribution of termination reasons within each dataset.
Next, we examine the conversation termination reasons for both the AI Society and Code datasets. As can be seen in Figure 7, the main termination reason for the AI Society dataset is Assistant Instruct, whereas for Code it is Token Limit. The latter is expected since responses that contain code tend to be long. It is also interesting to note that in both datasets, termination due to Maximum Number of Messages is rare, indicating that the limit of 40 messages is reasonable.
We study the effect of the prompt design on the conversation termination distribution. We design Prompt V2, which modifies the original AI Society prompt by removing the assistant response format, _i.e._, starting with "Solution" and asking for "Next request". The second ablation adds a task planner to the original prompt. As seen in Figure 8, both modifications considerably increase the number of conversations that terminate with the End of Task Token and reduce the number of messages with assistant instructions. However, we observe a significant increase in the number of flake messages for Prompt V2 and Prompt V1 + Task Planner compared to the original Prompt V1, as seen in Figure 9.
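Flake messages (quantified in Figure 9) can be counted with a simple prefix heuristic. The sketch below assumes conversations stored as lists of role/content dicts and uses the "I will" prefix described in the figure caption; this is our approximation of the analysis, not the original implementation.

```python
def count_flake_messages(conversations, prefix="I will"):
    """Count assistant messages that only announce intent ("I will ...")
    instead of progressing toward task completion."""
    return sum(
        1
        for conv in conversations
        for msg in conv
        if msg["role"] == "assistant" and msg["content"].lstrip().startswith(prefix)
    )
```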
Figures 10 and 11 show the information cartography of the instructions and tasks obtained for AI Society respectively. The subjects covered in AI Society cover a wide range of technicality. Topics cover lifestyle, social media, content creation, and software development. Tasks include providing support, analysis, training, and brainstorming. Figures 12 and 13 show the information cartography of the instructions and tasks obtained for Code respectively. The covered topics have relevance to a broad range of individuals. Topics cover sentiment analysis, language and data processing, data collection, and machine learning.
Our contributions offer valuable insights into the future of large language artificial intelligence models and cooperative AI systems.
Risk, Limitation and Future Work. We are aware of the potential risks and limitations of this work. For the risks, since existing LLMs are not fully tuned to be harmless, they can be easily exploited by malicious users for harmful purposes. We provide an example of the "_evil mind_" that LLM agents could possess in the supplemental materials by asking a hacker to help an AGI agent to "_take control of the world_". For the limitations, due to the large scale and diversity of tasks generated by our role-playing framework, evaluating its task completion capabilities poses a challenge that necessitates the involvement of numerous domain experts. However, we also note that, due to the complexity of society and the cost of using the OpenAI API, this work only touches the tip of the iceberg of the AI society. For future work, in our experiments we considered the setting where two conversational agents communicate with each other to solve a problem. This setting can be easily extended to include more than two chat agents. Moreover, setting agents to compete and challenge each other could reveal further insights into the interaction of such communicative LLM agents.

Figure 7: **Distribution of Conversation Termination Reasons.** In our AI Society dataset, most conversations are terminated due to the Assistant Instruct flag, whereas in the Code dataset the main termination reason is Token Limit. The latter is due to big chunks of code in the assistant responses.

Figure 8: **Ablation Distribution of Conversation Termination Reasons (AI Society) Due to Prompt Modification.** We run two ablations: (1) Prompt V2, which modifies the original AI Society prompt by removing the assistant output format, _i.e._, starting with “Output:” and ending with “Next Request”, and (2) adding a task planner to the original Prompt V1. The task planner takes the specified task and generates a subtask division for the assistant and user to follow. Both ablations show an increase in the number of conversations terminated due to End of Task Token and a decrease in the Assistant Instruct rate.
Figure 9: **Flake Message Distribution (AI Society).** We quantify and visualize the number of flake messages, _i.e._, ones that start with “I will...” and do not progress towards task completion. Our original prompt shows the least amount of flake messages compared to both presented ablations.
Figure 10: **AI Society Instructions Information Cartography.** The information cartography for the instructions generated in the AI Society dataset reveals coverage of multiple diverse topics. The map was generated using Nomic Atlas.
Figure 11: **AI Society Tasks Information Cartography.** The information cartography for the tasks generated in the AI Society dataset reveals coverage of multiple diverse topics. The map was generated using Nomic Atlas.
Figure 12: **Code Instructions Information Cartography.** The information cartography for the instructions generated in the Code dataset reveals coverage of multiple diverse topics. The map was generated using Nomic Atlas.
Figure 13: **Code Tasks Information Cartography.** The information cartography for the tasks generated in the Code dataset reveals coverage of multiple diverse topics. The map was generated using Nomic Atlas. |
2309.14232 | The Governance of Decentralized Autonomous Organizations: A Study of
Contributors' Influence, Networks, and Shifts in Voting Power | We present a study analyzing the voting behavior of contributors, or vested
users, in Decentralized Autonomous Organizations (DAOs). We evaluate their
involvement in decision-making processes, discovering that in at least 7.54% of
all DAOs, contributors, on average, held the necessary majority to control
governance decisions. Furthermore, contributors have singularly decided at
least one proposal in 20.41% of DAOs. Notably, contributors tend to be
centrally positioned within the DAO governance ecosystem, suggesting the
presence of inner power circles. Additionally, we observed a tendency for
shifts in governance token ownership shortly before governance polls take place
in 1202 (14.81%) of 8116 evaluated proposals. Our findings highlight the
central role of contributors across a spectrum of DAOs, including Decentralized
Finance protocols. Our research also offers important empirical insights
pertinent to ongoing regulatory activities aimed at increasing transparency to
DAO governance frameworks. | Stefan Kitzler, Stefano Balietti, Pietro Saggese, Bernhard Haslhofer, Markus Strohmaier | 2023-09-25T15:43:17Z | http://arxiv.org/abs/2309.14232v2 | The Governance of Decentralized Autonomous Organizations: A Study of Contributors' Influence, Networks, and Shifts in Voting Power
###### Abstract
We present a study analyzing the voting behavior of contributors, or vested users, in Decentralized Autonomous Organizations (DAOs). We evaluate their involvement in decision-making processes, discovering that in at least 7.54% of all DAOs, contributors, on average, held the necessary majority to control governance decisions. Furthermore, contributors have singularly decided at least one proposal in 20.41% of DAOs. Notably, contributors tend to be centrally positioned within the DAO governance ecosystem, suggesting the presence of inner power circles. Additionally, we observed a tendency for shifts in governance token ownership shortly before governance polls take place in 1202 (14.81%) of 8116 evaluated proposals. Our findings highlight the central role of contributors across a spectrum of DAOs, including Decentralized Finance protocols. Our research also offers important empirical insights pertinent to ongoing regulatory activities aimed at increasing transparency to DAO governance frameworks.
Keywords: DAO Governance, Ethereum, Networks, Blockchain, Voting
## 1 Introduction
DAOs represent organizational structures designed to offer an alternative, decentralized form of governance for decentralized applications (dApps) operating on Distributed Ledger Technologies (DLTs). The intention of DAOs is to circumvent central authorities and hierarchical structures that are prevalent in traditional organizations, and democratize the decision-making process by distributing voting rights through so-called governance tokens to community members [9].
Anecdotal evidence suggests that these intentions are not always met in practice. For instance, there are signs of a centralized power circle that has emerged within the Decentralized Exchange (DEX) service Sushiswap [29]. Similarly, the governance of Arbitrum DAO proposed channeling tokens valued at 1 billion US dollars into their own treasury [38]. The lending protocol Solend confiscated the
funds of a prominent user who posed a risk to its financial stability [10]. In another instance, major cryptoasset exchanges, significant entities in this context, reportedly colluded and leveraged investors' tokens to vote on the Steem platform [15, 27]. Attempts at bribery have been noted among community members in governance forums [40]. Lastly, developers from the mixing service Tornado Cash are reportedly under investigation for financial crimes; it is alleged that they manipulated its governance to circumvent the introduction of rigorous anti-money laundering controls [18, 13].
It is well known that governance tokens are distributed primarily to team members, early investors, or protocol treasuries [6], and decision-making power can be concentrated in the hands of a few [5]. Earlier research has provided preliminary evidence on the involvement of DAO team members and developers in DAOs decision-making processes [34, 23, 20]. However, there is a surprising gap in studies that systematically investigate the role of vested users in the governance of DAOs and how they determine their trajectories.
In this study, we focus on DAO _contributors_, encompassing project owners, administrators, and developers. These contributors are involved in the technical realization of the dApp overseen by a DAO and thus can be viewed as vested users. Our aim is to empirically examine their influence in decision-making processes, the structure of their co-voting network, and any sudden shifts in majorities just before voting takes place. Our contributions and findings can be summarized as follows:
1. We compiled a dataset comprising 986 557 voters across 872 DAOs with 7478 recognized contributions. Additionally, we cross-verified a subset of 438 668 votes from 8116 proposals against their on-chain records, determining that 97.48% of these were consistent.
2. We introduce a metric to measure the _involvement of contributors_ in DAO voting: in 66 (7.54%) DAOs contributors held, on average, the necessary majority to steer governance. We also measured _contributor self-decisions_, discovering that their votes were decisive in 178 (20.41%) DAOs.
3. We analyze the co-voting structures of users through a network approach. Our findings indicate that contributors are more likely to be found towards the center of the DAO governance ecosystem. Furthermore, contributors are highly concentrated in a few communities formed by co-voting patterns.
4. We observed _majority shifts_ in governance token ownership in 1202 (14.81%) out of 8116 proposals in the days preceding the votes. The number of majority shifts increases sharply prior to governance polls, indicating last-minute token acquisitions.
To the best of our knowledge, our study is the first to systematically investigate the role of contributors in the governance of DAOs. It underscores their pivotal role across various DAOs, including leading Decentralized Finance (DeFi) protocols. Beyond shedding light on centralization tendencies within DAO governance structures, our findings demonstrate that contributors possess the capability to effectively steer the direction of DAOs. These insights have significant
implications regarding accountability. Moreover, they are relevant in the context of current regulatory initiatives aimed at pinpointing the individuals who either control or exert notable influence over DeFi operations or structures [31].
We will release our dataset and the implementation of methodologies to ensure the reproducibility of our findings.
## 2 Background, Definitions and Related Work
### Voting in Decentralized Autonomous Organizations
Decentralized Autonomous Organizations (DAOs) are a novel form of governance model that has become popular in the crypto ecosystem since 2020. They can govern decentralized applications (dApps) and their associated smart contracts [45]. Several Decentralized Finance (DeFi) protocols implement DAO governance models [2], e.g., MakerDAO [26, 36], Uniswap [1], Sushiswap [37], and Compound [25]. DAOs can also operate without an underlying dApp [12].
DAO voting mechanisms can be divided into two primary categories: _on-chain_ and _off-chain_ voting. The former occurs directly on a DLT, through smart contracts implementing the voting logic. To vote, token holders delegate an address that can be controlled by another entity. This approach offers security and transparency, but transaction costs make it economically inefficient [17, 16]. The latter takes place on centralized platforms like Snapshot [33], and only the voting outcome is stored on the DLT. This method is more scalable, accessible and efficient, at the cost of higher centralization (e.g., concerns that the DAOs might not enforce the decisions, concurrent voting on different platforms, or non-tamper-proof databases). Our study focuses on the Snapshot platform, the largest off-chain governance platform with a market share of over 90% [43].
Decision-making in DAOs is executed through voting on so-called _improvement proposals_ that can determine the evolution of the technical infrastructure [41], modify parameters affecting the economic incentives and design [14, 11], or reallocate funds managed by a DAO [39, 42]. Governance users can participate in the voting by possessing specific tokens, known as _governance tokens_, which represent their DAO membership and their proportional decision-making power.
### Definitions
We now present a conceptual model of DAO voting and introduce the key terminology and notation used throughout this paper, referring to DAOs as _spaces_[33]. Figure 1 illustrates the entities: spaces, proposals and users; and it describes their relations of contribution and vote.
* Let \(\mathcal{U}\) be the set of all **users** exercising voting rights and \(\mathcal{S}\) be the set of all **spaces**. Users can also be denoted as **voters** in this context.
* A **contribution** is a relation \(\mathcal{C}\subseteq\mathcal{U}\times\mathcal{S}\times\mathbb{P}(\mathcal{T})\), where \((u,s,T)\in\mathcal{C}\), if a user \(u\) contributes to a space \(s\) in one or more role types \(T\subseteq\mathcal{T}=\{\text{owner},\text{administrator},\text{developer}\}\). Users can take multiple roles. A **contributor** is a user that has at least one contribution association to one space.
* A **proposal** is a relation \(\mathcal{P}\subseteq\mathcal{S}\times\mathbb{P}(\mathcal{O})\times\mathbb{P}(\mathcal{F})\times\mathbb{N}^{+}\), where \((s,O,F,h)\in\mathcal{P}\), if there is a proposed change to a space \(s\) providing a set of choices or options \(O\) to vote on, and a set of strategies \(F\) to be applied for determining the outcome of a vote at a given block height \(h\). The sets of options and strategies are defined as follows:
* \(O^{p}\subseteq\mathbb{P}(\mathcal{O})\) denotes the set of possible options (choices) that can be selected during the voting phase on the improvement proposal \(p\). In most cases, the alternatives are simply a yes/no answer (i.e., \(O^{p}=\{Yes,No\}\)).
* \(F^{p}\subseteq\mathbb{P}(\mathcal{F})\) denotes the set of strategy functions that are applied to compute the voting power for the governance user issuing a vote.
* A **vote** is a relation \(\mathcal{V}\subseteq\mathcal{U}\times\mathcal{P}\times\mathcal{O}\times \mathbb{R}^{+}\), where \((u,p,o,m)\in\mathcal{V}\), if a user \(u\) votes on a proposal \(p\) by selecting an option \(o\in O^{p}\), where \(O^{p}\) denotes the set of options published as part of a specific proposal \(p\). In rare cases, \(o\) can become a vector, e.g., the associated voting strategies allow one to express multiple choices. Then, the magnitude vector \(m\) characterizes the weighted preference of each option, and \(\sum_{i}m_{i}=1\); if the vote expresses one single choice, \(m\) is a scalar equal to \(1\). We further denote as \(V^{p}\subseteq\mathcal{V}\) the set of all votes related to proposal \(p\). We distinguish between two types of votes: 1. A user can contribute and vote on an improvement proposal of the same space. We, therefore, denote \(V^{P}_{SS}\subseteq V^{p}\) as the set of **same-space votes**, where, for all tuples \((u_{i},p_{i},o_{i},m_{i})\in V^{P}_{SS}\), a tuple \((u_{j},s_{j},T_{j})\in\mathcal{C}\) such that \(u_{i}=u_{j}\) and \(s_{n}=s_{j}\) for \(p_{i}^{s_{n}}\) exists, i.e., the users \(u_{i}\) equals \(u_{j}\) and \(s_{n}\) of proposal \(p_{i}\) equals the space \(s_{j}\). 2. A user can also contribute to one space and vote on an improvement proposal for another space. We denote \(V^{P}_{OS}\subseteq V^{P}\) as the set of **other-space votes**, where for all tuples \((u_{i},p_{i},o_{i},m_{i})\in V^{P}_{OS}\), a tuple \((u_{j},s_{j},T_{j})\in\mathcal{C}\) such that \(u_{i}=u_{j}\) exists, but there does not exist one where additionally \(s_{n}=s_{j}\) is fulfilled for \(p_{i}^{s_{n}}\). Note that: \(V^{P}_{SS}\cap V^{P}_{OS}=\emptyset\).
Figure 1: **Conceptualization of DAO voting.** A proposal \(p\) introduces potential changes to a DAO space \(s\), and users \(u\) can exert their decision-making power on them with their vote \(v\) and voting power \(w\), indicated by the arrow thickness. Governance users can be vested by their contribution \(c\) as _owner_, _administrator_ or _developer_ to a space. We denote their vested vote as \(V^{P}_{SS}\) when they are contributors of the _same space_ they are voting on, and \(V^{P}_{OS}\) when they are contributors of an _other space_.
3. Finally, we denote \(V_{C}^{P}=V_{SS}^{P}\cup V_{OS}^{P}\) as the set of contributor votes.
* The **voting power** is the weight \(w\) assigned to an option \(o\) and characterizes the influence of a vote \(v\). It is determined by the strategy function \(f:\mathcal{V}\times\mathbb{N}^{+}\rightarrow\mathbb{R}^{+}\) of the vote \(v\) at block height \(h\). For proposals with multiple functions, the weight is defined by their sum \(F^{p}(v,h):=\sum_{f\in F^{p}}f(v,h)\).
* Finally, the options \(O^{p}\) can be ranked by aggregated voting power \(w\). We denote as the _outcome_ the options \(\hat{O}^{p}=[\hat{o}_{1}^{p},\,\hat{o}_{2}^{p},\,\dots]\) ranked in descending order by voting power, and denote \(\hat{o}_{1}^{p}\) as the _decision_, i.e., the option having the highest accumulated voting power for the proposal \(p\).
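For concreteness, the relations above map naturally onto a small data model. The following dataclasses are a hypothetical sketch used only to fix the notation; the field names are ours and do not reflect Snapshot's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contribution:          # (u, s, T) in the contribution relation C
    user: str                # voter address u
    space: str               # DAO space s
    roles: frozenset         # subset of {"owner", "administrator", "developer"}

@dataclass(frozen=True)
class Proposal:              # (s, O, F, h) in P
    space: str
    options: tuple           # O^p, e.g. ("Yes", "No")
    strategies: tuple        # F^p, names of voting-power strategies
    block_height: int        # h, block at which voting power is fixed

@dataclass(frozen=True)
class Vote:                  # (u, p, o, m) in V, single-choice case
    user: str
    proposal: str
    option: str              # o in O^p
    magnitude: float = 1.0   # m = 1 for a single choice
```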
### Related work
Prior research has extensively documented that the ownership of governance tokens is highly concentrated [34, 23, 28, 6, 16], as a result of intentional design decisions and market dynamics (governance tokens carry a market price and can be traded). Furthermore, their total supply, the monetary policy, and the initial token allocation affect their distribution; finally, mechanisms such as airdrops [21] further favor early participants and DAO members.
Studies focusing specifically on on-chain DAO voting confirm that governance tokens are highly concentrated. Furthermore, they show that users rarely exercise voting rights [5], and that individuals who possess the potential power to alter outcomes rarely exercise it [20, 17]. Two related works identify the existence of voters' coalitions in MakerDAO [36, 35]. A preliminary study reports examples of voters who held governance tokens for the duration of a single proposal life-cycle [16].
Our work is closely related to studies on off-chain voting. Wang et al.[43] delivered a comprehensive overview of the voting platform Snapshot. Laturnus [24] utilized data from Snapshot and DeepDAO to investigate the economic performance of DAOs in relation to ownership concentration and voting participation.
While earlier research has provided preliminary evidence on the involvement of DAO team members and developers in DAOs decision-making processes, none of these studies has systematically investigated the role of vested users in DAOs and their influence in determining their trajectories. This knowledge gap serves as the motivation for our study.
## 3 Data
To analyze the involvement of contributors in DAO decision-making through voting, we gather data from the following sources: Snapshot, Ethereum blockchain, Ethereum Name Service (ENS) and The Graph. We combine them to identify contributions, as defined in Section 2.2. Then, we clean, verify, and validate our dataset, as summarized in Table 1. Additional details on the entire data preparation process and contribution identification are reported in Annex A.
Raw dataset. We obtained \(1\,603\,994\)_DAO voters_ and their wallet addresses from the Snapshot dataset. These have cast \(8\,365\,707\) votes from Nov-2020 to Dec-2022 on \(76\,851\)_proposals_, using 208 distinct voting strategies in \(12\,294\)_DAO spaces_. Next, we identify voters' contributions to DAOs by joining their addresses with additional data. We extract their respective roles \(\mathcal{T}\) by retrieving the domain _owner_ address from ENS references, the _administrators'_ addresses from Snapshot, and the creators, or _developers_, of code accounts (CA) from the blockchain transactions for all space-related CAs from Snapshot.
Cleaned dataset. We found that \(42.67\%\) of DAO spaces have one proposal only, \(54.05\%\) have less than five followers and \(50.85\%\) have at most two voters. We consider these as indicators of immature governance structures. Therefore, we apply a cleaning procedure by incorporating related benchmarks that assess minimum requirements to include mature DAOs in the dataset. We also remove non-final proposals and restrict to proposals using _single-choice_ voting.
Validated dataset. Previous studies have shown inconsistencies between the reported and actual on-chain data, including instances of flawed data records within a blockchain explorer [22], emphasizing the need for a validation framework. Therefore, we validate the consistency between the voting power values computed by Snapshot and the ground truth reflected in on-chain data.
We focus on the Ethereum blockchain, the most relevant one for Snapshot [43] in terms of expressed voting power, and only consider proposals that are almost entirely covered by strategies bound to \(F^{\prime}\subseteq\mathbb{P}(\{f^{erc20},f^{erc721},f^{eth}\})\). With this approach, we could verify that \(461\,402\) (\(97.48\%\)) of \(473\,306\) Ethereum Snapshot weights are correct.
| | Raw | Cleaned (Sections 4 & 5) | Validated (Section 6) |
| --- | ---: | ---: | ---: |
| Spaces \(\mathcal{S}\) | 12 294 | 872 | 357 |
| Voters \(\mathcal{U}\) | 1 603 994 | 986 557 | 119 413 |
| Contributions \(\mathcal{C}\) | 11 949 | 7478 | 3927 |
| Proposals \(\mathcal{P}\) | 76 851 | 35 124 | 8116 |
| Votes \(\mathcal{V}\) | 8 365 707 | 5 240 622 | 438 668 |
| Contributor votes \(\mathcal{V}_{\mathcal{C}}\) | 316 900 | 191 507 | 22 878 |

Table 1: **Dataset summary.** The _raw_ dataset combines Snapshot data on voters \(\mathcal{U}\) with additional sources to identify the contributions \(\mathcal{C}\) and quantify their voting activity \(\mathcal{V}_{\mathcal{C}}\). Users vote (\(\mathcal{V}\)) on improvement proposals \(\mathcal{P}\) to DAO spaces \(\mathcal{S}\). To focus on DAOs with mature governance structures, we _cleaned_ and _validated_ the dataset using selected proposals with Ethereum on-chain data.
## 4 Influence of contributors on DAO governance
### Contributor involvement
Contributors are vested users, having intuitively higher incentives to be involved in the decision-making in DAOs. We analyze their involvement by measuring their voting power exercised in proposals. As discussed in Section 2.2, the voting power \(w_{i}\) is the weight assigned to an option \(o_{i}\) and characterizes the influence of a vote \(v_{i}\). In most cases, it is equal to the amount of governance tokens held by the user. Recall that the weight \(w_{i}\) is the result of a proposal strategy function \(F^{p}(v_{i},h)\) applied on a vote \(v_{i}\) at a specific block height \(h\).
We compute the involvement of contributors in a given space by averaging the share of voting power they have in proposals associated with that space. Since the minted amounts of governance tokens, their prices, and their distributions across users vary, we normalize voting power by the total voting power of each proposal as follows: \(\tilde{w}_{i}=w_{i}\times(\sum_{v_{l}\in V^{p}}w_{l})^{-1}\). Next, we consider the set of contributors \(V^{p}_{C}\), which includes _same-space_ voters \(V^{p}_{SS}\) as well as _other-space_ voters \(V^{p}_{OS}\). Then, we compute the fraction of weights controlled by contributors \(\tilde{w}^{p}_{C}\) (1) and finally obtain the **contributor involvement** \(\bar{w}^{s}_{C}\) (2) as the average weight of contributors' votes over all proposals \(P\) in a DAO space \(s\).
\[\tilde{w}^{p}_{C}=\sum_{v_{i}\in V^{p}_{C}}\tilde{w}_{i}, \tag{1}\]

\[\bar{w}^{s}_{C}=|P|^{-1}\sum_{p\in P}\tilde{w}^{p}_{C}. \tag{2}\]
To give an example, let's assume proposal \(p\) has four votes \(\{v_{1},v_{2},v_{3},v_{4}\}\) with normalized voting powers \(\{.1,.4,.3,.2\}\), where \(\sum\tilde{w}_{i}=1\). Supposing that the first two voters are contributors, i.e., \(V^{p}_{C}=\{v_{1},v_{2}\}\), then \(\tilde{w}^{p}_{C}=.1+.4=.5\).
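As a minimal sketch (assuming votes are available as per-proposal lists of (voter, weight) pairs and contributors as a set of addresses), Eqs. (1)-(2) and the example above can be computed as follows.

```python
def contributor_involvement(proposals, contributors):
    """Average contributor share of voting power over a space's proposals.

    proposals: list of proposals, each a list of (voter, raw_weight) pairs.
    contributors: set of contributor addresses for this space (V_C).
    """
    shares = []
    for votes in proposals:
        total = sum(w for _, w in votes)                 # normalisation in Eq. (1)
        if total > 0:
            shares.append(sum(w for u, w in votes if u in contributors) / total)
    return sum(shares) / len(shares) if shares else 0.0  # average, Eq. (2)

# The worked example: weights {.1, .4, .3, .2}, first two voters contribute.
votes = [("v1", 0.1), ("v2", 0.4), ("v3", 0.3), ("v4", 0.2)]
assert abs(contributor_involvement([votes], {"v1", "v2"}) - 0.5) < 1e-9
```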
Figure 2: **Contributor involvement across DAO spaces**. The DAOs are ranked by contributor involvement \(\tilde{w}^{s}_{C}\) (\(\bullet\)) from highest (left) to lowest (right). Some high-TVL dApps (\(\blacklozenge\)) are annotated for illustrative purposes and contributor involvement of more than 50% is colored (\(\bullet\)).
We determine \(\bar{w}_{C}^{s}\) for all spaces on the cleaned data set, and show the results in Figure 2. The DAO spaces are ranked by contributor involvement in descending order. Thus, the involvement of contributors is high for the DAOs on the left-hand side and low for the DAOs on the right-hand side. For illustration purposes, we highlight the data points representing top DeFi protocols in terms of Total Value Locked (TVL), such as Aave, Uniswap or Instadapp.
Our results show that the involvement of contributors in terms of average voting power is relatively low for most DAOs. The median value is 4.26% and the standard deviation is 21.22. However, for 297 spaces, the relative voting power of contributors is higher than 10% and for 66 DAOs it is higher than 50%. In these spaces, the contributors have, on average, a majority of voting power and can determine single-handedly the outcome of proposals. In 9 spaces, the contributors were the only voters with 100% voting power.
### Contributor self-decisions
Knowing that DAO contributors are involved in decision-making, we now investigate to what extent they decide on proposals related to their own spaces. Thus, we concentrate our analysis on the votes cast by users that contributed on improvement proposals of the same spaces (\(V_{SS}^{p}\), see Section 2.2), which we herein denote as "self-votes". Furthermore, we also consider the choices they made with their votes and their influence on the outcome of a proposal.
Recall that the _decision_ of a proposal poll is determined by the option with the highest voting power \(\hat{o}_{1}^{p}\) within the ranked outcome \(\hat{O}^{p}=[\hat{o}_{1}^{p},\,\hat{o}_{2}^{p},\,\dots]\). We denote the set of _decisive self-votes_ as \(V_{D}^{p}\), where the option \(o\) of \(V_{D}^{p}\) is the winning choice \(\hat{o}_{1}^{p}\).
We are specifically interested in the self-votes where the decision-making was dominated by contributors, which we denote as _contributor self-decisions_. Intuitively, for a given space \(s\), we determine the share of selected proposals based on two joint conditions. First, we consider the weight of contributor votes within a decision and select those proposals where contributors have a relative majority (\(\geq\) 50%). Second, we consider also the second-ranked option and select those proposals where the weight of contributors in the decision is higher than the weight of the second ranked option. The underlying intuition is that in a head-to-head race between options, contributors might want to outweigh and overrule a leading option.
More formally, we define the set of _decisive self-votes_\(V_{D}^{p}=V_{\hat{o}1}^{p}\cap V_{SS}^{p}\) and also the complement set \(V_{CV}^{p}=V_{\hat{o}1}^{p}\setminus V_{D}^{p}\); we identify the fractions of relative voting power for decisive self-votes (3), for the complement set (4) and for the second choice \(\hat{o}_{2}^{p}\) (5) as
\[\tilde{w}_{D}^{p}=\sum_{v_{i}\in V_{D}^{p}}\tilde{w}_{i}, \tag{3}\]

\[\tilde{w}_{CV}^{p}=\sum_{v_{i}\in V_{CV}^{p}}\tilde{w}_{i}, \tag{4}\]

\[\tilde{w}_{\hat{o}_{2}}^{p}=\sum_{v_{i}\in V_{\hat{o}_{2}}^{p}}\tilde{w}_{i}. \tag{5}\]
For a given space \(s\), we can define the _contributor self-decisions_\(\delta^{s}\) as follows:
\[\delta^{s}:=|P|^{-1}\sum_{p\in P}\left[(\tilde{w}_{D}^{p}>\tilde{w}_{CV}^{p})\wedge(\tilde{w}_{D}^{p}>\tilde{w}_{\hat{o}_{2}}^{p})\right]. \tag{6}\]
Figure 3 shows the results, with DAOs ranked by self-decisions in descending order. Note that we introduced thresholds and only show spaces with self-decisions above 0.1%. This gives us 178 (20.41%) different spaces where contributors of the same DAO decided at least one proposal on their own. In total, 2100 out of 35 124 proposals were decided by governance users who contributed to and voted on the same DAO. Annex B provides more details and analyses on involvement and self-decisions.
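The self-decision test of Eq. (6) can be sketched as a per-proposal predicate. The single-choice data layout below is an assumption made for illustration, not the paper's actual pipeline.

```python
def is_self_decision(votes, same_space_contributors):
    """Check the two conditions of Eq. (6) for one single-choice proposal.

    votes: list of (voter, option, normalised_weight) tuples.
    same_space_contributors: contributor addresses of the proposal's own space.
    """
    power = {}
    for _, option, w in votes:
        power[option] = power.get(option, 0.0) + w
    if not power:
        return False
    ranked = sorted(power, key=power.get, reverse=True)
    o1 = ranked[0]                                     # decision, highest power
    w_o2 = power[ranked[1]] if len(ranked) > 1 else 0.0
    w_d = sum(w for u, o, w in votes
              if o == o1 and u in same_space_contributors)   # Eq. (3)
    w_cv = power[o1] - w_d                             # Eq. (4)
    return w_d > w_cv and w_d > w_o2                   # both joint conditions
```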
## 5 Co-voting networks
In Section 4, we measured the involvement of contributors in decision-making for each space separately; however, users can contribute to multiple proposals and DAO spaces. Herein, we conduct a network-based analysis of users' co-voting patterns across DAOs. We analyze topological features such as centrality measures and community structures that may indicate whether contributors occupy a central role in the DAO voting ecosystem.
Networks construction. The basis for the investigation is the bipartite network \(G_{PU}\) that links users \(\mathcal{U}\) to the proposals \(\mathcal{P}\) they voted on, having options \(\mathcal{O}\) as edge features. We derive _co-voting networks_ as a monopartite projection of \(G_{PU}\) on voters, by creating a network of users with weighted links that represent the number of proposals they voted on together. We introduce a global threshold \(T\) on links to focus on users that systematically voted together on the same proposals, and for computational reasons.

Figure 3: **Contributor self-decisions across DAO spaces.** The 178 DAOs are ranked by contributor self-decisions \(\delta^{s}\) (\(\bullet\)) in descending order, with a threshold of 0.1%. The y-axis represents the fraction of proposals in which DAO contributors voted and decided their outcome with dominant voting power. Some high-TVL dApps (\(\blacklozenge\)) are annotated for illustrative purposes.
Ultimately, we build four co-voting networks crossing DAOs and votes as shown at the top of Table 2, namely:
* \(\mathbf{G_{AA}}\) is the entire co-voting network, containing all votes, regardless of users' choices, and all spaces;
* \(\mathbf{G_{AW}}\) is the co-voting network of decision-makers. It only takes into account votes \(v_{i}\) for the _winning_ decision \(\hat{o}_{1}^{p}\) (we hypothesize that co-voting patterns may be especially relevant among users who voted for the winning outcome);
* \(\mathbf{G_{TA}}\) is the co-voting network of all votes of the top-100 DAOs by TVL.
* \(\mathbf{G_{TW}}\) is the co-voting network of decision-makers in the top-100 DAOs by TVL, constructed in the same fashion as \(G_{AW}\).
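A naive sketch of this construction is given below (the quadratic per-proposal projection is only feasible after the cleaning and thresholding described above; the input format is our assumption).

```python
from collections import defaultdict
from itertools import combinations
import networkx as nx

def covoting_network(votes, threshold=10):
    """Monopartite projection of the user-proposal bipartite graph.

    votes: iterable of (user, proposal_id) pairs.
    Edge weight = number of proposals two users both voted on; edges below
    `threshold` are dropped, mirroring the global threshold T.
    """
    by_proposal = defaultdict(set)
    for user, proposal in votes:
        by_proposal[proposal].add(user)
    weights = defaultdict(int)
    for users in by_proposal.values():
        for u, v in combinations(sorted(users), 2):   # O(n^2) per proposal
            weights[(u, v)] += 1
    g = nx.Graph()
    g.add_weighted_edges_from(
        (u, v, w) for (u, v), w in weights.items() if w >= threshold
    )
    return g
```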
Network descriptive statistics. We utilize the cleaned dataset of \(5\,240\,622\) voting relations on \(872\) DAOs and \(35\,124\) proposals. We create the networks using the threshold \(T=10\) on the links among voters; even so, given the inherent computational challenge of computing metrics on the large network \(G_{AA}\), we focus mainly on the remaining three networks.
In all networks, we identify small-world features, i.e., they are characterized by the presence of several hubs conveying information rapidly across connected communities [44]. Interestingly, the share of contributor nodes and edges increases in the Top-100 networks (see the bottom rows in Table 2), suggesting both that contributors are more active there and that they tend to have a larger weight in shaping the outcomes of proposals of the most important DAOs, rather than the peripheral ones. The rest of the section tries to confirm whether this intuition is true. Appendix C contains additional network statistics and analyses.
| Network | \(G_{AA}\) | \(G_{AW}\) | \(G_{TA}\) | \(G_{TW}\) |
| --- | ---: | ---: | ---: | ---: |
| DAOs | All | All | Top-100 | Top-100 |
| Votes | All | Winning | All | Winning |
| Num Nodes | 104 863 | 75 879 | 20 401 | 14 494 |
| Num Edges | 739 813 062 | 107 374 710 | 19 917 792 | 6 045 065 |
| Avg. Degree | 14 110.09 | 2830.16 | 1952.63 | 834.15 |
| Contr. Nodes | 1.29% | 1.45% | 3.25% | 4.5% |
| Contr. Edges | 1.61% | 1.76% | 3.4% | 8.0% |

Table 2: **Network statistics of four co-voting networks.** The top of the table defines the four networks as a unique combination of two features: DAOs and Votes. _Top-100_ DAOs are ranked by total value locked (TVL); _Winning_ votes are votes for the choice that ultimately won the majority of voting power.
### Centrality of contributors
To understand the influence of contributors on governance voting, we computed several network centrality measures for contributor and non-contributor nodes, namely pagerank, closeness, eigenvector, and betweenness centrality, and a k-core analysis. We present pagerank in Figure 4 as well as the k-core. Across all four networks, contributor nodes score higher in centrality in all measures but eigenvector for \(G_{AW}\). These differences are generally highly significant (t-test \(p<0.001\)), only the betweenness centrality shows more variability (\(p<0.05\) for \(G_{TA}\), and \(p<0.1\) for \(G_{TW}\)).
We also computed the k-coreness of contributors. A high k-core indicates direct connections with other high-k-core nodes, that is, nodes with at least degree \(k\). Across all networks, contributors have, on average, a significantly higher k-core (t-test \(p<0.001\)). For these statistics we chose to use geometric means because they are less sensitive to outliers. In fact, contributors are generally less frequent in the portion of the distribution with the lowest k-core; however, there exist a few clusters of mainly non-contributors with very high k-core, which would otherwise skew the results.
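A sketch of this group comparison with networkx follows; the +1 shift inside the geometric mean is our own guard against zero k-core values and is not taken from the paper.

```python
import numpy as np
import networkx as nx

def centrality_by_group(g, contributors):
    """Compare contributor vs non-contributor nodes on pagerank and k-core."""
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    pagerank = nx.pagerank(giant, weight="weight")   # uses edge weights
    kcore = nx.core_number(giant)                    # k-core ignores weights

    def split(scores):
        contr = [v for n, v in scores.items() if n in contributors]
        other = [v for n, v in scores.items() if n not in contributors]
        return contr, other

    def gmean(xs):
        # geometric mean, shifted by +1 to tolerate zero k-core values
        return float(np.exp(np.mean(np.log(np.asarray(xs, float) + 1.0)))) - 1.0

    pr_c, pr_o = split(pagerank)
    kc_c, kc_o = split(kcore)
    return {"pagerank_mean": (np.mean(pr_c), np.mean(pr_o)),
            "kcore_gmean": (gmean(kc_c), gmean(kc_o))}
```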
### Communities of contributors
To understand the presence of hidden co-voting formations, we performed the Louvain community detection method on three co-voting networks. This method optimizes the modularity of the graph so that the connections within each community are dense, while the connections across communities are sparse. As a result, each node in the graph is uniquely assigned to a community. We then tested whether contributors can be found with equal probability in all communities or whether they are more likely to cluster together in a few of them. To answer this question, we computed the Herfindahl-Hirschman index of market concentration on the distribution of contributors to communities. A higher value of this index indicates a more concentrated market, that is, a distribution with fewer groups or communities dominating in size. Figure 5 indeed shows a very high concentration level for contributors in all networks, with a peak above 7000 (below 1500 is considered well-mixed, between 1500 and 2500 is moderately concentrated, and above 2500 is highly concentrated). Counting the communities with at least one contributor (the donut plot inside each panel of Figure 5), contributors are to be found in only about 21-50% of all detected communities. A Pearson's Chi-squared test indicates a significant deviation from chance in all networks (\(p<0.001\), with 100 000 bootstrapped replicates). Figure 6 visually confirms this result for the \(G_{TA}\) network: contributors (dark red) tend to cluster in a few central communities.

Figure 4: **Pagerank and k-core statistics in the four co-voting networks.** Contributors tend to have higher pagerank and k-core across networks. All centrality measures make use of edge weights and are applied to the giant component; k-core statistics use the geometric mean to limit the effect of outliers. Error bars are 95% confidence intervals of the means.
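The community assignment and the concentration index used in this analysis can be sketched as follows, assuming a recent networkx (>= 2.8) for the Louvain implementation; the percent-share convention puts the index on the usual 0-10000 scale quoted above.

```python
from collections import Counter
import networkx as nx

def contributor_hhi(g, contributors, seed=0):
    """Herfindahl-Hirschman index of how contributors spread over
    Louvain communities: 10000 = all in one community, ~0 = well mixed."""
    parts = nx.community.louvain_communities(g, weight="weight", seed=seed)
    community_of = {n: i for i, part in enumerate(parts) for n in part}
    counts = Counter(community_of[u] for u in contributors if u in community_of)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum((100.0 * c / total) ** 2 for c in counts.values())
```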
## 6 Pre-voting power shifts
Governance tokens are cryptoassets and, consequently, can be purchased and sold. Furthermore, previous works provide preliminary evidence that users may hold their voting rights only for the duration of single proposals [16]. Therefore, we hypothesize that changes in the ownership distribution shortly before the voting power is determined could indicate attempts to acquire additional power to influence a proposal's decision. We investigate to what extent users, and especially contributors, acquire voting rights shortly before the poll execution.
For proposals that rely on on-chain data, it is possible to access the current and historical token balances of voters. Thus, we determine their voting power at earlier points in time by re-implementing the proposal strategies on historical data and comparing it to their actual voting power. Note that we recompute it considering that the voter \(u_{i}\) still selects the same option \(o_{i}\). Assuming that users' holdings do not fluctuate with high frequency, we sample their token balances on a daily basis. More formally, given a proposal \(p\), for each voter \(u_{i}\) we denote the actual voting power \(w_{i}(h_{\tau})\) as the voting power at the block \(h_{\tau}\) of the vote execution, and \(\hat{O}^{p}(h_{\tau})=[\hat{o}_{1}^{p}(h_{\tau}),\,\dots]\) as the actual ranked outcome. Next, for each of the 100 days preceding the vote on the proposal \(p\), we recompute the users' historical voting power \(w_{i}(h_{\tau-t})\) and the resulting hypothetical ranked outcome \(\hat{O}^{p}(h_{\tau-t})\), where \(h_{\tau-t}\) is a block representative of the \(t^{th}\) day before the poll. We thus compare \(\hat{O}^{p}(h_{\tau-t})\) to \(\hat{O}^{p}(h_{\tau-t-1})\) and determine whether there was a _majority shift_, i.e., whether \(\hat{o}_{1}^{p}(h_{\tau-t})\neq\hat{o}_{1}^{p}(h_{\tau-t-1})\). Finally, we measure the number of majority shifts across proposals. Since we are correlating against on-chain data, we utilize the validated dataset described in Section 3, covering 8116 (23.11%) proposals. We therefore emphasize that the findings reported in this section are a lower-bound estimate.

Figure 5: **Concentration of contributors across network communities.** The bar plots show the Herfindahl-Hirschman concentration index for the distribution of contributors (\(\blacksquare\)) and non-contributors (\(\blacksquare\)) to communities assigned by the Louvain community detection algorithm. The inset donut plots show the share of communities with at least one contributor; in all networks, contributors are concentrated in a few of them.

Figure 6: **The co-voting network of the Top-100 DAOs by TVL (winning votes only).** The colors identify the largest communities obtained by optimizing modularity, while smaller communities are in light gray; contributor nodes are colored in dark red (\(\bullet\)). Contributors are not uniformly distributed across communities, but they tend to cluster in a few of them in the middle of the graph, suggesting a higher network centrality. Network plotted using the OpenOrd layout algorithm in Gephi [7], after removing redundant edges following [30].
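A sketch of the shift-detection loop follows; the `power_at` callback stands in for re-running a proposal's on-chain strategy against historical balances, which is the expensive part of the actual pipeline and is assumed here.

```python
def majority_shifts(votes, power_at, days=100):
    """Count majority shifts in the `days` before a poll.

    votes: list of (voter, option) pairs for one proposal.
    power_at: function (voter, t) -> voting power t days before the poll
              (t = 0 is the actual poll block).
    """
    def winner(t):
        tally = {}
        for voter, option in votes:
            tally[option] = tally.get(option, 0.0) + power_at(voter, t)
        return max(tally, key=tally.get) if tally else None

    shifts = 0
    prev = winner(days)
    for t in range(days - 1, -1, -1):      # walk day by day toward the poll
        cur = winner(t)
        if cur != prev:
            shifts += 1
        prev = cur
    return shifts
```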
In total, we found majority shifts for 1202 (14.81%) proposals in 229 DAOs in the 100 days before the poll. The median number of shifts per proposal is 1, with a standard deviation of 2.64, and the maximum number of shifts for a single proposal is 30. To investigate whether majority shifts are more frequent in the proximity of vote executions, Figure 7 reports the aggregated count of majority shifts across proposals as a function of the time distance from the vote execution. We observe a constant or slightly increasing trend at farther dates, from \(-100\)d to \(-50\)d, and a clearly increasing trend the closer time gets to the vote date 0d. This indicates that the trading of governance tokens increases shortly before polls and that users might trade voting power to decide the outcome of the proposal in their preferred way. We acknowledge, however, that we only identify a pattern, and further research is required to better investigate this phenomenon.
Finally, we examine the participation of contributors in the proposals with majority shifts. Out of 1202 proposals with majority shifts, 1362 contributors associated with 1457 different DAO spaces voted in 728 (60.57%) proposals.
## 7 Discussion and Conclusions
Our study augments the existing body of knowledge on decision-making in DAOs. It substantiates the results of previous studies, highlighting that the distribution of governance tokens is highly concentrated [6, 20] and that the exercise of voting rights is very low [5]. Going beyond these findings, we discovered evidence that contributors, who are essentially users vested in DAOs, are involved in decision-making and, in some cases, have the power to effectively influence the trajectories of DAOs. Furthermore, we provide evidence that contributors are more likely found at the center of the DAO governance ecosystem and that majority shifts happen, especially shortly before the votes.

Figure 7: **Majority shifts occur in temporal proximity of polls.** Majority shifts occur when the voters, shortly before a proposal, trade enough governance tokens to swing the final outcome of the poll. We focus on the _validated dataset_ and identify majority shifts up to 100 days before the votes. In temporal proximity to the proposals, the number of shifts increases, indicating _last-minute_ voting power acquisition.
These findings have several implications. First, they suggest that contributors are overrepresented in the decision-making process of certain DAOs compared to other governance users. This is in line with known concerns that contributors may have differing interests and that users with smaller stakes might be discouraged from voting [8]. Second, we found only limited evidence for DAO contributors influencing other, possibly competing, spaces. This is relevant because a rational governance token holder vested in a space might be interested in voting against proposals benefiting the evolution or adoption of other, competing DAOs (see [19]). Third, we found evidence of co-voting patterns among contributors, which is an indicator of the existence of inner circles of power in DAOs. These findings refute the conventional wisdom that DAOs are decentralized and run autonomously without being under anyone's control. This is relevant for resolving questions of accountability, as vested users and large governance token holders may be considered members of a legally recognized entity and therefore responsible for the underlying dApp. The ongoing Tornado Cash investigation claiming that the developers influenced its governance is a prime example of this line of argumentation.
Our work clearly faces some limitations and opens directions for further research. Currently, it focuses on off-chain voting and on one platform alone (Snapshot), whilst voting is also executed on-chain on multiple DLTs, as well as on other off-chain platforms. Extending the study to other governance platforms and to on-chain DLT voting would be a straightforward improvement. Furthermore, our results provide preliminary evidence that majority shifts take place before voting. It would be important to investigate and explain more formally the factors influencing governance participation and voting, also combining on-chain data with traditional methods surveying crypto users [4, 3]. Lastly, we view the contributors in our dataset as a lower bound. Incorporating more data sources, such as GitHub, would likely elevate this baseline.
Who governs DAOs? This question has been the driving force behind our research and is also a significant concern for regulators currently formulating policy recommendations for Decentralized Finance (DeFi) systems [31]. Although our study does not aim to unveil the identities of the responsible individuals, as regulatory efforts suggest, it offers a systematic investigation into the role of vested users in the governance of DAOs. This, in turn, can provide valuable insights and inform ongoing regulatory debates on that topic. |
2309.04021 | Magnetocentrifugal mechanism of pair creation in AGN | In the manuscript, we study the efficiency of pair creation by means of the
centrifugal mechanism. The strong magnetic field and the effects of rotation,
which always take place in Kerr-type black holes, guarantee the frozen-in
condition, leading to the generation of an exponentially amplifying
electrostatic field. This field, when reaching the Schwinger threshold, leads
to efficient pair production. The process has been studied for a wide range of
AGN luminosities and black hole masses, and it was found that the mechanism is
very efficient, indicating that for AGNs where centrifugal effects are
significant, the annihilation lines in the MeV range will be very strong. | Zaza N. Osmanov, Gianluigi Bodo, Paola Rossi | 2023-09-07T21:21:59Z | http://arxiv.org/abs/2309.04021v1 | # Magnetocentrifugal mechanism of pair creation in AGN
###### Abstract
In the manuscript, we study the efficiency of pair creation by means of the centrifugal mechanism. The strong magnetic field and the effects of rotation, which always take place in Kerr-type black holes, guarantee the frozen-in condition, leading to the generation of an exponentially amplifying electrostatic field. This field, when reaching the Schwinger threshold, leads to efficient pair production. The process has been studied for a wide range of AGN luminosities and black hole masses, and it was found that the mechanism is very efficient, indicating that for AGNs where centrifugal effects are significant, the annihilation lines in the MeV range will be very strong.
keywords: pair creation; AGN: general; instabilities; acceleration of particles
## 1 Introduction
In the literature, the population of electron-positron pairs in AGN magnetospheres has been studied from different perspectives. In the framework of Penrose pair production, the MeV photons originating in the inner accretion disk and entering the ergosphere may increase their energy via the blueshift effect. In due course, the energy will reach the GeV threshold, which is enough for pair creation after scattering off the protons [1]. Another popular scenario is the so-called \(\gamma\gamma\) process, when high-energy photons scatter off relatively soft photons, always present in the accretion disks, and produce electron-positron
pairs [2]. As it has been found, these are not the only mechanisms providing the population of \(e^{+}e^{-}\) pairs. The present paper is dedicated to the study of the new mechanism of pair production in the AGN magnetospheres.
In a recent paper [3], a new mechanism of pair creation in the magnetospheres of pulsars has been presented. In particular, it has been shown that since the magnetospheres of pulsars are characterized by rotation, magnetocentrifugal effects might lead to the generation of Langmuir waves. On the other hand, the centrifugal force depends harmonically on time [4], leading to the parametric instability of the process and thus to the exponential growth of the electrostatic field. By means of this growth, under certain conditions, the electrostatic field will approach the Schwinger threshold, \(E_{S}=\pi m^{2}c^{3}/(e\hbar)\simeq 1.4\times 10^{14}\) statvolt cm\({}^{-1}\) [5, 6, 7], when pair creation might start. Here \(m\) and \(e\) are the electron's mass and charge respectively, \(c\) is the speed of light, and \(\hbar\) denotes the reduced Planck constant. Quantum electrodynamics considers the vacuum as a complex system composed of virtual particles and antiparticles continuously being created and annihilating. If, on the other hand, the work done by a strong electric field \(E\) over the Compton wavelength, \(\lambda_{C}\), is of the order of the rest energy of the created pair, \(eE\lambda_{C}\simeq 2mc^{2}\), pair creation becomes extremely efficient.
It is generally accepted that plasma particles in a nearby region of an AGN are embedded in a magnetic field strong enough to provide the frozen-in condition [8], which, when combined with the effects of rotation (always present in Kerr-type black holes [9, 10]), leads to relativistic magnetocentrifugal effects close to the light cylinder (LC) zone [11] - a hypothetical area where the linear velocity of rotation exactly equals the speed of light (see the sketch in Fig. 1). As has been shown, magnetocentrifugal acceleration might play an important role in particle energization in AGN [13, 12], where particles might reach energies of the order of 10 TeV. This very effect will inevitably induce Langmuir waves in the magnetospheres of black holes as well.

Figure 1: Sketch of the model, with the centrifugally accelerated co-rotating particles in the nearby zone of the LC area of the Kerr-type black hole with an accretion disk.
A series of papers [15, 14, 16] has been dedicated to studying this particular problem in AGN (also considering particle acceleration), and it was found that the magnetocentrifugal generation of Langmuir waves is so efficient that the energy pumped into the electrostatic modes from rotation is enormous. A similar study for pulsars also showed the extremely efficient character of the excitation of electrostatic modes [17, 18].
In the framework of these works, an approximate expression of the Goldreich-Julian (GJ) particle number density has been used [19], without taking general relativistic effects into account. This particular problem has been studied in [20], where the authors examined Kerr-type black holes and derived a general-relativistic expression of the GJ density.
In [21], the excitation of magnetocentrifugally driven Langmuir waves has been explored by taking the general expression of the GJ density into account. The growth rate of the process has been studied for a wide range of physical parameters, including the particle Lorentz factors, the AGN luminosity, and the mass of the central object. It was found that the time scale of the process is small compared to the kinematic time scale of rotation, indicating high efficiency of the exponential amplification of the electric field.
The exponentially increasing electric field will reach the Schwinger threshold, and pair production will start. In the present manuscript, we examine this new mechanism of magnetocentrifugally driven pair creation in AGN magnetospheres and explore the process versus important physical parameters.
The paper is organized as follows: in Sec. 2, the general framework of the approach will be outlined; in Sec. 3, we will apply the model to AGN, obtaining results; and in Sec. 4, we summarize them.
## 2 The Framework
In this section, we briefly outline the theoretical model of centrifugally excited Langmuir waves and the corresponding pair cascading that takes place when the electric field approaches the Schwinger threshold. In the framework of the paper, we assume that the magnetic fields are almost straight because, as the study shows [22], they change their rectilinear configuration on the LC surface,
and for most of their evolution the particles follow the unperturbed field lines.
The generation of electrostatic waves is fully governed by the following set of equations (rewritten in Fourier space) [21]
\[\frac{\partial p_{{}_{\beta}}}{\partial t}+ikv_{{}_{\beta 0}}p_{{}_{\beta}}=v_{{}_{\beta 0}} \Omega_{0}^{2}r_{{}_{\beta}}p_{{}_{\beta}}+\frac{e_{{}_{\beta}}}{m_{{}_{\beta}}}E, \tag{1}\]
\[\frac{\partial n_{{}_{\beta}}}{\partial t}+ikv_{{}_{\beta 0}}n_{{}_{\beta}}+ikn_{{}_{\beta 0}}v_{{}_{\beta}}=0, \tag{2}\]
\[ikE=4\pi\sum_{{}_{\beta}}n_{{}_{\beta 0}}e_{{}_{\beta}}, \tag{3}\]
where Eq. (1) represents the Euler equation, Eq. (2) is the continuity equation and Eq. (3) is the Poisson equation. We used the following notations: \(p_{{}_{\beta}}\) denotes the dimensionless first order momentum, \(\beta\) is an index of species (protons or electrons), \(k\) represents the wave number, \(\upsilon_{{}_{\beta 0}}(t)\approx c\cos\left(\Omega_{0}t+\phi_{{}_{\beta}}\right)\) is the unperturbed velocity, \(\Omega_{0}\) denotes the angular velocity of rotation, \(\phi_{{}_{\beta}}\) represents a phase, \(r_{{}_{\beta}}(t)\approx\frac{c}{\Omega_{0}}\sin\left(\Omega_{0}t+\phi_{{}_{\beta}}\right)\) is the radial coordinate of the corresponding species, \(e_{{}_{\beta}}\) denotes the charge, and \(n_{{}_{\beta}}\) and \(n_{{}_{\beta 0}}\) are respectively the first and the zeroth order Fourier terms of the number density.
Following the method originally developed for AGN in [16] (see also the detailed study in [21]), one obtains the dispersion relation of the process
\[\omega^{2}-\omega_{e}^{2}-\omega_{p}^{2}J_{0}^{2}(b)=\omega_{p}^{2}\sum_{\mu}J _{\mu}^{2}(b)\frac{\omega^{2}}{(\omega-\mu\Omega_{0})^{2}}, \tag{4}\]
leading to the growth rate of the instability
\[\Gamma=\frac{\sqrt{3}}{2}\left(\frac{\omega_{e}\omega_{p}^{2}}{2}\right)^{\frac{1}{3}}J_{\mu}^{2/3}(b), \tag{5}\]
where \(\omega\) denotes the frequency of the electrostatic waves, \(\omega_{e,p}\equiv\sqrt{4\pi e^{2}n_{e,p}/m_{e,p}\gamma_{e,p}^{3}}\) represents the plasma frequency of the corresponding species (electrons and protons), \(n_{e,p}\), \(m_{e,p}\) and \(\gamma_{e,p}\) are respectively the density, mass, and relativistic factor of the mentioned components, \(J_{\nu}(x)\) is the Bessel function of the first kind, \(b=2ck\sin\phi/\Omega_{0}\) and \(\mu=\omega_{e}/\Omega_{0}\).
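For orientation, Eq. (5) can be evaluated numerically. The sketch below uses SciPy's Bessel function and takes the absolute value of \(J_{\mu}(b)\) so that the fractional power stays real; the CGS unit choices are ours and input values would have to be supplied for a specific AGN.

```python
import numpy as np
from scipy.special import jv

def growth_rate(omega_e, omega_p, Omega0, k, phi, c=3e10):
    """Instability growth rate, Eq. (5); frequencies in rad/s, k in cm^-1."""
    b = 2.0 * c * k * np.sin(phi) / Omega0
    mu = omega_e / Omega0                  # resonant harmonic number
    bessel = np.abs(jv(mu, b))             # |J_mu(b)| keeps the 2/3 power real
    return (np.sqrt(3.0) / 2.0) * (omega_e * omega_p**2 / 2.0) ** (1.0 / 3.0) \
        * bessel ** (2.0 / 3.0)
```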
The evolution of the electric field is then given by
\[E=E_{0}e^{\Gamma t}, \tag{6}\]
where for the initial value of the electric field one can use Gauss's law
\[E_{0}\simeq 4\pi en\Delta r, \tag{7}\]
where \(n\) is the number density of particles and \(\Delta r=\frac{\gamma}{d\gamma/dr}\) is a spatial scale where the centrifugal effects are supposed to be most important. By taking an expression of the Lorentz factor of centrifugally accelerated particles into account [23]
\[\gamma=\frac{\gamma_{0}}{1-r^{2}/R_{lc}^{2}}, \tag{8}\]
one can estimate the scale of the shell as \(\Delta r\simeq\gamma_{0}R_{lc}/(2\gamma)\) [21]. Here \(\gamma_{0}\) is the initial relativistic factor and \(R_{lc}=c/\Omega_{0}\) represents the LC radius.
If the particle distribution is determined by rotation, then the particle density should equal the GJ density, which in the general-relativistic scenario is given by [20]
\[n_{{}_{GJ}}\simeq\frac{\left(\Omega-\Omega^{F}\right)B_{H}r_{H}^{2}\cos\theta }{\pi ce\alpha_{l}\rho^{2}}, \tag{9}\]
where \(\Omega=2c\alpha_{s}r_{g}r/\Sigma^{2}\) is the angular velocity with respect to the absolute space, \(\alpha_{s}=ar_{g}\), \(r_{g}=GM/c^{2}\), \(M\) represents the black hole mass, \(\Omega^{F}=c\alpha_{s}/4r_{g}r_{H}\) denotes the angular velocity by which the magnetic field lines co-rotate, \(r_{H}=r_{g}+\sqrt{r_{g}^{2}-\alpha_{s}^{2}}\) is the event horizon radius, \(\alpha_{l}=\rho\sqrt{\Delta}/\Sigma\), \(\rho^{2}=r^{2}+\alpha_{s}^{2}\cos^{2}\theta\), \(\Delta=r^{2}-2rr_{g}+\alpha_{s}^{2}\) and \(\Sigma^{2}=\left(r^{2}+\alpha_{s}^{2}\right)^{2}-\alpha_{s}^{2}\Delta\sin^{2}\theta\) and \(\theta\) is the angle relative to the axis of rotation.
By means of the instability of the centrifugally induced Langmuir waves, the electric field will reach the Schwinger threshold. For high values of the electric field, the pair creation rate per unit of volume is given by [7, 24]
\[R\equiv\frac{dN}{dt\,dV}=\frac{e^{2}E^{2}}{4\pi^{3}c\hbar^{2}}\sum_{k=1}^{\infty}\frac{1}{k^{2}}\exp\left(-\frac{\pi m^{2}c^{3}}{e\hbar E}\,k\right). \tag{10}\]
This expression is valid for constant electric fields. One can straightforwardly check that the Langmuir frequency is of the order of \(10^{4-5}\) Hz. On the other hand, the characteristic frequency of pair creation, \(\nu\simeq 2mc^{2}/h\sim 10^{20}\) Hz, exceeds the plasma frequency by many orders of magnitude, indicating that the aforementioned expression realistically describes the pair cascading process. As is clear from Eq. (10), pair creation will occur not only at the Schwinger threshold, when the process becomes extremely efficient, but also for values
less than \(E_{S}\). On the other hand, if the difference between \(E_{S}\) and \(E\) becomes large, the process will be exponentially suppressed.
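For reference, the series in Eq. (10) converges quickly and is easy to evaluate; a minimal sketch in CGS units follows, with \(E_{S}=m^{2}c^{3}/(e\hbar)\simeq 4.4\times 10^{13}\) statvolt cm\({}^{-1}\) denoting the Schwinger critical field that sets the exponential scale.

```python
import numpy as np

e, c, hbar, m_e = 4.803e-10, 2.998e10, 1.055e-27, 9.109e-28  # CGS
E_S = m_e**2 * c**3 / (e * hbar)  # Schwinger critical field, ~4.4e13 statvolt/cm

def schwinger_rate(E, kmax=50):
    """Pair-creation rate per unit volume, Eq. (10), truncated at kmax terms."""
    k = np.arange(1, kmax + 1)
    prefactor = e**2 * E**2 / (4 * np.pi**3 * c * hbar**2)
    return prefactor * np.sum(np.exp(-np.pi * E_S * k / E) / k**2)

for E in (0.05 * E_S, 0.5 * E_S, E_S):
    print(f"E/E_S = {E / E_S:4.2f}:  R = {schwinger_rate(E):.3e} cm^-3 s^-1")
```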
## 3 Discussion and Results
The Kerr-type black holes are rotating with the angular velocity [10]
\[\Omega\approx\frac{ac^{3}}{2GM\left(1+\sqrt{1-a^{2}}\right)}\approx 2.5\times 10^{-2}\frac{a}{M_{8}\left(1+\sqrt{1-a^{2}}\right)}\;\mathrm{rad/s}, \tag{11}\]
where \(M_{8}=M/(10^{8}M_{\odot})\) is a dimensionless mass parameter of the black hole and \(M_{\odot}\simeq 2\times 10^{33}\) g is the solar mass. Rotation combined with the magnetic field will lead to the magnetocentrifugal process of acceleration.
In the framework of the paper, we assume the equipartition approximation, in which the magnetic field energy density is of the order of the AGN emission energy density, so that for the magnetic induction one obtains
\[B\simeq\frac{1}{r}\sqrt{\frac{2L}{c}}\simeq 2.8\times 10^{2}\times\frac{R_{H}}{r }\times L_{42}^{1/2}, \tag{12}\]
where \(L\) is the luminosity of AGN and we use the dimensionless luminosity \(L_{42}=L/(10^{42}erg~{}s^{-1})\). In such a strong magnetic field, electrons have a gyro-radius of the order of \(R_{gyro}\simeq\gamma mc/(eB)\simeq 10^{-5}\times\gamma/10^{4}\) cm, which is many orders of magnitude smaller than the spatial scale of the process, the LC radius; therefore, the plasma particles are in the frozen-in condition and accelerate centrifugally. The acceleration process, however, might be limited by several factors.
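A quick numerical cross-check of these order-of-magnitude estimates (the equipartition field of Eq. (12) and the electron gyro-radius) can be done as follows; the angular velocity is taken from Eq. (11) with \(M_{8}=1\) and \(a\to 1\), and the remaining numbers are illustrative.

```python
import numpy as np

c, e, m_e = 2.998e10, 4.803e-10, 9.109e-28  # CGS

L = 1.0e42               # erg/s (L_42 = 1)
Omega = 2.5e-2           # rad/s, Eq. (11) for M_8 = 1, a -> 1
R_lc = c / Omega         # light-cylinder radius
B = np.sqrt(2 * L / c) / R_lc       # Eq. (12) evaluated at r = R_lc
gamma = 1.0e4
R_gyro = gamma * m_e * c / (e * B)  # electron gyro-radius
print(f"R_lc ~ {R_lc:.1e} cm,  B(R_lc) ~ {B:.1e} G,  R_gyro ~ {R_gyro:.1e} cm")
```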
Moving in the strong magnetic field, the particles will experience very efficient synchrotron losses with the power \(P_{s}\simeq 2e^{4}B^{2}\gamma^{2}/(3m_{p}c^{3})\). Then, for the corresponding time scale of energy losses, one obtains \(\gamma m_{p}c^{2}/P_{s}\) which for the same Lorentz factor, \(10^{4}\), is much less than the rotation period of the black hole's nearby zone, \(2\pi/\Omega\). As a result, the particles, soon after they start accelerating, lose their perpendicular momentum, transit to the ground Landau state, and continue sliding along the field lines. Therefore, the synchrotron mechanism does not impose any constraints on particle energies. A similar scenario takes place for protons as well.
Another constraint has been introduced in [23] for the field lines co-rotating in the equatorial plane and developed in [13] for the inclined ones: the particles experience an effective reaction force from the field lines. On the other hand, the same charged particles experience the magnetic Lorentz force. Initially the particles follow the field lines, but in due course of time the reaction force will exceed the Lorentz force (in the LC area), violating the frozen-in condition, and for the maximum Lorentz factor one obtains
\[\gamma_{max}^{BBW}\simeq A_{1}+\left[A_{2}+\left(A_{2}^{2}-A_{1}^{6}\right)^{1 /2}\right]^{1/3}+\left[A_{2}-\left(A_{2}^{2}-A_{1}^{6}\right)^{1/2}\right]^{1/ 3}, \tag{13}\]
with
\[A_{1}=-\frac{\gamma_{0}\cot^{2}\theta}{12}, \tag{14}\]
\[A_{2}=\frac{\gamma_{0}e^{2}B_{lc}^{2}}{4m^{2}c^{2}\Omega^{2}}+A_{1}^{3}, \tag{15}\]
where \(B_{lc}\) is the magnetic field on the LC length-scales.
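Eqs. (13)-(15) can be evaluated directly; the sketch below uses complex cube roots so that the result remains real when \(A_{2}^{2}<A_{1}^{6}\). The light-cylinder field value is an assumed placeholder consistent with the equipartition estimate above.

```python
import numpy as np

c, e, m_p = 2.998e10, 4.803e-10, 1.673e-24  # CGS; protons, for which BBW is limiting

def gamma_max_BBW(gamma0, theta, B_lc, Omega, m=m_p):
    """Eqs. (13)-(15); .real discards the mutually cancelling imaginary parts."""
    A1 = -gamma0 / (12 * np.tan(theta) ** 2)  # A1 = -gamma0 cot^2(theta) / 12
    A2 = gamma0 * e**2 * B_lc**2 / (4 * m**2 * c**2 * Omega**2) + A1**3
    s = np.sqrt(complex(A2**2 - A1**6))
    return (A1 + (A2 + s) ** (1 / 3) + (A2 - s) ** (1 / 3)).real

print(f"{gamma_max_BBW(gamma0=10, theta=np.deg2rad(89), B_lc=3.5e3, Omega=2.5e-2):.2e}")
```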
Particles moving in a photon sea are subject to another process, inverse Compton (IC) scattering, in which the accelerated particles encounter soft photons. Saturation occurs when the energy gain and the cooling process balance each other. When this happens, the maximum relativistic factor achievable by electrons is [13]
\[\gamma_{max}^{IC}\simeq\left(\frac{8\pi m_{e}c^{4}}{\gamma_{0}\sigma_{T}L \Omega}\right)^{2}, \tag{16}\]
where \(\sigma_{T}\) denotes the Thomson cross section. For protons, the same mechanism is strongly suppressed [25], therefore, in the aforementioned expression, we used the electron's mass.
When charged particles move on curved trajectories, they emit curvature radiation, which, when balanced against the energy gain due to centrifugal acceleration, yields the maximum Lorentz factor [26]
\[\gamma_{max}^{c}\simeq\frac{1}{\gamma_{0}^{1/5}}\times\left(\frac{3\pi mc^{3} \sin\alpha}{2\pi e^{2}\Omega}\right)^{2/5}\times\left(\frac{R_{c}}{R_{lc}} \right)^{4/5}, \tag{17}\]
where \(R_{c}\) is the curvature radius of the trajectory, and we assume that the trajectory close to the LC is almost a circle [11]; therefore, \(R_{c}\simeq R_{lc}\).
It is clear that the mechanism that provides the minimum value of the relativistic factor is the leading process in limiting the maximum achievable energies.
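The comparison is easy to script; the sketch below evaluates Eqs. (16) and (17) for electrons (with \(R_{c}\simeq R_{lc}\) and illustrative parameter values) and returns the smaller of the two as the limiting factor.

```python
import numpy as np

c, e, m_e, sigma_T = 2.998e10, 4.803e-10, 9.109e-28, 6.652e-25  # CGS

def gamma_max_IC(gamma0, L, Omega):
    """Eq. (16): inverse-Compton-limited Lorentz factor for electrons."""
    return (8 * np.pi * m_e * c**4 / (gamma0 * sigma_T * L * Omega)) ** 2

def gamma_max_curv(gamma0, Omega, alpha=np.pi / 2, Rc_over_Rlc=1.0, m=m_e):
    """Eq. (17): curvature-radiation-limited Lorentz factor, with R_c ~ R_lc."""
    return (gamma0 ** -0.2
            * (3 * np.pi * m * c**3 * np.sin(alpha) / (2 * np.pi * e**2 * Omega)) ** 0.4
            * Rc_over_Rlc ** 0.8)

g0, L, Omega = 10.0, 1.0e42, 2.5e-2
print(min(gamma_max_IC(g0, L, Omega), gamma_max_curv(g0, Omega)))
```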
One can straightforwardly check that if \(\theta\simeq\pi/2\), then for electrons this is the IC scattering with
\[\gamma_{e,max}\simeq 1.2\times 10^{4}\times\left(\frac{10}{\gamma_{0}}\times \frac{M_{8}}{L_{42}}\right)^{2} \tag{18}\]
and for protons it is the BBW process
\[\gamma_{p,max}\simeq 3.2\times 10^{6}\times M_{8}^{2/3}\times\left(L_{42}\times \frac{\gamma_{0}}{10}\right)^{1/3}. \tag{19}\]
In Fig. 2 we plot the instability time scale (normalised by the rotation period of the black hole, \(P=2\pi/\Omega\)) versus the Lorentz factors, which are supposed to be equal. The set of parameters is: \(M_{8}=1\), \(L_{42}=1\), \(\theta\simeq 89^{\circ}\), \(r=R_{lc}/\sin\theta\), \(\gamma_{0}=1\). As is evident from the figure, for almost the whole range of the considered Lorentz factors, the instability timescale is small compared to the period of rotation, indicating the high efficiency of the process. The highest efficiency is reached for the smallest Lorentz factors; on the other hand, the instability strongly depends on the relativistic character of the process: the radial velocity behaves as \(\upsilon\simeq c\cos\left(\Omega_{0}t+\phi\right)\)[4], therefore, one could assume \(\gamma_{e,p}\gtrsim 2\).
Due to the instability, the electric field exponentially increases, and on approaching the Schwinger threshold, pair creation is initiated (see Eq. (10)). It is clear from Eq. (10) that, once initiated, the rate becomes very high. On the other hand, by creating the pairs, the plasma energy density will be significantly increased, which will have a feedback effect on the process itself. In particular, the energy balance leads to a condition when the pair plasma power density gain becomes of the order of the electric power density
\[2m_{e}c^{2}R(t)\simeq\frac{d}{dt}\left(\frac{E^{2}(t)}{8\pi}\right). \tag{20}\]
This is an algebraic equation for the time-scale \(t_{0}\) at which the condition is satisfied. By combining Eqs. (6) and (10), for \(\gamma_{e,p}=10\) the aforementioned expression leads to \(E\simeq E_{0}e^{\Gamma t_{0}}\simeq 3\times 10^{12}\) statvolt cm\({}^{-1}\). With such a high value, the pair production rate becomes quite high. From Eq. (10) one can show that \(R\simeq 2\times 10^{28}\) cm\({}^{-3}\) s\({}^{-1}\).

Figure 3: Behaviour of the dimensionless time-scale versus the inclination angle of the field line relative to the rotation axis. The set of parameters is: \(M_{8}=1\), \(L_{42}=1\), \(r=R_{lc}/\sin\theta\), \(\gamma_{0}=1\), \(\gamma_{e,p}=2\).
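As a numerical cross-check, note that with \(E=E_{0}e^{\Gamma t}\) the right-hand side of Eq. (20) equals \(\Gamma E^{2}/(4\pi)\), so the balance reduces to an algebraic condition on \(E\) alone; the sketch below solves it by bracketing, with an assumed growth rate of the order suggested by Eqs. (23)-(24) below.

```python
import numpy as np
from scipy.optimize import brentq

e, c, hbar, m_e = 4.803e-10, 2.998e10, 1.055e-27, 9.109e-28  # CGS
E_S = m_e**2 * c**3 / (e * hbar)  # Schwinger critical field

def R_schwinger(E, kmax=50):
    """Pair-creation rate of Eq. (10)."""
    k = np.arange(1, kmax + 1)
    return e**2 * E**2 / (4 * np.pi**3 * c * hbar**2) * np.sum(np.exp(-np.pi * E_S * k / E) / k**2)

def balance(E, Gamma):
    # 2 m c^2 R(E) - Gamma E^2 / (4 pi); the r.h.s. is d/dt(E^2/8pi) for E ~ exp(Gamma t)
    return 2 * m_e * c**2 * R_schwinger(E) - Gamma * E**2 / (4 * np.pi)

Gamma = 2.0e-3  # s^-1, assumed
print(f"E at balance ~ {brentq(balance, 1e11, E_S, args=(Gamma,)):.1e} statvolt/cm")
```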
The corresponding time-scale when the pair production rate reaches this value equals \(t_{0}\simeq 10^{4}\) sec, which for the number density gives \(n_{pair}\simeq Rt_{0}\simeq 10^{32}\) cm\({}^{-3}\). This in turn leads to an annihilation time-scale of the order of \(\tau_{ann}\simeq 1/(\sigma n_{pair}c)\simeq 10^{-19}\) sec (\(\sigma\simeq 10^{-24}\) cm\({}^{2}\) is the Thomson cross section). Therefore, in limiting the pair production rate, the annihilation process has to be taken into account.
In particular, for non-relativistic temperatures, it has been found that the annihilation rate is given by [27]
\[\Lambda\simeq 2\pi cr_{e}^{2}n_{-}n_{+}, \tag{21}\]
where \(n_{-}\) and \(n_{+}\) are the number densities of electrons and positrons, respectively, and \(r_{e}\) is the electron's classical radius. By taking the natural relation \(n_{-}=n_{+}\) into account, the balance between the production and the annihilation processes reads

\[R(\tau)\simeq\Lambda\simeq 2\pi cr_{e}^{2}\left(\int_{0}^{\tau}R(t)dt\right)^{2}, \tag{22}\]

which, after differentiating Eq. (22) with respect to \(\tau\) and neglecting the term \(e^{\Gamma\tau}\) compared to \(e^{2\Gamma\tau}\) on the left-hand side, straightforwardly leads to an estimate of the electron and positron densities \(n_{\pm}=\int_{0}^{\tau}R(t)dt\):
\[n_{\pm}\simeq\frac{\Gamma}{2\pi cr_{e}^{2}}\simeq 1.3\times 10^{11}cm^{-3}, \tag{23}\]
which for the pair creation rate gives the value
\[R\simeq\frac{\Gamma^{2}}{2\pi cr_{e}^{2}}\simeq 2.6\times 10^{8}cm^{-3}s^{-1}. \tag{24}\]
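For completeness, Eqs. (23) and (24) are reproduced by the following two-line computation; the growth rate \(\Gamma\simeq 2\times 10^{-3}\) s\({}^{-1}\) used here is an assumed value chosen so that the quoted densities are recovered.

```python
import numpy as np

c, r_e = 2.998e10, 2.818e-13  # CGS; r_e is the classical electron radius

Gamma = 2.0e-3                            # s^-1, assumed
n_pm = Gamma / (2 * np.pi * c * r_e**2)   # Eq. (23)
R = Gamma**2 / (2 * np.pi * c * r_e**2)   # Eq. (24)
print(f"n_pm ~ {n_pm:.1e} cm^-3,  R ~ {R:.1e} cm^-3 s^-1")
```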
For the inclined field lines applied to jet-like structures, in Fig. 3 we show the dimensionless time scale versus \(\theta\). The set of parameters is: \(M_{8}=1\), \(L_{42}=1\), \(r=R_{lc}/\sin\theta\), \(\gamma_{0}=1\), \(\gamma_{e,p}=2\). As is clear from the plot, the instability is still efficient inside the jet structures. However, the time-scale on which the balance takes place (see Eq. (22)) exceeds the kinematic time-scale several times over, indicating the irrelevance of the mentioned process inside the jet-like structures.
For the obtained value of the pair number density (see Eq. (23)), the annihilation time-scale becomes of the order of \(t_{ann}\simeq 1/(\sigma_{T}n_{\pm}c)\simeq 400\) sec. These pairs, initially mildly relativistic, will be characterised by the synchrotron cooling timescale \(t_{syn}\simeq\gamma mc^{2}/P_{s}\simeq 10^{7}\) sec, which exceeds \(t_{ann}\) by many orders of magnitude, indicating very low efficiency of the synchrotron process. On the other hand, the particles can centrifugally accelerate, which inevitably reduces the cooling timescale, and the synchrotron process might become important. Therefore, one should consider this scenario as well.
As it has been shown in [26], the acceleration time-scale of particles is given by
\[t_{acc}\simeq\frac{R_{lc}}{2c}\times\left(1-\frac{r_{0}^{2}}{R_{lc}^{2}} \right)^{1/2}, \tag{25}\]
where \(r_{0}\) indicates the initial coordinate of the particle and we have assumed that \(\gamma_{pairs}\simeq 1\). By taking into account \(\Delta r\simeq\gamma_{0}R_{lc}/(2\gamma_{e,p})\), one can straightforwardly show that \(t_{acc}\) is of the order of \(10^{4}\) sec. This means that the synchrotron mechanism will still be inefficient even after the pairs are accelerated. A similar conclusion applies to IC scattering, because this process is normally efficient for relativistic electrons, so the particles would first have to be centrifugally energized. But as we have already seen, the acceleration timescale is too large to make the mechanism efficient enough.
In Fig. 4 we show the pair creation/annihilation rate versus the AGN luminosity. The set of parameters is the same as in Fig. 2 except \(\gamma_{e,p}=10\) and the luminosity range. As is evident, the higher the luminosity, the higher the pair creation/annihilation rate, which is a natural behaviour.
In Fig. 5, similar behavior is shown, but versus the normalized black hole mass, \(M_{n}=M/M_{\odot}\). The set of parameters is the same as in Fig. 4, except \(L=10^{42}\) erg/s and the range of the black hole mass. The plot is a continuously decreasing function of the black hole mass, which is a natural consequence of Eq. (11): the higher the mass, the lower the angular velocity of rotation and, consequently, the weaker the centrifugal effects.
This study shows that centrifugally energized AGN magnetospheres should be characterized by the annihilation lines, \(2mc^{2}\simeq 1\) MeV, which might be interesting in the context of multi-wavelength observations of AGN [28, 29, 30]. Moreover, in AGN astronomy, it is well known that the X-ray and GeV-TeV gamma-ray skies have been explored in detail, while the study in the MeV range is not that rich [31]; therefore, the present study is significant. It is worth noting that, in general, this emission will be redshifted because it has an extragalactic origin, and consequently the observed energy, \(\epsilon\), will be reduced by the factor \(1+z\), where \(z\) is the redshift of the AGN
\[\epsilon_{obs}=\frac{1\;MeV}{1+z}. \tag{26}\]
This means that for small redshift AGNs, the annihilation line is of the order of 1 MeV, whereas for higher redshifts, the energy might be much lower. For example, according to the catalog [32], the highest observed redshift is \(\sim 6\), implying that the observed annihilation line will be of the order of 140 keV.
## 4 Conclusions
For a wide range of AGN luminosities and masses, we have studied the efficiency of centrifugally induced pair production.
In particular, by assuming that the AGN magnetosphere is composed of protons and electrons, which are centrifugally energized, we have found that the centrifugal force efficiently induces the exponentially amplifying electrostatic field, which in due course of time reaches the Schwinger threshold, leading to pair production.

Figure 4: Pair creation/annihilation rate is shown as a function of \(L\). The set of parameters is the same as in Fig. 1 except \(\gamma_{e,p}=10\) and the luminosity.
It has been shown that the process is balanced by the annihilation mechanism, leading to a saturated value of the production/annihilation rate, which has been explored versus the AGN luminosity and the central black hole mass.
We have found that for a wide range of parameters, the mechanism is very efficient, except for the field lines with small inclination angles with respect to the rotation axis, which excludes the AGN jets from the class of objects where the studied process is significant.
Figure 5: Pair creation/annihilation rate is shown as a function of \(M_{n}\). The set of parameters is the same as in Fig. 3 except \(L=10^{42}\) erg/s and the range of the black hole mass.
Acknowledgments
Z.O. would like to thank Dr. Fabrizio Tavecchio for interesting comments. The work was supported by the EU fellowships for Georgian researchers, 2023 (57655523). Z.O. would also like to thank the Torino Astrophysical Observatory and Universita degli Studi di Torino for hospitality while working on this project.
|
2309.06636 | Evolution of trust in structured populations | The trust game, derived from an economics experiment, has recently attracted
interest in the field of evolutionary dynamics. In a recent version of the
evolutionary trust game, players adopt one of three strategies: investor,
trustworthy trustee, or untrustworthy trustee. Trustworthy trustees enhance and
share the investment with the investor, whereas untrustworthy trustees retain
the full amount, betraying the investor. Following this setup, we investigate a
two-player trust game, which is analytically feasible under weak selection. We
explore the evolution of trust in structured populations, factoring in four
strategy updating rules: pairwise comparison (PC), birth-death (BD), imitation
(IM), and death-birth (DB). Comparing structured populations with well-mixed
populations, we arrive at two main conclusions. First, in the absence of
untrustworthy trustees, there is a saddle point between investors and
trustworthy trustees, with collaboration thriving best in well-mixed
populations. The collaboration diminishes sequentially from DB to IM to PC/BD
updating rules in structured populations. Second, an invasion of untrustworthy
trustees makes this saddle point unstable and leads to the extinction of
investors. The 3-strategy system stabilizes at an equilibrium line where the
trustworthy and untrustworthy trustees coexist. The stability span of
trustworthy trustees is maximally extended under the PC and BD updating rules
in structured populations, while it decreases in a sequence from IM to DB
updating rules, with the well-mixed population being the least favorable. This
research thus adds an analytical lens to the evolution of trust in structured
populations. | Chaoqian Wang | 2023-09-12T22:54:45Z | http://arxiv.org/abs/2309.06636v3 | # Evolution of trust in structured populations
###### Abstract
The trust game, derived from a notable economics experiment, has recently attracted interest in the field of evolutionary dynamics. In a prevalent version of the evolutionary trust game, players adopt one of three strategies: investor, trustworthy trustee, or untrustworthy trustee. Trustworthy trustees enhance and share the investment with the investor, whereas untrustworthy trustees retain the full amount, betraying the investor. Following this setup, we propose a two-player version of the trust game, which is analytically feasible. Based on weak selection and pair approximation, we explore the evolution of trust in structured populations, factoring in four strategy updating rules: pairwise comparison (PC), birth-death (BD), imitation (IM), and death-birth (DB). Comparing structured populations with well-mixed populations, we arrive at two main conclusions. First, in the absence of untrustworthy trustees, there is a saddle point between investors and trustworthy trustees, with collaboration thriving best in well-mixed populations. The collaboration diminishes sequentially from DB to IM to PC/BD updating rules in structured populations. Second, an invasion of untrustworthy trustees makes this saddle point unstable and leads to the extinction of investors. The 3-strategy system stabilizes at an equilibrium line where the trustworthy and untrustworthy trustees coexist. The stability span of trustworthy trustees is maximally extended under the PC and BD updating rules in structured populations, while it decreases in a sequence from IM to DB updating rules, with the well-mixed population being the least favorable. This research adds an analytical lens to understanding the evolution of trust in structured populations.
trust game; evolutionary game theory; replicator dynamics; pair approximation
## 1 Introduction
The "trust game" is a pivotal experiment in behavioral economics utilized to investigate the nuances of trust and cooperative behavior between individuals. Developed initially in the 1990s, it involves a dyadic setup with two anonymous players, delineated as the "trustor" and the "trustee" [1]. The trustor is endowed with a monetary sum and faces the decision to send a part or the entirety of this sum to the trustee. The transferred amount multiplies, augmenting the value before it reaches the trustee. Subsequently, the trustee decides the portion of the increased sum to retain and what fraction to return to the trustor. The theoretical optimal strategy from a pure economic standpoint is for the trustor not to transfer any amount, anticipating that the trustee aims to maximize their payoff and therefore will not return any amount. Conversely, the game's essence rests on the human tendencies to trust and reciprocate, often leading to the transfer of money demonstrating trust and subsequent reciprocity by returning a part of the money.
Traditionally, the trust game has been explored in various economic experimental setups, examining strategies with incomplete information [2] and over repeated interactions [3]. The richness of observed behaviors has been further refined in research where individuals embrace both roles of trustor and trustee [4]. Meta-data analysis has offered a broadened perspective on strategy formulation [5]. The landscape of trust games extends beyond economic rationales, touching upon nuances influenced by broader human factors such as gender and culture [6], alongside the subjective territories of
beauty and expectations [7]. Moreover, biological investigations have ventured into unraveling genetic influences on trust dynamics [8], albeit these represent but a fraction of the multidimensional tapestry woven in trust game investigations. The reader can refer to a recent review [9] that provides a more complete picture of the development of the traditional trust games over the last two decades.
Evolutionary dynamics offers a fresh angle on traditional game theory, illustrating how players, while seeking higher individual payoffs, can still choose to cooperate spontaneously, favoring the collective's interest over their own highest payoff [10; 11]. This cooperative behavior is notably amplified in structured populations that closely mirror real-world scenarios where individuals interact mainly with their neighbors rather than the entire population, fostering "spatial reciprocity" through localized strategy interaction and reproduction [12; 13]. Evolutionary graph theory delves into this phenomenon, studying the dynamics of evolution in such structured populations [14; 15; 16]. A commonly used theory is the pair approximation approach, which introduces the marginal effect of games based on the voter model [17]. Originating from general biology and physics fields [18; 19; 20; 21; 22], early pair approximation methodologies eventually found application in delineating evolutionary games on regular graphs, unearthing the pivotal '\(b/c>k\)' rule [23]. Soon after, the methodology was applied for multi-strategy, two-player games, spawning the concept of "replicator dynamics on graphs" [24], which supplements the understanding of replicator dynamics in well-mixed populations [25]. Recent advancements in the pair approximation approach have facilitated studies on game transitions [26] and asymmetric social interactions in evolutionary dynamics [27]. A parallel prominent branch in evolutionary graph theory is rooted in the identity-by-descent theory, capable of depicting evolutionary dynamics across various network structures [28; 29]. This framework has spurred significant findings recently, including [30; 31; 32; 33; 34; 35; 36]. For a comprehensive understanding of the progress in evolutionary dynamics over the past two decades, the reader can refer to a recent review [37].
The seminal work applying evolutionary dynamics to trust games began with the original \(N\)-person trust game in a well-mixed and infinite population [38] (and its lesser-known version in a finite population [39]). In the \(N\)-person trust game, players are allowed to employ three strategies: investor (trustor), trustworthy trustee, and untrustworthy trustee. To simplify the model, the option of not investing is unavailable to investors. The payoffs are calculated based on the traditional trust game and are then analyzed through the lens of evolutionary dynamics. In recent years, the underlying model of the \(N\)-person trust game has inspired a wide range of studies, including consideration of network structures [40], punishment & reward mechanisms [41; 42; 43], reputations [44; 45; 46], diverse investment patterns [47], conditional investment in repeated interactions [48], comparison with logit dynamics [49], and the effects of different updating rules [50]. Some studies have ventured into alternative strategy settings, moving away from the 3-strategy approach. Examples include a simulation study on a square lattice [51], research on fixed provider and consumer roles [52], and further theoretical studies [53; 54].
There have been studies in the literature on the evolutionary dynamics of trust games, but they have yet to apply the theoretical approach of evolutionary graph theory. Specifically, previous studies have either been based on analytical studies of well-mixed populations [38; 42] or Monte Carlo simulation studies in structured populations [40; 44; 45; 46]. Analytical studies on structured populations are still lacking. Ohtsuki and Nowak [24] provide the general replicator equations for multi-strategy two-player games on regular graphs, where each player has the same number of neighbors. This enables the theoretical analysis of trust games in a structured population. Given that most current studies on evolutionary trust games are \(N\)-player games, which are not analytically feasible at the moment, we first need to recast the underlying model in the form of a two-player game. In this work, we follow the 3-strategy setup of the seminal work on evolutionary trust games [38], but propose a corresponding two-player version. On this basis, we focus on theoretical solutions under different strategy updating rules in structured populations and
how they affect the evolution of trust in the structured population. We start by introducing a corresponding two-player trust game model in the next section.
## 2 Model
### Trust game
The two-player trust game that we propose employs a 3-strategy system. A player can employ one of the following strategies:
1. Investor (\(I\)), also known as a trustor, who invests \(t_{V}\) to the trustee co-player and expects a return from the trustee, which is conditional upon the trustee being trustworthy. Here, \(t_{V}>0\) is the trusted value, an input parameter. If the co-player also adopts the investor role, neither invests, resulting in no action.
2. Trustworthy trustee (\(T\)), who multiplies the investment from the investor by \(R_{T}\) (\(R_{T}>1\)). The trustee receives \(R_{T}t_{V}\). The investor also receives \(R_{T}t_{V}\). If the co-player is also a trustee, no transaction occurs.
3. Untrustworthy trustee (\(U\)), who multiplies the investment from the investor by \(R_{U}\) (\(R_{U}>R_{T}\)). The trustee receives \(R_{U}t_{V}\). The investor receives nothing. If the co-player is also a trustee (either \(T\) or \(U\)), no transaction takes place.
Based on the described strategies, we construct the following payoff matrix:
\[\left[a_{ij}\right]=\begin{pmatrix}0&-t_{V}+R_{T}t_{V}&-t_{V}\\ R_{T}t_{V}&0&0\\ R_{U}t_{V}&0&0\end{pmatrix}. \tag{1}\]
An \(I\)-player gains \(-t_{V}+R_{T}t_{V}\) when interacting with a \(T\)-player and incurs loss of \(-t_{V}\) when facing a \(U\)-player. On the other hand, a \(T\)-player earns \(R_{T}t_{V}\) in encounters with an \(I\)-player, while a \(U\)-player secures \(R_{U}t_{V}\) against an \(I\)-player.
The parameters \(R_{T}\) and \(R_{U}\) dictate specific relations. We have mentioned \(R_{T}>1\), which guarantees that an investment to a trustworthy trustee always brings a positive return (\(-t_{V}+R_{T}t_{V}>0\)). Also, \(R_{U}>R_{T}\) indicates that being untrustworthy can yield greater benefits compared to being trustworthy (\(R_{U}t_{V}>R_{T}t_{V}\)). Moreover, we must require \(R_{U}<2R_{T}\), which means being trustworthy is always a prosocial behavior that confers greater collective benefits (\(-t_{V}+R_{T}t_{V}+R_{T}t_{V}>-t_{V}+R_{U}t_{V}\)). To sum up, the relation \(1<R_{T}<R_{U}<2R_{T}\) is established.
We also notice that \(t_{V}\) can be extracted from the payoff matrix \(\left[a_{ij}\right]\). In other words, \(t_{V}\) serves a role analogous to that of the selection strength (introduced in Section 2.2). The effect of \(t_{V}\) is only visible when selection strength is non-marginal, which does not apply to this study. Therefore, we can simply set \(t_{V}=1\), eliminating the need for further exploration.
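For concreteness, the payoff matrix of Eq. (1) and the parameter constraints can be encoded as follows (a sketch, not code from the study); the numerical values match those used in Figure 1.

```python
import numpy as np

def payoff_matrix(R_T, R_U, t_V=1.0):
    """Eq. (1) with rows/columns ordered as (I, T, U)."""
    assert 1.0 < R_T < R_U < 2.0 * R_T, "the model requires 1 < R_T < R_U < 2 R_T"
    return t_V * np.array([[0.0, R_T - 1.0, -1.0],
                           [R_T, 0.0, 0.0],
                           [R_U, 0.0, 0.0]])

print(payoff_matrix(R_T=1.8, R_U=2.0))
```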
### Evolutionary dynamics
During each elementary step, a random focal player is selected to update its strategy through the interactions with its \(k\) neighbors. If the population is well-mixed, these neighbors are randomly chosen from the population and vary over time. If the population is structured, the neighbors remain constant. All players have the same number of neighbors, denoted as \(k\).
The randomly selected focal player earns a payoff from playing \(k\) trust games in Section 2.1 against \(k\) neighbors. The neighbors similarly determine their own payoffs through interactions with their respective neighbors. The focal player adopts the strategy (\(I\), \(T\), or \(U\)) of a neighbor or retains its own strategy, based on which strategy yields the higher payoff. The more successful strategy is more likely to be adopted, a process further detailed under various updating rules in Section 4. We assume a weak selection strength, indicating that the differences in payoff have only a marginal influence on the evolutionary dynamics.
The weak selection framework allows for the analysis of dynamics in structured populations. In light of this, we analyze the evolutionary dynamics discussed earlier using replicator dynamics in both well-mixed [25] and structured populations [24]. Although the focus of this work is on structured populations, we begin our analysis with well-mixed populations to establish a basis for comparison.
## 3 The well-mixed population
In an infinite well-mixed population, the frequency of \(i\)-players is denoted by \(x_{i}\), where \(\sum_{i}x_{i}=1\), \(i=I,T,U\). The system state can be described by \(\mathbf{x}=(x_{I},x_{T},x_{U})\). The replicator equations are \(\dot{x}_{i}=x_{i}(f_{i}-\phi)\), where \(f_{i}=\sum_{j}x_{j}a_{ij}\) is the mean payoff of \(i\)-players, \(\phi=\sum_{i}x_{i}f_{i}\) is the mean payoff of the population [25]. From Eq. (1), we obtain
\[f_{I} =[x_{T}(R_{T}-1)-x_{U}]t_{V}, \tag{2a}\] \[f_{T} =x_{I}R_{T}t_{V},\] (2b) \[f_{U} =x_{I}R_{U}t_{V}, \tag{2c}\]
and
\[\phi=x_{I}[x_{T}(R_{T}-1)-x_{U}]t_{V}+x_{I}t_{V}(x_{T}R_{T}+x_{U}R_{U}). \tag{3}\]
Therefore, the replicator dynamics for the well-mixed population is
\[\left\{\begin{aligned} \dot{x}_{I}&=x_{I}t_{V}[x_{T}(R_{T} -1)-x_{U}-x_{I}x_{T}(2R_{T}-1)-x_{I}x_{U}(R_{U}-1)],\\ \dot{x}_{T}&=x_{I}x_{T}t_{V}[R_{T}-x_{T}(2R_{T}-1)-x _{U}(R_{U}-1)],\\ \dot{x}_{U}&=x_{I}x_{U}t_{V}[R_{U}-x_{T}(2R_{T}-1)-x _{U}(R_{U}-1)].\end{aligned}\right. \tag{4}\]
We can see that the replicator equations in well-mixed populations are independent of \(k\).
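The system of Eqs. (4) is easy to integrate numerically; the following sketch (an illustration rather than the procedure behind the figures) uses `scipy.integrate.solve_ivp` with the parameter values of Figure 1 and an arbitrary interior initial condition.

```python
import numpy as np
from scipy.integrate import solve_ivp

def replicator_wm(t, x, R_T, R_U, t_V=1.0):
    """Right-hand side of Eqs. (4), with x = (x_I, x_T, x_U)."""
    xI, xT, xU = x
    common = xT * (2 * R_T - 1) + xU * (R_U - 1)
    return [xI * t_V * (xT * (R_T - 1) - xU - xI * common),
            xI * xT * t_V * (R_T - common),
            xI * xU * t_V * (R_U - common)]

sol = solve_ivp(replicator_wm, (0, 500), [0.4, 0.4, 0.2], args=(1.8, 2.0))
print(sol.y[:, -1])  # trajectories approach the stable part of the TU-edge
```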
Solving for \(\dot{\mathbf{x}}=0\) yields two distinct equilibrium points and one equilibrium line in the system described by Eqs. (4). The first equilibrium point is the \(I\)-vertex, \(\mathbf{x}^{(I)}=(1,0,0)\). The second equilibrium point is located on the \(IT\)-edge,
\[\mathbf{x}^{(IT)}=\bigg{(}\frac{R_{T}-1}{2R_{T}-1},\frac{R_{T}}{2R_{T}-1},0 \bigg{)}. \tag{5}\]
The equilibrium line encompasses the entire \(TU\)-edge,
\[\mathbf{x}^{(TU)}=\Big{(}0,x_{T}^{(TU)},x_{U}^{(TU)}\Big{)}, \tag{6}\]
where \(0\leq x_{T}^{(TU)},x_{U}^{(TU)}\leq 1\), \(x_{T}^{(TU)}+x_{U}^{(TU)}=1\). On the line represented by \(\mathbf{x}^{(TU)}\), there are infinite equilibrium points, including the \(T\)- and \(U\)-vertices, \((0,1,0)\) and \((0,0,1)\).
According to the stability analysis (see Appendix A), the equilibrium point \(\mathbf{x}^{(I)}\) is unstable. The equilibrium point \(\mathbf{x}^{(IT)}\) is a saddle point, being only stable along the \(IT\)-edge and turning unstable when any \(U\)-player is introduced into the system. The equilibrium line \(\mathbf{x}^{(TU)}\) remains stable only within a certain interval. More precisely, the equilibrium line \(\mathbf{x}^{(TU)}\) remains stable when \(x_{T}^{(TU)}<x_{T,\star}^{(TU)}\), where
\[x_{T,\star}^{(TU)}=\frac{1}{R_{T}}. \tag{7}\]
Refer to Figure 1(a) for a numerical demonstration of these equilibrium points and their respective stability.
## 4 The structured population
According to [24], the essence of evolutionary dynamics in structured populations entails a transformation of the payoff matrix \(\left[a_{ij}\right]\leftarrow\left[a_{ij}+b_{ij}\right]\) in comparison to well-mixed populations. More precisely, the replicator dynamics becomes \(\dot{x}_{i}=x_{i}(f_{i}+g_{i}-\phi)\), where \(g_{i}=\sum_{j}x_{j}b_{ij}\) is the additional advantage for \(i\)-players resulting from the network structure. Here, \(b_{ij}\) depends on specific strategy updating rules. The rules under consideration include pairwise comparison (PC), birth-death (BD), imitation (IM), and death-birth (DB). Below, we discuss them separately.
### Pairwise comparison (PC) and birth-death (BD)
The replicator dynamics of the PC and BD updating rules in a structured population are equivalent to each other under weak selection [24].
In the PC updating rule, a focal player and one of its neighbors are randomly selected. With a probability marginally proportional to the payoff in the pair, the focal player adopts the strategy of the selected neighbor or keeps its own strategy [13]. In the BD updating rule, a focal player is selected with a probability marginally proportional to the payoff among the population, then a random neighbor adopts the focal player's strategy [14].
According to [24], both rules adhere to the following formula for calculating \(b_{ij}\):
\[b_{ij}=\frac{a_{ii}+a_{ij}-a_{ji}-a_{jj}}{k-2}. \tag{8}\]
Figure 1: The evolution of the system depicted by ternary diagrams, including (a) well-mixed (WM) and structured populations under the (b) PC/BD, (c) IM, and (d) DB updating rules. The black points are stable, the white points are unstable, and the grey point indicates a saddle point. On the \(IT\)-edge, the grey point marks \(\mathbf{x}^{(IT)}\), and the annotated formula is the analytical expression of \(x_{T}^{(IT)}\). On the \(TU\)-edge, a particular critical white point marks the boundary separating stable and unstable regions along the equilibrium line, and the annotated formula is the analytical expression of \(x_{T,\star}^{(TU)}\). Input parameters: \(R_{T}=1.8\), \(R_{U}=2\), \(t_{V}=1\), \(k=4\).
Using Eq. (1), we express each element in the matrix \(\left[b_{ij}\right]\) as
\[\left[b_{ij}\right]=\frac{1}{k-2}\begin{pmatrix}0&-t_{V}&-t_{V}-R_{U}t_{V}\\ t_{V}&0&0\\ R_{U}t_{V}+t_{V}&0&0\end{pmatrix}. \tag{9}\]
Therefore, \(g_{i}=\sum_{j}x_{j}b_{ij}\) is computed as
\[g_{I} =-\frac{x_{T}+x_{U}(R_{U}+1)}{k-2}t_{V}, \tag{10a}\] \[g_{T} =\frac{x_{I}}{k-2}t_{V},\] (10b) \[g_{U} =\frac{x_{I}(R_{U}+1)}{k-2}t_{V}. \tag{10c}\]
The resulting replicator dynamics in structured populations under the pairwise comparison or birth-death updating rule is as follows:
\[\left\{\begin{aligned} \dot{x}_{I}&=x_{I}t_{V}\bigg{[}x_{T}(R_{T}-1)-x_{U}- \frac{x_{T}+x_{U}(R_{U}+1)}{k-2}-x_{I}x_{T}(2R_{T}-1)-x_{I}x_{U}(R_{U}-1)\bigg{]},\\ \dot{x}_{T}&=x_{I}x_{T}t_{V}\bigg{[}R_{T}+\frac{1}{k-2}-x_{ T}(2R_{T}-1)-x_{U}(R_{U}-1)\bigg{]},\\ \dot{x}_{U}&=x_{I}x_{U}t_{V}\bigg{[}R_{U}+\frac{R_{U}+1}{k- 2}-x_{T}(2R_{T}-1)-x_{U}(R_{U}-1)\bigg{]}.\end{aligned}\right. \tag{11}\]
### Imitation (IM)
In the IM updating rule, a focal player is randomly selected. With a probability marginally proportional to the payoff among all neighbors and itself, the focal player adopts the strategy of a neighbor or keeps its own strategy [12].
According to [24], the IM rule adheres to the following formula for calculating \(b_{ij}\):
\[b_{ij}=\frac{(k+3)a_{ii}+3a_{ij}-3a_{ji}-(k+3)a_{jj}}{(k+3)(k-2)}. \tag{12}\]
From this formula, we can calculate
\[\left[b_{ij}\right]=\frac{3}{(k+3)(k-2)}\begin{pmatrix}0&-t_{V}&-t_{V}-R_{U} t_{V}\\ t_{V}&0&0\\ R_{U}t_{V}+t_{V}&0&0\end{pmatrix}, \tag{13}\]
and
\[g_{I} =-\frac{3x_{T}+3x_{U}(R_{U}+1)}{(k+3)(k-2)}t_{V}, \tag{14a}\] \[g_{T} =\frac{3x_{I}}{(k+3)(k-2)}t_{V},\] (14b) \[g_{U} =\frac{3x_{I}(R_{U}+1)}{(k+3)(k-2)}t_{V}. \tag{14c}\]
Therefore, the replicator dynamics in structured populations under the imitation updating rule is
\[\left\{\begin{aligned} \dot{x}_{I}&=x_{I}t_{V}\bigg{[}x_{T}(R_{ T}-1)-x_{U}-\frac{3x_{T}+3x_{U}(R_{U}+1)}{(k+3)(k-2)}-x_{I}x_{T}(2R_{T}-1)-x_{I}x_{U}(R_{U}-1) \bigg{]},\\ \dot{x}_{T}&=x_{I}x_{T}t_{V}\bigg{[}R_{T}+\frac{3}{( k+3)(k-2)}-x_{T}(2R_{T}-1)-x_{U}(R_{U}-1)\bigg{]},\\ \dot{x}_{U}&=x_{I}x_{U}t_{V}\bigg{[}R_{U}+\frac{3(R_{U }+1)}{(k+3)(k-2)}-x_{T}(2R_{T}-1)-x_{U}(R_{U}-1)\bigg{]}.\end{aligned}\right. \tag{15}\]
### Death-birth (DB)
In the DB updating rule, a focal player is randomly selected. With a probability marginally proportional to the payoff among all neighbors, the focal player adopts the strategy of a neighbor [23]. Compared to the IM rule, the DB rule completely ignores the payoff of the focal player, making it unable to retain its own strategy [33; 34].
According to [24], the DB rule adheres to the following formula for calculating \(b_{ij}\):
\[b_{ij}=\frac{(k+1)a_{ii}+a_{ij}-a_{ji}-(k+1)a_{jj}}{(k+1)(k-2)}, \tag{16}\]
by which we compute
\[\left[b_{ij}\right]=\frac{1}{(k+1)(k-2)}\begin{pmatrix}0&-t_{V}&-t_{V}-R_{U}t _{V}\\ t_{V}&0&0\\ R_{U}t_{V}+t_{V}&0&0\end{pmatrix}, \tag{17}\]
and
\[g_{I} =-\frac{x_{T}+x_{U}(R_{U}+1)}{(k+1)(k-2)}t_{V}, \tag{18a}\] \[g_{T} =\frac{x_{I}}{(k+1)(k-2)}t_{V},\] (18b) \[g_{U} =\frac{x_{I}(R_{U}+1)}{(k+1)(k-2)}t_{V}. \tag{18c}\]
In this way, the replicator dynamics in structured populations under the death-birth updating rule is
\[\left\{\begin{aligned} \dot{x}_{I}&=x_{I}t_{V}\bigg{[}x_{T}(R_{ T}-1)-x_{U}-\frac{x_{T}+x_{U}(R_{U}+1)}{(k+1)(k-2)}-x_{I}x_{T}(2R_{T}-1)-x_{I}x_{U} (R_{U}-1)\bigg{]},\\ \dot{x}_{T}&=x_{I}x_{T}t_{V}\bigg{[}R_{T}+\frac{1}{(k+1)( k-2)}-x_{T}(2R_{T}-1)-x_{U}(R_{U}-1)\bigg{]},\\ \dot{x}_{U}&=x_{I}x_{U}t_{V}\bigg{[}R_{U}+\frac{R_{U }+1}{(k+1)(k-2)}-x_{T}(2R_{T}-1)-x_{U}(R_{U}-1)\bigg{]}.\end{aligned}\right. \tag{19}\]
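Since Eqs. (8), (12) and (16) all take the form \(b_{ij}=[\alpha(a_{ii}-a_{jj})+\beta(a_{ij}-a_{ji})]/D\), the structured-population correction for any of the rules can be computed by a single routine; a sketch follows, checked here against Eq. (17) for DB updating.

```python
import numpy as np

def b_matrix(a, k, rule):
    """Structured-population correction [b_ij] of Eqs. (8), (12), (16)."""
    alpha, beta, D = {"PC": (1, 1, k - 2),
                      "BD": (1, 1, k - 2),
                      "IM": (k + 3, 3, (k + 3) * (k - 2)),
                      "DB": (k + 1, 1, (k + 1) * (k - 2))}[rule]
    d = np.diag(a)
    return (alpha * (d[:, None] - d[None, :]) + beta * (a - a.T)) / D

a = np.array([[0.0, 0.8, -1.0], [1.8, 0.0, 0.0], [2.0, 0.0, 0.0]])  # Eq. (1), R_T=1.8, R_U=2, t_V=1
print(b_matrix(a, k=4, rule="DB"))  # reproduces Eq. (17) for these parameters
```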
### Results
By solving \(\dot{\mathbf{x}}=\mathbf{0}\) in Eq. (11), (15), and (19), we obtain the equilibrium points in the structured population under the PC/BD, IM, and DB rules. They share similar stability properties, with minor differences in the analytic forms. We identify two distinct
equilibrium points and one equilibrium line. The first equilibrium point is the \(I\)-vertex, \(\mathbf{x}^{(I)}=(1,0,0)\). The second equilibrium point is on the \(IT\)-edge,
\[\mathbf{x}^{(IT)}=\frac{1}{2R_{T}-1}\begin{cases}\left(R_{T}-1-\frac{1}{k-2},R_{ T}+\frac{1}{k-2},0\right),&\text{PC/BD},\\ \left(R_{T}-1-\frac{3}{(k+3)(k-2)},R_{T}+\frac{3}{(k+3)(k-2)},0\right),&\text{ IM},\\ \left(R_{T}-1-\frac{1}{(k+1)(k-2)},R_{T}+\frac{1}{(k+1)(k-2)},0\right),&\text{ DB}.\end{cases} \tag{20}\]
The equilibrium line represents the whole \(TU\)-edge,
\[\mathbf{x}^{(TU)}=\left(0,x_{T}^{(TU)},x_{U}^{(TU)}\right), \tag{21}\]
where \(0\leq x_{T}^{(TU)},x_{U}^{(TU)}\leq 1\), \(x_{T}^{(TU)}+x_{U}^{(TU)}=1\). There exist infinite equilibrium points on the line depicted by \(\mathbf{x}^{(TU)}\), including the \(T\)-vertex at \((0,1,0)\) and the \(U\)-vertex at \((0,0,1)\).
As the stability analysis indicates (see Appendix A), the equilibrium point \(\mathbf{x}^{(I)}\) is unstable. The equilibrium point \(\mathbf{x}^{(IT)}\) is a saddle point, which is stable only along the \(IT\)-edge and becomes unstable when any U-player is introduced into the system. The equilibrium line \(\mathbf{x}^{(TU)}\) is stable only within the interval of \(x_{T}^{(TU)}<x_{T,\star}^{(TU)}\), where
\[x_{T,\star}^{(TU)}=\begin{cases}\frac{k-1+R_{U}}{(k-2)R_{T}+R_{U}},&\text{PC/ BD},\\ \frac{(k+3)(k-2)+3R_{U}+3}{(k+3)(k-2)R_{T}+3R_{U}},&\text{IM},\\ \frac{(k+1)(k-2)+R_{U}+1}{(k+1)(k-2)R_{T}+R_{U}},&\text{DB}.\end{cases} \tag{22}\]
See Figure 1(b), (c), and (d) for a numerical demonstration of these equilibrium points and their stability under the PC/BD, IM, and DB rules, respectively.
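A quick numerical comparison of Eqs. (7) and (22), written compactly through the unified \(\Delta\) of Eq. (A2) in the appendix, confirms the ordering PC/BD > IM > DB > WM that is discussed in the next section.

```python
def x_T_star(R_T, R_U, k, rule):
    """Critical point on the TU-edge, Eqs. (7) and (22), via Delta of Eq. (A2)."""
    Delta = {"WM": 0.0, "PC": 1 / (k - 2), "BD": 1 / (k - 2),
             "IM": 3 / ((k + 3) * (k - 2)), "DB": 1 / ((k + 1) * (k - 2))}[rule]
    return (1 + (R_U + 1) * Delta) / (R_T + R_U * Delta)

for rule in ("PC", "IM", "DB", "WM"):
    print(rule, round(x_T_star(R_T=1.8, R_U=2.0, k=4, rule=rule), 4))
```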
## 5 Discussion
In Section 3 and 4, we delineated the dynamics in both the well-mixed and structured populations. The comparison between these scenarios can be discussed from two perspectives: the characteristics of the saddle point \(\mathbf{x}_{T}^{(IT)}\) on the \(IT\)-edge, and the stability interval of the equilibrium line \(\mathbf{x}_{T}^{(TU)}\) on the \(TU\)-edge. In the subsequent sections, we will discuss these aspects separately.
### The \(1t\)-edge
In the absence of \(U\)-players, the system behavior on the \(IT\)-edge describes the competition and collaboration between the investors (\(I\)) and the trustworthy trustees (\(T\)). From the payoff matrix, these two strategies must form an interdependent relationship to generate a payoff; neither the investor nor the trustee can do so on their own, necessitating cooperation to realize positive outcomes. We can deduce the optimal system state where the payoff of the population is maximized. Let us set \(x_{U}=0\) and \(x_{I}=1-x_{T}\) in Eq. (3), which becomes a function of \(x_{T}\),
\[\phi=(1-x_{T})f_{I}+x_{T}f_{T}=x_{T}(1-x_{T})(2R_{T}-1). \tag{23}\]
According to Eq. (23), the mean payoff of the population \(\phi\) is maximized at \(x_{T}=1/2\). That is, when investors and trustworthy trustees are each half of the population, the system state best serves the collective interest. Thus, we can define the equilibrium point close to \(x_{T}^{(IT)}=1/2\) as the point of prosocial behavior, characterized by collaboration between investors and trustworthy trustees.
Based on this criterion, we compare the \(x_{T}^{(IT)}\) values given by Eq. (5) and (20) in the four different systems, and find that
\[\frac{1}{2R_{T}-1}\bigg{(}R_{T}+\frac{1}{k-2}\bigg{)}>\frac{1}{2R_{ T}-1}\bigg{(}R_{T}+\frac{3}{(k+3)(k-2)}\bigg{)}\] \[>\frac{1}{2R_{T}-1}\bigg{(}R_{T}+\frac{1}{(k+1)(k-2)}\bigg{)}> \frac{R_{T}}{2R_{T}-1}>\frac{1}{2}. \tag{24}\]
That is, PC/BD>IM>DB>WM, and WM>1/2. The pairwise comparison and birth-death updating rules lead to the largest \(x_{T}^{(IT)}\) value, and the imitation, death-birth, and the well-mixed population lead to smaller \(x_{T}^{(IT)}\) values sequentially. In the smallest case, the well-mixed population, we still have \(x_{T}^{(IT)}>1/2\), and \(x_{T}^{(IT)}\to 1/2\) only as \(R_{T}\rightarrow\infty\). This naturally follows from the fact that a trustee always gains a higher payoff than an investor, \(R_{T}t_{V}>-t_{V}+R_{T}t_{V}\). Therefore, even if the simultaneous presence of both is necessary to generate payoff for both strategies, being a trustee is a more attractive option.
From this perspective, it is evident that the well-mixed population is the most favorable for maintaining the trustee ratio near 1/2, implying that it is most favorable for investor-trustee collaboration to thrive. On the contrary, the structured population is not conducive to maintaining the trustee ratio near 1/2, with the PC and BD updating rules being the most harmful to collaboration.
The essence of spatial reciprocity brought by network structures is that players with the same strategy have a higher chance of meeting with each other. In the calculation of the additional advantage \(b_{ij}\) brought by the network structure in Eq. (8), (12), and (16), we notice that \(a_{ii}-a_{jj}=0\) always holds for the different updating rules. There is no advantage when an investor or a trustee meet with the same strategy type. Instead, as we have discussed, they must collaborate with the opposing strategy type to generate a payoff. This is the reason why structured populations disfavor collaboration between the investor and trustworthy trustees compared to a well-mixed population.
Figure 2(a) numerically compares the equilibrium points of the well-mixed population and the structured population with different updating rules on the \(IT\)-edge, which visually demonstrates the observations discussed above.
Figure 2: Comparison between the well-mixed (WM) and structured populations under the PC/BD, IM, and DB updating rules. (**a**) Comparison of the saddle point along the \(IT\)-edge. Since \(x_{T}^{(IT)}=1/2\) is most beneficial to the population, it can be concluded that the well-mixed population can favor the prosocial behavior of collaboration more than structured populations. (**b**) Comparison of the critical point along the \(TU\)-edge. Since trustworthy trustees are prosocial and the interval where \(x_{T}^{(TU)}<x_{T,\star}^{(TU)}\) is stable, the PC/BD updating in structured populations can enlarge the attraction interval and favor the prosocial behavior of trust. Input parameters: \(R_{T}=1.8\), \(R_{U}=2\), \(t_{V}=1\), \(k=4\).
### The TU-edge
While the investors and trustworthy trustees form prosocial collaboration and reciprocity, an invasion of untrustworthy trustees can break this state and lead to the extinction of investors; that is, the saddle point \(\mathbf{x}_{T}^{(IT)}\) on the \(IT\)-edge becomes unstable in the direction of \(U\)-vertex. As a result, the evolution ends with the coexistence of trustworthy and untrustworthy trustees, with no investors, stabilizing at the equilibrium line on the \(TU\)-edge. In this case, we define a state with more trustworthy trustees as a better ending. Since the equilibrium line is stable at the interval \(x_{T}^{(TU)}<x_{T,\star}^{(TU)}\), a larger critical value \(x_{T,\star}^{(TU)}\) indicates an extended stable interval with more trustworthy trustees.
Based on this criterion, we compare the \(x_{T,\star}^{(TU)}\) values given by Eq. (7) and (22) in the four different systems, and find that
\[\frac{k-1+R_{U}}{(k-2)R_{T}+R_{U}}>\frac{(k+3)(k-2)+3R_{U}+3}{(k+3)(k-2)R_{T}+3 R_{U}}>\frac{(k+1)(k-2)+R_{U}+1}{(k+1)(k-2)R_{T}+R_{U}}>\frac{1}{R_{T}}. \tag{25}\]
That is, PC/BD>IM>DB>WM. The pairwise comparison and birth-death updating rules lead to the largest \(x_{T,\star}^{(TU)}\) value and favor trustworthy trustees the most. The imitation, death-birth, and the well-mixed population lead to smaller \(x_{T,\star}^{(TU)}\) values sequentially. In this sense, the PC and BD updating rules in structured populations are most favorable to support trustworthy trustees against untrustworthy trustees.
We also observe that the critical value \(x_{T,\star}^{(TU)}\) in the well-mixed population is determined only by \(R_{T}\), but in the structured populations it is also related to \(R_{U}\). According to the expressions representing different scenarios outlined in Eq. (25), the advantage of trustworthiness instead undermines maintaining a high proportion of trustworthy trustees (i.e., an increase in \(R_{T}\) results in a decrease in \(x_{T,\star}^{(TU)}\)), while the advantage of untrustworthy trustees facilitates maintaining a high proportion of trustworthy trustees (i.e., an increase in \(R_{U}\) results in an increase in \(x_{T,\star}^{(TU)}\)). This is because, during the evolution of the three strategies, an increase in \(R_{T}\) favors the reproduction of investors, which in turn contributes to the exploitation by untrustworthy trustees, and thus a larger proportion of untrustworthy trustees relative to trustworthy trustees at the extinction of investors. The effect of \(R_{U}\) works vice versa.
Figure 2(b) numerically compares the critical points of the well-mixed population and the structured population with different updating rules on the equilibrium \(TU\)-edge, which demonstrates our conclusions intuitively.
## 6 Conclusion
Trust evolves between investors, trustworthy and untrustworthy trustees. In the two-player game framework, this work represented the trust game as a \(3\times 3\) payoff matrix. Under weak selection and pair approximation, we investigated the evolutionary dynamics in a structured population, where each player has the same number of neighbors. Analytical solutions under four strategy updating rules, including PC, BD, IM, and DB, were obtained, and we find that structured populations do not always favor the evolution of trust.
On the one hand, investors and trustworthy trustees are interdependent to generate a payoff. In this sense, the well-mixed population is most conducive to the coexistence of investors and trustworthy trustees and can maximize the mean payoff of the population. In contrast, the DB, IM, and PC/BD updating rules in structured populations sequentially reduce investors and shift the equilibrium point away from the optimal ratio in coexistence. The principle underlying spatial reciprocity is to increase exposure to the same strategy at the cost of decreasing the chance of meeting with a different strategy, which is unfortunately counterproductive in the context of investor-trustee interplay that requires encountering the opposing strategy to generate a payoff.
On the other hand, the invasion of untrustworthy trustees destroys the collaborative relationship between investors and trustworthy trustees, leading to the extinction of in
vectors. In both well-mixed and structured populations, the system eventually stabilizes in an equilibrium line, where trustworthy and untrustworthy trustees coexist. We find that the PC and BD updating rules in the structured population are most conducive to increasing the stability interval for retaining more trustworthy trustees. Furthermore, the IM and DB updating rules are unfavorable for retaining trustworthiness, respectively, while the well-mixed population is the least favorable.
To sum up, this study provides an analytical perspective on the evolution of trust in structured populations. In the future, extensive factors such as punishment, reward, and reputation can be incorporated to modify the payoff matrix and the role of new factors in structured populations can be investigated. Moreover, it remains an open question how investors and trustees can transform into each other. There have been many different explorations in this area [43; 51; 52], and their versions in structured populations can also be further investigated.
**Funding:** Publication of this article was funded in part by the George Mason University Libraries Open Access Publishing Fund.
**Institutional Review Board Statement:** Not applicable.
**Informed Consent Statement:** Not applicable.
**Data Availability Statement:** The theoretical data used to support the findings of this study are already included in the article.
**Conflicts of Interest:** The author declares no conflict of interest.
## Appendix A Stability analysis
Under the constraint \(\sum_{i}x_{i}=1\), we can substitute for \(x_{I}=1-x_{U}-x_{T}\). The system (4), (11), (15), and (19) can be expressed as
\[\left\{\begin{aligned} \dot{x}_{T}&=(1-x_{T}-x_{U})x_{T}t_{V}[R_{T}+\Delta-x_{T}(2R_{T}-1)-x_{U}(R_{U}-1)],\\ \dot{x}_{U}&=(1-x_{T}-x_{U})x_{U}t_{V}[R_{U}+(R_{U}+1)\Delta-x_{T}(2R_{T}-1)-x_{U}(R_{U}-1)],\end{aligned}\right. \tag{A1}\]
where
\[\Delta=\begin{cases}0,&\text{WM},\\ \frac{1}{k-2},&\text{PC}/\text{BD},\\ \frac{3}{(k+3)(k-2)},&\text{IM},\\ \frac{1}{(k+1)(k-2)},&\text{DB}.\end{cases} \tag{A2}\]
Through this approach, we study the properties of the four systems together, where \(\Delta\geq 0\).
To analyze the stability of \(\mathbf{x}^{(I)}\) and \(\mathbf{x}^{(IT)}\), we compute the Jacobian matrix of the system (10),
\[J=\begin{pmatrix}\frac{\partial\dot{x}_{T}}{\partial x_{T}}&\frac{\partial\dot{x}_{T}}{\partial x_{U}}\\ \frac{\partial\dot{x}_{U}}{\partial x_{T}}&\frac{\partial\dot{x}_{U}}{\partial x_{U}}\end{pmatrix}, \tag{A3}\]
where
\[\frac{\partial\dot{x}_{T}}{\partial x_{T}} =(1-2x_{T}-x_{U})[R_{T}+\Delta-x_{U}(R_{U}-1)]t_{V}-x_{T}(2-3x_{T}-2x_{U})(2R_{T}-1)t_{V}, \tag{A4}\] \[\frac{\partial\dot{x}_{T}}{\partial x_{U}} =-x_{T}t_{V}[(1-x_{T})(R_{U}-1)+R_{T}+\Delta-x_{T}(2R_{T}-1)-2x_{U}(R_{U}-1)],\] (A5) \[\frac{\partial\dot{x}_{U}}{\partial x_{T}} =-x_{U}t_{V}[(1-x_{U})(2R_{T}-1)+R_{U}+(R_{U}+1)\Delta-2x_{T}(2R_{T}-1)-x_{U}(R_{U}-1)],\] (A6) \[\frac{\partial\dot{x}_{U}}{\partial x_{U}} =(1-x_{T}-2x_{U})[R_{U}+(R_{U}+1)\Delta-x_{T}(2R_{T}-1)]t_{V}-x_{U}(2-2x_{T}-3x_{U})(R_{U}-1)t_{V}. \tag{A7}\]
An equilibrium point is stable if the Jacobian matrix at that point is negative definite. According to basic linear algebra, a matrix is negative definite if all of the odd-ordered principal minors are less than zero and all even-ordered principal minors are greater than zero. This condition can be expressed as: \(\partial\dot{x}_{T}/\partial x_{T}<0\) and \((\partial\dot{x}_{T}/\partial x_{T})(\partial\dot{x}_{U}/\partial x_{U})-( \partial\dot{x}_{T}/\partial x_{U})(\partial\dot{x}_{U}/\partial x_{T})>0\).
For \(\mathbf{x}^{(I)}=(1,0,0)\), we have

\[\left.\frac{\partial\dot{x}_{T}}{\partial x_{T}}\right|_{\mathbf{x}=\mathbf{x}^{(I)}}=(R_{T}+\Delta)t_{V}>0. \tag{A8}\]

Therefore, \(\mathbf{x}^{(I)}\) is unstable.
According to Eqs. (5) and (20), \(\mathbf{x}^{(IT)}\) can be expressed as
\[\mathbf{x}^{(IT)}=\frac{1}{2R_{T}-1}(R_{T}-1-\Delta,R_{T}+\Delta,0). \tag{A9}\]
Therefore, for \(\mathbf{x}^{(IT)}\), we have
\[\left.\frac{\partial\dot{x}_{T}}{\partial x_{T}}\right|_{\mathbf{x}=\mathbf{x}^{(IT)}}=-(1-x_{T}^{(IT)})(R_{T}+\Delta)t_{V}<0, \tag{A10}\]

which is the reason why \(\mathbf{x}^{(IT)}\) is stable along the \(IT\)-edge: let \(x_{U}=0\), then the reduced 2-strategy system of \(I\) and \(T\) can be described by \(\dot{x}_{T}\) and the stability can be judged by only Eq. (A10).
We may compute the even-ordered principal minor to further complete the stability analysis of \(\mathbf{x}^{(IT)}\) in the 3-strategy system. However, a quick way to prove that \(\mathbf{x}^{(IT)}\) is unstable is by showing \(\left.\partial\dot{x}_{U}/\partial x_{U}\right|_{\mathbf{x}=\mathbf{x}^{(IT)}}>0\) directly. This is because switching the two equations in the system (A1) does not influence its properties: \((\dot{x}_{T},\dot{x}_{U})\) is equivalent to \((\dot{x}_{U},\dot{x}_{T})\), whose stability should be ensured by \(\partial\dot{x}_{U}/\partial x_{U}<0\), the odd-ordered principal minor less than zero. However, we have
\[\left.\frac{\partial\dot{x}_{U}}{\partial x_{U}}\right|_{\mathbf{x}=\mathbf{x}^{(IT)}}=(1-x_{T}^{(IT)})(R_{U}-R_{T}+R_{U}\Delta)t_{V}>0, \tag{A11}\]
which proves that \(\mathbf{x}^{(IT)}\) cannot be stable in the 3-strategy system. To sum up, \(\mathbf{x}^{(IT)}\) is a saddle point only stable along the \(IT\)-edge.
To study the stability of the equilibrium line,
\[\mathbf{x}^{(TU)}=\Big{(}0,x_{T}^{(TU)},x_{U}^{(TU)}\Big{)}, \tag{A12}\]
we need to comprehend the physical insight into what happens on the \(TU\)-edge. Without the investment of \(I\)-players, the \(T\)- and \(U\)-players become indistinguishable in terms of
payoff: \(f_{T}=f_{U}\) when \(x_{I}=0\) according to Eqs. (2). Therefore, the system does not evolve on the \(TU\)-edge, which is the reason why the \(TU\)-edge is equilibrated everywhere.
In this way, we treat \(T\) and \(U\) as a whole and reduce the 3-variable system to a 2-variable system: \(x_{I}\) and \(x_{T}+x_{U}\). According to \(\sum_{i}x_{i}=1\), we can further cancel \(x_{T}+x_{U}=1-x_{I}\), \(x_{U}=1-x_{I}-x_{T}\), and express the replicator dynamics by only \(\dot{x}_{I}\) with an input parameter \(x_{T}\),
\[\dot{x}_{I}=x_{I}t_{V}\Big{\{}x_{T}(R_{T}-1)-(1-x_{I}-x_{T})-[x_{T}+(1-x_{I}-x_{T})(R_{U}+1)]\Delta-x_{I}x_{T}(2R_{T}-1)-x_{I}(1-x_{I}-x_{T})(R_{U}-1)\Big{\}}, \tag{A13}\]

where \(\Delta\) has the same meaning as in Eq. (A2).

The single-order Jacobian matrix of the system (A13) is
\[\frac{\mathrm{d}\dot{x}_{I}}{\mathrm{d}x_{I}}=t_{V}\Big{\{}x_{T}(R_{T}-1)-(1-2x_{I}-x_{T})-[x_{T}+(1-2x_{I}-x_{T})(R_{U}+1)]\Delta-2x_{I}x_{T}(2R_{T}-1)-x_{I}(2-3x_{I}-2x_{T})(R_{U}-1)\Big{\}}. \tag{A14}\]
To analyze the stability of \(\mathbf{x}^{(TU)}=\Big{(}0,x_{T}^{(TU)},x_{U}^{(TU)}\Big{)}\), we compute
\[\frac{\mathrm{d}\dot{x}_{I}}{\mathrm{d}x_{I}}\Big{|}_{\mathbf{x}=\mathbf{x}^{(TU)}}=t_{V}\Big{\{}x_{T}^{(TU)}(R_{T}-1)-(1-x_{T}^{(TU)})-[x_{T}^{(TU)}+(1-x_{T}^{(TU)})(R_{U}+1)]\Delta\Big{\}}, \tag{A15}\]
from which we know that \(\mathbf{x}^{(TU)}\) is stable if
\[\frac{\mathrm{d}\dot{x}_{I}}{\mathrm{d}x_{I}}\Big{|}_{\mathbf{x}=\mathbf{x}^{(TU)}}<0\Leftrightarrow x_{T}^{(TU)}<\frac{1+(R_{U}+1)\Delta}{R_{T}+R_{U}\Delta}\equiv x_{T,\star}^{(TU)}. \tag{A16}\]

Eq. (A16) gives the explicit expression of \(x_{T,\star}^{(TU)}\) that we presented in Eqs. (7) and (22). On the equilibrium line \(\mathbf{x}^{(TU)}\), the interval of \(x_{T}^{(TU)}<x_{T,\star}^{(TU)}\) is stable.
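The criterion of Eq. (A16) can be checked numerically by evaluating Eq. (A15) on either side of \(x_{T,\star}^{(TU)}\); a sketch for PC/BD updating with \(k=4\) and the parameter values of Figure 1 follows.

```python
def dxI_dot_dxI(x_T, R_T, R_U, Delta, t_V=1.0):
    """Eq. (A15) evaluated at (0, x_T, 1 - x_T) on the TU-edge."""
    return t_V * (x_T * (R_T - 1) - (1 - x_T) - (x_T + (1 - x_T) * (R_U + 1)) * Delta)

R_T, R_U, Delta = 1.8, 2.0, 1.0 / (4 - 2)  # PC/BD with k = 4
x_star = (1 + (R_U + 1) * Delta) / (R_T + R_U * Delta)
print("x_star =", round(x_star, 4))        # ~0.8929 for these parameters
for x_T in (0.85, 0.95):                   # one point on each side of x_star
    print(x_T, dxI_dot_dxI(x_T, R_T, R_U, Delta))  # negative = stable
```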
Finally, we note that an equilibrium point on the \(IU\)-edge can be obtained, but it is not valid. Solving the system (A1), we find the following solution

\[\mathbf{x}^{(IU)}=\frac{1}{R_{U}-1}(-1-(R_{U}+1)\Delta,0,R_{U}+(R_{U}+1)\Delta). \tag{A17}\]
However, since \(x_{I}^{(IU)}=[-1-(R_{U}+1)\Delta]/(R_{U}-1)<0\), it does not exist, which is why it was omitted from the main text.
|
2309.05317 | Neural Koopman prior for data assimilation | With the increasing availability of large scale datasets, computational power
and tools like automatic differentiation and expressive neural network
architectures, sequential data are now often treated in a data-driven way, with
a dynamical model trained from the observation data. While neural networks are
often seen as uninterpretable black-box architectures, they can still benefit
from physical priors on the data and from mathematical knowledge. In this
paper, we use a neural network architecture which leverages the long-known
Koopman operator theory to embed dynamical systems in latent spaces where their
dynamics can be described linearly, enabling a number of appealing features. We
introduce methods that enable to train such a model for long-term continuous
reconstruction, even in difficult contexts where the data comes in
irregularly-sampled time series. The potential for self-supervised learning is
also demonstrated, as we show the promising use of trained dynamical models as
priors for variational data assimilation techniques, with applications to e.g.
time series interpolation and forecasting. | Anthony Frion, Lucas Drumetz, Mauro Dalla Mura, Guillaume Tochon, Abdeldjalil Aïssa El Bey | 2023-09-11T09:04:36Z | http://arxiv.org/abs/2309.05317v3 | # Neural Koopman prior for data assimilation
###### Abstract
With the increasing availability of large scale datasets, computational power and tools like automatic differentiation and expressive neural network architectures, sequential data are now often treated in a data-driven way, with a dynamical model trained from the observation data. While neural networks are often seen as uninterpretable black-box architectures, they can still benefit from physical priors on the data and from mathematical knowledge. In this paper, we use a neural network architecture which leverages the long-known Koopman operator theory to embed dynamical systems in latent spaces where their dynamics can be described linearly, enabling a number of appealing features. We introduce methods that enable to train such a model for long-term continuous reconstruction, even in difficult contexts where the data comes in irregularly-sampled time series. The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques, with applications to e.g. time series interpolation and forecasting.
Dynamical systems, self-supervised learning, Koopman operator, auto-encoder, remote sensing, data assimilation, Sentinel-2.
## I Introduction
The ever-growing amount of historical data for scientific applications has recently made it possible to model the evolution of dynamical systems in a purely data-driven way using powerful regressors such as neural networks. While many of the most spectacular results obtained by neural networks rely on the paradigm of supervised learning, this paradigm is limited in practice by the available amount of labelled data, which can be prohibitively costly and difficult to obtain. For example, a number of Earth observation programs that have been launched in the last decade provide huge amounts of sequential (though generally incomplete) satellite multi/hyperspectral images covering the entire Earth's surface. However, few accurate and reliable labels exist for land cover classification of the ground pixels, although some efforts have been made, e.g. for crop type classification and segmentation [1, 2].
In this context, one can leverage another machine learning paradigm called self-supervised learning (SSL) [3]. It consists in training a machine learning model to solve a pretext task that requires no labels in order to learn informative representations of the data which can be used to solve downstream tasks. When dealing with image data, possible pretext tasks include predicting the relative positions of two randomly selected patches of a same image [4] and predicting which rotation angle has been applied to an image [5]. Many SSL approaches can be labelled as contrastive SSL [6], which means that they aim at learning similar representations for images that are related by transformations such as rotations, crops and color transfers. We refer the interested reader to [7] for a review of self-supervised learning for remote sensing applications.
In our case, since we are dealing with sequential data, we use a natural pretext task which consists in being able to forecast the future state of the data from a given initial condition. This is similar in spirit to recent approaches in natural language processing where a model is trained on completing texts and then used to perform various tasks in zero/few shot, e.g. [8]. Our trained model is used for downstream tasks which can be formulated as inverse problems such as denoising and interpolation. We solve these tasks by minimising a variational cost which uses a trained model as a dynamical prior, unlike classical data assimilation techniques [9, 10, 11], which leverage hand-crafted dynamical priors that require domain knowledge and are not always available. Besides, these priors should be differentiable since any first order optimization algorithm tackling such a problem must be able to differentiate through repeated compositions of the model, which requires careful implementation [12] and is out of reach for many operational systems relying on complex dynamical models [13]. In contrast, neural emulators of the dynamics are _de facto_ implemented in packages supporting automatic differentiation, e.g. Pytorch [14], Tensorflow, JAX, etc., providing effortless access to model derivatives.
For all these reasons, in this paper, we first aim at modelling dynamical systems from observation data using differentiable models. We assume that the state of a dynamical system can be described by a \(d\)-dimensional state variable \(\mathbf{x}\in\mathcal{D}\) with \(\mathcal{D}\subset\mathbb{R}^{d}\). Then, assuming the system is governed by an autonomous ordinary differential equation (ODE), one can describe its (discrete) dynamics by a function \(F:\mathcal{D}\rightarrow\mathcal{D}\) such that \(\mathbf{x}_{t+1}=F(\mathbf{x}_{t})\). Although \(F\) might be any non-linear function, Koopman operator theory [15] tells us that the model can be described by a linear operator acting in the space of observation functions. Namely, given an observation function \(f:\mathcal{D}\rightarrow\mathbb{R}\), the so-called Koopman operator \(\mathcal{K}\) maps an observation function \(f\) to its composition with the dynamics:
\[\mathcal{K}f(\mathbf{x}_{t})\triangleq(f\circ F)(\mathbf{x}_{t})=f(\mathbf{x }_{t+1}). \tag{1}\]
From this definition, \(\mathcal{K}\) is linear because of the linearity of the function space, i.e. for any \(f,g:\mathcal{D}\rightarrow\mathbb{R}\):
\[\mathcal{K}(f+g)(\mathbf{x}_{t})=(f+g)(\mathbf{x}_{t+1})=\mathcal{K}f(\mathbf{ x}_{t})+\mathcal{K}g(\mathbf{x}_{t}). \tag{2}\]
Yet, the function space being infinite dimensional, the advantage of the linearity of \(\mathcal{K}\) comes at the cost of an infinite dimension, which makes it difficult to model in practice. Thus,
the key to finding a finite-dimensional representation of the Koopman operator is to look for Koopman invariant subspaces (KIS) [16], i.e. subsets of the function space that are stable by the Koopman operator. There exists a variety of such spaces, but one needs to retrieve nontrivial KIS that give information about the dynamics of the state variable.
Once a KIS is found, the restriction of the Koopman operator to it is a matrix \(\mathbf{K}\) which can be interpreted with classical linear algebra tools. Notably, each of the complex eigenvalues of \(\mathbf{K}\) is associated with an observation function that is located in the subspace. Let us denote by \(\mathbf{K}=\mathbf{V}\mathbf{\Lambda}\mathbf{V}^{-1}\) the complex eigendecomposition of \(\mathbf{K}\), with \(\mathbf{V}\) the complex eigenvectors and \(\mathbf{\Lambda}\) a complex diagonal matrix containing the associated eigenvalues. Predicting \(\tau\) steps in the future through the Koopman operator means multiplying the initial latent state vector (obtained with the functions from the invariant subspace) by \(\mathbf{K}^{\tau}=\mathbf{V}\mathbf{\Lambda}^{\tau}\mathbf{V}^{-1}\). Therefore, one can see that the eigenvectors associated with an eigenvalue of modulus higher than one will have an exponentially growing contribution, while those with an eigenvalue of modulus smaller than one will exponentially vanish. Only eigenvalues of modulus very close to one will approximately preserve the norm of the latent state in the long run, which might be crucial for time series with clear seasonality or periodicity.
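To make the role of the eigenvalue moduli concrete, the following minimal PyTorch sketch (with purely illustrative values, not taken from the paper) iterates three planar matrices whose complex eigenvalues have modulus exactly one, slightly above one, and slightly below one:

```python
import math
import torch

# Illustrative 2x2 Koopman matrices: a rotation has eigenvalues exp(+/- i*theta)
# of modulus 1; scaling it moves the moduli above or below the unit circle.
theta = 0.1
rot = torch.tensor([[math.cos(theta), -math.sin(theta)],
                    [math.sin(theta),  math.cos(theta)]])
z0 = torch.tensor([1.0, 0.0])

for name, K in [("|lambda| = 1", rot),
                ("|lambda| > 1", 1.01 * rot),
                ("|lambda| < 1", 0.99 * rot)]:
    z = z0.clone()
    for _ in range(500):        # z_tau = K^tau z_0, iterated 500 steps
        z = K @ z
    # Norms after 500 steps: ~1.0 (preserved), ~145 (explodes), ~0.007 (vanishes).
    print(name, z.norm().item())
```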
Our approach fits into the Koopman operator framework to model dynamical systems from data. More specifically, our contributions are the following:
1) We perform a synthetic review of the different approaches that have recently been used to compute data-driven approximations of the Koopman operator, emphasizing the limitations of each of the successive categories of approaches.
2) We refine and extend our own approach to learn a neural Koopman operator, first sketched in [17], with a discussion on the interest of having a (close to) orthogonal Koopman operator and on handling irregular time series.
3) We present several ways to use our model as a fully-differentiable dynamical prior in data assimilation in order to solve inverse problems using automatic differentiation.
4) We present associated results on simulated as well as real-world data. We notably perform a frequency upsampling experiment on fluid flow data. We also show interpolation experiments on satellite image time series using variational data assimilation with our model as a dynamical prior, including in hard scenarios such as irregularly sampled data and transfer to new areas unseen during training.
## II Background and related works
### _Koopman operator theory_
In short, the Koopman operator theory [15] states that any dynamical system can be described linearly at the cost of an infinite dimension. However, some methods seek to find a finite-dimensional representation of the Koopman operator. Notably, Dynamic Mode Decomposition [18] (DMD) consists in finding a matrix \(\mathbf{A}\) such that the residual \(\mathbf{r}_{t}\) in
\[\mathbf{x}_{t+1}=\mathbf{A}\mathbf{x}_{t}+\mathbf{r}_{t} \tag{3}\]
is as small as possible in the least squares sense. This approach has been theoretically linked to the Koopman mode decomposition in [19], and has seen many different variants, e.g. [20, 21, 22]. However, it relies on the implicit assumption that the set of observation functions constituting the identity of the state space is a Koopman invariant subspace (KIS). This assumption can be useful in regions of the state space where the dynamics are close to linear, but it is very unlikely to be generally true. In order to mitigate this shortcoming, the Extended Dynamic Mode Decomposition [23] (EDMD) uses a manually designed dictionary of observable functions from the dynamical system. Common choices of dictionaries include polynomials of the observed variables up to a given degree and sets of radial basis functions. These dictionaries all include the identity of the state space, which trivially allows predictions in the state space by projection. Choosing to include only these functions in the dictionary of functions amounts to performing a classical DMD.
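For concreteness, the least-squares fit of equation (3) can be written in a few lines; the sketch below (illustrative PyTorch code, not the paper's implementation) performs plain DMD, and EDMD amounts to applying the same fit after mapping the snapshots through the chosen dictionary:

```python
import torch

def dmd_matrix(X: torch.Tensor) -> torch.Tensor:
    """Least-squares DMD: find A minimising the residual of Eq. (3).

    X has shape (T, d): T snapshots of a d-dimensional state.
    """
    # lstsq solves X[:-1] @ M = X[1:] in the least-squares sense, so the
    # one-step matrix acting on column vectors is the transpose of M.
    M = torch.linalg.lstsq(X[:-1], X[1:]).solution
    return M.T

# Toy usage: A_hat approximately recovers the generating matrix of a noisy
# linear system from a single observed trajectory.
A_true = torch.tensor([[0.9, -0.2], [0.2, 0.9]])
X = [torch.randn(2)]
for _ in range(200):
    X.append(A_true @ X[-1] + 0.01 * torch.randn(2))
A_hat = dmd_matrix(torch.stack(X))
```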
Being a generalization of DMD, EDMD can give satisfactory results when the chosen dictionary of functions is well suited to the considered dynamical system. However, a hand-designed dictionary of observables might still not be the most optimal choice, and it is typically very high dimensional. For these reasons, subsequent works have proposed to automatically learn a low-dimensional dictionary of observation functions through machine learning. For example, there is a rich literature on leveraging Reproducing Kernel Hilbert Spaces to obtain approximations of the Koopman operator with some interpretability and theoretical guarantees, e.g. [24, 25, 26].
Other methods [27, 28] jointly learn the parameters of a neural network which computes a set of observation functions and a matrix \(\mathbf{K}\) which is the restriction of the Koopman operator to this set. To be able to retrieve the evolution of the state variable from the KIS, they constrain the inclusion of the state space in the subspace along with the learnt functions. This is a convenient trick, yet it restricts those methods since it means assuming that there exists a low-dimensional KIS containing the state functions.
In order to not rely on this assumption anymore, some other works [29, 30, 31, 32, 33, 17] do not constrain a trivial link between the KIS and the state space, and instead train another neural network to reconstruct the state variables from the learnt observation functions. In this case, the network learning the KIS and the network that reconstructs the state space from it form an autoencoder. This framework is theoretically more powerful since it only assumes a nonlinear relationship between the KIS and the state space.
Among these methods, [29], which we refer to as DeepKoopman in this paper, learns an auxiliary network that outputs eigenvalues as a function of the encoded state of the system while others learn a fixed matrix \(\mathbf{K}\). [33] learns two distinct matrices for the forward and backward evolutions, in order to favor the consistency of the latent dynamics. A good indicator for the stability of long run predictions is that the eigenvalues of the learnt Koopman matrix should be located on the unit circle, which may encourage us to look for matrices with such eigenvalues. Among those are orthogonal matrices, which have many desirable properties. Most importantly, they constrain the dynamics to be periodic. We detail the reason why this is true in Appendix A.
### _Orthogonality regularisation_
The promotion of orthogonality for the weight matrices of linear layers in neural networks has been long studied. This idea is related to the well-known vanishing gradient and exploding gradient issues. Those get more important as the computational graph gets deeper, e.g. for recurrent neural networks and for very deep residual neural networks [34].
[35] showed that the initialisation of weights as a random orthogonal matrix can be much more effective than the classical random Gaussian initialisation. It was also advocated that the orthogonality of the weight matrices should be promoted during the training phase too. [36] introduced a soft regularisation term for weight matrices \(\mathbf{W}\):
\[||\mathbf{W}\mathbf{W}^{T}-\mathbf{I}||^{2}. \tag{4}\]
This term, which is to be used in a similar way to weight decay [37], was shown to significantly improve the final performance of neural networks in computer vision tasks. [38] compared this term with similar orthogonality-promoting expressions, and showed that they all brought substantial gains in the performance of deep residual networks.
In our case, constraining the Koopman operator to be orthogonal leads to periodic dynamics, which are of course stable in the long run and useful to model seasonality in time series. Yet, working with an exactly orthogonal \(\mathbf{K}\) may not always be desirable, for instance when the data are noisy, or the time series is not exactly periodic (e.g. when there are interannual variations or slower trends in seasonal dynamics), or even not periodic. For these reasons, we will resort to a soft penalization as in (4) instead of enforcing exactly the orthogonality of \(\mathbf{K}\).
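In a framework like PyTorch, the soft penalization of (4) is essentially a one-liner; the following sketch shows one possible implementation:

```python
import torch

def orthogonality_penalty(W: torch.Tensor) -> torch.Tensor:
    """Soft orthogonality regulariser ||W W^T - I||_F^2, as in Eq. (4)."""
    eye = torch.eye(W.shape[0], device=W.device, dtype=W.dtype)
    return torch.linalg.matrix_norm(W @ W.T - eye) ** 2

# Typical usage: add it to the task loss with a small weight, e.g.
# loss = task_loss + 0.1 * orthogonality_penalty(K)
```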
### _Variational data assimilation_
Variational data assimilation can be used to solve inverse problems involving time series for which one has at disposal a set of possibly noisy and incomplete trajectories \(\tilde{\mathbf{x}}\) as well as a dynamical prior and/or a regularisation \(\mathcal{R}\) on the distribution of the solution. It consists in finding the complete trajectory that minimizes a variational cost \(\mathcal{C}\) of the form
\[\mathcal{C}(\mathbf{x})=\mathcal{D}(\mathbf{x},\tilde{\mathbf{x}})+\mathcal{ R}(\mathbf{x}) \tag{5}\]
where \(\mathcal{D}\) is a chosen distance, such as a norm of the difference between two elements (restricted to dimensions on which \(\tilde{\mathbf{x}}\) is defined). In practice, when all terms are smooth, the cost can be minimised by gradient descent or related first order algorithms. The gradient can be obtained either analytically when tractable, or using automatic differentiation, as made easily accessible by modern computing frameworks, e.g. Pytorch.
Alternatively, one can restrain the search on a set of trajectories defined by a model \(\mathcal{M}:\mathbf{x}_{0}\rightarrow\mathbf{x}\). In this case, one formulates a cost on the input of the model:
\[\mathcal{C}(\mathbf{x}_{0})=\mathcal{D}(\mathcal{M}(\mathbf{x}_{0}),\tilde{ \mathbf{x}})+\mathcal{R}(\mathcal{M}(\mathbf{x}_{0})). \tag{6}\]
A conceptual view of constrained variational data assimilation is displayed on Figure 1. We refer the interested reader to [9] for an extensive review on data assimilation.
While variational data assimilation is traditionally used with priors \(\mathcal{R}\) that were constructed from physical knowledge of the studied dynamical system, recent works [13, 39] have attempted to leverage machine learning tools to learn a prior in a completely data-driven way. In the latter case, the prior is jointly learned with a gradient-based optimization algorithm, further improving performance. Other works [40] have proposed to learn a data-driven surrogate model to predict the residual error of an existing physics-based model, which finally results in a hybrid model. Those models have the advantage of being fully differentiable and implemented in an automatic differentiation framework (e.g. Pytorch or Tensorflow), which means that their associated cost can be differentiated automatically via the chain rule. Overall, linking data assimilation and machine learning is a very hot topic, which has been recently reviewed in [41].
## III Proposed methods
### _Neural network design and training_
Our architecture for a neural Koopman operator relies on three components : an encoding neural network \(\phi\) with a decoder \(\psi\) and a matrix \(\mathbf{K}\in\mathbb{R}^{d\times d}\). It is graphically represented in Figure 2. The idea is that \((\phi,\psi)\) learns the relationship between the state space and a learnt \(d\)-dimensional (approximately) Koopman invariant subspace, while \(\mathbf{K}\) corresponds to the restriction of the Koopman operator to this space. Our goal is to be able to make long-term predictions of the state by successive multiplications of the encoded initial state, followed by a decoding to come back to the original data space. This translates in equations as
\[\psi(\mathbf{K}^{\tau}\phi(\mathbf{x}_{t}))=\mathbf{x}_{t+\tau} \tag{7}\]
Fig. 1: Visual representation of constrained variational data assimilation. It consists in choosing the initial condition from which the model’s trajectory minimises the distance to the sampled data. One could also include a prior in the variational cost on the initial condition, such as the trajectory smoothness.
Fig. 2: Schematic view of our architecture.
for any initial condition \(\mathbf{x}_{t}\) and time increment \(\tau\). We emphasize that \(\tau\) does not necessarily have to be an integer since one can easily compute noninteger powers of \(\mathbf{K}\) by using its matrix logarithm, as explained in section III-B. The time increment \(\tau\) could also very well be negative, enabling to predict the past state of a dynamical system from future states.
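A minimal PyTorch sketch of this architecture is given below; the layer sizes are illustrative rather than the exact choices used in the experiments:

```python
import torch
import torch.nn as nn

class KoopmanAE(nn.Module):
    """Sketch of the architecture of Fig. 2: encoder phi, linear latent
    dynamics K, decoder psi. Layer sizes here are illustrative only."""

    def __init__(self, state_dim: int, latent_dim: int, hidden: int = 256):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, latent_dim))  # encoder
        self.psi = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, state_dim))   # decoder
        # K is initialised to the identity, as mentioned in Section III-B.
        self.K = nn.Parameter(torch.eye(latent_dim))

    def forward(self, x0: torch.Tensor, tau: int) -> torch.Tensor:
        """Predict x_{t+tau} from x_t following Eq. (7): psi(K^tau phi(x_t))."""
        z = self.phi(x0)                                 # latent initial state
        Kt = torch.linalg.matrix_power(self.K, tau)      # integer tau only here
        return self.psi(z @ Kt.T)
```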
Our training data will be constituted from \(N\) time series of length \(T\), which we denote as \((\mathbf{x}_{i,t})_{1\leq i\leq N,0\leq t\leq T+1}\). Note that these time series could be manually chosen and possibly overlapping cuts of longer time series. A first processing step is to augment the state space with its discrete derivatives \(\mathbf{x}_{i,t}-\mathbf{x}_{i,t-1}\). We therefore work with the variable \(\mathbf{y}\) defined as
\[\mathbf{y}_{i,t}=\left(\mathbf{x}_{i,t+1}\quad\mathbf{x}_{i,t+1}-\mathbf{x}_{ i,t}\right)^{T} \tag{8}\]
for index \(t\in[\![0,T]\!]\). This reformulation makes it easier to predict the future state. Indeed, given that the data varies smoothly, one could expect that \(\mathbf{x}_{i,t}+(\mathbf{x}_{i,t}-\mathbf{x}_{i,t-1})\) is a good approximation of \(\mathbf{x}_{i,t+1}\) (this formally looks like an explicit Euler scheme to integrate an underlying infinitesimal representation formulated as an ODE). This intuition is further theoretically justified by Takens' theorem [42], which, informally, states that the evolution of a dynamical system gets more and more predictable when we know more time lags from an observed variable of the system. Using this augmented state is therefore useful when the observed \(\mathbf{x}\) is not the state variable of the system. In the following, we write our loss function using \(\mathbf{x}\) even though it can be written in the same way with an augmented state \(\mathbf{y}\).
We denote by \(\Theta\) the set of all the trainable parameters of our architecture. \(\Theta\) includes the coefficients of \(\mathbf{K}\) along with the trainable parameters of \(\phi\) and \(\psi\). In order to obtain the desired behavior corresponding to equation (7), we train the architecture using the following loss terms:
* The prediction term \(L_{pred}\) ensures that the long-term predictions starting from the beginning of each time series are approximately correct. Some works [30] weigh this loss with an exponentially decaying factor that gives more importance to short term predictions, but we choose to penalize the errors on all time spans equally: \[L_{pred}(\Theta)=\sum_{1\leq i\leq N}\sum_{1\leq\tau\leq T}||\mathbf{x}_{i, \tau}-\psi(\mathbf{K}^{\tau}\phi(\mathbf{x}_{i,0}))||^{2}.\] (9)
* The auto-encoding term \(L_{ae}\) is the classical loss for auto-encoders, making sure that \(\psi\circ\phi\) is close to the identity: \[L_{ae}(\Theta)=\sum_{1\leq i\leq N}\sum_{0\leq t\leq T}||\mathbf{x}_{i,t}- \psi(\phi(\mathbf{x}_{i,t}))||^{2}.\] (10)
* The linearity term \(L_{lin}\) is a regularisation term which favors the linearity of the learnt latent dynamics. It is useful to favor the long-term stability, which is not always guaranteed by the prediction loss alone: \[L_{lin}(\Theta)=\sum_{1\leq i\leq N}\sum_{1\leq\tau\leq T}||\phi(\mathbf{x}_{ i,\tau})-\mathbf{K}^{\tau}\phi(\mathbf{x}_{i,0})||^{2}.\] (11)
* The orthogonality term is a second regularisation term, prompting the complex eigenvalues of \(\mathbf{K}\) to be located close to the unit circle, which favors the long-term stability of the latent predictions. It is particularly helpful when the dynamics are close to periodic, as explained in Appendix A. \(||.||_{F}\) denotes the Frobenius norm. \[L_{orth}(\mathbf{K})=||\mathbf{K}\mathbf{K}^{T}-\mathbf{I}||_{F}^{2}\] (12)
Although it was mentioned in [17] that training a neural architecture directly on long-term prediction with large values of \(\tau\) leads to bad local minima, we found that it can actually lead to good results with a careful choice of the learning rate. In cases where it is not effective, we recommend training the architecture for short-term prediction first, as explained in [17].
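The four loss terms translate directly into code. The following sketch (for a model exposing `phi`, `psi` and `K` attributes as in the `KoopmanAE` sketch above, with illustrative weights) computes their sum over a batch of regularly-sampled series, which may be the augmented state of Eq. (8):

```python
import torch

def training_loss(model, x, a_lin=1.0, a_orth=0.1):
    """Sum of the loss terms of Eqs. (9)-(12); x has shape (N, T+1, dim).
    The weights a_lin and a_orth are illustrative hyperparameters."""
    z = model.phi(x[:, 0])                                       # phi(x_{i,0})
    l_pred, l_lin = x.new_zeros(()), x.new_zeros(())
    for tau in range(1, x.shape[1]):
        z = z @ model.K.T                                        # K^tau phi(x_{i,0})
        l_pred = l_pred + ((x[:, tau] - model.psi(z)) ** 2).sum()     # Eq. (9)
        l_lin = l_lin + ((model.phi(x[:, tau]) - z) ** 2).sum()       # Eq. (11)
    l_ae = ((x - model.psi(model.phi(x))) ** 2).sum()                 # Eq. (10)
    eye = torch.eye(model.K.shape[0], device=x.device)
    l_orth = torch.linalg.matrix_norm(model.K @ model.K.T - eye) ** 2 # Eq. (12)
    return l_pred + l_ae + a_lin * l_lin + a_orth * l_orth
```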
### _Handling irregular time series_
When working with irregular time series, it is not possible to augment the state with delayed observations as described in equation (8). Yet, the training can still be performed in a way similar to the case of regular time series. One has to distinguish two cases : (1) the data has a regular sampling with missing values (i.e. all temporal distances are multiples of a reference duration) and (2) the time increments between the sampled points are completely arbitrary.
If the irregular time series result from a regular sampling with missing values, then one can denote these data by \((\mathbf{x}_{i,t})_{1\leq i\leq N,1\leq t\leq T}\), with the binary observation variable \((\mathbf{H}_{i,t})_{1\leq i\leq N,1\leq t\leq T}\) being such that \(\mathbf{H}_{i,t}=1\) if \(\mathbf{x}_{i,t}\) is actually observed and \(0\) otherwise. Then, one can trivially multiply each term of the prediction, auto-encoding and linearity losses from equations (9)-(11) by the corresponding binary coefficient \(\mathbf{H}_{i,t}\) to train a model for these irregular data.
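As a sketch, the masked version of the prediction term can be written as follows (the auto-encoding and linearity terms are masked in the same way; the code assumes the initial state of each series is observed and reuses the model attributes from the sketches above):

```python
import torch

def masked_pred_loss(model, x, H):
    """Prediction term of Eq. (9) restricted to observed entries: H has shape
    (N, T+1) with H[i, t] = 1 iff x[i, t] was actually observed."""
    z = model.phi(x[:, 0])
    loss = x.new_zeros(())
    for tau in range(1, x.shape[1]):
        z = z @ model.K.T
        err = ((x[:, tau] - model.psi(z)) ** 2).sum(dim=-1)  # per-series error
        loss = loss + (H[:, tau] * err).sum()                # drop unobserved terms
    return loss
```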
When the data is sampled at arbitrary times, one has to adopt a continuous formulation. In this case, one does not work with the discrete \(\mathbf{K}\) but rather with its continuous counterpart \(\mathbf{L}\), which is related to it through the matrix exponential
\[\mathbf{K}=\exp(\mathbf{L}) \tag{13}\]
and can be seen as its corresponding infinitesimal evolution. A sufficient condition to guarantee the existence of such a matrix \(\mathbf{L}\) is that \(\mathbf{K}\) (always diagonalizable in \(\mathbb{C}\)) has no real negative eigenvalue [43]. In our case, we constrain \(\mathbf{K}\) to be close to orthogonal and initialize it to the identity. Thus, the eigenvalues are very unlikely to become real negative (in addition, this set has zero Lebesgue measure), and this never happened in our experiments.
In that case, we can equivalently switch to a continuous dynamical system whose evolution can be described in a Koopman invariant subspace by
\[\frac{d\phi(\mathbf{x}(t))}{dt}=\frac{d\mathbf{z}(t)}{dt}=\mathbf{L}\mathbf{z} (t). \tag{14}\]
In this case, it is a well known result that
\[\mathbf{z}(t_{0}+\tau)=\exp(\tau\mathbf{L})\mathbf{z}(t_{0}) \tag{15}\]
for any time increment \(\tau\in\mathbb{R}\). In particular, with \(\tau=1\), we find the previous definition of \(\mathbf{K}\) from equation (13).
Let us suppose that we train a model on \(N\) irregular time series. For each index \(1\leq i\leq N\), we denote the trajectory \(\mathbf{x}_{i}\) as a list of \(T_{i}\) time-value pairs \((t_{i,k},\mathbf{x}_{i,k})_{0\leq k\leq T_{i}}\). Without loss of generality, one can suppose that the pairs are ordered by
increasing times, with \(t_{i,0}=0\). The set of trainable parameters \(\Theta\) now includes the parameters of \((\phi,\psi)\) and the coefficients of the infinitesimal evolution matrix \(\mathbf{L}\). Then, one can rewrite the prediction, auto-encoding and linearity loss terms as:
\[L_{pred}(\Theta)=\sum_{1\leq i\leq N}\sum_{1\leq k\leq T_{i}}|| \mathbf{x}_{i,k}-\psi(\mathbf{K}^{t_{i,k}}\phi(\mathbf{x}_{i,0}))||^{2} \tag{16}\] \[L_{ae}(\Theta)=\sum_{1\leq i\leq N}\sum_{0\leq k\leq T_{i}}|| \mathbf{x}_{i,k}-\psi(\phi(\mathbf{x}_{i,k}))||^{2}\] (17) \[L_{lin}(\Theta)=\sum_{1\leq i\leq N}\sum_{1\leq k\leq T_{i}}|| \phi(\mathbf{x}_{i,k})-\mathbf{K}^{t_{i,k}}\phi(\mathbf{x}_{i,0})||^{2} \tag{18}\]
where we use the slightly abusive notation \(\mathbf{K}^{t}=\exp(t\mathbf{L})\) for any non-integer time increment \(t\). Now, one can use these rewritten loss terms in conjunction with the unchanged orthogonality loss to learn from irregularly-sampled data in the same way as from regularly-sampled ones, although it is likely to be a more challenging learning problem.
The continuous formulation is actually more general than the discrete one, but we work with a discrete formulation when possible for convenience and because it gave better results in our experiments. Note that training a model with a discrete formulation does not mean that we give up on the continuous modelling. Indeed, when one has a trained discrete matrix of evolution \(\mathbf{K}\) at hand, it is possible to switch to continuous dynamics as soon as a matrix logarithm exists [43]. In that case, the complex eigendecomposition of \(\mathbf{K}\) writes
\[\mathbf{K}=\mathbf{V}\mathbf{\Lambda}\mathbf{V}^{-1} \tag{19}\]
with \(\mathbf{V}\in\mathbb{C}^{d\times d}\) and \(\mathbf{\Lambda}\in\mathbb{C}^{d\times d}\) a diagonal matrix. Then, \(\mathbf{L}\) can be obtained by computing the principal logarithm of each (necessarily not real negative) diagonal coefficient of \(\mathbf{\Lambda}\):
\[\mathbf{L}=\mathbf{V}\log(\mathbf{\Lambda})\mathbf{V}^{-1}. \tag{20}\]
One can easily check that \(\mathbf{L}\) then verifies equation (13), and use this matrix to query the state of the latent system at any time from a given initial condition using equation (15).
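The conversion of equations (19)-(20) and the evaluation of non-integer powers via equation (15) can be sketched as follows, assuming (as discussed above) that \(\mathbf{K}\) has no real negative eigenvalue:

```python
import torch

def discrete_to_continuous(K: torch.Tensor) -> torch.Tensor:
    """L = V log(Lambda) V^{-1}, Eqs. (19)-(20); the principal logarithm is
    well defined when K has no real negative eigenvalue. The result is real
    up to numerical error but returned as a complex tensor."""
    Lam, V = torch.linalg.eig(K)            # complex eigendecomposition, Eq. (19)
    return V @ torch.diag(torch.log(Lam)) @ torch.linalg.inv(V)

def fractional_power(K: torch.Tensor, t: float) -> torch.Tensor:
    """K^t = exp(t L) for an arbitrary real time increment t, Eq. (15)."""
    L = discrete_to_continuous(K)
    return torch.linalg.matrix_exp(t * L).real   # real up to numerical error
```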
### _Variational data assimilation using our trained model_
Once a model has been trained for a simple prediction task, it is supposed to hold enough information to help solve a variety of inverse problems involving the dynamics, like interpolation or denoising. To leverage this knowledge, we resort to variational data assimilation, using the trained model as a dynamical prior instead of a more classical hand-crafted physical prior. We describe hereafter a general formulation for inverse problems involving time series of images and two different methods to solve them. Although we consider images specifically in our experiments, the methods can be used for any time series by ignoring or adapting the spatial prior.
Suppose that we are working on images containing \(N\) pixels and \(L\) spectral bands (L being 3 for RGB images or higher for multi/hyperspectral images), defined on a set of \(T\) time steps with some missing values. We denote this data by \((\mathbf{\tilde{x}}_{t})_{t\in H}\) with \(H\subset\llbracket 0,T\rrbracket\). For each \(t\in H\), \(\mathbf{\tilde{x}}_{t}\in\mathbb{R}^{N\times L}\). Our objective is to reconstruct (and possibly extend) a complete time series \(\mathbf{x}\in\mathbb{R}^{(T+1)\times N\times L}\).
The first method that we propose is a weakly-constrained variational data assimilation, where we minimise a variational cost on \(\mathbf{x}\) which is composed of at most three components: a term of fidelity to the available data, a dynamical prior which is given by our model, and a spatial prior. The variational cost on \(\mathbf{x}\) can thus be expressed as
\[\sum_{t\in H}||\mathbf{x}_{t}-\mathbf{\tilde{x}}_{t}||^{2}+\alpha\sum_{t=0}^{ T-1}||\mathbf{x}_{t+1}-\mathcal{M}(\mathbf{x}_{t})||^{2}+\beta S(\mathbf{x}) \tag{21}\]
where \(\mathcal{M}(\mathbf{x}_{t})=\psi(\mathbf{K}\phi(\mathbf{x}_{t}))\) and \(S\) is the spatial prior. In practice, \(S\) can be a classical spatial regularisation leading to spatially smooth images, such as a Tikhonov regularisation [44] or the total variation [45]. We emphasize that the optimized variable here is the whole time series \(\mathbf{x}\). The first term of equation (21) is the data fidelity term (first term of equation (5)) and the other two terms together form the prior or regularisation term (second term of equation (5)).
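A minimal sketch of this weakly-constrained assimilation, with the spatial prior omitted (\(\beta=0\)), illustrative hyperparameters, and a model exposing `phi`, `psi` and `K` as above, is:

```python
import torch

def weak_assimilation(model, x_obs, obs_mask, alpha=1.0, steps=500, lr=1e-2):
    """Minimise the cost of Eq. (21) over the full series x by gradient descent.
    x_obs: tensor of shape (T+1, N, L) whose unobserved frames only serve as an
    initialisation; obs_mask: float tensor of shape (T+1,) flagging observed times."""
    x = x_obs.clone().detach().requires_grad_(True)   # optimise the whole series
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        data = (obs_mask[:, None, None] * (x - x_obs) ** 2).sum()   # fidelity
        pred = model.psi(model.phi(x[:-1]) @ model.K.T)             # M(x_t)
        dyn = ((x[1:] - pred) ** 2).sum()                           # dynamical prior
        (data + alpha * dyn).backward()
        opt.step()
    return x.detach()
```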
In some cases, it can be useful to consider a more constrained optimization. This is especially true when dealing with very noisy data, in which case the data fidelity term can lead to overfitting the noise even if a high weight is put on the prior terms. We do not optimize on \(\mathbf{x}\) anymore but rather on the latent initial state \(\mathbf{z}_{0}\) of the prediction, so that only values of \(\mathbf{x}\) that can be produced by our data-driven dynamical prior are considered. In this way, we seek to solve
\[\mathbf{z}_{0}^{*}=\underset{\mathbf{z}_{0}\in\mathbb{R}^{N\times d}}{\text{ arg min}}\sum_{t\in H}||\mathbf{\tilde{x}}_{t}-\mathbf{x}_{t}(\mathbf{z}_{0})||^{2}+ \beta S(\mathbf{x}(\mathbf{z}_{0})), \tag{22}\]
where, for time \(t\), \(\mathbf{x}_{t}(\mathbf{z}_{0})=\psi(\mathbf{K}^{t}\mathbf{z}_{0})\). After finding the optimal initial condition \(\mathbf{z}_{0}^{*}\), one can simply compute the associated predictions at any time \(t\) using \(\mathbf{x}_{t}(\mathbf{z}_{0})\). Note that \(\mathbf{z}_{0}\) belongs to \(\mathbb{R}^{N\times d}\) since we assumed that the input of \(\phi\) is the reflectance vector of a single pixel of an image, so that the model forecasts the dynamics of all pixels in parallel.
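A corresponding sketch of the constrained assimilation of equation (22), again with the spatial prior omitted and illustrative hyperparameters, is:

```python
import torch

def constrained_assimilation(model, x_obs, obs_times, steps=500, lr=1e-2):
    """Minimise the cost of Eq. (22) over z_0 only, so that every candidate
    trajectory is generated by the dynamical prior. x_obs is a list of observed
    frames aligned with the integer lags in obs_times (lag 0 first)."""
    z0 = model.phi(x_obs[0]).detach().clone().requires_grad_(True)  # warm start
    opt = torch.optim.Adam([z0], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = z0.new_zeros(())
        for frame, t in zip(x_obs, obs_times):
            # Integer lags shown here; non-integer lags would use exp(t L).
            Kt = torch.linalg.matrix_power(model.K, t)
            loss = loss + ((frame - model.psi(z0 @ Kt.T)) ** 2).sum()
        loss.backward()
        opt.step()
    return z0.detach()
```

The joint optimization of equation (23) is obtained from the same sketch by also passing the model parameters to the optimizer, with a much smaller learning rate.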
The constrained and unconstrained assimilations have different advantages and weaknesses. The unconstrained assimilation is generally useful when \(\mathbf{\tilde{x}}\) is close to \(\mathbf{x}\), i.e. when the observed data are complete and when the signal-to-noise ratio is large enough. In this case, it can efficiently provide small corrections to the noise in the dynamics. The constrained optimization, however, is not useful in this case since it is not able to reconstruct the exact observed data. Therefore, when the error due to the noise has a magnitude smaller or similar to the reconstruction error of the model, the constrained optimisation cannot perform meaningful denoising. On the other hand, the constrained assimilation is very effective for dealing with sparse or very noisy observed data. Indeed, any prediction made by the model should be a possible trajectory of the dynamical system, which means that the result of this optimization can be seen in some way as the plausible trajectory which best matches the observed data.
Another possibility in our framework is to perform a constrained assimilation like in equation (22) but with a joint optimization of the initial latent space and of the parameters of the model. Thus, the optimisation problem becomes
\[\underset{\mathbf{z}_{0},\mathbf{K},\psi}{\text{min}}\sum_{t\in H}||\mathbf{ \tilde{x}}_{t}-\mathbf{x}_{t}(\mathbf{z}_{0})||^{2}+\beta S(\mathbf{x}( \mathbf{z}_{0})), \tag{23}\]
where, again, \(\mathbf{x}_{t}(\mathbf{z}_{0})=\psi(\mathbf{K}^{t}\mathbf{z}_{0})\). While this problem would be very difficult to solve when starting with random variable initializations, using the parameters of a pretrained model as initial values gives good results. Notably, adjusting the model parameters enables to fit the available data much better, which is especially useful when working on a set of data which differs from the model training data. This strategy corresponds, in machine learning parlance, to transfer learning or solving a downstream task using the self-supervised trained model. However, one must be careful not to make the model overfit the assimilated data at the cost of its general knowledge, which could be related to the well-known catastrophic forgetting [46]. Thus, it is critical to use a very low learning rate as commonly described in the literature, e.g. [47].
## IV Experiments on simulated data
Here, we present a benchmark of our method against the method of [29], which we call DeepKoopman, on a 3-dimensional dynamical system arising from fluid dynamics.
The nonlinear fluid flow past a cylinder with a Reynolds number of 100 has been a fluid dynamics benchmark for decades, and it was proven by [48] that its high-dimensional dynamics evolves on a 3-dimensional attractor with the model:
\[\dot{x} =\mu x-y-xz \tag{24}\] \[\dot{y} =\mu y+x-yz\] \[\dot{z} =-z+x^{2}+y^{2}.\]
This dynamical system is not periodic, yet it exhibits a stable limit cycle and an unstable equilibrium.
In our experiments, we use the training and test data from [29], which have been generated by numerically integrating equations (24). All of our models trained on this dynamical system have the same architecture: the encoder \(\phi\) (resp. decoder \(\psi\)) is a Multi-Layer Perceptron (MLP) with 2 hidden layers of size 256 and 128 (resp. 128 and 256), with the dimension of the latent space and matrix \(\mathbf{K}\) being 16. Their reported mean squared errors are averaged over all variables, time steps and trajectories from the test set.
### _Interpolation from low-frequency regular data_
We first show the ability of our architecture to model a continuous dynamical system even when trained on discrete data. As mentioned in subsection III-B, once a model is trained, one can retrieve its learnt Koopman matrix \(\mathbf{K}\) and compute its corresponding infinitesimal operator \(\mathbf{L}\). Then, one can analytically compute a new discrete matrix corresponding to an advancement by any desired time increment. Concretely, suppose that \(\bar{\omega}\) is a chosen target frequency (we choose \(\bar{\omega}=50\) Hz here). We train a model on training time series sampled at a lower frequency \(\omega\), obtaining in particular a Koopman matrix \(\mathbf{K}\). Then, we compute the discrete operator \(\bar{\mathbf{K}}=\exp\left(\frac{\omega}{\bar{\omega}}\log\mathbf{K}\right)= \exp\left(\frac{\omega}{\bar{\omega}}\mathbf{L}\right)\), and use it to perform predictions at a frequency \(\bar{\omega}\) from the initial states of the test time series. Finally, we compute a mean squared error between the high-frequency groundtruth and predictions.
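This resampling step can be sketched in a few lines, e.g. with SciPy's matrix exponential and logarithm:

```python
import numpy as np
from scipy.linalg import expm, logm

def resample_operator(K: np.ndarray, omega: float, omega_bar: float) -> np.ndarray:
    """K_bar = exp((omega / omega_bar) log K): turn a Koopman matrix learnt at
    sampling frequency omega into one stepping at the target frequency omega_bar."""
    return np.real(expm((omega / omega_bar) * logm(K)))

# E.g. querying a model trained on 5 Hz data at the 50 Hz target frequency:
# K_bar = resample_operator(K, omega=5.0, omega_bar=50.0)
```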
We report in Table I the results obtained with various training frequencies \(\omega\). Note that these models are trained without the orthogonality loss term from equation (12), since omitting it led to better forecasting performance on this dataset. We compare our results against DeepKoopman models trained on the same data and for which we interpolated linearly in the latent space between the low-frequency time steps to perform a high-frequency prediction. One can see that the quality of our high-frequency predictions depends very little on the training frequency. In contrast, the MSE for DeepKoopman is on par with ours for the highest training frequencies but increases exponentially as the training frequency decreases. Indeed, being specialised in discrete predictions, the DeepKoopman model does barely better than a linear interpolation of the state variable when one tries to use it to interpolate. Our model, however, successfully combines the information of many low-resolution time series to construct a faithful continuous representation of the dynamics. Visual results for one trajectory can be observed in Figure 3.
### _Learning to predict forward and backward_
Here, we investigate the ability of our models to perform backward predictions after being trained on forward prediction. In theory, one can simply invert the learnt matrix \(\mathbf{K}\) and use its inverse to predict past states from future ones.
Fig. 3: Upsampling experiment on fluid flow data. We learn a model on low-frequency data and then use the continuous representation to make a high-frequency prediction which we compare to the groundtruth. Top: our model and DeepKoopman compared to the groundtruth trajectory. Bottom: corresponding mean squared errors over time, in a logarithmic scale.
This is not possible in the DeepKoopman framework since it computes a new matrix \(\mathbf{K}\) as a function of the input at each iteration, and one would need to invert the matrix of the preceding state (which one does not have access to) to predict backwards. Table II reports our mean squared errors: in this table, "HF" means that the model was trained on high-frequency data (50 Hz) while "LF" means that it was trained on low-frequency data (2.5 Hz). One can see that the backward predictions have significantly higher errors on average than forward predictions. This matches the observation by [33] that naively inverting a learnt Koopman matrix is generally not effective. In particular, our model trained on low-frequency forecasting with no orthogonality term quickly diverges from the groundtruth when performing backward predictions. However, our model with an orthogonality regularisation trades some forecasting performance for better backward reconstructions, although it was not trained on this task. Figure 4 shows a typical example for which the model with an orthogonal matrix sticks to the time series while the unregularised one diverges from it.
As will be shown in subsequent experiments, the models trained with an orthogonality regularisation are more stable and versatile than their unregularised counterparts in general, making them more suitable for downstream tasks.
## V Experiments on real Sentinel-2 time series
In this section, we work on multispectral satellite image time series. They consist of successive multivariate images of the forests of Fontainebleau and Orleans in France, which have been taken by the Sentinel-2 satellites as a part of the European Copernicus program [49] over a duration of nearly 5 years. We use the reflectance from \(L=10\) visible and infrared spectral bands at a spatial resolution of 10 meters, resorting to bicubic interpolation for those that were originally at a 20 meter resolution. Although the satellites have a revisit time of five days, many images are unexploitable due to the presence of too many clouds between the satellite and the surface. Therefore, we performed temporal interpolation of the available data to obtain complete versions of the time series, along with the original incomplete versions for training on irregularly sampled data and assimilation purposes. The interpolation is performed with the Cressman method, which fills each missing value with a normalized sum of the available data, weighted by a Gaussian function of the temporal distance to the filled time, with a Gaussian radius of 15 days. We show RGB compositions of sample images from these time series in figure 5. Further details are available in [50], and the data is freely accessible from github.com/anthony-frion/Sentinel2TS. Throughout this section, the reported mean squared errors are averaged over all spectral bands, pixels and available times.
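A sketch of this interpolation scheme is given below; since the description above does not fully pin down the weighting, we assume the 15-day radius plays the role of the Gaussian standard deviation:

```python
import numpy as np

def gaussian_temporal_interpolation(t_obs, v_obs, t_out, radius=15.0):
    """Fill each missing date with a normalised Gaussian-weighted sum of the
    available data. v_obs: array of shape (K, ...) observed at times t_obs;
    returns interpolated values at every time in t_out."""
    t_obs = np.asarray(t_obs, dtype=float)
    out = []
    for t in t_out:
        w = np.exp(-0.5 * ((t_obs - t) / radius) ** 2)   # Gaussian weights
        out.append(np.tensordot(w / w.sum(), v_obs, axes=1))  # weighted average
    return np.stack(out)
```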
### _Forecasting_
First, we train a model to perform predictions from an initial condition on the 10-dimensional reflectance vector of a given pixel. Our encoder \(\phi\) is a multi-layer perceptron with hidden layers of size 512 and 256, and the decoder \(\psi\) has a symmetric architecture. The latent space and matrix \(\mathbf{K}\) have a size of 32.
The model is trained on the interpolated version of the Fontainebleau dataset, containing 343 images, of which we use the first 243 for training and the last 100 for validation.
Fig. 4: Backward reconstructions of a test time series from a model trained with an orthogonality loss term (orthogonal) and a model trained without it (unregularised). Both models were trained on high-frequency forecasting.
Fig. 5: Left: a temporally interpolated Fontainebleau image. Right: a non-interpolated Orleans image. The date for both images is 2006/2018. Those are RGB compositions with saturated colors. The red square is the \(150\times 150\) pixel training area and the blue squares are test areas.
Note that we do not train our model on 243-time-step pixel reflectance time series but on time series of length 100 extracted from the data. In this way, our model learns to predict from any initial condition rather than just from the initial time of the dataset. Therefore, a trained model is able to model the long-term reflectance dynamics of a pixel from only 2 observations (because we use an augmented state as presented in equation (8)). However, one can obtain a much more accurate forecast by taking into account a higher number of observations for a given pixel. Namely, one can try to predict the future dynamics of a pixel given a time series representing its past behavior. We perform this task using the variational cost from equation (22), where the set \(H\) of observed time indices contains all positive integers up to time \(T_{train}=242\) (i.e. the training data), while the groundtruth time series extends the observations up to time index \(T_{val}=342\). We investigate minimising this cost with no spatial regularisation or with a simple Tikhonov regularisation favoring the spatial smoothness of the resulting time series. More precisely, we seek to solve
\[\mathbf{z}_{0}^{*}=\underset{\mathbf{z}_{0}\in\mathbb{R}^{N\times d}}{\text{ arg min}}\sum_{t=0}^{T_{train}}||\mathbf{\tilde{x}}_{t}-\mathbf{x}_{t}(\mathbf{z}_{0})|| ^{2}+\beta S(\mathbf{x}_{t}(\mathbf{z}_{0})) \tag{25}\]
where we again note \(\mathbf{x}_{t}(\mathbf{z}_{0})=\psi(\mathbf{K}^{t}\mathbf{z}_{0})\), and \(S(\mathbf{x})\) is a smoothness prior, penalizing the square of the spatial gradient of the resulting images through first order finite differences. The case with no spatial regularisation simply corresponds to \(\beta=0\). Once \(\mathbf{z}_{0}^{*}\) has been computed, one can use it to extend the generated time series at will by simply using higher powers of \(\mathbf{K}\). We will refer to this technique as assimilation-forecasting. We solve this equation with gradient descent, using \(\mathbf{z}_{0}=\phi(\tilde{\mathbf{x}}_{0})\) as a starting point, which is more effective than a random starting point. If \(\mathbf{z}_{0}^{*}\) were equal to \(\phi(\tilde{\mathbf{x}}_{0})\), this would be equivalent to simply performing a forecast from the initial state \(\tilde{\mathbf{x}}_{0}\), which is not the case in practice. One can observe assimilation-forecasting results in a 3-dimensional PCA projection of a \(100\times 100\) subcrop of the data in figure 6 and for a particular pixel in figure 7. Note that, even though all the pixels correspond to a forest environment, one can visually see that there are various long-term patterns among the pixels, and that our model can capture them all.
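The smoothness prior \(S\) used here can be sketched as follows, assuming the pixel grid is kept explicit:

```python
import torch

def spatial_smoothness(x):
    """Tikhonov spatial prior: squared first-order finite differences of each
    image, for x of shape (T, H, W, L)."""
    dh = x[:, 1:, :, :] - x[:, :-1, :, :]   # vertical differences
    dw = x[:, :, 1:, :] - x[:, :, :-1, :]   # horizontal differences
    return (dh ** 2).sum() + (dw ** 2).sum()
```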
We also try to forecast the Orleans time series in the same way with a model trained on the forest of Fontainebleau in order to test the zero-shot transfer performance of our model. Note that, in addition to being unseen during training, the Orleans data is irregularly sampled, which makes it harder to predict with assimilation-forecasting. All quantitative forecasting performances of our model are reported in table III, along with the performance of a long short-term memory network (LSTM) trained on the same Fontainebleau data. As previously mentioned, using assimilation-forecasting is far more effective than a simple prediction from an initial state. Using an additional smoothness prior further improves the results. One can also see that assimilation-forecasting achieves better results than an LSTM for long-term forecasting of this dataset, mainly because the LSTM is better suited to short-term prediction and tends to accumulate errors after multiple steps of nonlinear computation, as can be qualitatively assessed in figure 7. In many cases, the long-term predictions of the LSTM diverge in amplitude. In contrast, our model, being driven by a nearly orthogonal matrix, is very stable in the long term and well suited to time series with a pseudo-periodic pattern.
Additionally, we noticed that training a model without the orthogonality loss term results in slightly better results for naive prediction from a given initial condition but far worse assimilation-forecasting results. This is in line with the results from section IV-B and confirms that the models with a nearly orthogonal matrix are better for performing downstream tasks.
### _Interpolation through data assimilation_
We now move on to performing interpolation tasks. As previously mentioned, the satellite image time series are usually incomplete since most of the observations are too cloudy to be exploited. Therefore, one often has to interpolate them in time to work with regularly sampled data. Here, we perform variational data assimilation, using our data-based model to constrain the search. The variational cost, in the framework of equation (23), is minimised jointly on the latent initial condition and on the parameters of the pre-trained model.
Fig. 6: Top: groundtruth images of Fontainebleau, corresponding to test times. Middle: predictions made by our model by assimilating the time series up to day 1200 with a trained model. Bottom: Same as middle but including a spatial regularisation in the variational cost. The colors result from a 3-dimensional principal component analysis (PCA) of the 10 spectral bands performed globally on all the Fontainebleau data. This is much more informative than an RGB composition, mainly because vegetation is very reflective in the near-infrared domain.
Fig. 7: Predictions of different methods for the reflectance of the B7 band (in near infrared). The vertical line marks the limit between the training and the validation data. The pixel of interest is marked by a red dot on the Fontainebleau image from figure 5.
Here, we test on raw incomplete data while our model was trained only on interpolated data from the forest of Fontainebleau. We consider the areas in the blue squares from figure 5. In both cases, we have at our disposal a set of around 85 \(100\times 100\times 10\) images, each with its associated time index, irregularly sampled over a duration of 342 time steps (i.e. nearly 5 five years). From this set, we randomly mask half of the images which we use for the interpolation, while keeping the other half as a groundtruth to evaluate the quality of the computed interpolation. As a point of comparison, we seek the periodic pattern that best matches the available data, with a temporal smoothness prior using Tikhonov regularisation in the time dimension. Namely, we solve
\[\underset{\mathbf{x}\in\mathbb{R}^{p\times N\times L}}{\text{arg min}}\sum_{t\in H}||\tilde{\mathbf{x}}_{t}-\mathbf{x}_{t\%p}||^{2}+\alpha S_{t}(\mathbf{x}), \tag{26}\]
where \(p\) is the known pseudo-period of one year, we use the notation \(t\%p\) for the remainder of the Euclidean division of \(t\) by \(p\), and \(S_{t}\) is the temporal smoothness prior. This method, which we call "periodic interpolation", is a strong baseline since it explicitly leverages the physical knowledge of the pseudo-period of the data. Yet, even if it was not trained on this data but only fine-tuned on it, our model achieves better results, as can be seen quantitatively in table IV and for a particular pixel of the forest of Orleans in figure 8. We show in the table the mean and standard deviation of the mean squared errors over 20 randomly computed masks.
Since the baseline method obtains much better results on Fontainebleau than on Orleans, the former seems to be easier to interpolate, yet the gap between this method and ours is larger on this area, since it is closer to the training data.
### _Training on an irregular time series_
As our final experiment, we investigate the training of our architecture on an irregular version of the Fontainebleau time series. This corresponds to the relatively simple setting mentioned in subsection III-B since this time series results from a regular sampling from which some observations have been removed because they were not usable. We were therefore able to optimise directly on the discrete operator \(\mathbf{K}\) rather than on its continuous counterpart \(\mathbf{L}\). As explained in subsection III-B, one just has to adapt the prediction, auto-encoding and linearity loss terms by computing them only for time delays for which the groundtruth is available. We were able to obtain satisfying results in this way, although the computed model is not as good as when training on interpolated time series. This tends to suggest that training our model directly on irregular time series can be a possibility when it is not possible to perform an interpolation as a pre-processing step.
We found that training on irregular data makes our model more subject to overfitting. Indeed, the model is not forced to predict a smooth evolution anymore but only to be able to correctly reconstruct some sparsely located points. Therefore, all regularisation terms that we presented in subsection III-A are very important to get the most out of this dataset. To support this claim, we performed an ablation study in which we tested different loss functions: the complete loss with the 4 terms from equations (9)-(12) and 4 versions where one of the terms has been removed. For each version of the loss function, we trained models on the Fontainebleau data from 5 different initialisations. We then retrieved the mean and standard deviations of the mean squared errors obtained when performing assimilation-forecasting as in subsection (V-A). The results are presented in table V. One can see that the final results on the Fontainebleau area largely depend on the model initialisation, yet both the mean and the standard deviation of the MSE are lower when using all loss terms.
Fig. 8: Comparison of interpolations for an Orleans pixel on the B7 band, using periodic interpolation and using data assimilation with our model trained on Fontainebleau data. We show an interpolation with no spatial regularisation (as can be found in [50]) and an interpolation with an additional spatial regularisation in the variational cost. The pixel of interest is marked by a red dot on the Orleans image on figure 5.
We show qualitative results of the assimilation-forecasting of irregular data using one of our models trained on irregular data with the complete loss function in figure 9. We emphasize that the blue curve is only a Cressman interpolation of the groundtruth points and should not be seen as a groundtruth here. Our model fits the training points well and, in some way, performs a smoother interpolation between those than the Cressman interpolation that was used to obtain the regularly-sampled data from section V-A.
We emphasize that, when tested on the same irregular test data, models trained on interpolated Fontainebleau data have better interpolation performance but lower forecasting performance than models trained on irregular Fontainebleau data. Using interpolation as a pre-processing step is not a trivial choice since models trained on these data will learn the interpolation scheme along with the true data. However, it can be seen as a form of data augmentation.
## VI Discussion
In the assimilation results presented in the previous section, adding a regularisation on the spatial gradient generally only results in a modest improvement of the mean squared error compared to using no regularisation. Since the chosen regularisation was a very basic one, this suggests that our predictions could gain more from spatial information. One could imagine using a more complex spatial prior, e.g. a data-driven one like in [39]. One could also use an image as the input of the Koopman auto-encoder instead of a single pixel, which would enable it to directly leverage spatial information (e.g. with a convolutional architecture) but might be more difficult and less suited to pixel-level downstream tasks. Another possibility is to train a convolutional neural network to correct the residual errors made when recomposing images from our pixelwise model. This approach has been presented in [50] and it can indeed improve the results. Yet, it is much less flexible and elegant since such a CNN needs to be trained for every new model or task considered, for which we do not necessarily have enough time or data.
The weights of the prior terms in assimilation should be chosen carefully, although a slightly inaccurate choice is unlikely to severely affect the results. A good way to proceed, when possible, is to use a set of validation data to set the parameters \(\alpha\) and \(\beta\) from equations (21)-(23). Regarding the choice of allowing the parameters of the pretrained model to vary when performing data assimilation (i.e. solving the problem from equation (22) or (23)), it seems from our experiments that: (1) It is more beneficial to make the parameters vary when working with data that differ from what could be found in the training dataset. This can be seen as fine-tuning the model. (2) Allowing the model parameters to vary is effective when performing interpolation, but riskier for forecasting. An explanation could be that a slightly modified model keeps its tendency to generate smooth trajectories but not necessarily its long-term stability. One could investigate extensions of the framework of equation (23) with e.g. an orthogonality term to make sure that the modified model remains stable.
## VII Conclusion
In this paper, we presented a method to jointly learn a Koopman invariant subspace and an associated Koopman matrix of a dynamical system in a data-driven way. We showed that this method makes it possible to learn a continuous representation of dynamical systems from discrete data, even in difficult contexts where the data are sparsely or irregularly sampled. In addition, it was demonstrated that a trained model is not only useful to forecast the future state of a dynamical system but also to solve downstream tasks. Indeed, we used the forward prediction as a pretext task to learn general useful information about the dynamical system in a self-supervised way. Since our architecture is fully differentiable, we showed how this information can be leveraged to solve inverse problems using variational data assimilation.
A possible extension of our work is to introduce a control variable in order to better predict the state of systems on which we know that some information is lacking. For example, precipitation data could be used as a control variable to better predict the vegetation reflectance. For image data specifically, one could make a better use of the spatial structure of the images by learning a complex spatial prior that would be coupled to the dynamical prior or by directly learning an end-to-end model that takes into account both dynamical and spatial information. Finally, a stochastic extension of our framework would make it able to output distributions of possible trajectories rather than single predictions.
## Appendix A Using a (special) orthogonal matrix for the Koopman operator leads to periodic dynamics
Let us assume that we have found a KIS that leads us to consider linear Koopman dynamics in a latent space given by \(\mathbf{z}_{t}\in\mathbb{R}^{d}\). Let us further assume the discrete dynamics are given by
\[\mathbf{z}_{t+1}=\mathbf{K}\mathbf{z}_{t} \tag{27}\]
where \(\mathbf{K}\in\mathcal{SO}(d)\), i.e. \(\mathbf{K}\) belongs to the special orthogonal group: the group of invertible matrices satisfying \(\mathbf{K}\mathbf{K}^{T}=\mathbf{K}^{T}\mathbf{K}=\mathbf{I}\) and with determinant equal to \(+1\).
First, we note that the norm of the iterates \(\mathbf{z}_{t}\) remains equal to that of the initial condition \(\mathbf{z}_{0}\). Indeed:
\[\left\|\mathbf{z}_{t+1}\right\|^{2}=\left\|\mathbf{K}\mathbf{z}_{t}\right\|^ {2}=\mathbf{z}_{t}^{T}\mathbf{K}^{T}\mathbf{K}\mathbf{z}_{t}=\mathbf{z}_{t}^{T} \mathbf{z}_{t}=\left\|\mathbf{z}_{t}\right\|^{2} \tag{28}\]
Fig. 9: Forecasting results on the B7 band, with irregular data from the forest of Fontainebleau. The considered pixel is marked by a blue dot on figure 5.
and it is easy to see by induction that every iterate's norm is equal to \(\|\mathbf{z}_{0}\|\). So the dynamics remain on a sphere of radius \(\|\mathbf{z}_{0}\|\).
Besides, \(\mathcal{SO}(d)\) is a Lie group, whose Lie algebra \(\mathfrak{so}(d)\) is the set of skew-symmetric matrices of size \(d\). Furthermore, as \(\mathcal{SO}(d)\) is compact, the exponential map \(\exp:\mathfrak{so}(d)\rightarrow\mathcal{SO}(d)\), corresponding here to the matrix exponential, is surjective [51]. This means that any special orthogonal matrix can be written as the matrix exponential of a skew-symmetric matrix \(\mathbf{L}\): \(\exp(\mathbf{L})=\mathbf{K}\). Equivalently, a skew-symmetric matrix logarithm of a special orthogonal matrix always exists. Under these conditions, we can see that \(\mathbf{z}_{t+1}\) is the solution to the following ODE, representing the same dynamics in continuous time:
\[\frac{d\mathbf{z}}{dt}=\mathbf{L}\mathbf{z} \tag{29}\]
with \(\mathbf{z}(0)=\mathbf{z}_{t}\). We proceed to show that the dynamics generated by this ODE must be periodic.
\(\mathbf{L}\) is a skew-symmetric matrix, to which the spectral theorem applies: it can be diagonalized in a unitary basis, and its eigenvalues must be purely imaginary. There exists \(\mathbf{U}\in\mathcal{U}(d)\) such that:
\[\mathbf{L}=\mathbf{U}^{*}\mathbf{D}\mathbf{U} \tag{30}\]
with \(\mathbf{D}=\text{diag}(i\alpha_{1},i\alpha_{2},...,i\alpha_{d}),\alpha_{k}\in \mathbb{R}\). By denoting \(\mathbf{K}^{\tau}=\exp(\tau\mathbf{L})\) (giving \(\mathbf{z}(\tau)\) by matrix multiplication with \(\mathbf{z}_{0}\)), we can write:
\[\mathbf{K}^{\tau}=\exp(\tau\mathbf{L})=\mathbf{U}^{*}\exp(\tau\mathbf{D}) \mathbf{U} \tag{31}\]
If we write out \(\mathbf{K}^{\tau}_{rs}\), denoting as \(\mathbf{u}_{r}\) the \(r^{\text{th}}\) column of \(\mathbf{U}\), we get
\[\mathbf{K}^{\tau}_{rs}=\mathbf{u}^{*}_{r}\exp(\tau\mathbf{D})\mathbf{u}_{s}= \sum_{k=1}^{d}u^{*}_{kr}\exp(i\tau\alpha_{k})u_{ks}. \tag{32}\]
The exponential factor is periodic of period \(\frac{2\pi}{\alpha_{k}}\). Hence each entry of \(\mathbf{K}^{\tau}\) is a linear combination of periodic functions. Mathematically, such a linear combination is only periodic when all the ratios between pairs of periods of the summands are rational. For all practical purposes, however, when numbers are represented with finite precision in a computer, such a linear combination can itself be seen as periodic. Finally, the same argument shows that the matrix \(\mathbf{K}^{\tau}\) as a whole is periodic.
This shows that a linear dynamical system specified by a skew-symmetric matrix (when continuous) or by a special orthogonal matrix (when discrete) exhibits periodic dynamics. Note that this property carries over to any time-independent transformation of \(\mathbf{z}_{t}\): for any regular enough function \(\psi\), \(\psi(\mathbf{z}(t))\) will itself be periodic with the same period as \(\mathbf{z}(t)\).
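As a quick numerical illustration of the above (our own sketch, not code from the paper), the snippet below builds a skew-symmetric generator whose two rotation frequencies are commensurate, exponentiates it into \(\mathcal{SO}(4)\), and checks both the norm conservation of Eq. (28) and the exact periodicity of the iterates:

```python
import numpy as np
from scipy.linalg import expm

def rot_gen(alpha):
    """Generator of a 2D rotation by angle alpha per step."""
    return alpha * np.array([[0.0, -1.0], [1.0, 0.0]])

# Block-diagonal skew-symmetric L with commensurate frequencies
# (period ratio 100/50 is rational, so the dynamics are exactly periodic).
L = np.zeros((4, 4))
L[:2, :2] = rot_gen(2 * np.pi / 100)
L[2:, 2:] = rot_gen(2 * np.pi / 50)
K = expm(L)                                   # K in SO(4)
assert np.allclose(K @ K.T, np.eye(4)) and np.isclose(np.linalg.det(K), 1.0)

z0 = np.array([1.0, 0.0, 0.5, -0.5])
z = z0.copy()
for _ in range(100):                          # one full period of the slow block
    z = K @ z
    assert np.isclose(np.linalg.norm(z), np.linalg.norm(z0))  # stays on the sphere
assert np.allclose(z, z0)                     # back to the start: periodic dynamics
```

With incommensurate frequencies the trajectory is only quasi-periodic, which is exactly the finite-precision caveat discussed above.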
|
2303.17748 | MLGCN: An Ultra Efficient Graph Convolution Neural Model For 3D Point
Cloud Analysis | The analysis of 3D point clouds has diverse applications in robotics, vision
and graphics. Processing them presents specific challenges since they are
naturally sparse, can vary in spatial resolution and are typically unordered.
Graph-based networks to abstract features have emerged as a promising
alternative to convolutional neural networks for their analysis, but these can
be computationally heavy as well as memory inefficient. To address these
limitations we introduce a novel Multi-level Graph Convolution Neural (MLGCN)
model, which uses Graph Neural Networks (GNN) blocks to extract features from
3D point clouds at specific locality levels. Our approach employs precomputed
graph KNNs, where each KNN graph is shared between GCN blocks inside a GNN
block, making it both efficient and effective compared to present models. We
demonstrate the efficacy of our approach on point cloud based object
classification and part segmentation tasks on benchmark datasets, showing that
it produces comparable results to those of state-of-the-art models while
requiring up to a thousand times fewer floating-point operations (FLOPs) and
having significantly reduced storage requirements. Thus, our MLGCN model could
be particularly relevant to point cloud based 3D shape analysis in industrial
applications when computing resources are scarce. | Mohammad Khodadad, Morteza Rezanejad, Ali Shiraee Kasmaee, Kaleem Siddiqi, Dirk Walther, Hamidreza Mahyar | 2023-03-31T00:15:22Z | http://arxiv.org/abs/2303.17748v1 | # MLGCN: An Ultra Efficient Graph Convolution Neural Model for 3D Point Cloud Analysis
###### Abstract
The analysis of 3D point clouds has diverse applications in robotics, vision and graphics. Processing them presents specific challenges since they are naturally sparse, can vary in spatial resolution and are typically unordered. Graph-based networks to abstract features have emerged as a promising alternative to convolutional neural networks for their analysis, but these can be computationally heavy as well as memory inefficient. To address these limitations we introduce a novel Multi-level Graph Convolution Neural (MLGCN) model, which uses Graph Neural Networks (GNN) blocks to extract features from 3D point clouds at specific locality levels. Our approach employs precomputed graph KNNs, where each KNN graph is shared between GCN blocks inside a GNN block, making it both efficient and effective compared to present models. We demonstrate the efficacy of our approach on point cloud based object classification and part segmentation tasks on benchmark datasets, showing that it produces comparable results to those of state-of-the-art models while requiring up to a thousand times fewer floating-point operations (FLOPs) and having significantly reduced storage requirements. Thus, our MLGCN model could be particularly relevant to point cloud based 3D shape analysis in industrial applications when computing resources are scarce.
## 1 Introduction
With advances in 3D acquisition technologies, 3D sensors are becoming more accessible and cost-effective. Sensors including 3D scanners, LiDARs, and RGB-D cameras (e.g., RealSense, Kinect, and Apple depth cameras) provide a wealth of information about the shape, scale, and geometry of objects in the environment. Consequently, there has been an increasing need to develop algorithms and models for point cloud analysis, and 3D model classification and segmentation have become active areas of research in machine learning and computer vision. Deep learning techniques have proven to be highly effective for this task due to their ability to learn rich features and representations from raw data. However, most existing 3D deep learning models rely on large and complex architectures, making them computationally expensive and unsuitable for real-time applications, such as augmented reality, robotics, and autonomous driving.
Most sensors on modern 3D perception devices acquire data in the form of point clouds and, traditionally, researchers sample this data on voxel grids for 3D volumetric convolutions. However, the use of low-resolution grids can result in information loss, e.g., when multiple points fall within the same voxel. To preserve necessary detail in the input data, a high-resolution representation is preferable, but this can lead to an increase in computational costs and memory requirements. Because the 3D point clouds acquired by sensors are unordered and sparse, models that process them must be permutation agnostic and multi-scale. Although classical Convolutional Neural Network (CNN) models have been effective for image-based computer vision problems, they cannot be directly applied to 3D point cloud analysis.
In recent years, numerous powerful models have been proposed to analyze point clouds [1, 2, 3, 4, 5, 6, 7, 8]. Most of these models, however, suffer from a significant drawback: they are typically too complex in terms of parameters and require a large number of mathematical operations, making them unsuitable for industrial use or deployment on lightweight compute devices. Specifically, many of them need to calculate graphs of connectivity on top of point clouds multiple times, resulting in a large number of Floating Point Operations (FLOPs).
Our work addresses the above limitations by introducing a lightweight model that can be trained easily and deployed on low-memory and low-end CPU devices. Instead of relying on complex structures, such as attention mechanisms or deep stacks of feature extraction blocks, which require a large amount of training data and are susceptible to over-fitting, our proposed model (see Figure 1) consists of multiple shallow graph-based network blocks that capture information from point clouds using different graph KNNs. The use of different KNN graphs combined with shallow GNNs can alleviate the over-smoothing issue caused by deep GCNs [9, 10]. Furthermore, utilizing precomputed shared graph KNNs, within a GNN block, greatly reduces the number of floating point operations. This architecture offers an efficient solution for processing point clouds without compromising accuracy, making it practical for real-world applications. Our paper makes the following contributions:
1. We propose a multi-branch graph-based network that effectively captures features at various spatial locality levels of 3D objects, using efficiently designed and lightweight graph-based neural network blocks.
2. We show that our models are significantly more efficient, both in terms of computation and storage, than existing approaches for abstracting features from graphs for downstream computer vision tasks.
3. We conduct a series of ablation studies to analyze the impact of different branches on our model's performance, to gain insight into the role that specific branches play.
Figure 1: Top: The overall architecture of our 3D point cloud processing model, which is designed to be lightweight and efficient for deployment on low-memory, low-CPU devices. Points sampled from an object are fed to GNN blocks for computing features at various spatial locality levels, which are subsequently used for downstream tasks. Bottom: the design of our GNN and GCN blocks, which are the building blocks of our proposed 3D object processing model. One can include as many GCN blocks as needed, where ‘+’ denotes the concatenation operation.
## 2 Related Work
Over the past few years, the field of deep learning has seen a surge in research efforts aimed at developing effective methods for analyzing sensor data. Methods that are designed for 2D images cannot be directly applied to 3D point clouds, which can be sparse, nonuniform in density, and lack local spatial ordering. A promising neural network model for 3D shape analysis in this setting is PointNet [11]. Unlike previous methods that transform point cloud data to regular 3D voxel grids or collections of images, PointNet processes point cloud data directly, extracting information from individual points and aggregating it into a feature vector using Global Max Pooling.
The PointNet model's inability to capture local structures induced by the metric space limits its ability to represent fine-grained patterns and to generalize to complex scenes. To address this issue, PointNet++ [1] applies PointNet recursively on nested partitions to extract local features, then combines the learned features across multiple scales.
In [2], the GBNet combines channel affinity modules and CNN networks to improve the representation of point clouds, while learning both local and global features. GBNet utilizes an error-correcting feedback structure to design a back-projection CNN module. In [3], medial spectral coordinates are added as additional features to the 3D coordinates of the point cloud. These coordinates contain both local and global features, resulting in improved performance of vanilla models on computer vision tasks.
The PointMLP model [4] utilizes MLPs to gather local information from points in a hierarchical manner without using a local feature extractor. Additionally, it employs lightweight affine modules to transform information from points to a normal distribution.
GNNs have the unique ability to handle topologically-structured data without requiring explicit encoding into vectors, by capturing graph-based information [12], making them an ideal candidate for the efficient processing of point clouds. The authors of [5] proposed the DGCNN model where an EdgeConv neural network module incorporates local information around each point, and is then stacked to learn global shape properties using Graph Convolutional Networks.
Zhang and colleagues [13] enhanced DGCNN by introducing LDGCNN, which links hierarchical features from different dynamic graphs to calculate informative edge vectors. They removed the transformation network from DGCNN and showed that an MLP can extract transformation-invariant features. They further improved performance by freezing the feature extractor and retraining the classifier.
As attention mechanisms gained momentum in capturing node representation on graph-based data, Chen and colleagues proposed the GAPNet model [6] which embeds a graph attention mechanism within stacked MLP layers to learn local geometric representations. The GAP-Layer employs an attention-based graph neural network to consider the importance of each neighbor of a point.
The DGANET model [7] uses an improved KNN search algorithm to construct a local dilated graph for each point, modeling long-range geometric correlations with its neighbors. This helps the point neural network to learn more local features of each point, with a larger receptive field during the convolution operation. The authors embed an offset-attention mechanism into a dilated graph attention module and employ graph attention pooling to aggregate the most significant features.
Huang and colleagues [8] propose the Dual-Graph Attention Convolution Network (DGACN), which introduces an improved version of graph attention that leverages information from different hops. They also propose a novel graph self-attention mechanism that extracts more informative features from point clouds.
The Point-transformer [14] model utilizes self-attention to capture local information in the vicinity of each point. In addition, the authors introduce a trainable positional encoding that is learned within the end-to-end network. The approaches of [7, 8, 14, 15] rely on graph attention mechanisms, which are known to be parameter-heavy and can make training and inference computationally expensive.
## 3 The Proposed Method: MLGCN
MLGCN is a multi-level graph neural network model that can capture information from 3D point clouds at different locality levels efficiently. The model consists of multiple GNN blocks, each taking a set of 3D point clouds as input and learning a representation of the 3D dataset. The model then concatenates and uses these features for downstream tasks. We have designated two downstream branches: one for a classification task (i.e., correctly labeling the 3D model), and one for a segmentation task (i.e., decomposing the model into a set of semantically meaningful parts). In this section, we describe the key components of the MLGCN model, a schematic of which is shown in Figure 1. We assume the following point cloud as input to the system:
\[\mathcal{X}=\left\{\mathbf{p}_{i}=(x_{i},y_{i},z_{i})\in\mathbb{R}^{3}\text{ for }i=1,2,\cdots,N\right\} \tag{1}\]
### KNN Graphs
Given 3D point cloud data, the model forms a set of KNN graphs, where nodes represent 3D points and each node is connected to its \(k\) closest nodes by edges. The parameter \(k\) defines the locality level around each point within which local neighborhood information is collected. A distinctive aspect of our KNN graph usage is that the graph is computed once for an input \(\mathcal{X}\) and then reused for the outputs \(Y\) of the various subsequent blocks. This approach saves computation time and resources, making our model very efficient. The edge connectivity from the KNN graph is used to decide whether information (messages) is passed over an edge, allowing the model to capture the global features of the input data. Overall, the KNN graph used in our MLGCN model provides a way to explore the local structure of 3D point clouds as well as capture global features efficiently and effectively. To formalize the KNN graph construction, we define the graph \(\mathcal{G}_{k}\) as:
\[\mathcal{G}_{k}=(\mathcal{X},E_{k}) \tag{2}\]
where \(\mathcal{X}\) represents the nodes of our graph and \(E_{k}\subseteq\mathcal{X}\times\mathcal{X}\) represents the edges. Each node \(\mathbf{p}_{i}\) is connected to another node \(\mathbf{p}_{j}\) if \(\mathbf{p}_{j}\) is among the \(k\) closest neighbors of \(\mathbf{p}_{i}\). The graph is directed and contains self-loops, since each point is counted among its own nearest neighbors (see Figure 1 bottom left).
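To make the construction concrete, here is a minimal sketch of the KNN graph of Eq. (2). This is an illustrative implementation of ours, not the authors' released code; the use of PyTorch and the toy tensor shapes are assumptions:

```python
import torch

def knn_graph(x: torch.Tensor, k: int) -> torch.Tensor:
    """x: (N, 3) point coordinates -> (N, k) indices of the k nearest points.

    A point is at distance 0 from itself, so it appears among its own
    neighbours, producing the self-loops mentioned above.
    """
    dist = torch.cdist(x, x)                      # (N, N) pairwise distances
    return dist.topk(k, largest=False).indices    # (N, k) neighbour indices

points = torch.rand(1024, 3)      # a toy point cloud X
edges = knn_graph(points, k=16)   # computed once, then shared by the GCN blocks
```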
### GNN Block
Each Graph Neural Network (GNN) block takes a 3D point cloud as input and extracts features from it. These features are then concatenated and used for both classification and segmentation tasks. To extract these features, the GNN block applies a series of operations on the input data. First, a multi-layer perceptron (MLP) is applied to transform the input, which is then processed by a series of Graph Convolution Network (GCN) blocks sharing one single KNN graph. If the parameter \(k\) is set to \(0\), the model skips the KNN graph computation and only extracts global information from the point cloud.
Each GCN block processes the input data and then its output is concatenated with its input and passed to the next GCN block, along with the output of the KNN graph. The KNN graph output is shared between GCN blocks in a GNN block. The next GCN block operates similarly to the previous one, processing the concatenated features to extract additional information. This process can be repeated multiple times except for the last GCN block where the input and output vectors are no longer concatenated. In Figure 1 bottom left, we illustrate the GNN block architecture. Here
\[\Gamma(\mathcal{X})=f\left(\text{concat}\{GB_{k_{i}}(\mathcal{X})|i=1,\cdots,m\}\right), \tag{3}\]
where \(f\) is the shared MLP applied to the concatenated outputs of the GNN blocks \(GB(\mathcal{X})\).
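As a minimal illustration of Eq. (3) (our sketch: the block outputs are stand-in random tensors, and the widths follow the Light-MLGCN dimensions quoted in Section 3.6):

```python
import torch
import torch.nn as nn

N = 1024
# Stand-ins for GB_{k_i}(X) with k in {0, 15, 63} (Light-MLGCN's K set).
block_outputs = [torch.rand(N, 128) for _ in range(3)]

f = nn.Sequential(nn.Linear(3 * 128, 256), nn.ReLU())   # shared MLP f
gamma = f(torch.cat(block_outputs, dim=-1))             # Gamma(X): (1024, 256)
```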
### GCN Block
The GCN block in our MLGCN model applies a series of operations on the input data using the KNN graph information that was computed previously. The input data is first processed by a shared multi-layer perceptron. The GCN block then uses the KNN graph information to propagate the input feature information for each node and the nodes it is connected to. This operation allows the model to capture local features of the input data using the precomputed KNN graph. The output of the GCN block is then max pooled. This max pooling operation summarizes the information learned from the input data and allows the model to capture the most important features of the input with respect to the defined locality level \(k\).

Table 1: We carry out a comparison of various models using different metrics, with processing on the ModelNet-40 dataset as the basis for evaluation.

| Method | Input Shape | Model Size (Mega Bytes) | FLOPS (100 Mega) | Parameters (100 Thousands) | Accuracy | GPU Memory |
| --- | --- | --- | --- | --- | --- | --- |
| Pointnet (vanilla) [11] | 1024 | - | 1.5 | 8 | 87.1 | - |
| Pointnet [11] | 1024 | 38 | 4.5 | 35 | 89.2 | 50 |
| Pointnet++ [1] | 1024 | 17 | 8.9 | 14 | 90.7 | 100 |
| GBNet [2] | 1024 | 34 | 98 | 87 | 93.8 | 220 |
| PointMLP [4] | 1024 | 100 | 157 | 132 | **94.5** | 90 |
| DGCNN [5] | 1024 | 21 | 1300 | 18 | 92.9 | 110 |
| LDGCNN [13] | 1024 | 13 | 920 | 10 | 92.9 | - |
| DGANET [7] | 1024 | 6 | - | 15 | 92.3 | - |
| GAPNet [6] | 1024 | 21 | 580 | 19 | 92.4 | 31 |
| DGACN [8] | 1024 | - | 1600 | 240 | 94.1 | - |
| Point-Transformer [14] | 1024 | 82 | - | 140 | 93.7 | 155 |
| Light MLGCN | 1024 | **1.5** | **1.3** | **1.2** | 90.7 | 45 |
| Lighter MLGCN | **512** | **0.4** | **0.2** | **0.3** | 88.6 | **10** |
Our information propagation module uses graph connectivity as follows. We assume our message passing function \(h(\mathbf{p}_{i},\mathbf{p}_{j},Y)\) accepts two nodes \(\mathbf{p}_{i}\), \(\mathbf{p}_{j}\) and passes the information (\(y_{j}\in Y\)) at node \(\mathbf{p}_{j}\) to node \(\mathbf{p}_{i}\), conditioned on the graph neighborhood information, i.e., if \((\mathbf{p}_{i},\mathbf{p}_{j})\in E_{k}\). Here \(E_{k}\) is shared among all GCN blocks that belong to the same GNN block. In Figure 1 bottom right, we show the GCN block architecture.
### Information Processing in the GCN Block
As mentioned previously, a GNN block input is a 3D point cloud \(\mathcal{X}\) where a graph \(\mathcal{G}_{k}=(\mathcal{X},E_{k})\) is made. We now explain how the inputs and outputs of each GCN block are obtained. Let \(y_{i}^{t}\) represent the information from the \(i^{th}\) node of our graph after the \(t^{th}\) GCN block operation is applied on the input. We can formulate \(y_{i}^{t}\) as
\[y_{i}^{t}=A\left(\left\{h(\mathbf{p}_{i},\mathbf{p}_{j},f_{t}(Y^{*(t-1)}))|( \mathbf{p}_{i},\mathbf{p}_{j})\in E_{k}\right\}\right) \tag{4}\]
where \(A\) is the aggregation function and \(f_{t}\) is the \(t^{th}\) shared MLP. The aggregation function used in our pipeline is max pooling (although other aggregation functions could be used as well), applied along the neighborhood axis. For all GCN blocks except the last one, the output \(y_{i}^{t}\) is concatenated with the input to the same GCN block:
\[y_{i}^{*t}=\text{concat}\left(y_{i}^{t},y_{i}^{*(t-1)}\right). \tag{5}\]
For \(t=1\), \(y_{i}^{t}=y_{i}^{*t}=f_{0}(\mathcal{X})\). Now, with the GNN block represented by \(GB(\mathcal{X})\), \(GB(\mathcal{X})=Y^{l}\) where \(l\) is the index of the last GCN block.
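The following self-contained PyTorch sketch shows one possible reading of Eqs. (4)-(5) inside a single GNN block. It is our illustration, not the official implementation; the choice of \(k\), the layer widths, and the ReLU activations are assumptions:

```python
import torch
import torch.nn as nn

class GNNBlock(nn.Module):
    def __init__(self, k=16, in_dim=3, dims=(32, 64, 128)):
        super().__init__()
        self.k = k
        self.f0 = nn.Sequential(nn.Linear(in_dim, dims[0]), nn.ReLU())  # input MLP f_0
        self.gcn_mlps = nn.ModuleList()
        d = dims[0]
        for out in dims[1:]:
            self.gcn_mlps.append(nn.Sequential(nn.Linear(d, out), nn.ReLU()))
            d = d + out                      # concat of output and input, Eq. (5)

    def forward(self, x):                    # x: (N, 3)
        idx = torch.cdist(x, x).topk(self.k, largest=False).indices  # shared graph E_k
        y = self.f0(x)                       # y^{*1} = f_0(X)
        for t, f_t in enumerate(self.gcn_mlps):
            h = f_t(y)                       # f_t(Y^{*(t-1)})
            msgs = h[idx]                    # messages along E_k: (N, k, C)
            agg = msgs.max(dim=1).values     # A = max pool over the neighbourhood axis
            last = t == len(self.gcn_mlps) - 1
            y = agg if last else torch.cat([agg, y], dim=-1)   # Eq. (5)
        return y                             # GB(X) = Y^l

out = GNNBlock()(torch.rand(1024, 3))        # -> (1024, 128)
```

Note how the neighbour indices `idx` are computed once and reused by every GCN step, which is the source of the FLOP savings discussed in Section 5.1.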
### Overall Architecture
Each variation of MLGCN uses a set of GNN blocks with different values of \(k\). Let the first block be a block with \(k=0\), with the purpose of extracting global information for each node. The other blocks can be set to extract information with different locality levels. Now assume we have a set of \(m\) different GNN blocks in our model with \(K=\{k_{1},k_{2},\cdots,k_{m}\}\). As mentioned previously, outputs of all GNN blocks are concatenated and then passed through a shared MLP. From there, the extracted features are pooled and then used in a downstream task, e.g., classification or segmentation.
#### 3.5.1 Classification Branch
We designated a classification branch to classify 3D input models according to different labels. For the classification task, we simply apply a max pooling along the node's axes and pass the outcome to a classifier as follows:
\[\mathcal{L}_{\text{classification}}=\mathcal{C}\left(A\left(\Gamma(\mathcal{X}) \right)\right) \tag{6}\]
where \(\mathcal{L}_{\text{classification}}\) is the set of classification labels, \(\mathcal{C}\) is a classifier and \(A\) is the max pooling function here.
#### 3.5.2 Segmentation Branch
The second designated branch in our overall architecture is dedicated to the part segmentation of the 3D models. For the segmentation task, the model concatenates the information of each node with the pooled global information used in the classification branch, repeated for all nodes:

\[\mathcal{L}_{\text{segmentation}}=\mathcal{C}\left(\text{concat}\left(\text{repeat}\left(A\left(\Gamma(\mathcal{X})\right)\right),\Gamma(\mathcal{X})\right)\right) \tag{7}\]

where \(\mathcal{L}_{\text{segmentation}}\) is the set of segmentation labels, \(\mathcal{C}\) is a classifier and \(A\) is the max pooling function.
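A compact sketch (ours) of the two heads of Eqs. (6)-(7); the feature tensor stands in for \(\Gamma(\mathcal{X})\), and the class/part counts are those of ModelNet-40 and ShapeNetPart:

```python
import torch
import torch.nn as nn

N, C, n_classes, n_parts = 1024, 256, 40, 50
gamma = torch.rand(N, C)                        # stand-in for Gamma(X)

# Eq. (6): global max pool A over the node axis, then a classifier C.
cls_head = nn.Linear(C, n_classes)
logits_cls = cls_head(gamma.max(dim=0).values)  # (n_classes,)

# Eq. (7): repeat the pooled global vector for every node and concatenate it
# with each node's own features before the per-point part classifier.
seg_head = nn.Linear(2 * C, n_parts)
pooled = gamma.max(dim=0).values.expand(N, C)   # repeat(A(Gamma(X)))
logits_seg = seg_head(torch.cat([pooled, gamma], dim=-1))   # (N, n_parts)
```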
### Light-MLGCN & Lighter-MLGCN
Here, we introduce two sample architectures with a MLGCN backbone, Light-MLGCN, and Lighter-MLGCN. These are example models to demonstrate the efficiency of MLGCN-based models. To show this we compare their performance to that of state-of-the-art models that are commonly used for 3D classification and segmentation problems.
Both Light-MLGCN and Lighter-MLGCN utilize multiple GNN blocks with varying \(k\) sizes. This allows them to capture information related to different locality levels without requiring additional trainable parameters to capture the distance from the neighborhood center. Additionally, the \(l\) value for each GNN block is set to 2, resulting in a shallow network that is less susceptible to over-fitting. Moreover, Light-MLGCN computes graphs based on only three features, as the range of \(f_{0}\) is 3, which makes its graph calculation process much faster than that of other existing models. These models share the graph within each GNN block, which results in fewer mathematical operations. Light-MLGCN was trained using hyperparameters of \(K=\{63,15,0\}\), and for each GNN block, \(y^{0}\in\mathbb{R}^{1024\times 3}\), \(y^{1}\in\mathbb{R}^{1024\times 32}\), \(y^{2}\in\mathbb{R}^{1024\times 128}\), and \(\Gamma(\mathcal{X})\in\mathbb{R}^{1024\times 256}\). Conversely, Lighter-MLGCN was trained using hyperparameters of \(K=\{31,7,0\}\), and for each GNN block, \(y^{0}\in\mathbb{R}^{512\times 3}\), \(y^{1}\in\mathbb{R}^{512\times 16}\), \(y^{2}\in\mathbb{R}^{512\times 64}\), and \(\Gamma(\mathcal{X})\in\mathbb{R}^{512\times 128}\).
## 4 Experiments
We now evaluate the performance of our MLGCN models with respect to different metrics. We demonstrate that our models achieve comparable accuracy to existing models in both classification and segmentation tasks while being considerably smaller and faster.
### Implementation Details
We trained our models on a machine with a single P100 GPU with 12GB memory. For the optimization step, we employed the Adam optimizer, setting the batch size to 128. The initial learning rate was 0.001, which was reduced by a factor of 0.997 (\(e^{-0.003}\)) after the \(20^{th}\) epoch.
### Classification
Our primary experiment involves comparing the accuracy and speed of our models on ModelNet-40 [16], a dataset consisting of 9,843 training and 2,468 testing meshed CAD models from 40 distinct categories. In Table 1, we compare our model to several recent and popular models in terms of accuracy, floating-point operations, number of trainable parameters, model storage size, and GPU memory.
As shown in Table 1, when comparing Light-MLGCN with the best model in terms of accuracy, we see that it is more than 100 times more efficient in terms of FLOPS, and is also more than 100 times smaller in terms of the number of parameters and more than 60 times smaller in terms of model size. Whereas it has only 3.8 percent lower classification accuracy on the ModelNet-40 dataset than the best model (PointMLP [4]), Light-MLGCN is considerably faster and more compact.
Among graph-based models, DGACN achieves the highest accuracy but requires 1230 times more FLOPS than our model while only achieving 3.4 percent higher accuracy. Additionally, Lighter-MLGCN achieves comparable accuracy to Light-MLGCN with only a 2.1 percent difference, while being significantly smaller and faster, and processing only 512 points sampled from point clouds. A detailed presentation of our results is in Table 1.
Table 2: A comparison of the results achieved by different models for part segmentation on the ShapeNetPart dataset. The results demonstrate that our proposed model performs comparably to the best-performing models in the literature for part segmentation, and in some cases, even outperforms them. We obtain the best score for 8 out of 16 object classes.

| Method | Class mIoU | Inst. mIoU | aero | bag | cap | car | chair | earphone | guitar | knife | lamp | laptop | motorbike | mug | pistol | rocket | skateboard | table |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pointnet | 80.4 | 83.7 | 83.4 | 78.7 | 82.5 | 74.9 | 89.6 | 73.0 | 91.5 | 85.9 | 80.8 | 95.3 | 65.2 | 93.0 | 81.2 | 57.9 | 72.8 | 80.6 |
| Pointnet++ | 81.9 | 85.1 | 82.4 | 79.0 | 87.7 | 77.3 | 90.8 | 71.8 | 91.0 | 85.9 | 83.7 | 95.3 | 71.6 | 94.1 | 81.3 | 58.7 | 76.4 | 82.6 |
| GBNet | 82.6 | 85.9 | 84.5 | 82.2 | 86.8 | 78.9 | **91.1** | 74.5 | 91.4 | 89.0 | 84.5 | 95.5 | 69.6 | 94.2 | 83.4 | 57.8 | 75.5 | 83.5 |
| PointMLP | **84.6** | **86.1** | 83.5 | 83.4 | 87.5 | **80.5** | 90.3 | 78.2 | 92.2 | 88.1 | 82.6 | 96.2 | **77.5** | **95.8** | **85.4** | **64.6** | 83.3 | 84.3 |
| DGCNN | 82.3 | 85.2 | 84.0 | 83.4 | 86.7 | 77.8 | 90.6 | 74.7 | 91.2 | 87.5 | 82.8 | 95.7 | 66.3 | 94.9 | 81.1 | 63.5 | 74.5 | 82.6 |
| LDGCNN | 82.2 | 84.8 | 84.0 | 83.0 | 84.9 | 78.4 | 90.6 | 74.4 | 91.0 | 88.1 | 83.4 | 95.8 | 67.4 | 94.9 | 82.3 | 59.2 | 76.0 | 81.9 |
| DGANET | 82.6 | 85 | 84.6 | **85.7** | 87.8 | 78.5 | 91.0 | 77.3 | 91.2 | 87.9 | 82.4 | 95.8 | 67.8 | 94.2 | 81.1 | 59.7 | 75.7 | 82.0 |
| GAPNet | 82 | 84.7 | 84.2 | 84.1 | **88.8** | 78.1 | 90.7 | 70.1 | 91.0 | 87.3 | 83.1 | 96.2 | 65.9 | 95.0 | 81.7 | 60.7 | 74.9 | 80.8 |
| MLGCN | 83.2 | 84.6 | **87.4** | 78.2 | 85.6 | 75.6 | 75.9 | **81.1** | **93.1** | **93.2** | **89** | **96.4** | 67.5 | 93.7 | 81.8 | 60.6 | **85.2** | **87.6** |
### Segmentation
In addition to the 3D classification problem, we also evaluated the performance of our models on the part segmentation task using the ShapeNetPart dataset [17]. This dataset contains 16,881 3D shapes from 16 different classes, where each class has 2 to 6 parts, resulting in a total of 50 different parts. Our objective is to demonstrate that our lightweight model can achieve competitive (or even better) results while remaining significantly smaller in size than other existing models. To ensure a fair comparison with previous work, we trained and tested our model on samples comprising 2048 points each, using the same settings as those in other papers. The results are presented in Table 2, which shows that our model achieves comparable performance with other state-of-the-art models, despite being much smaller in size.
Moreover, to provide a visual representation of our model's output, we compared its output labels to the ground truth in Figure 2. The results show that our model is able to accurately segment the parts of the 3D objects, further demonstrating its efficacy for this task.
## 5 Ablation Studies
We now examine details of our models and demonstrate that they are much more efficient than the other existing models.
### FLOPS Required for Each Operation
In many graph-based models, graph calculation is one of the most computationally intensive operations. To calculate the KNN graph, the K-nearest neighbor algorithm is used to find the nearest neighbors of each point. This results in a computational complexity of \(O(n^{2}\times d)\), where \(n\) is the number of points and \(d\) is the length of the feature vector of each point. This complexity can have a significant impact on the number of floating-point operations required for a graph-based model.
Table 3 demonstrates that graph calculation can be highly resource-intensive when dealing with a large number of points and features. For instance, the FLOPS required to calculate graphs using KNN can increase dramatically as the number of points and features increase. In contrast, Light-MLGCN employs shared graphs on small feature vectors for multiple GCNs, resulting in reduced computational overhead. As a result, Light-MLGCN is able to achieve comparable performance to other state-of-the-art models while being much faster and smaller in size.
Most current graph-based models used for this specific problem require multiple instances of graph extraction on point clouds with 32 to 128 features. This can result in a large number of floating-point operations, which can lead to reduced performance and longer training times. As shown in Table 1, graph-based models generally require significantly more floating-point operations than non-graph-based models.
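For intuition, the following back-of-the-envelope estimator (ours) reproduces several Table 3 entries. The counting conventions — one multiply-accumulate per dense weight and roughly \(1.5\,n^{2}d\) operations per graph build — are inferred from the reported numbers rather than stated explicitly in the paper:

```python
def dense_flops(n: int, c_in: int, c_out: int) -> float:
    """Multiply-accumulates of one shared (point-wise) dense layer."""
    return n * c_in * c_out

def knn_graph_flops(n: int, d: int) -> float:
    """Approximate cost of the pairwise distances behind one KNN graph build."""
    return 1.5 * n * n * d

for n, d in [(1024, 32), (1024, 128), (1024, 512), (2048, 512)]:
    print(f"graph ({n},{d}): {knn_graph_flops(n, d) / 1e6:.0f} MFLOPs")
# -> 50, 201, 805 and 3221 MFLOPs, matching the Graph Calculation rows
print(f"dense: {dense_flops(1024, 512, 1024) / 1e6:.0f} MFLOPs")   # -> 537
```

The quadratic growth in \(n\) is why sharing one small-dimensional graph per GNN block, rather than rebuilding graphs on wide intermediate features, dominates the savings.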
### Performance of MLGCN model with Various Input Shapes
While Light-MLGCN was primarily designed to operate on 1024 points and Lighter-MLGCN on 512 points, both models can be tested on other sampled point cloud sizes. This section aims to demonstrate the effectiveness of our models with different point cloud shapes. We show that our models can perform well even on sparser point clouds. To get a better sense of this, we tested both of our models with input sizes of 128, 256, 512, and 1024 and present the number of FLOPS and their corresponding accuracies in Table 4.

Figure 2: The top row shows the ground truth segmentation, while the bottom row displays the predicted class output label using our MLGCN model.
As shown in this table, the simplicity and shallow structure of both Light-MLGCN and Lighter-MLGCN enable them to be trained on smaller point cloud samples without over-fitting, resulting in high accuracy even when using much fewer 3D point cloud sample points. This demonstrates the flexibility of our models and their ability to perform well under varying input conditions.
### MLGCN as an Encoder
Our proposed MLGCN model can also serve as an encoder model for encoding 3D point clouds and extracting meaningful features. To evaluate this hypothesis, we extracted the information of the classification MaxPool (\(\Gamma(\mathcal{X})\)) branch (before the classifier) and projected it into a lower-dimensional space to examine how these features separate between different classes of 3D models. Figure 3 presents a 2D t-SNE visualization of the projection of feature vectors generated by our model when tested on the ModelNet-40 dataset onto a 2-dimensional space. The figure clearly demonstrates that our model can effectively cluster each class of 3D objects into a separate cluster, indicating the ability of the model to extract and encode meaningful features from 3D point clouds. It should be noted that Z-score outlier detection was applied to the data. The figure suggests that our proposed model can serve as a robust encoder model for extracting features from 3D point clouds.
Table 4: The performance of the MLGCN model can vary with different input shapes. In order to evaluate the robustness of the model under different input conditions, we conducted experiments with various input shapes and analyzed the results.

| Model | Input Shape | FLOPS (Giga) | Accuracy |
| --- | --- | --- | --- |
| Light-MLGCN | 1024 | 0.13 | 90.7 |
| Light-MLGCN | 512 | 0.06 | 89.5 |
| Light-MLGCN | 256 | 0.03 | 88.4 |
| Light-MLGCN | 128 | 0.014 | 86.4 |
| Lighter-MLGCN | 1024 | 0.04 | 89.8 |
| Lighter-MLGCN | 512 | 0.017 | 88.6 |
| Lighter-MLGCN | 256 | 0.008 | 86.9 |
| Lighter-MLGCN | 128 | 0.004 | 83.7 |
Table 3: We provide a comparison of the number of floating-point operations (FLOPS) required for different operation types in a model.

| Dimension Configuration | Operation Type | FLOPS (Mega) |
| --- | --- | --- |
| (1024,3)-(1024,32) | Point-wise Dense | 0.13 |
| (1024,32)-(1024,64) | Point-wise Dense | 2 |
| (1024,64)-(1024,128) | Point-wise Dense | 8 |
| (1024,128)-(1024,256) | Point-wise Dense | 33 |
| (1024,512)-(1024,1024) | Point-wise Dense | 537 |
| (2048,128)-(2048,256) | Point-wise Dense | 67 |
| (2048,512)-(2048,1024) | Point-wise Dense | 1074 |
| (1024,3) | Graph Calculation | 4 |
| (1024,32) | Graph Calculation | 50 |
| (1024,64) | Graph Calculation | 100 |
| (1024,128) | Graph Calculation | 201 |
| (1024,512) | Graph Calculation | 805 |
| (2048,128) | Graph Calculation | 805 |
| (2048,512) | Graph Calculation | 3221 |
### Role of Different Sets of \(K\)
In this section, we examine how different sets of \(K\) in the GNN blocks of our proposed model impact the accuracy on the Modelnet-40 dataset. We first demonstrate that our selected \(K\) values of \([0,15,63]\) perform well in the GNN blocks, as indicated in Table 5. We observe that our model achieves high accuracy using these \(K\) values.
Furthermore, we explore the possibility of combining different \(K\) values to improve the accuracy of our model. We do this by using multiple GNN blocks with different \(K\) values and concatenating their output features. We discover that incorporating \(k=0\), which captures global features, results in a significant improvement in accuracy.
Finally, it is worth noting that by combining different sets of \(K\) values, we can capture multi-scale information with different receptive field sizes, enabling the model to learn both local and global features effectively.
## 6 Conclusion
In conclusion, our Multi-level Graph Convolution Neural (MLGCN) model presents a novel and efficient approach to 3D shape analysis, particularly suited for 3D object classification and 3D part segmentation from point cloud data. Our main goal was to develop a model that is lightweight and suitable for industrial and mobile applications, as most state-of-the-art models for 3D object classification can be heavy in terms of their compute and memory requirements for practical use. Our model outperforms other state-of-the-art models in terms of model size, number of operations, and number of parameters, while still achieving competitive accuracy.
Our approach uses lightweight KNN graphs shared across shallow GNN blocks to extract features from 3D point clouds at various locality levels. Our experiments demonstrate that our model can capture the relevant information in point clouds while still achieving high accuracy.
Figure 3: A 2D TSNE plot to visualize the projected features obtained by our proposed model (Light-MLGCN) for 20 different object classes.
Table 5: The outcomes of our proposed model when using different sets of \(K\) in the GNN blocks.

| Block (\(K\) set) | FLOPS (Giga) | Accuracy |
| --- | --- | --- |
| [0, 15, 63] | 0.13 | 90.7 |
| [0, 19, 63] | 0.13 | 90.4 |
| [0, 9, 44] | 0.13 | 90.3 |
| [0, 19, 44] | 0.13 | 90.1 |
| [15, 63] | 0.09 | 90.1 |
| [0, 44] | 0.08 | 89.9 |
| [19, 63] | 0.09 | 89.9 |
| [9, 44] | 0.09 | 89.8 |
| [15] | 0.04 | 89.5 |
| [44] | 0.05 | 89.3 |
| [63] | 0.05 | 89.1 |
Overall, our work represents a significant contribution to the development of efficient and effective 3D shape analysis models, with important implications for the fields of robotics, augmented reality, vision, graphics, and other industrial applications. We anticipate that our findings will motivate further research in this area, and we hope that our approach will inspire the development of even more efficient and lightweight 3D object shape analysis models in the future, for classification, segmentation and other vision tasks.
|
2309.03766 | Charge transfer and asymmetric coupling of MoSe$_2$ valleys to the
magnetic order of CrSBr | Van der Waals (vdW) heterostructures composed of two-dimensional (2D)
transition metal dichalcogenides (TMD) and vdW magnetic materials offer an
intriguing platform to functionalize valley and excitonic properties in
non-magnetic TMDs. Here, we report magneto-photoluminescence (PL)
investigations of monolayer (ML) MoSe$_2$ on the layered A-type
antiferromagnetic (AFM) semiconductor CrSBr under different magnetic field
orientations. Our results reveal a clear influence of the CrSBr magnetic order
on the optical properties of MoSe$_2$, such as an anomalous linear-polarization
dependence, changes of the exciton/trion energies, a magnetic-field dependence
of the PL intensities, and a valley $g$-factor with signatures of an asymmetric
magnetic proximity interaction. Furthermore, first principles calculations
suggest that MoSe$_2$/CrSBr forms a broken-gap (type-III) band alignment,
facilitating charge transfer processes. The work establishes that
antiferromagnetic-nonmagnetic interfaces can be used to control the valley and
excitonic properties of TMDs, relevant for the development of opto-spintronics
devices. | C. Serati de Brito, P. E. Faria Junior, T. S. Ghiasi, J. Ingla-Aynés, C. R. Rabahi, C. Cavalini, F. Dirnberger, S. Mañas-Valero, K. Watanabe, T. Taniguchi, K. Zollner, J. Fabian, C. Schüller, H. S. J. van der Zant, Y. Galvão Gobato, . | 2023-09-07T15:14:30Z | http://arxiv.org/abs/2309.03766v1 | # Charge transfer and asymmetric coupling of MoSe\({}_{2}\) valleys to the magnetic order of CrSBr
###### Abstract
Van der Waals (vdW) heterostructures composed of two-dimensional (2D) transition metal dichalcogenides (TMD) and vdW magnetic materials offer an intriguing platform to functionalize valley and excitonic properties in non-magnetic TMDs. Here, we report magneto-photoluminescence (PL) investigations of monolayer (ML) MoSe\({}_{2}\) on the layered A-type antiferromagnetic (AFM) semiconductor CrSBr under different magnetic field orientations. Our results reveal a clear influence of the CrSBr magnetic order on the optical properties of MoSe\({}_{2}\), such as an anomalous linear-polarization dependence, changes of the exciton/trion energies, a magnetic-field dependence of the PL intensities, and a valley \(g\)-factor with signatures of an asymmetric magnetic proximity interaction. Furthermore, first principles calculations suggest that MoSe\({}_{2}\)/CrSBr forms a broken-gap (type-III) band alignment, facilitating charge transfer processes. The work establishes that antiferromagnetic-nonmagnetic interfaces can be used to control the valley and excitonic properties of TMDs, relevant for the development of opto-spintronics devices.
Transition Metal Dichalcogenides, two-dimensional magnets, van der Waals Heterostructures, Proximity Effects, Magneto-Optics.
Two-dimensional (2D) van der Waals (vdW) magnetic materials have attracted considerable interest because of their unique magnetic properties and possible applications in spintronics. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] Several studies were performed in heterostructures using magnetic materials and monolayer TMDs. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] These heterostructures employ magnetic proximity effects to modify the physical properties of the ML TMD adjacent to the magnetic material and therefore offer new opportunities for engineering magnetic heterostructures. [18] Indeed, recent studies have evidenced an enhanced valley splitting of WSe\({}_{2}\) and WS\({}_{2}\) monolayers on the ferromagnetic (FM) material EuS, [23, 24] a giant zero-field valley splitting of MoSe\({}_{2}\)/CrBr\({}_{3}\), [25] asymmetric magnetic proximity interactions in MoSe\({}_{2}\)/CrBr\({}_{3}\), [16] and an anomalous temperature dependence of the MoSe\({}_{2}\)/MnPSe\({}_{3}\) excitonic peak below the Neel temperature (\(T_{N}\)). [3] Furthermore, magnetic proximity effects have led to spin-dependent charge transfer and concomitant circularly polarized PL in hybrid devices based on both CrI\({}_{3}\)[2, 17] and CrBr\({}_{3}\). [1] However, most previous studies in magnetic vdW heterointerfaces involved vdW ferromagnetic materials. [1, 2, 3, 4, 5, 6] AFM materials have a variety of spin orderings with distinct magnetic symmetry groups, which could result in unique magnetic properties; choosing an appropriate AFM material therefore offers interesting ways to control the functionalities of such heterostructures. [3]
In this work, we investigate the impact of the CrSBr antiferromagnetic substrate on the exciton and valley properties of ML MoSe\({}_{2}\). We have performed micro-PL measurements under a magnetic field along each of the three crystallographic axes of CrSBr. In general, our findings show that the exciton and valley properties of ML TMDs can be engineered by the interplay of magnetic proximity, efficient charge transfer effects, exciton/trion-magnon coupling and dielectric anomalies of 2D antiferromagnetic materials.
The layered magnetic material CrSBr is a vdW direct gap semiconductor with A-type AFM and Neel temperature of 132 K in its bulk form. [26, 27, 28, 29, 30, 31, 32, 33, 34] In addition, CrSBr presents another phase transition around the temperature of \(T=40\) K [7, 29, 30] which is not well understood but might be related to crystal defects [26] or spin-freezing effects. [13] The CrSBr crystal consists of layers with rectangular unit cells in the plane (\(\hat{a}\) - \(\hat{b}\)) which are stacked along the \(\hat{c}\) axis to produce an orthorhombic structure [Figure 1(b)]. The optical properties of CrSBr reflect its highly anisotropic electronic and magnetic structure. A prominent example is the coupling of excitons to the magnetic order. Changes in the static magnetic configuration induced by applying a magnetic field, for instance, directly impact the exciton energy. The electronic band structure, and consequently the energy of excitons in CrSBr, are sensitive to the interlayer magnetic exchange interaction which can be used to probe its magnetic properties. [7, 28, 30, 34]
Monolayer MoSe\({}_{2}\) is a direct band gap semiconductor with two inequivalent \(\pm\)K valleys and robust excitons. [35, 36, 37, 38, 39, 40, 41, 42, 43] Under out-of-plane magnetic fields, valley Zeeman effects and magnetic-field-induced valley polarization are observed and these effects depend on the presence of strain, doping and magnetic proximity effects. [44, 45, 46, 25, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56]
Figure 1 (a) shows an optical microscope image of our MoSe\({}_{2}\)/CrSBr heterostructure and the crystal orientations, \(\hat{a}\) and \(\hat{b}\), of the CrSBr bulk crystal, while in Figure 1 (b) the MoSe\({}_{2}\) and CrSBr crystal structures are sketched. Figure 1 (c) presents the predicted type-III (broken-gap) band alignment of the heterostructure. In Figure 1 (d), the PL spectrum of CrSBr at 3.6 K is displayed in the left part. Several PL peaks are observed below 1.4 eV and associated with excitons, [7, 34, 57, 58] defects, [26] and strong exciton-photon coupling. [59]
Figure 1 (d) shows the exciton and trion peaks in the normalized PL spectra of MoSe\({}_{2}\)/CrSBr and MoSe\({}_{2}\)/SiO\({}_{2}\). According to our theoretical predictions for the band alignment [see Figures 2 (e-f)], the MoSe\({}_{2}\) layer may be strongly p-doped, while MoSe\({}_{2}\) on SiO\({}_{2}\) is usually n-doped. [47, 52] Therefore, the trion in MoSe\({}_{2}\)/CrSBr is most likely a positively charged exciton. In addition, a low-energy shoulder in the trion emission is observed which could be due to inelastic scattering of trions by magnons, or due to localized trions at charged impurities. [60]
Next, we investigate the light-polarization properties in detail. The anisotropic optical emission of the CrSBr layer is evidenced by linear-polarization-resolved PL measurements. Figure 1 (e) shows a color map of the linearly-polarized PL intensity as a function of the in-plane linear polarization angle at 3.6 K. The polar plots for these emissions are shown in Figure 1 (f). All CrSBr PL peaks are strongly linearly polarized along the \(\hat{b}\) axis, which evidences the anisotropic electronic structure of CrSBr, as expected. Remarkably, a clear dependence of the PL intensity on the in-plane polarization angle is observed for both, the MoSe\({}_{2}\) exciton [black squares in Fig. 1 (f)] and trion [red circles in Fig. 1 (f)] emissions. This result indicates that the MoSe\({}_{2}\) has acquired a linear-polarization component along the \(\hat{a}\) axis probably due to magnetic proximity or photonic effects due to the linear dichroism of CrSBr.
We have also measured the PL for different magnetic field (\(\vec{B}\)) orientations. Figures 2 (a) and (c) show color maps of the MoSe\({}_{2}\)/CrSBr magneto-PL intensity under \(\vec{B}\) parallel to the in-plane easy (\(\vec{B}\parallel\hat{b}\)) and hard axis (\(\vec{B}\parallel\hat{a}\)), respectively. For \(\vec{B}\parallel\hat{b}\), the PL spectrum of CrSBr red-shifts abruptly by about 15 meV above a field of 0.375 T and is constant above 0.375 T (see also Figures S6 and S7). This result is similar to previous magneto-optical measurements for few-layer CrSBr [28] and was explained by a spin-flip transition from AFM to FM order also observed in magnetization measurements [26]. Under \(\vec{B}\parallel\hat{a}\), the PL spectrum shifts smoothly, due to the canting of the spins along \(\vec{B}\), saturating at \(B=1.075\) T beyond which the PL spectrum remains unchanged. The observed PL red shifts of CrSBr with increasing magnetic field were explained by a magnetization-dependent interlayer electronic coupling in the CrSBr material [28].

Figure 1: (a) Optical microscope image of the studied ML MoSe\({}_{2}\)/bulk CrSBr vdW heterostructure, covered with a thin layer of hBN. The thickness of our CrSBr layer is about 35 nm (Figure S1). (b) Schematics of the crystal structure of the heterostructure. (c) Schematics of the band alignment of ML MoSe\({}_{2}\)/bulk CrSBr and charge transfer from the MoSe\({}_{2}\) valence band (VB) to the CrSBr conduction band (CB). (d) Typical PL spectra from ML MoSe\({}_{2}\)/CrSBr and MoSe\({}_{2}\)/SiO\({}_{2}\) regions at 3.6 K. The laser energy is 1.88 eV. (e) Color-coded map of the linearly-polarized emission intensity as a function of the angle of in-plane polarization. The laser excitation is linearly polarized along the \(\hat{a}\) axis. (f) Polar plot of the PL intensity versus the in-plane linear polarization angle for the most intense PL peak energy of CrSBr (1.329 eV) and also for the exciton (X) and trion (T) emission peaks from MoSe\({}_{2}\) on CrSBr.
Remarkably, we also find that the PL intensities of the MoSe\({}_{2}\) trion and exciton are correlated to the field-induced phase transition in CrSBr bulk: for \(\vec{B}\parallel\hat{b}\) an abrupt change of the PL intensity of MoSe\({}_{2}\) above the critical magnetic field of 0.375 T occurs [see Figure 2 (b) and Figures S6 and S7], and for \(\vec{B}\parallel\hat{a}\), a continuous decrease of both MoSe\({}_{2}\) PL intensities is present up to 1.075 T, which corresponds to the saturation of the magnetization in CrSBr. Furthermore, the relative intensity of the trion/exciton peaks (Figure S8) also shows an abrupt change for \(\vec{B}\parallel\hat{b}\), above the critical field 0.375 T, and a continuous change up to 1.075 T for \(\vec{B}\parallel\hat{a}\), which would indicate an increase in the doping of MoSe\({}_{2}\). This could be explained by a change of charge transfer after the magnetic-field-induced phase transition.
These results can be rationalized by our first principles calculations of the electronic band structure shown in Figures 2 (e-g). Not only the electronic structures of CrSBr in the AFM/FM phases are different [28, 61] but also their band alignment (type-III) with respect to MoSe\({}_{2}\) changes [see bottom panels in Figures 2 (e-g)]. These energetic differences suggest that the charge transfer between ML MoSe\({}_{2}\) and CrSBr can be drastically altered when increasing the magnetic field because of the transition from AFM to FM phases in CrSBr.
Let us now turn to the magneto-PL investigations of the MoSe\({}_{2}\)/CrSBr heterostructure for an out-of-plane magnetic field (\(\vec{B}\parallel\hat{c}\)) under linearly-polarized excitation and \(\sigma^{-}\) circularly polarized PL detection as a function of \(\vec{B}\). For CrSBr emission energies [Figure 3 (a)], a continuous red-shift of all PL peaks occurs while increasing \(B\) (in absolute value) up to a saturation field of about 2.25 T, beyond which the PL peaks remain unchanged, consistent with previous reports.[28] For MoSe\({}_{2}\), the color code map of PL intensity as a function of \(B\) is shown in Figure 3 (b). It also exhibits a correlation with the magnetic phase order of CrSBr. Figure 3 (c) presents the intensities of the exciton and trion PL peaks as a function of \(B\). We observe an unusual change of the PL intensity for both, exciton and trion, in the range of -2.25 to +2.25 T, which is correlated to the magnetic-field-induced phase transition of CrSBr. In addition, we observe a blue (red) shift of the PL peak positions, as shown in Figures 3 (b) and (d), with an increase of positive (negative) \(B\) values, resembling the effects of the valley Zeeman splitting.[47, 48]

Figure 2: (a,c) Color-code map for circularly-polarized PL intensity from the MoSe\({}_{2}\)/CrSBr heterostructure as a function of the in-plane magnetic field, oriented along the in-plane easy (for \(\vec{B}\parallel\hat{b}\)) and intermediate axes (for \(\vec{B}\parallel\hat{a}\)). The excitation is performed using a linearly polarized laser. The PL detection is \(\sigma^{-}\) for positive magnetic fields. (b,d) Magnetic-field dependence of the MoSe\({}_{2}\) PL intensity of the exciton and trion emissions for both field orientations; the MoSe\({}_{2}\) PL intensity is sensitive to the magnetic phases of the CrSBr. Calculated band structure with spin-orbit coupling for (e) CrSBr AFM, (f) ML MoSe\({}_{2}\), and (g) CrSBr FM systems. The CrSBr systems consist of 6 layers. The horizontal dashed lines indicate the Fermi energy aligned with respect to the vacuum levels. The bottom panels show the band structure in the region of the blue rectangles, indicating clear differences of the AFM and FM energy levels next to the Fermi energy.
The \(B\) dependence for one particular polarization branch (\(\sigma^{+}\) or \(\sigma^{-}\)) of the PL peak of the exciton or trion in TMDs can be written as[62, 63, 50]
\[E_{i}(B)=E_{i}(B=0)+g_{i}^{j}\mu_{B}B, \tag{1}\]
in which \(\mu_{B}\) is the Bohr magneton, the subindex \(i=\mathrm{X}\:(\mathrm{T})\) identifies the exciton (trion), and the superindex \(j=\pm\) denotes the circular polarization \(\sigma^{\pm}\). Equation (1) describes the Zeeman shift of one polarization branch (the increase or decrease depends on the sign of \(g_{i}^{j}\), which is system dependent), whereas the Zeeman splitting requires the knowledge of the Zeeman shifts for each polarization. The total \(g\)-factor that modulates the Zeeman splitting is then given by \(g_{i}=g_{i}^{+}-g_{i}^{-}\). In pristine monolayer TMDs, \(g_{\mathrm{X}}^{+}\sim-2\) and \(g_{\mathrm{X}}^{-}\sim 2\) (directly related to the angular momenta of the valence and conduction band states at the K valleys involved in the exciton transition[53, 64]), leading to a total \(g\)-factor of \(g_{\mathrm{X}}\sim-4\). Furthermore, time reversal symmetry connects the \(g\)-factors obtained at positive and negative magnetic fields via \(g_{i}^{+}(B>0)=-g_{i}^{-}(B<0)\), allowing us to recover the Zeeman shift of the \(\sigma^{+}\) branch by measuring the \(\sigma^{-}\) branch at negative magnetic fields.
The excitonic Zeeman shift obtained for the MoSe\({}_{2}\)/CrSBr heterostructure is displayed in Figure 3 (d). As a reference, we have also measured the magneto-PL in the MoSe\({}_{2}\)/SiO\({}_{2}\) region of the sample [see Figure 3 (e)]. The Zeeman shifts of the trion peaks are presented in Figure S10 and follow closely the excitonic features. Our results reveal an intriguing asymmetric signature in the Zeeman shift of the MoSe\({}_{2}\) exciton within the MoSe\({}_{2}\)/CrSBr heterostructure, while the MoSe\({}_{2}\)/SiO\({}_{2}\) system displays a symmetric response. These findings point to an asymmetric coupling between the MoSe\({}_{2}\) valleys and the CrSBr bands, which is dependent on the magnetic ordering. Exploiting time reversal symmetry allows us to extract distinct \(g_{i}^{+}\) and \(g_{i}^{-}\) values for each magnetic phase at positive and negative magnetic fields. In Figure 3 (f), we present a schematic representation of the symmetric and asymmetric Zeeman shifts, summarizing the observed features of Figures 3 (d,e). The obtained \(g\)-factors for excitons and trions are summarized in Table 1. Particularly, for the excitons in MoSe\({}_{2}\)/SiO\({}_{2}\), we extract \(\left|g_{\mathrm{X,T}}^{+}\right|=\left|g_{\mathrm{X,T}}^{-}\right|\) leading to a total \(g\)-factor of \(\sim-4.0\), consistent with theoretical[53, 54, 64, 65] and experimental values, reported in the literature for MoSe\({}_{2}\)/SiO\({}_{2}\) or hBN/MoSe\({}_{2}\)/hBN.[42, 45, 47, 48, 52, 66, 67, 68, 69, 70] For the MoSe\({}_{2}\) excitons on CrSBr, \(\left|g_{\mathrm{X,T}}^{+}\right|\) is distinctly different from \(\left|g_{\mathrm{X,T}}^{-}\right|\) and the total \(g\)-factor is less negative than the typical values of \(-4\) in pristine MoSe\({}_{2}\). Our study uncovers notable variations in the \(g\)-factors of the MoSe\({}_{2}\) exciton and trion when the CrSBr undergoes transitions between the AFM and FM phases, revealing an asymmetric coupling between the spin-valley properties of MoSe\({}_{2}\) and the magnetic ordering of CrSBr. These distinct \(g\)-factors provide valuable insights into the intricate interplay between electronic and magnetic degrees of freedom, underscoring the importance of considering the magnetic state of CrSBr in understanding the behavior of excitonic systems in this heterostructure. The changes in the magnitude of the \(g\)-factors are consistent with proximity effects due to the hybridization between the layers, as previously demonstrated in MoSe\({}_{2}\)/WSe\({}_{2}\),[53, 71] WSe\({}_{2}\)/CrI\({}_{3}\),[72] and WS\({}_{2}\)/graphene systems.[73] A systematic analysis of the microscopic features behind the asymmetric \(g\)-factors is beyond the scope of the current manuscript; however, we point out that asymmetric signatures in valley Zeeman splitting have recently been observed in MoSe\({}_{2}\)/CrBr\({}_{3}\)[16] heterostructures at zero magnetic field. In these systems the magnetic moments in CrBr\({}_{3}\) point in the out-of-plane direction and act already as an external magnetic field. Here, the magnetic moments of CrSBr are oriented in-plane and therefore the asymmetric coupling is manifested once we apply an external magnetic field. The asymmetric Zeeman shifts do not necessarily require a magnetic material but can also be present in systems where valence bands are mixed.[74]
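To make the extraction procedure of Eq. (1) explicit, the following sketch (ours, with synthetic data generated at \(g=-2\) purely for illustration) fits the Zeeman shift of one polarization branch linearly in \(B\) and divides the slope by the Bohr magneton:

```python
import numpy as np

MU_B = 5.788381806e-2        # Bohr magneton in meV/T

B = np.linspace(0.0, 9.0, 19)                 # field values in tesla
rng = np.random.default_rng(1)
E = 1640.0 - 2.0 * MU_B * B + rng.normal(0.0, 0.005, B.size)  # synthetic E(B), meV

slope, E0 = np.polyfit(B, E, 1)               # linear fit: E(B) = E0 + slope * B
print(f"E(0) = {E0:.2f} meV, g = {slope / MU_B:+.2f}")   # recovers g ~ -2.0
# The total valley g-factor then follows as g_i = g_i^+ - g_i^-, with
# g_i^+(B>0) = -g_i^-(B<0) used to recover the unmeasured sigma^+ branch.
```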
Furthermore, we have also measured the linear polarization of the PL of the heterostructure [see Figure S4 (f)]. We find that the angle dependence and relative intensity of the trion/exciton of the MoSe\({}_{2}\) PL are clearly modified as compared to 0 T. The observed anisotropy of the relative intensities of MoSe\({}_{2}\) trion/exciton could be explained by an anisotropic band structure of the heterostructure due to proximity effects.
We now analyze the temperature dependence of the PL data shown in Figure 4. For CrSBr, a blue shift of the PL band is observed with decreasing temperature, which is accompanied by a change in the peak shape around the magnetic phase transition (\(T_{N}\) around 132 K). In addition, at 40 K, sharp peaks appear below 1360 meV, together with a clear enhancement of the PL intensity of the peak at around 1330 meV. A clear correlation between the emission peaks and phase transitions in CrSBr is thus present.
Figure 3: Color-code map of the circularly resolved PL intensity as a function of out-of-plane magnetic field for (a) CrSBr and (b) MoSe\({}_{2}\)/CrSBr. The laser excitation is linearly polarized and the PL detection is \(\sigma_{-}\) for positive magnetic field. (c) PL intensity of exciton and trion peaks of MoSe\({}_{2}\)/CrSBr as a function of magnetic field. Zeeman shift for the exciton peaks in the region of (d) MoSe\({}_{2}\)/CrSBr and (e) MoSe\({}_{2}\)/SiO\({}_{2}\). The solid lines are the fittings to the data. The extracted \(g\)-factors are summarized in Table 1. (f) Schematic representation of symmetric (dashed lines) and asymmetric (solid lines) Zeeman shifts as a function of magnetic field. The transparent lines indicate the \(\sigma^{+}\) polarization that is not being measured.
Important changes are also observed for the PL of MoSe\({}_{2}\). At higher temperatures (above \(T_{N}=132\) K), the trion binding energy of MoSe\({}_{2}\)/CrSBr is much lower than that of MoSe\({}_{2}\)/SiO\({}_{2}\), probably due to different dielectric constant values of CrSBr and SiO\({}_{2}\). Remarkably, we find an anomalous temperature dependence of the exciton and trion peak positions for MoSe\({}_{2}\)/CrSBr. This is visualized in Figure 4 (d) (see also Figures S11 and S12), where we plot the extracted trion binding energies versus temperature. The MoSe\({}_{2}\)/CrSBr trion binding energy increases with decreasing temperature between the magnetic phase transitions, while it stabilizes above \(T_{N}\) and below 40 K. A similar anomaly was observed in the temperature dependence of excitons in the MoSe\({}_{2}\)/MnPSe\({}_{3}\) heterostructure near \(T_{N}\), and was associated with a coupling of MoSe\({}_{2}\) excitons to magnons in MnPSe\({}_{3}\)[3]. In our heterostructure, MoSe\({}_{2}\) excitons may also couple to the (incoherent) magnons [57] of CrSBr at non-zero temperatures. The impact of these magnons on the CrSBr band structure has not yet been studied in detail, but it is expected that magnon-induced changes will affect both the charge transfer between MoSe\({}_{2}\) and CrSBr as well as the dielectric screening experienced by the excitons in MoSe\({}_{2}\). Both phenomena may contribute to the exciton/trion temperature dependence.[75, 76, 77, 78] However, further studies will be necessary to understand this experimental result in more detail.
In summary, we have measured the linearly and circularly polarized PL on MoSe\({}_{2}\)/CrSBr heterostructures under magnetic fields up to 9 T oriented along the different crystallographic axes of CrSBr. The results show that the valley and excitonic properties (intensity, energy position, and \(g\)-factors) of monolayer MoSe\({}_{2}\) are strongly influenced by the magnetic order of a CrSBr substrate. For all magnetic field orientations, we found that the MoSe\({}_{2}\) PL intensity is sensitive to the magnetic ordering of the CrSBr. We predict a type-III band alignment for MoSe\({}_{2}\)/CrSBr which can account for the observed correlation of MoSe\({}_{2}\) PL intensity with the magnetically induced phase transition of CrSBr. For out-of-plane magnetic fields, a clear asymmetric Zeeman shift is observed for MoSe\({}_{2}\)/CrSBr. Furthermore, we observe an anomalous behaviour of the trion binding energy as a function of temperature. The binding energy is considerably lower at high temperatures and increases below \(T_{N}\). In general, our results are explained by asymmetric magnetic proximity, charge transfer, exciton/trion magnon coupling and dielectric anomalies of the 2D antiferromagnetic material.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{MoSe\({}_{2}\)/SiO\({}_{2}\)} & \multicolumn{2}{c}{MoSe\({}_{2}\)/CrSBr} \\ & & & AFM & FM \\ \hline \multirow{4}{*}{Exciton} & \(g_{\rm X}^{+}\) & \(-1.98\pm 0.05\) & \(-1.75\pm 0.39\) & \(-0.90\pm 0.05\) \\ & \(g_{\rm X}^{-}\) & \(1.97\pm 0.05\) & \(1.25\pm 0.44\) & \(2.37\pm 0.05\) \\ & \(g_{\rm X}\) & \(-4.0\pm 0.1\) & \(-3.0\pm 0.8\) & \(-3.3\pm 0.1\) \\ \hline \multirow{4}{*}{Trion} & \(g_{\rm T}^{+}\) & \(-2.10\pm 0.05\) & \(-1.33\pm 0.18\) & \(-1.42\pm 0.06\) \\ & \(g_{\rm T}^{-}\) & \(2.14\pm 0.05\) & \(2.25\pm 0.46\) & \(1.84\pm 0.06\) \\ \cline{1-1} & \(g_{\rm T}\) & \(-4.2\pm 0.1\) & \(-3.6\pm 0.6\) & \(-3.3\pm 0.1\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Exciton and trion \(g\)-factors for \(B>0\). The \(g\)-factors for \(\sigma^{+}\) were obtained via \(g_{i}^{+}=-g_{i}^{-}(B<0)\) and the total \(g\)-factor is given by \(g_{i}=g_{i}^{+}-g_{i}^{-}\). In the AFM phase, accessed by small fields, the Zeeman shifts approach the spectral resolution of the system, resulting in higher error bars for the obtained values. Nevertheless, the errors are still smaller than the extracted \(g\)-factors and allow us to unambiguously identify the asymmetric signatures.
Our findings offer a unique insight into the interplay of proximity effects and charge transfer in antiferromagnetic-nonmagnetic interfaces that modify the exciton and valley properties of 2D TMDs.
**Acknowledgement** This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) (grants 22/08329-0 and 23/01313-4) and by the Brazilian Council for Research (CNPq) (grant 311678/2020-3). CSB acknowledges the financial support of a CAPES fellowship. TSG and HvdZ received funding from the European Union Horizon 2020 research and innovation program under grant agreement No. 863098 (SPRING). PEFJ, KZ, CS, and JF acknowledge the financial support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) SFB 1277 (Project-ID 314695032, projects B05, B07 and B11), SPP 2244 (Project No. 443416183, SCHU1171/10), and of the European Union Horizon 2020 Research and Innovation Program under Contract No. 881603 (Graphene Flagship). YGG and HvdZ acknowledge support from the Fapesp-SPRINT project (grant 22/00419-0). K.W. and T.T. acknowledge support from the JSPS KAKENHI (Grant Numbers 21H05233 and 23H02052) and World Premier International Research Center Initiative (WPI), MEXT, Japan. S.M.-V. acknowledges the European Commission for a Marie Skłodowska-Curie individual fellowship No. 101103355 - SPIN-2D-LIGHT. J.I.A. acknowledges support from the European Union's Horizon 2020 research and innovation programme for a Marie Skłodowska-Curie individual fellowship No. 101027187-PCSV. F.D. acknowledges financial support from Alexey Chernikov and the Würzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter ct.qmat (EXC 2147, Project-ID 390858490).
Figure 4: (a) Color-code map of PL intensity as a function of temperature for the PL peaks of the CrSBr. The upper dot-dashed line indicates the CrSBr Néel temperature transition at 132 K, and the lower one at 40 K indicates the temperature where sharpening of the peaks appears. (b,c) Color-code maps of the PL intensity as a function of the temperature for the exciton and trion peaks from (b) MoSe\({}_{2}\)/CrSBr and (c) MoSe\({}_{2}\)/SiO\({}_{2}\). (d) Trion binding energy extracted from the data shown in (b) and (c).
## 5 Supporting Information Available
The following files are available free of charge.
* SI: Sample preparation, experimental methods and complementary PL results. Details on the first principles calculations. (PDF)
|
2303.17990 | Exploring Global Climate Cooperation through AI: An Assessment of the
AI4GCC Framework by simulations | In scenarios where a single player cannot control other players, cooperative
AI is a recent technology that takes advantage of deep learning to assess
whether cooperation might occur. One main difficulty of this approach is that
it requires a certain level of consensus on the protocol (actions and rules),
at least from a majority of players. In our work, we study the simulations
performed on the cooperative AI tool proposed in the context of AI for Global
Climate Cooperation (AI4GCC) competition. We experimented simulations with and
without the AI4GCC default negotiation, including with regions configured
slightly differently in terms of labor and/or technology growth. These first
results showed that the AI4GCC framework offers a promising cooperative
framework to experiment with global warming mitigation. We also propose future
work to strengthen this framework. | Xavier Marjou, Arnaud Braud, Gaël Fromentoux | 2023-03-31T12:08:25Z | http://arxiv.org/abs/2303.17990v1 | Exploring Global Climate Cooperation through AI: An Assessment of the AI4GCC Framework by simulations
###### Abstract
In scenarios where a single player cannot control other players, cooperative AI is a recent technology that takes advantage of deep learning to assess whether cooperation might occur. One main difficulty of this approach is that it requires a certain level of consensus on the protocol (actions and rules), at least from a majority of players. In our work, we study the simulations performed on the cooperative AI tool proposed in the context of the AI for Global Climate Cooperation (AI4GCC) competition. We experimented with simulations with and without the AI4GCC default negotiation, including with regions configured slightly differently in terms of labor and/or technology growth. These first results showed that the AI4GCC framework offers a promising cooperative framework to experiment with global warming mitigation. We also propose future work to strengthen this framework.
Cooperative AI Deep learning Global warming
## 1 Introduction
Climate and resource issues generate structural transformations, sometimes brutal and unpredictable, in the business environment of all economic actors. As a result, they become major points of tension. In this context, many agencies and companies are studying the new environmental, social and corporate governance (ESG) reality to recommend new ESG-sensitive policies and investments, leading to many new initiatives.
To meet this need for anticipation and evolution of strategic thinking, there is a need for prospective scenarios, as well as new tools to experiment with them virtually, as highlighted by the Carbon 4 IRIS initiative1. However, given that environmental and social issues generally involve multiple players, it is not always possible for rule-makers to control all players. As a consequence, there is a need to design policies motivating cooperative solutions. For instance, [1] and [2] proposed and evaluated multiple cooperation scenarios allowing Mobile Network Operators (MNOs) to cooperate to save energy during low-activity hours.
Footnote 1: e.g. [https://www.carbone4.com/lancement-iris-initiative](https://www.carbone4.com/lancement-iris-initiative).
Regarding climate warming, AI for global climate cooperation (AI4GCC) [3] is a new community initiative that recently launched an interdisciplinary challenge to identify the most suitable negotiation protocol to reach the best compromise between economic and climatic concerns. To frame this work, they provided a tool in the form of a multi-agent reinforcement learning (MARL) environment that allows multiple simulated regions to interact in order to collectively mitigate climate change.
In this paper, we experimented with the AI4GCC tool in two configurations: one in which agents do not negotiate, and one in which agents negotiate using the default negotiation protocol implemented by the framework. We observed that this negotiation reduces the global temperature increase. In addition, we also highlighted some current limitations of the framework.
## 2 AI4GCC Framework
### Model overview
The AI4GCC framework [3] comes in the form of a reinforcement-learning (RL) [4] environment that allows instantiating multiple agents, each representing a region. They interact during an episode of T=20 iteration steps, either with or without negotiation. Each step represents a 5-year period (\(\Delta\)), hence a full episode represents a duration of 100 years. At each step \(t\), the RL environment returns a distinct step-reward (\(u_{i,t}\)) to each RL agent (aka region \(i\)), which is proportional to the labor \(l\) and the consumption \(c\) of the region. At the end of each episode, the sum of the step-rewards provides a _regional episode-reward_ (\(u_{i}\)). During training, each agent periodically calculates its regional episode-reward (\(u_{i}\)) (mean value over the last 100 episodes) and the framework also derives a _collective episode-reward_ \(u\) (which is also known as _episode_reward_mean_ in RLlib Ray software).
By default, the tool comes with N=27 pre-configured regions as shown in Table 5.
\[u_{i;t}=\frac{l_{i;t}}{1000}\cdot\frac{\left(\frac{c_{i;t}}{l_{i;t}/1000}+\epsilon\right)^{1-\alpha}-1}{1-\alpha}\,,\qquad u_{i}=\sum_{t=1}^{T}u_{i;t}\,,\qquad u=\sum_{i=1}^{N}u_{i} \tag{1}\]
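In code, the per-step utility and episode rewards of Eq. (1) can be written compactly. The values of \(\alpha\) and \(\epsilon\) below are illustrative (the framework fixes them in its configuration), so this is a sketch of the functional form rather than the framework's implementation:

```python
def step_utility(labor, consumption, alpha=1.45, eps=1e-5):
    """Isoelastic per-step utility u_{i;t} of Eq. (1); labor enters in thousands."""
    l = labor / 1000.0
    return l * ((consumption / l + eps) ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

def episode_reward(labors, consumptions):
    """Regional episode reward u_i: sum of the T step rewards."""
    return sum(step_utility(l, c) for l, c in zip(labors, consumptions))

u = episode_reward([1000.0] * 20, [5.0] * 20)  # one region over T = 20 steps
```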
Two main updates happen between two consecutive timesteps:
* The labor (\(l\)) update is based on a _long-term population_ size (\(l_{a;i}\)) and a _convergence speed_ value (\(l_{g;i}\)).
* The consumption (\(c\)) is based on multiple parameters, including a delta_A parameter (\(\delta_{a}\)). \[l_{i;t+1}=l_{i,t}\times\left(\frac{1+l_{a;i}}{1+l_{i,t}}\right)^{l_{g;i}} \tag{2}\]
\[c_{i,t+1}=\left(dom\_pref\cdot c_{dom_{i;t+1}}^{sub\_rate}+\sum_{j=1}^{N}for\_pref_{j}\cdot c_{for_{j}}^{sub\_rate}\right)^{\frac{1}{sub\_rate}} \tag{3}\]
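A direct transcription of the two update rules is sketched below. Eq. (3) is written in the CES form reconstructed above (the \(1/sub\_rate\) exponent is our reading of the aggregation), and all parameter values in the usage lines are placeholders:

```python
import numpy as np

def labor_update(l_t, l_a, l_g):
    """Eq. (2): labor relaxes toward the long-term size l_a at speed l_g."""
    return l_t * ((1.0 + l_a) / (1.0 + l_t)) ** l_g

def ces_consumption(c_dom, c_foreign, dom_pref, for_pref, sub_rate):
    """Eq. (3): CES aggregate of domestic and imported consumption."""
    agg = dom_pref * c_dom ** sub_rate + float(
        np.dot(for_pref, np.asarray(c_foreign) ** sub_rate))
    return agg ** (1.0 / sub_rate)

l_next = labor_update(l_t=1.2, l_a=1.5, l_g=0.1)
c_next = ces_consumption(2.0, [0.1, 0.2], dom_pref=0.8, for_pref=[0.1, 0.1], sub_rate=0.5)
```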
### Framework Limitations
Although the current AI4GCC framework is already substantial and challenging to understand in its smallest economic details, we noticed three aspects that we could not easily implement and integrate into our experimentation.
* Configuring strong discontinuities like a sudden drop of labor (e.g., due to a natural disaster) or a technology leap forward (e.g., due to so-called AGI tools). It would be important to allow such hypotheses to be tested, but this would probably question the economic model.
* Integrating cooperation metrics such as the incentive-to-cooperate, safety, and fairness (cf. definitions in [2], as well as their use in a telecom use case in [1]). This would require a framework allowing each agent to implement a specific regional policy.
* Simulating more than 27 regions (e.g., 100-200 regions) in a reasonable training time. Hopefully, the results of the AI4GCC challenge will bring solutions like the notion of clubs that will make this achievable.
## 3 Experiment
We used the RL software environment offered by AI4GCC2, in which we performed two modifications:
Footnote 2: [https://github.com/mila-iqia/climate-cooperation-competition](https://github.com/mila-iqia/climate-cooperation-competition)
* In order to log the metrics associated with each region, we added callbacks to the environment.
* In order to respectively modify the configuration of _labor_ and _technology_ (cf. experiment 2), we modified the default AI4GCC values \(l_{a;i}\) and \(g_{a,i}\) of a region.
We used Ray 1.0.0 on an NVIDIA 2080-Ti GPU to perform our MARL experiments. Each test was performed from scratch 5 times (i.e., one full training before each test) to estimate representative values for the mean and standard deviation.
### Experiment-1
In a first stage, we used the default AI4GCC configuration of Table 5 for all agents (a.k.a. regions). In order to estimate whether the negotiation protocol affects the economic ranking of regions, we performed two tests: one without negotiation and another one with negotiation.
* Test-1-no-nego: evaluation without negotiation (no agent tries to negotiate)
* Test-1-nego: evaluation with negotiation (all agents try to negotiate, based on the default negotiation).
Each test was performed after training from scratch on \(40,000\) episodes.
### Experiment-2
In a second stage, we experimented with configurations different from the default AI4GCC configuration of Table 5, either for all 27 regions or for only one region (15, 19, or 6). Similarly to experiment-1, we carried out one test with negotiation and another one without negotiation, hence \(4*2=8\) tests.
To input different _labor and technology configurations_ (LTCs) in the AI4GCC environment, we extracted the default AI4GCC \(l_{a;i}\) and \(g_{a,i}\) values and modified them to vary the labor and/or technology values, respectively. Since early experiments showed that the current framework cannot withstand large configuration disruptions, such as a drastic drop of population in the long run or at one point, we remained relatively conservative and only experimented with the following variations: \([-10\%,0\%,10\%]\) for \(l_{a;i}\) and \([-10\%,0\%,10\%]\) for \(g_{a,i}\), which led to a set of \(3*3=9\) LTCs, i.e. 9 subtests per test.
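Generating the nine LTCs amounts to a Cartesian product of the two variation lists; a sketch with placeholder base values (the real defaults come from the AI4GCC region configuration):

```python
from itertools import product

base_l_a, base_g_a = 1.0, 0.1            # placeholder defaults for one region
variations = (-0.10, 0.0, 0.10)          # -10%, 0%, +10%

ltcs = [(base_l_a * (1 + dl), base_g_a * (1 + dg))
        for dl, dg in product(variations, variations)]
assert len(ltcs) == 9                    # the 3 * 3 = 9 subtests per test
```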
The list of eight tests is as follows.
* Evaluation without negotiation
* Test-2-a-no-nego: all agents have the same LTC.
* Test-2-h-no-nego: only a high-ranked region (region 15) has a different LTC.
* Test-2-m-no-nego: only a middle-ranked region (region 19) has a different LTC.
* Test-2-l-no-nego: only a low-ranked region (region 6) has a different LTC.
* Evaluation with negotiation
* Test-2-a-nego: all agents have the same LTC.
* Test-2-h-nego: only a high-ranked region (region 15) has a different LTC.
* Test-2-m-nego: only a middle-ranked region (region 19) has a different LTC.
* Test-2-l-nego: only a low-ranked region (region 6) has a different LTC.
Since early experiments showed that training for \(10,000\) episodes did not significantly change the results compared to training for \(40,000\) episodes, we performed each subtest after training from scratch for only \(10,000\) episodes in order to save energy.
## 4 Results
### Experiment-1
Table 1 shows that, with negotiation, the global temperature increase at the end of the episode is smaller than without negotiation, which is a major result, although it comes with a lower collective reward.
Table 2 goes deeper into the details by describing the results per region. The values are based on the mean over the last 100 episodes of the training (i.e., once the model has finished its training). Although each region lost between 0 and 20% of its utility, the results showed that the negotiation protocol led to only minor modifications in the ranking of the regions, which is an advantage for incentivizing regions to participate in the framework.
In addition, Table 3 summarizes the mean values for each of the five default actions across the 27 regions, for both the negotiated and non-negotiated simulations. It shows that the default negotiation leads to higher mitigation and saving rates and, surprisingly, to maximum exports close to zero. Although we could not explain the root cause, this nevertheless suggests that the negotiation step is useful, as it induces agents to perform slightly different actions, which leads to a lower global temperature than in the baseline scenario (no negotiation).
### Experiment-2
The results (cf. Figures 1 to 8 in the Annex) showed that conservative modifications of the labor and/or technology configurations led to three observations:
* The tension between utility and temperature is confirmed.
* Regardless of the LTC of each of the nine subtests, each region obtained a lower gain with negotiation compared to no-negotiation. However, the percentage of loss depends on the region (e.g. between \(-14\%\) and \(-4\%\) as indicated in Table 4), which might be a source of tension.
* The ranking of the three tested regions (high-ranked, mid-ranked, or low-ranked region) did not change, regardless of the configured LTC.
Moreover, the results showed that a single region, regardless of its rank, could not have a significant impact on the collective outcome, both in the scenarios with and without negotiations.
\begin{table}
\begin{tabular}{|c|c|c|} \hline test & global temperature increase & collective episode-reward \\ \hline test-1-no-nego & \(5.8\pm 0.2\) & \(165.5\pm 0.5\) \\ test-1-nego & \(3.0\pm 0.3\) & \(152.9\pm 3.4\) \\ \hline \end{tabular}
\end{table}
Table 1: Experiment-1’s main results
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline region & \(u_{i}\) no-nego & \(u_{i}\) nego & difference \\ \hline
15 & \(22.3\) & \(21.4\) & \(-4\%\) \\
19 & \(5.3\) & \(4.6\) & \(-14\%\) \\
6 & \(0.6\) & \(0.6\) & \(-5\%\) \\ all & \(6.0\) & \(5.7\) & \(-6\%\) \\ \hline \end{tabular}
\end{table}
Table 4: Average regional episode-rewards with and without negotiations
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline region id & \(u_{i}\) no-nego & rank no-nego & \(u_{i}\) nego & rank nego & gain & rank delta \\ \hline
[MISSING_PAGE_POST]
\hline \(u\) & 165.5 & & 150.5 & & & \\ \hline \end{tabular}
\end{table}
Table 2: regional episode-rewards and collective episode-reward with and without negotiation
\begin{table}
\begin{tabular}{|c|c|c|} \hline action & test-1-no-nego & test-1-nego \\ \hline mitigation rate & \(0.009\pm 0.005\) & \(0.041\pm 0.002\) \\ saving rate & \(0.011\pm 0.002\) & \(0.059\pm 0.002\) \\ max export & \(1727\pm 730\) & \(1\pm 3\) \\ mean imports & \(0.022\pm 0.001\) & \(0.024\pm 0.002\) \\ mean tariffs & \(0.021\pm 0.001\) & \(0.022\pm 0.003\) \\ \hline \end{tabular}
\end{table}
Table 3: Actions values
## 5 Discussion
In its current form, the framework naturally encourages cooperation, as every country suffers the same penalties from the damage function and also has no way of taking a greedier approach to mitigating climate damage than investing in carbon reduction. We would encourage allowing regions to take a potentially more selfish approach, in order to capture reality a bit more, by at least:
* Having a different damage function per region, since global temperature does not impact all regions in the same way, and local temperature does not impact all regions in the same way either.
* Having a new course of action for every region, called resiliency investments, whose only effect would be to reduce the impact of climate damage on the economy at a regional scale. This could mitigate the risk of badly designed negotiation protocols by allowing a less efficient, selfish path that does not require any trust.
Another interesting evolution path would be to exploit the dynamism of the framework to use less continuous functions for both reward and damage, by introducing a form of catastrophic-event probability as the temperature rises and the social welfare of some regions fails to catch up. This would, however, have negative consequences for both the explainability of the model and the training costs (as many more games would need to be played to capture those probabilities).
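One possible realization of such a catastrophic-event probability is a per-step hazard that grows with the temperature rise; the logistic form and its parameters below are purely illustrative, not part of the framework:

```python
import math
import random

def catastrophe_prob(temp_rise, midpoint=3.0, steepness=2.0):
    """Per-step probability of a catastrophic event, rising with temperature."""
    return 1.0 / (1.0 + math.exp(-steepness * (temp_rise - midpoint)))

def catastrophe_occurs(temp_rise, rng=random):
    """Sample the event once per environment step."""
    return rng.random() < catastrophe_prob(temp_rise)
```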
Finally, it would be interesting to reuse the lessons learned from this first approach, both from an AI perspective and regarding how the results of RL AI can be used to drive policy, with another modelling framework. It will also be challenging to see how frameworks designed to be used by AIs differ from frameworks designed to be used by humans. Our two initial targets will be to use such models in a more telecom-oriented scenario and in a more resource-driven cross-sectoral scenario.
## 6 Conclusion
We experimented with the AI4GCC framework, a decision support tool using cooperative AI to simulate cooperative scenarios to mitigate global warming while preserving economic activities. Our experiments confirmed that the current default negotiation leads to mitigating global warming, but the root cause incentivizing agents to accept a lower economic reward remains to be identified. We also suggested allowing for a damage function that is more specific per region. As other initiatives aimed at mitigating environmental problems multiply, it will be interesting to study how they could reuse the AI4GCC framework.
## Acknowledgments
This work reused the AI4GCC code from MILA and Salesforce.
|
2309.08633 | CDF W mass anomaly revisited | The CDF, ATLAS and LHCb have released the measurements on the W boson mass
$m_W$ at $\sqrt{S}=1.96, 7, 13 TeV$, respectively. The measured values show the
declining tendency, namely $m_W$ decreases with the increment of the collider
energy. If the declining tendency is confirmed, it might be the signal of
metric field at high energy colliders. In this paper, we propose a model to
account for such tendency and explore the properties of the model. | Shou-hua Zhu | 2023-09-14T00:11:20Z | http://arxiv.org/abs/2309.08633v2 | # CDF W mass anomaly revisited
###### Abstract
The CDF, ATLAS and LHCb have released measurements of the W boson mass \(m_{W}\) at \(\sqrt{S}=1.96,7,13TeV\), respectively. The measured values show a declining tendency, namely \(m_{W}\) decreases with the increment of the collider energy. If the declining tendency is confirmed, it might be a signal of the metric field at high energy colliders. In this paper, we propose a model to account for such a tendency and explore the properties of the model.
## I Introduction
In 2022, a new analysis of the W mass based on CDF data with 8.8 \(fb^{-1}\) integrated luminosity was released. CDF is one of the detectors at the \(p\overline{p}\) collider Tevatron with energy as high as \(\sqrt{S}=1.96TeV\). The result [1]
\[m_{W}=80,433.5\pm 9.4MeV \tag{1}\]
is quite extraordinary, since it is more than \(7\sigma\) higher than that of the standard model (SM) global fit. It has stimulated numerous studies [2] on the discrepancy. As the counterpart at the Tevatron, the D0 result is [3]
\[m_{W}=80,375\pm 11\pm 20MeV. \tag{2}\]
Since CDF and D0 disagree significantly, further inputs from other experiments are necessary. The author has experienced many anomalies which have come and gone; an interesting anomaly needs to be taken seriously only once it is confirmed. Recently, we have been considering dropping the single Ricci scalar term in the Lagrangian and looking for phenomenological evidence. The new CDF result happens to be one of the indications, which is why we wrote this paper more than one year after the CDF paper [1].
In order to explore unknown physics from experimental measurements, the more reliable way is to first examine a single experiment, or at least experiments at the same energy. After all, different experiments with various colliding beams and energies may introduce unknown physical effects, besides man-made mistakes. In the following, we will examine other W mass measurements.
Compared with the CDF analysis, the W mass is measured to be [4]
\[m_{W}=80,370\pm 19MeV \tag{3}\]
by ATLAS using \(\sqrt{S}=7TeV\) data. ATLAS is one of the detectors at the \(pp\) collider LHC. The discrepancy between CDF and ATLAS is around \(3\sigma\). The W mass is measured to be [5]
\[m_{W}=80,354\pm 23\pm 22MeV \tag{4}\]
by LHCb using \(\sqrt{S}=13TeV\) data. There were also other analysis results from the four detectors ALEPH, DELPHI, L3 and OPAL at the \(e^{+}e^{-}\) collider LEP [6]. However, the central values of the old LEP results are quite diverse, and the uncertainty of any single detector is considerably larger than those of CDF and ATLAS.
From Eqs. (1), (3) and (4), we can see that the central values of \(m_{W}\) decline with the increment of the colliding energy. Frankly speaking, the statistical significance of this declining tendency is not so high. It is quite interesting to know whether the future CMS analyses with \(\sqrt{S}=7,8\) and \(13TeV\) data can confirm the tendency or not.
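The tendency can be quantified with a weighted linear fit of the three quoted values against the collider energy; a minimal sketch (combining LHCb's statistical and systematic uncertainties in quadrature is our assumption):

```python
import numpy as np

# (sqrt(S) [TeV], m_W [MeV], uncertainty [MeV]) from Eqs. (1), (3) and (4)
E = np.array([1.96, 7.0, 13.0])
m = np.array([80433.5, 80370.0, 80354.0])
s = np.array([9.4, 19.0, np.hypot(23.0, 22.0)])

# Weighted least squares for m_W = a + b * sqrt(S)
W = np.diag(1.0 / s**2)
A = np.vstack([np.ones_like(E), E]).T
cov = np.linalg.inv(A.T @ W @ A)
a, b = cov @ (A.T @ W @ m)
print(f"slope b = {b:.1f} +/- {np.sqrt(cov[1, 1]):.1f} MeV/TeV")
```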
In fact, there is a similar declining tendency in the top quark mass measurements. The latest top quark mass, combining CDF and D0 data, is [7]
\[m_{t}=174.30\pm 0.35\pm 0.54GeV \tag{5}\]
by direct measurement at the Tevatron with \(\sqrt{S}=1.96TeV\). At the higher-energy LHC, the top quark mass is
\[m_{t} = 172.69\pm 0.25\pm 0.41GeV \tag{6}\] \[m_{t} = 172.44\pm 0.13\pm 0.47GeV \tag{7}\]
at ATLAS [8] and CMS [9] respectively with \(\sqrt{S}=7,8TeV\) data. The latest CMS result using \(\sqrt{S}=13TeV\) data is [10]
\[m_{t} = 171.77\pm 0.37GeV. \tag{8}\]
In fact, this is only one measurement, and it is fair to wait for the combined value with other \(13TeV\) data.
One may naturally wonder about the mass of the Z boson. Actually, there is a very precise measurement from the Z-pole data at LEP-1; however, a comparably precise measurement from other experiments is absent. Although the precision of other experiments is not comparable with that of LEP-1 [7], it is quite interesting and important to examine the above-mentioned declining tendency for \(m_{Z}\) at the LHC.
We have tediously enumerated the measured values of the heavy particles \(m_{W}\), \(m_{Z}\) and \(m_{t}\). Note that they should be constants in the SM, since the energy scale dependence has been removed after including the higher order effects during the data fitting. Namely, their values should be the same at the different colliders: LEP, Tevatron and LHC. If the measured values really decrease
with increasing collider energy, how can one account for such effects and the declining tendency? In this paper, we attribute such effects to a different metric field at different colliding energies. On the one hand, this is quite natural, since different energies will cause different metric fields in the reaction region. On the other hand, such effects would usually be thought of as negligibly tiny, since they are suppressed by the Planck scale according to common wisdom. However, the physics of the highest energy regime reached by high energy colliders should basically be assumed as not fully known. Such a possibility is still open. In the following, we will focus on this theoretical explanation.
## II The Model
The Lagrangian of proposed model can be written as
\[\mathcal{L}=\mathcal{L}_{g}+\mathcal{L}_{m}.\]
Here the general coordinate invariant pure gravity and matter Lagrangians with the metric field \(g\) and the weak doublet Higgs field \(\Phi\) are
\[\mathcal{L}_{g}=\sqrt{g}\left\{-\kappa R\right\} \tag{9}\]
\[\mathcal{L}_{m} = \sqrt{g}\left\{-g^{\mu\nu}\partial_{\mu}\Phi^{\dagger}\partial_{ \nu}\Phi+\lambda\left(\Phi^{\dagger}\Phi\right)^{2}\right. \tag{10}\] \[\left.+\Phi^{\dagger}\Phi\left(\xi R-\mu^{2}\right)+\Lambda_{0} \right\}.\]
The convention is the same as in Ref. [11], namely a purely imaginary time coordinate with \(g_{\mu\nu}=\delta_{\mu\nu}\) for flat space, and the gravitational coupling \(\frac{c^{4}}{16\pi G_{N}}=\frac{m_{P}^{2}}{16\pi}\) is chosen as 1. Here \(\kappa\) is a free parameter to be determined. The \(\mu^{2}\) term is needed as in the usual SM Lagrangian; it induces the masses of all particles in the SM after the electro-weak symmetry is spontaneously broken. \(R\) is the usual Ricci scalar and \(\xi\) is a dimensionless free parameter. \(\Lambda_{0}\) is an allowed free constant parameter.
The electro-weak symmetry breaking is realized through Higgs field acquiring the vacuum expectation value (\(v\))
\[<\Phi>=v+H\]
and the \(H\) is the physical Higgs field. Here
\[v^{2}=\frac{1}{2}\frac{\mu^{2}-\xi R}{\lambda} \tag{11}\]
Different colliders correspond to different \(R\). Currently, detailed calculations for high energy collisions are not available. In the long run, R should be calculated for the colliding case. As the simplest approximation, R is assumed to be a different constant for collisions at different energies, which can be extracted from the experimental data. As usual, \(m_{W}\), \(m_{t}\), \(m_{Z}\) and \(m_{H}\) are all proportional to \(v\). As shown in Eq. (11), the declining tendency for colliders at different energies should be a universal behavior.
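Since every SM mass in this model is proportional to \(v\), Eq. (11) implies a universal fractional shift. The following back-of-the-envelope sketch (arbitrary units; the parameter values are illustrative, not fitted) shows the size of \(\xi R/\mu^{2}\) needed for shifts of the observed order:

```python
import numpy as np

def v_of_R(R, mu2, lam, xi):
    """Eq. (11): v(R)^2 = (mu^2 - xi * R) / (2 * lambda)."""
    return np.sqrt((mu2 - xi * R) / (2.0 * lam))

# All heavy masses scale with v, so dm/m = v(R)/v(0) - 1 = sqrt(1 - xi*R/mu^2) - 1
mu2, lam = 1.0, 0.13   # arbitrary units; only the ratio xi*R/mu^2 matters
for xiR in (2e-4, 1e-3, 2e-3):
    shift = v_of_R(xiR, mu2, lam, xi=1.0) / v_of_R(0.0, mu2, lam, xi=1.0) - 1.0
    print(f"xi*R/mu^2 = {xiR:.0e} -> dm/m = {shift:+.2e}")
```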
Based on current mass measurements, the order of magnitude of the variation among LEP, Tevatron and LHC should be \(O(10^{-3})\sim O(10^{-4})\). Can the contribution from \(\xi R\) be so large? Basically, \(\xi\) is an arbitrary dimensionless parameter, which can be determined empirically. We will argue theoretically that the contribution can be large. Due to the renormalizability criterion, as discussed in the next section, we will drop the R-term in Eq. (9). In order to reproduce the usual Hilbert-Einstein gravity, \(\xi\) is fixed to be \(O(m_{P}^{2}/v^{2})\). Such a value is much larger than the usual assumption, for example in the case of Higgs inflation models. Usually, the range with a sizable gravity effect is estimated as
\[r\sim G_{N}E=\frac{E}{m_{P}^{2}},\]
where \(E\) is the effective collider energy. Due to the \(\xi\) enhancement, the range becomes
\[r\sim\xi\frac{E}{m_{P}^{2}}\sim\frac{m_{P}^{2}}{v^{2}}\frac{E}{m_{P}^{2}}= \frac{E}{v^{2}}.\]
After the electro-weak symmetry breaking with \(v^{2}=\mu^{2}/(2\lambda)\),
\[-\kappa+\xi v^{2}=-1\]
which is empirically required by the Newtonian gravity. And
\[\lambda v^{4}-\mu^{2}v^{2}+\Lambda_{0}=\Lambda\]
which is the cosmological constant. After symmetry breaking, the induced Lagrangian becomes
\[\mathcal{L}=\sqrt{g}\left\{-R+\Lambda+\cdots\right\}. \tag{12}\]
## III Theoretical reason to drop \(R\) term
In order to make a sizable \(\xi R\) contribution at high energy colliders plausible, we will explore the theoretical motivation to drop the \(R\) term, namely \(\mathcal{L}_{g}=0\), due to the renormalizability criterion.
Renormalizability and the associated infinities are usually thought of as annoying; however, they can be treated as a tool, even a principle, to construct a meaningful theory. To illustrate the key difficulty in renormalizing gravity, we utilize a toy model with only one real scalar field \(\phi\). \(\kappa\) is taken as 1 in Eq. (9). The Lagrangian of matter of Eq. (10) is replaced by
\[\mathcal{L}_{m}=\sqrt{g}\left\{-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial _{\nu}\phi+\frac{1}{2}\phi M^{2}\phi\right\}. \tag{13}\]
Treating \(g\) as the external source, the counter-terms at one-loop level can be extracted from Ref. [11]
\[\Delta{\cal L} = \frac{\sqrt{g}}{\epsilon}\left\{\frac{1}{4}\left(M^{2}-\frac{1}{6} R\right)^{2}\right. \tag{14}\] \[\left.+\frac{1}{120}\left(R_{\mu\nu}R^{\mu\nu}-\frac{1}{3}R^{2} \right)\right\}\]
where \(\epsilon=8\pi^{2}(D-4)\) and \(D\) is the dimension of the space-time.
Ref. [11] has argued that the unrenormalizable term in Eq. (14), namely the \(RM^{2}\) term, can be eliminated by adding the specific term \(\frac{1}{12}R\phi^{2}\) to the original Lagrangian of Eq. (13). However, the unrenormalizable terms \(R^{2}\) and \(R_{\mu\nu}R^{\mu\nu}\) remain. The situation becomes even worse after including contributions from the gravitons in the loops. It seems impossible to generally eliminate all unrenormalizable terms by modifying the original Lagrangian. This is the key argument that gravity is an unrenormalizable theory. As shown in this simple exercise, there exists a fundamental difficulty in renormalizing gravity in this way. Some fundamental aspect of gravity has to be changed.
As the basic requirement of a renormalizable theory, counter-terms of new forms beyond the original Lagrangian are not allowed. It seems this is only possible by treating the metric as a coupling parameter instead of a dynamical field. Under this assumption, the metric is not a dynamical field, namely the kinetic term \(R\) is dropped. Provided that the metric acts only as a parameter, the form of the counter-terms in Eq. (14) is the same as the original Lagrangian in Eq. (13). All the previously unrenormalizable terms \(RM^{2}\), \(R^{2}\) and \(R_{\mu\nu}R^{\mu\nu}\) are functions of \(g_{\mu\nu}\), which is the building block of the original Lagrangian. From this point of view, the model must be renormalizable, as it should be. The renormalizability of the toy model of Eq. (13) is guaranteed by the properties of the dynamical quantum field \(\phi\). The metric only becomes a dynamical field after the electro-weak symmetry breaking, as shown in the last section.
In principle, a realistic model should include all theoretically allowed terms. In Eqs. (9) and (10), the kinetic term \(R\) and higher powers of \(R\) are not allowed, since these terms break either the renormalizability of the theory or the vacuum stability. In this sense, renormalizability is treated as a principle to construct a physical theory.
For the general case, the quantum behavior can be written as the path integral of dynamical field \(\phi\)
\[Z=\int D\phi\exp\left\{iS\right\}. \tag{15}\]
Note that the metric field \(g\) is not a priori assumed to be a dynamical field. The action \(S\) can be divided into metric and other (matter) parts
\[S=S(g)+S(g,\phi)+S(\phi). \tag{16}\]
As such, \(\exp\{iS(g)\}\) is independent on the quantum field (\(\phi\)) and can be dropped and the path integral can be simplified as
\[Z=\int D\phi\exp\left\{iS(g,\phi)+iS(\phi)\right\}. \tag{17}\]
The metric, as a dynamical field after the electro-weak symmetry breaking, manifests itself only classically. Eq. (10) is only a specific realization of the general case.
## IV Conclusion and discussion
This paper has explored the possible implications of the CDF W mass anomaly. We propose a model to account for the possible collider-energy dependence of the \(m_{W}\) and \(m_{t}\) measurements. If such a dependence is confirmed, it may be a signature of the metric field at high energy colliders. Several future experimental measurements are warmly welcomed, especially of \(m_{W}\), \(m_{t}\), \(m_{Z}\) and \(m_{H}\) at the LHC with \(\sqrt{S}=7/8\) and \(13TeV\). Note that a global fit over various experiments with different energies is invalid if the sizable metric field contributions are not included. Meanwhile, the model also influences the Higgs study in the high energy regime and the early evolution of the Universe.
### Availability of data and material
The data analyzed during the current study are all available from the published papers or preprints.
### Competing interests
The authors declare that they have no competing interests.
### Funding
This work is supported by the National Science Foundation of China under Grants No. 11635001, 11875072.
### Authors' contributions
Shou-hua Zhu is the sole author with all responsibility of the draft.
###### Acknowledgements.
The author thanks Hai-Bo Li and Qiang Li for illuminating discussions on Z mass measurement. |
2308.16724 | Data-driven Product-Process Optimization of N-isopropylacrylamide
Microgel Flow-Synthesis | Microgels are cross-linked, colloidal polymer networks with great potential
for stimuli-response release in drug-delivery applications, as their size in
the nanometer range allows them to pass human cell boundaries. For applications
with specified requirements regarding size, producing tailored microgels in a
continuous flow reactor is advantageous because the microgel properties can be
controlled tightly. However, no fully-specified mechanistic models are
available for continuous microgel synthesis, as the physical properties of the
included components are only studied partly. To address this gap and accelerate
tailor-made microgel development, we propose a data-driven optimization in a
hardware-in-the-loop approach to efficiently synthesize microgels with defined
sizes. We optimize the synthesis regarding conflicting objectives (maximum
production efficiency, minimum energy consumption, and the desired microgel
radius) by applying Bayesian optimization via the solver ``Thompson sampling
efficient multi-objective optimization'' (TS-EMO). We validate the optimization
using the deterministic global solver ``McCormick-based Algorithm for
mixed-integer Nonlinear Global Optimization'' (MAiNGO) and verify three
computed Pareto optimal solutions via experiments. The proposed framework can
be applied to other desired microgel properties and reactor setups and has the
potential of efficient development by minimizing number of experiments and
modelling effort needed. | Luise F. Kaven, Artur M. Schweidtmann, Jan Keil, Jana Israel, Nadja Wolter, Alexander Mitsos | 2023-08-31T13:40:08Z | http://arxiv.org/abs/2308.16724v1 | # Data-driven Product-Process Optimization of N-isopropylacrylamide Microgel Flow-Synthesis
###### Abstract
Microgels are cross-linked, colloidal polymer networks with great potential for stimuli-response release in drug-delivery applications, as their size in the nanometer range allows them to pass human cell boundaries. For applications with specified requirements regarding size, producing tailored microgels in a continuous flow reactor is advantageous because the microgel properties can be controlled tightly. However, no fully-specified mechanistic models are available for continuous microgel synthesis, as the physical properties of the included components are only studied partly. To address this gap and accelerate tailor-made microgel development, we propose a data-driven optimization in a hardware-in-the-loop approach to efficiently synthesize microgels with defined sizes. We optimize the synthesis regarding conflicting objectives (maximum production efficiency, minimum energy consumption, and the desired microgel radius) by applying Bayesian optimization via the solver "Thompson sampling efficient multi-objective optimization" (TS-EMO). We validate the optimization using
the deterministic global solver "McCormick-based Algorithm for mixed-integer Nonlinear Global Optimization" (MAiNGO) and verify three computed Pareto optimal solutions via experiments. The proposed framework can be applied to other desired microgel properties and reactor setups and has the potential to accelerate development by minimizing the number of experiments and the modelling effort needed.
**Key words:** microgel synthesis, flow-chemistry, Bayesian optimization, product-process optimization
## 1 Introduction
The size of microgels in the nano- and micrometer range and the microgel's ability to react reversibly to external stimuli of temperature, pH, or electrical potential in the surrounding medium [1] is highly relevant for their application. The microgel size has been studied for biomedical [2, 3, 4], phase separation [5, 6, 7], and catalysis [8] applications. Microgels in the nanometer size range have previously been applied for biomedical purposes, e.g., for drug delivery agents for medical uptake and release [3, 4] or implant coating [2]. In biomedical applications, microgels are particularly relevant, as their small size allows them to pass the human cell boundary [3]. For the cellular uptake, it was found that microgels of a hydrodynamic radius in the swollen state (at \(20\,^{\circ}\mathrm{C}\)) above \(400\,\mathrm{nm}\) and a cross-linker content above \(10\,\mathrm{mol}\,\%\) prevent microgel internalization.
The synthesis of microgels in flow reactors can overcome shortcomings of batch reactors, e.g., limited production capacity and downtime between batches, and enhances product development, intensifies production, and facilitates reaction scale-up [9, 10, 11, 12]. Furthermore, including process analytical technology in flow reactors allows in-line monitoring and process control under highly reproducible conditions [11, 12, 13]. Thus, continuous production enables the reliable synthesis of microgels.
To unfold the full potential of microgels, accelerating the development of tailor-made microgels is desirable. A faster development can be achieved by producing microgels in a continuous reactor mode, as it simplifies up-scaling to large-scale industrial production. Furthermore, model-based approaches facilitate the optimization of microgels with tailored properties. Computational models for describing microgel growth during the synthesis are very sparse and comprise mechanistic models suited for batch reaction exclusively [14, 15, 16, 17, 18, 19]. Our previous study [12] revealed significant deviations between the reaction progress in batch and flow reactors in the microgel synthesis. In particular, we cannot transfer the batch model equations straight to a plug-flow model, but rather we must consider diffusion effects, temperature distribution, and rheological aspects. The physical properties such as diffusivity coefficient and viscosity are not known during the microgel synthesis, so mechanistic modeling of the flow process is restricted.
To address this gap, we propose a data-driven hardware-in-the-loop optimization for N-isopropylacrylamide-based microgels, one of the most widely studied thermo-responsive microgel systems. The data-driven
approach facilitates the reaction optimization of the microgel synthesis in flow. We apply Thompson sampling efficient multi-objective optimization (TS-EMO) [20] to enhance the experimental synthesis design iteratively. The TS-EMO solver is based on the Thompson sampling algorithm, a popular approach in Bayesian optimization. Bayesian optimization searches for a (global) optimum with a focus on efficiency, i.e., aiming for a small number of function evaluations. Efficiency is crucial when function evaluations are costly, e.g., require experimentation or extensive computation. In Bayesian optimization, a probabilistic model (also called surrogate or digital twin) of the objective function is constructed and iteratively updated as new data points are evaluated. The surrogate models are constructed via Gaussian Processes (GPs). GPs are considered an effective surrogate model as they provide predictions and variance estimates while relying on relatively few data points [21]. Black-box optimization involving GPs for chemical synthesis has been successfully applied for various reactions [22], including pharmaceutical product development [23], electrochemical reductive carboxylation [24], and polymerization [25]. Based on the surrogate model, a new set of input conditions is proposed for the next experimentation while considering the exploration-exploitation trade-off. The goal is to find the input variable values that minimize the objective function. TS-EMO extends the Thompson sampling algorithm to the multi-objective optimization setting. The promising performance of TS-EMO concerning data efficiency, capacity to handle noise, and the ability for batch-sequential usage [20] makes the algorithm suitable for the optimization of microgel synthesis.
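To illustrate the core building block, a single-objective Thompson-sampling loop with a GP surrogate is sketched below. TS-EMO itself extends this idea to several objectives with hypervolume-based batch selection; the toy objective, kernel choice, and hyperparameters here are our own illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def thompson_step(X_obs, y_obs, candidates, seed):
    """Fit a GP to the data, draw one posterior sample on the candidate
    grid, and return the sample's minimizer as the next experiment."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4,
                                  normalize_y=True)
    gp.fit(X_obs, y_obs)
    sample = gp.sample_y(candidates, n_samples=1, random_state=seed).ravel()
    return candidates[np.argmin(sample)]

rng = np.random.default_rng(0)
f = lambda x: (x - 0.3) ** 2 + 0.01 * rng.normal(size=x.shape)  # noisy toy objective
X = rng.random((5, 1)); y = f(X).ravel()                        # initial design
grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
for it in range(10):
    x_next = thompson_step(X, y, grid, seed=it)
    X = np.vstack([X, x_next]); y = np.append(y, f(x_next.reshape(1, 1)))
print("best observed x:", X[np.argmin(y)])
```

Because each iteration proposes the minimizer of a random posterior draw rather than of the posterior mean, the loop naturally balances exploitation of promising regions with exploration of uncertain ones.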
As the microgel size is a highly relevant product characteristic in the mentioned applications, we aim to produce microgels of a targeted size (product feature). Simultaneously, we optimize the product flow and energy demand (process features) because the synthesis has to meet economic and ecological requirements. The synthesis procedure highly influences the characteristics of microgels, and multiple influences on the microgel size have been discovered experimentally. The surfactant [26; 27; 28; 6; 29; 9; 30], monomer [31], cross-linker [27; 32; 33; 34], and initiator [31; 35; 36] concentration in the synthesis impact the microgel size. Also, the process conditions, including reactor temperature [27; 31] and flow profile [31; 10], determine the microgel size. For the synthesis of microgels with constant cross-linking fraction, we include the reaction temperature, initiator and monomer flow, and the surfactant concentration as variable inputs in our data-driven study.
Since TS-EMO is a stochastic optimization algorithm, it does not guarantee finding the global optimum. We therefore conduct a computational validation step via global deterministic optimization using our open-source software MAiNGO (McCormick-based Algorithm for mixed-integer Nonlinear Global Optimization) [37]. MAiNGO has been demonstrated to be very suitable for optimization with GPs embedded [38]. The global deterministic optimization ensures that for a given GP and acquisition function the optimal solution is found. The computed Pareto-optimal solutions are computed based on the GPs trained on the experimental data. Thus, the Pareto-optimal points are estimates and need to be validated experimentally to show that we are truly able to synthesize the desired microgel
and to ensure that computational prediction and real experiment agree. Therefore, in addition, we validate our optimization results experimentally. We conduct the proposed synthesis of a selection of Pareto-optimal points and compare the experimental outcome to the computed findings.
We structure the remaining manuscript as follows. Section 2 describes the experimental setup of the microgel synthesis in the flow reactor. Section 3 reports our optimization approach, including the TS-EMO algorithm, the initial data set, and the problem setup using MAiNGO. Section 4 presents the results of the optimization studies and the computational and experimental validation. Finally, we conclude our work in Section 5.
## 2 Experimental
### Materials
\(N\)-isopropylacrylamide (NIPAM) (97%, ITC Chemicals) is distilled under vacuum for purification and recrystallized from hexane. 2,2'-azobis(2-methylpropionamidine) dihydrochloride (AMPA) (97%, Sigma-Aldrich), \(N\),\(N^{\prime}\)-methylenebis(acrylamide) (BIS) (99%, Sigma-Aldrich), and hexadecyltrimethylammonium bromide (CTAB) (\(\geq\)97%, Merck) are used as received. Deionized water (referred to as water) is produced in-house (conductivity \(0.8\,\mathrm{\mu S\,cm^{-1}}\) at \(25\,^{\circ}\mathrm{C}\)).
### Microgel synthesis in flow reactor
We synthesized microgels via precipitation polymerization inside a tubular glass reactor setup, as described in detail in our previous publication [12]. In the following, we provide a brief summary of this experimental setup. Two feed solutions are created, where the monomer and initiator are dissolved in water. The monomer solution contains deionized water with 110.6 \(\mathrm{mmol\,L}^{-1}\) of NIPAM, 2.7 \(\mathrm{mmol\,L}^{-1}\) of cross-linker BIS, and 0.41 \(\mathrm{mmol\,L}^{-1}\) of surfactant CTAB. Thus, the resulting microgels contain a cross-linker fraction of 2.5 \(\mathrm{mol\,\%}\). The initiator solution comprises deionized water with 1.5 \(\mathrm{mmol\,L}^{-1}\) of initiator AMPA. Both solutions (initiator and monomer) are constantly degassed using nitrogen. The flow rates of the monomer and initiator solution can be controlled between 2 \(\mathrm{mL\,min}^{-1}\) and 18 \(\mathrm{mL\,min}^{-1}\) and between 0.1 \(\mathrm{mL\,min}^{-1}\) and 0.9 \(\mathrm{mL\,min}^{-1}\), respectively. Hence, the overall flow rate and the ratio between both feed flows can be adapted. An external heating bath heats the reactor to the reaction temperature. We adjust the reactor temperature between \(60\,^{\circ}\mathrm{C}\) and \(80\,^{\circ}\mathrm{C}\). The produced microgels exit the reactor, and the solution is cooled to stop the reaction. During the continuous synthesis, we use Raman spectroscopy to determine the weight fraction of the remaining NIPAM (\(w_{NIPAM}\)) via in-line measurements. Raman spectra are recorded in HoloGRAMS (Kaiser Optical Systems, Ann Arbor, Michigan, USA) with cosmic ray correction using an RXN2 Raman Analyzer (Kaiser Optical Systems) and an acquisition time of 40 \(\mathrm{s}\). More details on the Raman measurement configuration are described in our previous work [12]. We assess the Raman spectra using
an evaluation model based on Indirect Hard Modeling [39], which we previously developed [12]. We published the calibration measurements for the model development for transparency and reproducibility [40]. In an off-line step, we use the Zetasizer Ultra (Malvern Panalytical, Malvern, UK) to determine the hydrodynamic diameter (\(D_{H}\)) of the collapsed microgels via Dynamic Light Scattering (DLS). The microgel samples are diluted in ultrapure water and prepared in a disposable capillary cell of the type DTS0012 for the DLS measurements. The measurements are carried out at 50 \({}^{\circ}\)C with an angle of 90\({}^{\circ}\) (side scatter). Each measurement is repeated four times, and the software ZS Xplorer analyzes the results. We exclude experimental data points where the DLS measurements are unreliable due to a high relative error of the microgel size or an increased polydispersity index, indicating that no microgels formed.
## 3 Computational
The following section is structured as follows. First, we formulate the optimization problem considering the goals and limitations of the experimental setup, see Sec. 3.1. In Sec. 3.2, we describe the procedure for generating a set of experiments to initialize the iterative optimization study. Next, we outline the conducted optimization studies in a high-level description in Sec. 3.3. Further, we give details on the basic operating principle of the employed TS-EMO algorithm in our hardware-in-the-loop setup and the validation approach via global deterministic optimization and the optimization problem definition therein in Sec. 3.3.1 and 3.3.2, respectively.
### Optimization problem definition
The optimization aims to find optimal settings for the synthesis to generate a high product output at short residence times and precise, targeted microgel sizes while minimizing the reaction temperature at steady-state. Furthermore, the objectives must be determined from outputs quantifiable via established monitoring techniques.
The reaction system has four optimization variables as inputs \(\mathbf{x}\): reaction temperature \(T\), surfactant concentration \(c_{CTAB}\), and flow rates of the initiator \(F_{I}\) and monomer \(F_{M}\) solution. The bounds on the inputs are presented in Tab. 1. The range of \(T\) comprises the minimum of 60 \({}^{\circ}\)C when the initiator decomposition effectively sets in [41] and the maximum of 80 \({}^{\circ}\)C when solvent evaporation becomes an issue. The bounds on \(c_{CTAB}\) are based on the reaction experience that no colloidal stability sets in below the lower limit. Generally, a higher \(c_{CTAB}\) causes a smaller microgel size. Thus, we determined the upper limit for \(c_{CTAB}\) based on preliminary experiments. The pump's capacity defines the limits for the monomer and initiator solution flow rates. Furthermore, at the minimum \(F_{M}=2\,\mathrm{mL}\,\mathrm{min}^{-1}\), which entails the maximum residence time in the reactor (approximately 1800 s), the final conversion is reached, as discovered in our previous work [12]. The employed upper bounds allow for achieving the desired microgel size range, as we conclude from empirical knowledge.
The concentration of the monomer NIPAM (\(c_{NIPAM}=110.6\,\mathrm{mmol\,L^{-1}}\)) in the stock solution, and the ratio of monomer to cross-linker BIS are kept constant for the reaction optimization to maintain a cross-linking fraction of \(2.5\,\mathrm{mol\%}\) within the microgel.
We measure two quantities of the system at the end of the reaction: The weight fraction of the monomer NIPAM \(w_{NIPAM}\) and the microgel's hydrodynamic radius \(r_{H}\). From the measurements, we derive two quantities \(\mathbf{y}\) for the surrogate model data set: The product flow (\(F_{Product}\)) and the squared deviation from the targeted microgel size (\(\Delta r_{H}^{2}\)). The product flow characterizes the reactor efficiency and is computed via:
\[F_{Product}=\frac{w_{NIPAM,0}-w_{NIPAM,f}}{w_{NIPAM,0}}\cdot(F_{I}+F_{M}),\]
where \(w_{NIPAM,0}\) and \(w_{NIPAM,f}\) denote the initial and final NIPAM weight fraction.
The output \(\Delta r_{H}^{2}\) is calculated as the squared difference between the measured and targeted hydrodynamic radius:
\[\Delta r_{H}^{2}={(r_{H,measured}-r_{H,target})}^{2}.\]
The targeted microgel size in this contribution is a hydrodynamic radius of \(100\,\mathrm{nm}\) in the collapsed state at \(50\,\mathrm{\SIUnitSymbolCelsius}\), as this size range is relevant for medical applications that require passing the human cell boundary. Previously, it was found that microgels with a hydrodynamic diameter above \(800\,\mathrm{nm}\) in the swollen state are unsuitable for cellular uptake [3]. This size corresponds to a hydrodynamic radius of approximately \(222\,\mathrm{nm}\) in the collapsed state. Thus, microgels of a hydrodynamic radius of \(100\,\mathrm{nm}\) are expected to achieve fast cellular uptake kinetics.
The efficient microgel production targets a low reaction temperature as heating contributes significantly to energy consumption. The reaction temperature \(T\) is an input to the reactor system; hence, no additional measurement technology is needed. The difference to the minimum allowable temperature (see Tab. 1) is defined as another objective function:
\[\Delta T=T-T_{min}.\]
Technically, the input temperature can be used as an objective function directly. However, we use the temperature difference as the objective to scale the temperature values to a similar magnitude as the flow rates and to underline the generality of the method.
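To make the objective definitions concrete, the following minimal Python sketch assembles the three objective values from the measured quantities, following the equations above. The function and variable names are illustrative and not taken from the authors' code.

```python
def objectives(w_nipam_0, w_nipam_f, f_i, f_m, r_h_measured, t,
               r_h_target=100.0, t_min=60.0):
    """Return (F_Product, delta_r_H^2, delta_T) for one steady-state experiment."""
    conversion = (w_nipam_0 - w_nipam_f) / w_nipam_0  # from Raman (weight fractions)
    f_product = conversion * (f_i + f_m)              # mL/min
    delta_r_sq = (r_h_measured - r_h_target) ** 2     # nm^2, from DLS
    delta_t = t - t_min                               # K
    return f_product, delta_r_sq, delta_t

# Example: full conversion at F_I = 0.5, F_M = 8 mL/min, 103 nm radius, 65 degC
print(objectives(0.0125, 0.0, 0.5, 8.0, 103.0, 65.0))  # (8.5, 9.0, 5.0)
```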
\begin{table}
\begin{tabular}{l c c c} \hline Variable & Unit & Lower bound & Upper bound \\ \hline \(F_{I}\) & \(\mathrm{mL\,min}^{-1}\) & 0.1 & 0.9 \\ \(F_{M}\) & \(\mathrm{mL\,min}^{-1}\) & 2 & 18 \\ \(c_{CTAB}\) & \(\mathrm{mmol\,L^{-1}}\) & 0.14 & 0.41 \\ \(T\) & \(\mathrm{\SIUnitSymbolCelsius}\) & 60 & 80 \\ \hline \end{tabular}
\end{table}
Table 1: Bounds on input variable values.
The resulting multi-objective optimization problem is summarized in the following:
\[\min_{\mathbf{x}\in[\mathbf{x}^{L},\mathbf{x}^{U}]}\;-F_{Product}\,,\;\Delta r_{H} ^{2},\;\Delta T,\]
where, \(\mathbf{x}=[F_{I},F_{M},c_{CTAB},T]\), and \(\mathbf{x}^{L}\) and \(\mathbf{x}^{U}\) denote their corresponding lower and upper bounds as presented in Tab. 1.
### Initial data set
Effective initial values are important to initialize the data-driven optimization algorithm. Often random choices are taken as initial guesses, without distinguishing between variables. However, we aim for efficient usage of experimental resources and accordingly devised the following tailored initialization. We configure three groups of experiments, each comprising five experimental settings. The division is visualized in Fig. 1. We distinguish between input variables \(T\) and \(c_{CTAB}\) that are at a fixed value for each group and inputs \(F_{M}\) and \(F_{I}\) that vary simultaneously within one group. We chose five settings per group, as this amount of experimental settings can be conducted within one day of working in the laboratory and is therefore practical. Furthermore, we decided to consider three groups of experiments as a trade-off between covering the input space of \(T\) and \(c_{CTAB}\) sufficiently and conducting a reasonable size of initial experiments in total.
Changing \(T\) between experimental runs relates to long transition times. Thus, we keep \(T\) at a fixed value for each group of experiments for an efficient proceeding. Also, \(c_{CTAB}\) is fixed for an experimental group, as preparing the monomer solution with different content of CTAB for each experiment execution is laborious and increases the risks of inserting air into the reactor system (oxygen inhibits the reaction) while decreasing the flexibility of the reaction setup. Therefore, keeping \(c_{CTAB}\) fixed constitutes a trade-off between effort for the synthesis preparation, risk of contamination, and loss of flexibility in synthesis execution.
We employ the lhsdesign function for Latin Hypercube Sampling (LHS) in MATLAB 2019b to determine the input values for the initial experiments. In the first step, we set the values for \(T\) and \(c_{CTAB}\) for each of the three groups via LHS. Subsequently, we perform LHS again for \(F_{I}\) and \(F_{M}\) within each group for five settings. In total, we derive an amount of 15 initial experiments.
Figure 1: Grouping of initial experiments designed via LHS.
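The two-level design can be reproduced approximately with the following Python sketch; the study itself used MATLAB's lhsdesign, so SciPy's qmc module serves here only as an illustrative stand-in, and the seeds are arbitrary.

```python
from scipy.stats import qmc

n_groups, n_per_group = 3, 5

# Level 1: one LHS draw of (T, c_CTAB) per experimental group.
outer = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(n_groups),
                  l_bounds=[60.0, 0.14], u_bounds=[80.0, 0.41])

# Level 2: within each group, an LHS draw of (F_I, F_M) for five settings.
experiments = []
for g, (t, c_ctab) in enumerate(outer):
    inner = qmc.scale(qmc.LatinHypercube(d=2, seed=g + 1).random(n_per_group),
                      l_bounds=[0.1, 2.0], u_bounds=[0.9, 18.0])
    experiments += [dict(T=t, c_CTAB=c_ctab, F_I=f_i, F_M=f_m)
                    for f_i, f_m in inner]

print(len(experiments))  # 15 initial experiments
```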
### General approach
We conduct a hardware-in-the-loop optimization study involving TS-EMO and a validation study including computational and experimental validation. In the hardware-in-the-loop approach, we employ TS-EMO to determine the next group of experiments based on an initial experimental data set. After the suggested group conditions are experimentally tested, we repeat the optimization process and subsequent experimentation until eleven iterations have been reached. Finally, we validate the results from the TS-EMO study computationally via global deterministic optimization using the software MAiNGO and experimentally with reaction settings from Pareto optimal points.
#### 3.3.1 TS-EMO algorithm
We apply TS-EMO [20] to the product-process optimization of the continuous microgel synthesis. The schematic setup of the reactor combined with the algorithm is shown in Fig. 2.
TS-EMO uses experimental data points \(\mathbf{x}^{(i)}=[F_{I},F_{M},c_{CTAB},T]\) and \(\mathbf{y}^{(i)}:=[F_{product},\Delta r_{H}^{2}]\) to create an approximation via a GP surrogate model of the unknown function \(f\). For the training of the GPs, we apply a Matérn type 1 function kernel. The third objective can directly be calculated from the input variables. In the multi-objective optimization step, Thompson sampling allows approximating the Pareto set of the optimal solutions. Here, we set the number of spectral sampling points to 4,000. Lastly, an optimal candidate set of input conditions \(\mathbf{x}^{(i+1)}=[F_{I,new},F_{M,new},c_{CTAB,new},T_{new}]\) is calculated to continue in the next experimental iteration loop. The settings incorporate a genetic algorithm with 1,000 generations for optimization.
Figure 2: Overview of the iterative multi-objective optimization of the microgel synthesis in flow using the TS-EMO algorithm. Solid arrows indicate material flow, dotted arrows represent information transfer.
In conclusion, the TS-EMO algorithm is provided with an initial experimental data set designed via LHS. The algorithm then provides a new set of experiments to be conducted in the following experimental round. Subsequently, in each optimization round, we determine a set of the following five experiments at one fixed \(T\) and \(c_{CTAB}\) with varying \(F_{I}\) and \(F_{M}\). We chose batch-sequential optimization, meaning evaluating multiple points in each iteration, as off-line DLS measurements are conducted more efficiently in batch preparation. In addition, we chose five experimental settings, as we can adequately conduct this quantity within one day of synthesis experimentation. The TS-EMO calculation and the experimentation are repeated in multiple iterations. Meanwhile, searching for the optimal recipe should take as few iterations as possible to decrease the experimental effort and expense of chemicals used. Thus, the hardware-in-the-loop procedure ends when a certain number of iterations have been performed or the executor decides that sufficient reaction knowledge has been gathered. In the presented study, we end the procedure after eleven iterations.
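For illustration, the following Python sketch mimics one batch-sequential iteration in the spirit of TS-EMO. It is not the TS-EMO implementation (which uses spectral sampling of the GP posterior and a genetic algorithm; the open-source code is referenced in Sec. 4): posterior draws from scikit-learn GPs stand in for Thompson samples, and the candidate set, the column ordering \([F_{I},F_{M},c_{CTAB},T]\), and the batch selection rule are our assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def is_pareto(costs):
    """Boolean mask of non-dominated rows (all objectives to be minimized)."""
    mask = np.ones(len(costs), dtype=bool)
    for i, c in enumerate(costs):
        if mask[i]:
            dominated = np.all(costs >= c, axis=1) & np.any(costs > c, axis=1)
            mask &= ~dominated
    return mask

def suggest_batch(X, Y, candidates, batch_size=5, seed=0):
    """One Thompson-sampling step: X holds tested inputs, Y the two measured
    objectives (-F_Product, delta_r_H^2); delta_T is computed from inputs."""
    draws = []
    for j in range(Y.shape[1]):
        gp = GaussianProcessRegressor(kernel=Matern(nu=0.5), normalize_y=True)
        gp.fit(X, Y[:, j])
        draws.append(gp.sample_y(candidates, random_state=seed).ravel())
    draws.append(candidates[:, 3] - 60.0)  # third objective: delta_T = T - T_min
    front = np.flatnonzero(is_pareto(np.column_stack(draws)))
    rng = np.random.default_rng(seed)
    pick = rng.choice(front, size=min(batch_size, front.size), replace=False)
    return candidates[pick]
```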
#### 3.3.2 Global deterministic optimization
For the computational validation, we use MAiNGO [37] to conduct a global deterministic optimization where \(F_{Product}\) acts as the single objective. Additionally, we apply the \(\epsilon\)-constraint method [42] to restrict the objective \(\Delta r_{H}^{2}\). As the remaining objective \(\Delta T\) is directly proportional to the input \(T\), we restrict the upper bound of the input variable \(T\) step-wise. For the global optimization, we use the experimental data received in the TS-EMO study and do not perform further experiments in the form of a hardware-in-the-loop approach. We set the starting point and the \(\epsilon\) values for the optimization based on the results derived from the hardware-in-the-loop study.
We rewrite the optimization problem to a single-objective formulation in reduced space:
\[\begin{split}\min_{\mathbf{x}\in[\mathbf{x}^{L},\mathbf{x}^{U}]}&-F_{Product}\\ \text{s.t.}&\quad\Delta r_{H}^{2}\leq\epsilon\end{split} \tag{1}\]
As stated above, the values for \(\epsilon\), \(\mathbf{x}^{U}\) of \(T\), and the starting point are derived from the results of the TS-EMO study.
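The \(\epsilon\)-constraint reformulation can be written down compactly in code. The sketch below uses a local SciPy solver as a lightweight stand-in for MAiNGO (which additionally provides deterministic global guarantees); the GP predictors `f_product` and `delta_r_sq` are assumed to be callables trained on the experimental data.

```python
from scipy.optimize import minimize

def solve_eps_constraint(f_product, delta_r_sq, eps, t_max, x0):
    """Maximize F_Product s.t. delta_r_H^2 <= eps, with x = [F_I, F_M, c_CTAB, T]."""
    bounds = [(0.1, 0.9), (2.0, 18.0), (0.14, 0.41), (60.0, t_max)]
    cons = [{'type': 'ineq', 'fun': lambda x: eps - delta_r_sq(x)}]
    res = minimize(lambda x: -f_product(x), x0, method='SLSQP',
                   bounds=bounds, constraints=cons)
    return res.x, -res.fun

# Sweeping eps downward from 25 nm^2 traces the Pareto front; tightening
# t_max (e.g. to 70, 62, or 61 degC) accounts for the temperature objective.
```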
## 4 Results and discussion
The results and discussion are organized as follows. First, we present our findings from the study involving TS-EMO with four inputs and three objectives in Sec. 4.1. There we show the Pareto optimal solutions for the three-dimensional objective system, the progression of the experimental outcome with accumulating experimentation, the error analysis of the measurements, and the Pareto optimal solutions for each of the four inputs. Subsequently, we display the results of the validation studies in
Sec. 4.2. The computational validation via global deterministic optimization is shown in Sec. 4.2.1. We re-formulated the optimization problem to a single objective with four input variables for the final study. In Sec. 4.2.2, we additionally exhibit the experimental validation of three Pareto optimal points. We provide all experimental data [43]. The data includes the raw Raman measurements and an evaluation of the DLS measurements. In addition, we make data points underlying the graphical representation of the results available in Supporting Information Sec. 2. The data points include the experimental data (Supporting Information Sec. 2.1) and the Pareto optimal solutions calculated via global deterministic optimization in the validation step (Supporting Information Sec. 2.2). As the Pareto optimal solutions calculated via TS-EMO are exhaustive, the data is not provided explicitly. The results can be re-constructed by applying TS-EMO on the experimental data provided. The software employed in this contribution is available open-source: TS-EMO [44] and MAiNGO [45] with MeLOn [46], the interface for embedded machine-learning models.
### Hardware-in-the-loop involving TS-EMO
We conduct eleven iterations for the hardware-in-the-loop optimization. We analyze the Pareto optimal solutions in detail regarding the feasible space of the objective values in Sec. 4.1.1. Next, the progression of the experimentation outcome, an analysis of the errors from the experimental measurements, and the computational uncertainty of the calculated Pareto front are presented in Sec. 4.1.2. Lastly, we evaluate the input variable values at the Pareto optimal points to derive suitable reactor settings for the desired microgel product in Sec. 4.1.3.
#### 4.1.1 Pareto optimal solutions
In the hardware-in-the-loop study, \(F_{I}\), \(F_{M}\), \(c_{CTAB}\), and \(T\) are varied as the inputs to the reactor, and \(F_{Product}\), \(\Delta r_{H}^{2}\), and \(\Delta T\) are the objectives. Fig. 3 shows the resulting Pareto front of the study. We used a population size of 5,000 to represent the three-dimensional Pareto front sufficiently. As visualizing three objectives is challenging, we proceed with a two-dimensional plot and add a color scale for the third objective to visualize the estimated Pareto front for better interpretation. However, it is crucial to remember that we are considering three-dimensional optimization results for the meaningful interpretation of the two-dimensional plots.
For the two-dimensional Pareto fronts in Fig. 3, the desired outcome of the multi-objective optimization regarding the product flow and the squared radius deviation, the utopia point, is located in the bottom left corner of the plot. Equally, small temperature deviations (depicted in dark blue) indicate the location of the utopia point in the third dimension. Looking at the results, it appears that the three objective functions are conflicting; thus, reaching the utopia point is impossible. In other words: the product flow rate becomes lower for microgels closer to the targeted size, and higher temperatures are needed for high product flow rates. In addition, the shaded area marks squared radius deviations corresponding to a difference of \(\pm 5\,\mathrm{nm}\) (i.e., \(5\%\)) from the desired size.
The analysis of the estimated Pareto front in Fig. 3 yields that up to \(6.0\,\mathrm{mL}\,\mathrm{min}^{-1}\) of product flow, a microgel size sufficiently close (\(\pm 5\,\mathrm{nm}\)) to the desired size is achievable. The microgel size deviation begins to diverge more strongly from the targeted value after a product flow rate of approximately \(6.5\,\mathrm{mL}\,\mathrm{min}^{-1}\) is reached. This deviation shows that product flow rates above a value of around \(6.5\,\mathrm{mL}\,\mathrm{min}^{-1}\) are incompatible with the targeted microgel size.
The temperature influences the optimal product flow more significantly than the optimal microgel size. This trend is represented by the color indicated temperature change that is more substantial along the x-axis than the y-axis. The underlying GPs (depicted in Supplementary Information Sec. 1) show that an increase in temperature generally accompanies an increase in product flow. Still, the product flow converges towards approximately \(6.5\,\mathrm{mL}\,\mathrm{min}^{-1}\) for temperatures above approximately \(70\,\mathrm{\SIUnitSymbolCelsius}\) (corresponding to \(10\,\mathrm{K}\) temperature deviation). Thus, low temperatures (below \(70\,\mathrm{\SIUnitSymbolCelsius}\)) are sufficient considering the trade-off between maximizing product flow and achieving the targeted microgel size, as above approximately \(70\,\mathrm{\SIUnitSymbolCelsius}\) only the product flow improves. Overall, the optimal temperature input spans the entire allowable range between \(60\,\mathrm{\SIUnitSymbolCelsius}\) to \(80\,\mathrm{\SIUnitSymbolCelsius}\). Furthermore, the GP for the squared radius deviation (Supplementary Information Sec. 1) shows an increase with rising temperatures. However, the correlation between reaction temperature and microgel size deviation appears highly non-linear and subject to inherent variance. Lastly, the underlying GP for the temperature deviation (Supplementary Information Sec. 1) confirms the successful training of the GPs, as the temperature deviation shows no correlation to \(F_{I}\), \(F_{M}\), or \(c_{CTAB}\), and is directly proportional to the input temperature values with little variance.
In conclusion, the results concerning a suitable microgel size at a high product flow and medium reactor temperatures are promising. The underlying GPs confirm our a priori reaction knowledge; thus, we can validate the functionality of the applied method elementarily. However, the GPs are occasionally subject to high variance, and the available data points are limited. Nevertheless, we can derive meaningful information about the synthesis, e.g., limiting the temperature to 70 \({}^{\circ}\)C is sufficient for successful synthesis. Furthermore, we find that a maximum product flow of 6.0 mL min\({}^{-1}\) is achievable when restricting the allowable microgel size deviation to \(\pm 5\) nm.
Figure 3: Estimated Pareto front of the hardware-in-the-loop study using TS-EMO: Squared radius deviation over product flow. The color scale indicates the temperature deviation. The x symbols mark the experimental data points. The shaded area maps the deviation of \(\pm 5\,\mathrm{nm}\) to the desired microgel radius.
#### 4.1.2 Experiment progression and error analysis
In Fig. 4, the calculated Pareto front is shown with the progression of the experimentation. The temperature and the surfactant concentration for each experimental group are listed in addition to the order of experiment progression on the color scale. In the graph, the stars mark the results from the initial experiments designed via LHS. The LHS ensures a good distribution over the input space. The initial experimental results also cover the output space adequately, indicating that the initial data set already provides a reasonable basis for information on the reaction.
Furthermore, the triangles depicted in a color scale represent the experimentally determined data points and their progression in the hardware-in-the-loop approach. In each set of experiments, five data points are received. We must neglect some data points due to the DLS measurement showing a high size distribution index (indicating that no real microgel was formed) or a high relative measurement error. Thus, a reduced amount of experimental data points is shown. There is no clear trend visible in the experiment progression, as the algorithm tries to balance exploitation and exploration in the design of the next experiment. The listed temperature and surfactant concentration values along the experimental progression show that the algorithm mostly explores temperature regions below 70 \({}^{\circ}\)C, while the surfactant concentration is varied over the entire allowed input space. Also, for the conducted experiments in this study, the algorithm does not repeat in any iteration the suggested experimental conditions regarding the combination of temperature and surfactant concentration. Although output measurements are sometimes excluded without further information to the algorithm, the algorithm does not try to re-evaluate the correlated input space. The batch-sequential procedure presumably achieves that the algorithm carries on without going back to previously tested conditions where no information was received. In other words: although no output information is gathered at certain input conditions within one experimental group, the information from the remaining input conditions within the group supports the algorithm enough.
In Fig. 5, the calculated Pareto front is shown with the computational standard deviation of the optimal points. Also, the experimental data points are depicted with the according experimental error bars. The magnitude of the experimental error is derived from the measurement technology. The evaluation model of the Raman measurements has an inherent root mean squared error of cross-validation (RMSECV) of 0.037 wt-%. The error propagation, including the RMSECV, is considered for the uncertainty of the product flow. For the DLS measurement, the Zetasizer Ultra internally evaluates the standard deviation over the four conducted measurements. This error value is also propagated for the uncertainty of the experimental squared particle size deviation.
Some experimental data points lie slightly below the estimated Pareto front. This phenomenon becomes visible in a three-dimensional analysis. However, the considered experimental data points lie within the calculated standard deviation of the estimated Pareto front for the squared radius deviation. Furthermore, the experimental error bars resulting from the DLS and Raman measurement errors are displayed to underline the magnitude of uncertainty inherent in the real-life experimental setup.
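As a rough illustration of how the Raman RMSECV enters the product-flow uncertainty, a first-order (Gaussian) error propagation can be applied to the product-flow equation from Sec. 3.1. The paper does not spell out the exact propagation, so the formula below is our assumption.

```python
import math

def f_product_sigma(w0, wf, f_i, f_m, sigma_w=0.037e-2):
    """Propagated std. dev. of F_Product when both NIPAM weight fractions
    carry the Raman RMSECV of 0.037 wt-% (expressed as a weight fraction)."""
    flow = f_i + f_m                # total flow in mL/min
    d_w0 = wf * flow / w0 ** 2      # partial derivative w.r.t. w_NIPAM,0
    d_wf = -flow / w0               # partial derivative w.r.t. w_NIPAM,f
    return math.sqrt((d_w0 * sigma_w) ** 2 + (d_wf * sigma_w) ** 2)
```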
#### 4.1.3 Pareto optimal solutions for different inputs
In Fig. 6(a) to 6(c), the Pareto front for the objectives \(\Delta r_{H}^{2}\) and \(F_{Product}\) and three out of the four applied inputs is shown. The color scale indicates the associated input configuration. The inputs pictured include the surfactant concentration, the monomer, and the initiator flow rate. The Pareto front with the according input temperature is not depicted explicitly, as Fig. 3 contains information on the input temperature.
Fig. 6(a) shows that the microgel size deviates strongly from the desired size for higher \(c_{CTAB}\) values. Overall, \(c_{CTAB}\) ranges only from \(0.22\,\mathrm{mmol\,L^{-1}}\) to \(0.41\,\mathrm{mmol\,L^{-1}}\). The underlying GP (depicted in Supplementary Information Sec. 1) indicates that the product flow can be considered independent of \(c_{CTAB}\). In contrast, the correlation between squared radius deviation and \(c_{CTAB}\) is impaired by high variance. The finding that the product flow is unaffected by \(c_{CTAB}\) follows the expected outcome, as a change in stabilizer should not impact the conversion kinetics of the reaction system.
Figure 4: Estimated Pareto front of the hardware-in-the-loop study using TS-EMO: Squared radius deviation over product flow. The gray circles represent the estimated Pareto optimal solutions based on the GPs. The stars indicate the initial experimental data set and the triangles the subsequent experimental data points, while the color of the triangles shows the experimental progression.
In Fig. 6(b), the monomer flow rate ranges between \(2.75\,\mathrm{mL}\,\mathrm{min}^{-1}\) to \(14.2\,\mathrm{mL}\,\mathrm{min}^{-1}\) and mainly correlates to the product flow. The relation between monomer flow rate and product flow is defined in Eq. (3.1) stating that generally, the monomer flow and product flow are directly proportional (second term in Eq. (3.1)). However, the monomer flow rate is also related to the conversion (first term of Eq. (3.1)). A higher monomer flow can cause a smaller conversion, as not all monomer can be consumed in the smaller residence time. The underlying GP (depicted in Supplementary Information Sec. 1) shows the trade-off between high monomer flow rates associated with an increased overall flow and a lower conversion and low monomer flow rates with a low overall flow but higher conversion. Furthermore, the monomer flow rate has little to no influence on the microgel size deviation according to the underlying GP.
Finally, Fig. 6(c) shows the Pareto front for different initiator flow rates. Here, the initiator flow rate ranges between \(0.59\,\mathrm{mL}\,\mathrm{min}^{-1}\) to \(0.8\,\mathrm{mL}\,\mathrm{min}^{-1}\) with a clear tendency to the upper bound. Similar to the monomer flow rate, the initiator flow rate is directly proportional to the product flow as defined in Eq. (3.1). However, the initiator flow is a maximum of a third of the total flow rate and thus less significant for the overall change in residence time. As expected, the underlying GP (depicted in Supplementary Information Sec. 1) also shows a highly linear correlation between initiator flow rate and product flow. In addition, the GP for the squared radius deviation shows no clear trend depending on the initiator flow rate.
Figure 5: Estimated Pareto front of the hardware-in-the-loop study using TS-EMO: Squared radius deviation over product flow. The gray circles represent the estimated Pareto optimal solutions based on the GPs and the according standard deviation. The black circles indicate the experimental outcomes and the according measurement uncertainty.
### Validation
The validation conducted within this contribution includes a computational and experimental part. The computational validation is global deterministic optimization of the final GP, Sec. 4.2.1. The experimental validation is carried out for three calculated Pareto optimal solutions, and the results are shown in Sec. 4.2.2.
#### 4.2.1 Computational validation via global deterministic optimization
We proceed with a final deterministic global optimization using MAiNGO. The results from the hardware-in-the-loop study are incorporated into the final optimization for validation. First, the data points from the TS-EMO study are used to train GPs for \(F_{Product}\) and \(\Delta r_{H}^{2}\). The training settings are the same as for the GPs used in the hardware-in-the-loop approach including TS-EMO. Second, the identified optimal point close to the targeted microgel size and a sufficient product flow at a reasonably low temperature is embedded as the starting point of the optimization: \(F_{I}=0.73\,\mathrm{mL}\,\mathrm{min}^{-1}\), \(F_{M}=8.1\,\mathrm{mL}\,\mathrm{min}^{-1}\), \(c_{CTAB}=0.34\,\mathrm{mmol}\,\mathrm{L}^{-1}\), and \(T=68.5\,^{\circ}\mathrm{C}\). The calculated outcome for these input variables yields a microgel size deviation of \(21.1\,\mathrm{nm}^{2}\) and a product flow of \(6.0\,\mathrm{mL}\,\mathrm{min}^{-1}\). Also, the visualization of the TS-EMO study (see Fig. 3) allows setting reasonable values for the \(\epsilon\) constraint method.
Figure 6: Estimated Pareto front of main study: Squared radius deviation over product flow for the input variables (a) CTAB concentration, (b) monomer flow rate, and (c) initiator flow rate. The circles represent the estimated Pareto optimal solutions based on the GPs, while the color scale indicates the magnitude of the respective input variable.
For the deterministic global optimization, the results including the \(\epsilon\) constraint method, are presented in Fig. 7 for each input separately. Here, we constrain the squared radius deviation step-wise with a maximum of \(25\,\mathrm{nm}^{2}\). The problem becomes infeasible for squared radius deviations below \(2\,\mathrm{nm}^{2}\). We compare the global deterministic optimization (MAiNGO) with the optimization results for two objectives (product flow and squared radius deviation) using TS-EMO.
Overall, the Figs. 7(a) to 7(d) show that the experimental data points, the Pareto front generated via TS-EMO, and the Pareto front obtained from MAiNGO agree well above a product flow of approximately \(4.3\,\mathrm{mL}\,\mathrm{min}^{-1}\). TS-EMO finds a feasible Pareto optimal solution only down to \(12.6\,\mathrm{nm}^{2}\) at a product flow of \(4.0\,\mathrm{mL}\,\mathrm{min}^{-1}\). In this region, the solution calculated via MAiNGO diverges and includes feasible solutions in the product flow range around \(4.3\,\mathrm{mL}\,\mathrm{min}^{-1}\) with squared radius deviations between \(10\,\mathrm{nm}^{2}\) and \(12\,\mathrm{nm}^{2}\).
Within the Pareto optimal solutions calculated via MAiNGO, three regimes can be differentiated most visible for the CTAB concentration and the reaction temperature. These regimes range at a product flow of \(3.4\,\mathrm{mL}\,\mathrm{min}^{-1}\) to \(3.8\,\mathrm{mL}\,\mathrm{min}^{-1}\), around \(4.3\,\mathrm{mL}\,\mathrm{min}^{-1}\), and \(4.5\,\mathrm{mL}\,\mathrm{min}^{-1}\) to \(6\,\mathrm{mL}\,\mathrm{min}^{-1}\). In each regime, the CTAB concentration, the initiator flow rate, and the reaction temperature are approximately constant, and only the monomer flow rate varies.
Further, we change the upper bound of the reactor temperature input variable value to \(61\,^{\circ}\mathrm{C}\), \(62\,^{\circ}\mathrm{C}\), and \(70\,^{\circ}\mathrm{C}\). The results of the TS-EMO optimization with two objectives compared to global deterministic optimization results via MAiNGO are shown in Fig. 8. The problem becomes infeasible for squared radius deviations below \(2\,\mathrm{nm}^{2}\) for temperatures \(62\,^{\circ}\mathrm{C}\) and higher, and below \(16\,\mathrm{nm}^{2}\) for \(61\,^{\circ}\mathrm{C}\).
In Fig. 8, the Pareto optimal points generated via TS-EMO and MAiNGO agree mostly. Only for a maximum input temperature of \(61\,^{\circ}\mathrm{C}\) the global deterministic optimization via MAiNGO finds slightly better Pareto optimal points for squared radius deviations above \(23\,\mathrm{nm}^{2}\). However, the product flow range between \(1.3\,\mathrm{mL}\,\mathrm{min}^{-1}\) to \(1.6\,\mathrm{mL}\,\mathrm{min}^{-1}\) and a minimum squared radius deviation of \(16.4\,\mathrm{nm}^{2}\) for the associated temperature are undesirable. Thus, temperatures above \(61\,^{\circ}\mathrm{C}\) are more relevant. For a maximum input temperature of \(62\,^{\circ}\mathrm{C}\), the Pareto optimal product flow is limited to \(4\,\mathrm{mL}\,\mathrm{min}^{-1}\) even for substantial deviations in squared radius at \(25\,\mathrm{nm}^{2}\). The Pareto optimal points for squared radius deviations below \(13\,\mathrm{nm}^{2}\) overlap for the MAiNGO and TS-EMO optimization for \(62\,^{\circ}\mathrm{C}\) and \(70\,^{\circ}\mathrm{C}\). For a maximum input temperature of \(70\,^{\circ}\mathrm{C}\), a notable improvement of the product flow up to approximately \(6\,\mathrm{mL}\,\mathrm{min}^{-1}\) is achievable
when allowing squared radius deviations starting at \(18\,\mathrm{nm}^{2}\) and above. The TS-EMO Pareto optimal points only cover squared radius deviations above \(12.5\,\mathrm{nm}^{2}\) for a maximum temperature of \(70\,\mathrm{\SIUnitSymbolCelsius}\). The Pareto optimal points for the MAiNGO optimization with a maximum temperature of \(70\,\mathrm{\SIUnitSymbolCelsius}\) and \(80\,\mathrm{\SIUnitSymbolCelsius}\) agree except for the regime around \(4.3\,\mathrm{mL}\,\mathrm{min}^{-1}\) and squared radius deviations of \(10\,\mathrm{nm}^{2}\) to \(12\,\mathrm{nm}^{2}\) indicating that temperatures above \(70\,\mathrm{\SIUnitSymbolCelsius}\) are irrelevant for optimized reactor settings.
Overall, the Pareto optimal solutions of TS-EMO and MAiNGO agree very well. Hence, the hardware-in-the-loop procedure using TS-EMO is validated sufficiently. However, the global deterministic optimization finds feasible Pareto optimal solutions beyond TS-EMO. The global deterministic optimization of the multi-objective synthesis problem is beneficial because little data is available, and thus guaranteeing a reliable and reproducible solution is crucial. However, the surrogate models represented by GPs are subject to significant variance. Thus, a solution representing the actual reality remains challenging. We also demonstrate that the deterministic single-objective formulation is advantageous here to focus on the output space of interest and reduce computational effort.
Figure 7: Estimated Pareto front of global deterministic optimization: squared radius deviation over product flow for the input variables (a) CTAB concentration, (b) monomer flow rate, (c) initiator flow rate and, (d) reaction temperature. The squares represent the estimated Pareto optimal solutions based on the GPs, while the color scale indicates the magnitude of the respective input variable. The x symbols mark the experimental data points. The blue circles indicate the estimated Pareto front via TS-EMO for two objectives only.
#### 4.2.2 Experimental validation
We conduct three experiments along the deterministically estimated Pareto front as an experimental validation step to determine whether the estimates computed from the trained GPs can be verified experimentally. The inputs and the estimated and experimentally determined output values are presented in Tab. 2. The experimental and calculated values agree very well for the product flow. The most significant difference regarding the product flow occurs in Experiment 3 with an absolute divergence of \(0.03\,\mathrm{mL}\,\mathrm{min}^{-1}\) (or approximately \(2.8\%\)) to the calculated value. Generally, the agreement of calculated and experimental values is higher for the product flow than for the squared radius deviation. The most notable difference regarding the squared radius deviation arises for Experiment 1, where the absolute divergence is \(83\,\mathrm{nm}^{2}\). This significant divergence can be attributed to the high variation in the GP prediction for the squared radius deviation. At the same time, the estimated and experimental values for Experiments 2 and 3 agree sufficiently. Experiment 3 shows that we can efficiently synthesize microgels with a radius of \(101.5\,\mathrm{nm}\), which is acceptable in terms of accuracy.
Overall, the experimental validation indicates that the obtained data is enough to enable an adequate prediction via a GP surrogate model. The agreement between estimated and experimental data is good, although the underlying GPs are subject to significant variance. The applied procedure is successful with an absolute deviation of \(1.5\,\mathrm{nm}\) to the desired microgel radius.
Figure 8: Comparison of TS-EMO and MAiNGO results for different bounds on input temperature. The filled circles represent the Pareto optimal points calculated via TS-EMO, while the squares show the Pareto optimal points calculated via MAiNGO.
## 5 Conclusions
Polymerization reactions in flow reactors play an essential role in precise polymer production. The efficient, accurate, and reproducible synthesis of polymers such as microgels is important. Data-driven optimization supports the microgel development effectively. We incorporate the multi-objective optimization algorithm TS-EMO to optimize the synthesis of tailored microgels ecologically and economically. The proposed synthesis settings enable a maximum product flow of \(6.0\,\mathrm{mL}\,\mathrm{min}^{-1}\) while remaining within an acceptable range of \(\pm 5\,\mathrm{nm}\) of the targeted hydrodynamic radius. We use the global deterministic optimization software MAiNGO to prove the reliability and reproducibility of the results. In addition, we demonstrate the usefulness of global deterministic solutions for problems with little data availability.
From the experimental side, including Raman spectroscopy constitutes a powerful in-line process analytical tool that has the potential to be incorporated into automated reaction optimization setups. Limitations of the proposed work include the non-automated reactor system due to off-line DLS measurements. Dependable in-line size determination remains a critical shortcoming on the road to autonomous reaction optimization. Furthermore, the DLS data is occasionally unreliable or shows a high polydispersity (indicating no real microgel is produced). At the moment, these data points are discarded but could be meaningfully included as valuable information for the algorithm in the future. The reliability of DLS data and the challenging interpretation of the GP predictions show that expert knowledge is still crucial in the optimization procedure and limits a potentially autonomous process based on machine learning. Generally, data-driven optimization is limited to a specific reactor setup. However, we can quickly adapt the proposed framework to other desired microgel properties and reactor setups. Thus, this work supports and enhances the development of suitable microgels for size-specific applications. The presented method efficiently explores new microgel synthesis recipes that facilitate tailor-made microgel production.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Experiment & Input & Value & Output & Estimated value & Experimental value \\ \hline \multirow{4}{*}{1} & \(T\) & \(68.5\,\mathrm{\SIUnitSymbolCelsius}\) & & & \\ & \(c_{CTAB}\) & \(0.35\,\mathrm{mmol\,L^{-1}}\) & \(F_{Product}\) & \(5.95\,\mathrm{mL\,min^{-1}}\) & \(5.93\,\mathrm{mL\,min^{-1}}\) \\ & \(F_{I}\) & \(0.73\,\mathrm{mL\,min^{-1}}\) & \(\Delta r_{H}^{2}\) & \(17\,\mathrm{nm^{2}}\) & \(100\,\mathrm{nm^{2}}\) \\ & \(F_{M}\) & \(7.69\,\mathrm{mL\,min^{-1}}\) & & & \\ \hline \multirow{4}{*}{2} & \(T\) & \(71.0\,\mathrm{\SIUnitSymbolCelsius}\) & & & \\ & \(c_{CTAB}\) & \(0.16\,\mathrm{mmol\,L^{-1}}\) & \(F_{Product}\) & \(4.29\,\mathrm{mL\,min^{-1}}\) & \(4.20\,\mathrm{mL\,min^{-1}}\) \\ & \(F_{I}\) & \(0.34\,\mathrm{mL\,min^{-1}}\) & \(\Delta r_{H}^{2}\) & \(10\,\mathrm{nm^{2}}\) & \(12.25\,\mathrm{nm^{2}}\) \\ & \(F_{M}\) & \(4.87\,\mathrm{mL\,min^{-1}}\) & & & \\ \hline \multirow{4}{*}{3} & \(T\) & \(62.0\,\mathrm{\SIUnitSymbolCelsius}\) & & & \\ & \(c_{CTAB}\) & \(0.33\,\mathrm{mmol\,L^{-1}}\) & \(F_{Product}\) & \(3.43\,\mathrm{mL\,min^{-1}}\) & \(3.53\,\mathrm{mL\,min^{-1}}\) \\ & \(F_{I}\) & \(0.74\,\mathrm{mL\,min^{-1}}\) & \(\Delta r_{H}^{2}\) & \(2\,\mathrm{nm^{2}}\) & \(2.25\,\mathrm{nm^{2}}\) \\ & \(F_{M}\) & \(3.68\,\mathrm{mL\,min^{-1}}\) & & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experimental validation of global deterministic optimization.
## Authors contributions
L.F.K.: conceptualization, methodology, Raman spectra evaluation, DLS measurement interpretation, designing experimental studies, TS-EMO and MAiNGO optimization configuration, graphic design, writing original draft; A.M.S.: TS-EMO optimization configuration, scientific support and discussion on interpretation of computational results, reviewing and editing the article; J.K.: assistance for experimental setup, synthesis conduction, Raman spectra acquisition, reviewing the article; J.I.: synthesis conduction, Raman spectra acquisition, conducting DLS measurements, reviewing the article; N.W.: conducting DLS measurements, reviewing and editing the article; A.M.: design of the project, scientific support and discussion on interpretation of computational results, and advice on the structure and presentation of this work, reviewing and editing the article.
## Acknowledgements
This work was performed as a part of project B4 of the CRC 985 "Functional Microgels and Microgel Systems" funded by Deutsche Forschungsgemeinschaft (DFG). The authors thank Jan Steinstrassen for support with conducting continuous microgel syntheses. The authors also thank Johannes M. M. Faust for fruitful discussions and Jannik Luthje and Daniel Jungen for support with the software MAiNGO.
|
2306.17828 | On the Cause of Unfairness: A Training Sample Perspective | Identifying the causes of a model's unfairness is an important yet relatively
unexplored task. We look into this problem through the lens of training data -
the major source of unfairness. We ask the following questions: How would the
unfairness of a model change if its training samples (1) were collected from a
different (e.g. demographic) group, (2) were labeled differently, or (3) whose
features were modified? In other words, we quantify the influence of training
samples on unfairness by counterfactually changing samples based on predefined
concepts, i.e. data attributes such as features, labels, and sensitive
attributes. Our framework not only can help practitioners understand the
observed unfairness and mitigate it by repairing their training data, but also
leads to many other applications, e.g. detecting mislabeling, fixing imbalanced
representations, and detecting fairness-targeted poisoning attacks. | Yuanshun Yao, Yang Liu | 2023-06-30T17:48:19Z | http://arxiv.org/abs/2306.17828v2 | # Understanding Unfairness via Training Concept Influence
###### Abstract
Knowing the causes of a model's unfairness helps practitioners better understand their data and algorithms. This is an important yet relatively unexplored task. We look into this problem through the lens of the training data - one of the major sources of unfairness. We ask the following questions: how would a model's fairness performance change if, in its training data, some samples (1) were collected from a different (_e.g._ demographic) group, (2) were labeled differently, or (3) some features were changed? In other words, we quantify the fairness influence of training samples by counterfactually intervening and changing samples based on predefined concepts, _i.e._ data attributes such as features (\(X\)), labels (\(Y\)), or sensitive attributes (\(A\)). To calculate a training sample's influence on the model's unfairness w.r.t a concept, we first generate _counterfactual samples_ based on the concept, _i.e._ the counterfactual versions of the sample if the concept were changed. We then calculate the resulting impact on the unfairness, via _influence function_[33, 51], if the counterfactual samples were used in training. Our framework not only helps practitioners understand the observed unfairness and repair their training data, but also leads to many other applications, _e.g._ detecting mislabeling, fixing imbalanced representations, and detecting fairness-targeted poisoning attacks.
## 1 Introduction
A fundamental question in machine learning fairness is: what causes unfairness? Without knowing the answer, it is hard to understand and fix the unfairness problem. In practice, this is also one of the first questions the practitioners would ask after calculating the fairness measures and finding the model unfair. Although the question sounds simple, it is difficult to identify the exact source of unfairness in the machine learning pipeline, as admitted by many leading fairness practitioners, _e.g._ Meta [1] describes: "Unfairness in an AI model could have many possible causes, including not enough training data, a lack of features, a misspecified target of prediction, or a measurement error in the input features. Even for the most sophisticated AI researchers and engineers, these problems are not straightforward to fix."
The sources of unfairness are many, including data sampling bias or under-representation [16, 70, 15, 7], data labeling bias [60, 65, 26], model architecture (or feature representation) [2, 47, 68, 56, 66, 39, 55, 41], distribution shift [23, 17, 50, 27]_etc._ In this work, we tackle this problem by looking at the most important and obvious source of bias - the training samples. It is because if the model's training samples are biased, then it would be unlikely the model can still remain fair without paying heavy costs later on. Specifically, we ask the following questions regarding how training samples would impact the model's unfairness: how a model's fairness measure would change if its training samples (1) were collected from a different (_e.g._ demographic) group, (2) were labeled differently, or (3) some of the features were changed? Answering those questions can help practitioners (1) _explain_ the cause of the model's unfairness in terms of training data, (2) _repair_ the
training data to improve fairness, and (3) _detect_ biased or noisy training labels, under-represented group, and corrupted features that hurt fairness.
In this work, we measure the training sample's impact on fairness using _influence function_[22, 33], and we define the influence on fairness measure w.r.t a training _concept_ - a categorical variable that describes data property. For example, we can choose the concept to be the sensitive group attribute and counterfactually intervene on it to answer the question "What is the impact on fairness if training data were sampled from a different group?" Or we can choose the concept to be the training labels, and then our method measures the impact on fairness when the label is changed. We can also apply the concept to the training features or to the existence of training samples. Our flexible framework generalizes the prior works that only consider removing or reweighing training samples [61, 38], and we can provide a broader set of explanations and give more insights to practitioners in a wider scope (_e.g._ what if a data pattern is drawn from another demographic group?). We name our influence framework as _Concept Influence for Fairness_ (CIF).
In addition to explaining the unfairness, CIF can also recommend practitioners ways to fix the training data to improve fairness by counterfactually intervening in the concepts. Furthermore, our framework leads to a number of other applications including (1) detecting mislabeling, (2) detecting poisoning attacks, and (3) fixing imbalanced representation. Through experiments on 4 datasets - including synthetic, tabular, and image - we show that our method achieves satisfactory performance in a wide range of tasks.
## 2 Influence of Training Concepts
We start with introducing the influence function for fairness, the concept in training data, and define our _Concept Influence for Fairness_ (CIF).
### Fairness Influence Function
**Influence Function on Group Fairness.** Denote the training data by \(D_{train}~{}=~{}\{z_{i}^{tr}=(x_{i}^{tr},y_{i}^{tr})\}_{i=1}^{n}\) and the validation data by \(D_{val}~{}=~{}\{z_{i}^{val}=(x_{i}^{val},y_{i}^{val})\}_{i=1}^{n}\). Suppose the model is parameterized by \(\theta\in\Theta\), and there exists a subset of training data with sample indices \(\mathcal{K}=\{K_{1},...,K_{k}\}\). If we perturb the group \(\mathcal{K}\) by assigning each sample \(i\in\mathcal{K}\) a weight \(w_{i}\in[0,1]\), denote the resulting counterfactual model's weights by \(\hat{\theta}_{\mathcal{K}}\).
**Definition 1**.: _The fairness influence of reweighing group \(\mathcal{K}\) in the training data is defined as the difference of fairness measure between the original model \(\hat{\theta}\) (trained on the full training data) and the counterfactual model \(\hat{\theta}_{\mathcal{K}}\):_
\[\text{infl}(D_{val},\mathcal{K},\hat{\theta}):=\ell_{\text{fair}}(\hat{\theta })-\ell_{\text{fair}}(\hat{\theta}_{\mathcal{K}}) \tag{1}\]
where \(\ell_{fair}\) is the fairness measure (will be specified shortly after).
Similar to [33, 34, 38], we can derive the closed-form solution of fairness influence function:
**Proposition 1**.: _The first-order approximation of \(\text{infl}(D_{val},\mathcal{K},\hat{\theta})\) takes the following form:_
\[\text{infl}(D_{val},\mathcal{K},\hat{\theta})\approx-\nabla_{\theta}\ell_{ \text{fair}}(\hat{\theta})^{\intercal}H_{\hat{\theta}}^{-1}\left(\sum_{i\in \mathcal{K}}w_{i}\nabla\ell(z_{i}^{tr};\hat{\theta})\right) \tag{2}\]
_where \(H_{\hat{\theta}}\) is the hessian matrix i.e. \(H_{\hat{\theta}}:=\frac{1}{n}\nabla^{2}\sum_{i=1}^{n}\ell(z_{i}^{tr};\hat{ \theta})\), and \(\ell\) is the original loss function (e.g. cross-entropy loss in classification)._
See Appendix A for the derivation.
**Approximated Fairness Loss.** The loss \(\ell_{fair}(\hat{\theta})\) quantifies the fairness of a trained model \(\hat{\theta}\). Similarly to prior work [61, 52], we can approximate it with a surrogate loss on the validation data. Denote the corresponding classifier for \(\theta\) as \(h_{\theta}\), we can approximate the widely used group fairness Demographic Parity [12, 20] (DP) violation as the following (assume both \(A\) and the classification task are binary):
\[\ell_{DP}(\hat{\theta}) :=\left|\mathbb{P}(h_{\theta}(X)=1|A=0)-\mathbb{P}(h_{\theta}(X) =1|A=1)\right| \tag{3}\] \[\approx\left|\frac{\sum_{i\in D_{val}:a_{i}=0}g(z_{i}^{val}; \theta)}{\sum_{i\in D_{val}}\mathbb{I}[a_{i}=0]}-\frac{\sum_{i\in D_{val}:a_{i }=1}g(z_{i}^{val};\theta)}{\sum_{i\in D_{val}}\mathbb{I}[a_{i}=1]}\right| \tag{4}\]
where \(g\) is the logit of the predicted probability for class 1. See Appendix B for the approximated violation of Equality of Opportunity [29] (EOP), and Equality of Odds [63] (EO).
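For concreteness, Eq. (4) can be evaluated with a few lines of PyTorch. This is a minimal sketch of the surrogate (function and variable names are ours), and it stays differentiable so that the gradient \(\nabla_{\theta}\ell_{\text{fair}}(\hat{\theta})\) in Eq. (2) can be taken through it.

```python
import torch

def dp_surrogate(logits: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
    """Approximate DP violation (Eq. 4): |mean_{A=0} g - mean_{A=1} g| on D_val,
    where `logits` holds g(z; theta) for class 1 and `a` the group labels."""
    g0 = logits[a == 0].mean()
    g1 = logits[a == 1].mean()
    return (g0 - g1).abs()
```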
### Concepts in Training Data
A concept is a _sample-level_ categorical attribute associated with the training data. Formally, denote a concept by \(C\in\mathcal{C}:=\{1,2,...,c\}\) where \(C\) is a discrete concept that is encoded in the data \((X,Y,A)\). We do not exclude the possibility that \(C\) can simply be either \(Y\) or \(A\) or any feature in \(X\), but \(C\) can be broader. See Figure 1 for an illustration. Our core idea is to quantify the influence when each training sample is replaced by its "counterfactual sample" (_i.e._ the counterfactual version of the sample if its concept were changed) when we intervene on a certain concept.
Figure 1: (a) shows the overall causal graph assumed in our work. In training data, the concept variable \(C\) can intervene feature \(X\), label \(Y\), or sensitive attribute \(A\). The model \(\theta\) is trained on \(X\) and \(Y\), and together with the validation dataset \(D_{val}\), the validation fairness metric Fair is computed. Figure (b), (c), and (d) show the individual case when the concept variable intervenes \(A\), \(X\), and \(Y\) separately.
**Examples.** We provide detailed examples of concepts and motivate why intervening on those concepts can be intuitively helpful for fairness.
* **Concept as Sensitive Attribute (\(C=A\)).** Intuitively speaking, the sensitive/group attribute relates closely to fairness measures due to its importance in controlling the sampled distribution of each group. Intervening on \(A\) corresponds to asking counterfactually what if a similar or counterfactual sample were from a different sensitive group.
* **Concept as Label (\(C=Y\)).** In many situations, there are uncertainties in the label \(Y|X\). Some other times, the observed \(Y\) can either encode noise, mislabeling or subjective biases. They can all contribute to unfairness. Intervening on \(Y\) implies the counterfactual effect if we were to change the label (_e.g._ a sampling, a historical decision, or a human judgment) of a sample.
* **Concept as Predefined Feature Attribute (\(C=attr(X)\)).** Our framework allows us to predefine a relevant concept based on feature \(X\). \(C\) can be either an externally labeled concept (_e.g._ sample-level label in image data) or a part of \(X\) (_e.g._ a categorical feature in tabular data). For instance, if we want to understand how skin color would affect the model's fairness, and if so which data samples would impact the observed fairness the most w.r.t skin color, we can specify \(C=attr(image)\in\{\text{dark},\text{light}\}\). Then intervening on this concept corresponds to identifying samples from different skin colors that, if they were included in the training data, would lead to a fairer model. Footnote 1: All concepts in \(X\), \(Y\), or \(A\) that we consider are assumed to be categorical because the continuous concept is not well-defined in the literature of concept.
* **Concept as Sample Existence (\(C=s\)).** We can also treat the existence of a training sample as a concept by defining a binary variable \(s_{i}\in\{0,1\}\) - for each of these samples that appear in the training data we have \(s_{i}=1\). Changing to \(s_{i}=0\) means the sample is counterfactually not included, _i.e._\(\hat{z}_{i}^{tr}(c^{\prime})=\varnothing\). Allowing the concept to be removed, we can incorporate the prior works on the influence of removing samples into our framework.
### Concept Influence for Fairness (CIF)
Our goal is to quantify the counterfactual effect of changing \(c\) for each data sample \((x,y,a)\). Mathematically, denote by \((\hat{x},\hat{y},\hat{a})\) the counterfactual sample by intervening on \(c\). Consider a training sample \(z_{i}^{tr}:=(x_{i},y_{i},a_{i},c_{i})\), and define a counterfactual sample for \(z_{i}^{tr}\) when intervening on \(C=c^{\prime}\) as follows:
\[\hat{x}(c^{\prime}),\hat{y}(c^{\prime}),\hat{a}(c^{\prime})\sim\mathbb{P} \left(\hat{X},\hat{Y},\hat{A}|X=x,Y=y,A=a,do(C=c^{\prime})\right),\ c^{\prime} \neq c. \tag{5}\]
In the above, \(do(\cdot)\) denotes the celebrated do-operation in causal models [48]. The definition is slightly abused - when \(C\) overlaps with any of \((X,Y,A)\), the \(do(\cdot)\) operation has a higher priority and is assumed to automatically override the other dependencies. For example, when \(C=A\), we have:
\[\mathbb{P}\left(\hat{X},\hat{Y},\hat{A}|X=x,Y=y,A=a,do(C=c^{\prime})\right)=\mathbb{P}\left(\hat{X},\hat{Y},\hat{A}|X=x,Y=y,do(A=\hat{a})\right) \tag{6}\]
Denote a counterfactual sample as \(\hat{z}_{i}^{tr}(c^{\prime})=(\hat{x}_{i}(c^{\prime}),\hat{y}_{i}(c^{\prime} ),\hat{a}_{i}(c^{\prime}),\hat{c}_{i}=c^{\prime})\). Then we define the counterfactual model when replacing \(z_{i}^{tr}=(x_{i},y_{i},a_{i},c_{i})\) with \(\hat{z}_{i}^{tr}(c^{\prime})\) as:
\[\hat{\theta}_{i,c^{\prime}}:=\text{argmin}_{\theta}\{R(\theta)-\epsilon\cdot \ell(\theta,z_{i}^{tr})+\epsilon\cdot\ell(\theta,\hat{z}_{i}^{tr}(c^{\prime}))\} \tag{7}\]
Alternatively, we can identify multiple counterfactual examples \(\hat{z}_{i}^{tr}(c^{\prime},k),k=1,2,...,E\) and compute the average effects by defining: \(\hat{\theta}_{i,c^{\prime}}:=\text{argmin}_{\theta}\Big{\{}R(\theta)-\epsilon \cdot\ell(\theta,z_{i}^{tr})+\epsilon\cdot\frac{\sum_{k=1}^{E}\ell(\theta,\hat {z}_{i}^{tr}(c^{\prime},k))}{E}\Big{\}}\).
**Definition 2** (Concept Influence for Fairness (CIF)).: _The concept influence for fairness (CIF) of intervening on a concept \(C\) to \(c^{\prime}\) in sample \(i\) on the fairness loss \(\ell_{\text{fair}}\) is defined as:_
\[\text{infl}(D_{val},\hat{\theta}_{i,c^{\prime}}):=\ell_{\text{fair}}(\hat{ \theta})-\ell_{\text{fair}}(\hat{\theta}_{i,c^{\prime}}) \tag{8}\]
Invoking Proposition 1, we can easily prove:
**Proposition 2**.: _The concept influence for fairness (CIF) of a training sample \(z_{i}^{tr}\) when counterfactually intervened to \(\hat{z}_{i}^{tr}(c^{\prime})\) based on the target concept \(c^{\prime}\) can be computed as:_
\[\text{infl}(D_{val},\hat{\theta}_{i,c^{\prime}})\approx-\nabla_{\theta}\ell_{ \text{fair}}(\hat{\theta})^{\intercal}H_{\theta}^{-1}\left(\nabla\ell(z_{i}^ {tr};\hat{\theta})-\nabla\ell(\hat{z}_{i}^{tr}(c^{\prime});\hat{\theta})\right) \tag{9}\]
### Why Can CIF Improve Fairness?
We provide insights into why intervening training data attributes using CIF framework can improve fairness. For simplicity, we focus on accuracy disparity as the fairness measure. The complete analysis is shown in Appendix C, and we give a brief summary here. We base the analysis on the data generation model adopted in [25, 44] to capture the impact of data patterns generated with different frequencies and the impact of label errors. This setup is a good fit for understanding how counterfactual data interventions can change the data frequency of different groups (majority group with higher frequency vs. minority group with lower frequency) and provides insights for CIF.
Intervening labels \(Y\) is relatively straightforward. If we are able to intervene on a training label of a disadvantaged group from a wrong label to a correct one, we can effectively improve the performance of the model for this group. Therefore the label intervention can reduce the accuracy disparities. Our analysis also hints that the influence function is more likely to identify samples from the disadvantaged group with a lower presence in the data and mislabeled samples. This is because, for a minority group, a single label change would incur a relatively larger change in the influence value.
Intervening sensitive attributes \(A\) improves fairness by "balancing" the data distribution. Later in the experiments (Figure 9), we show that the influence function often identifies the data from the majority group and recommends them to be intervened to the minority group, as shown in Figure 2. In the analysis, we also show that this intervention incurs positive changes in the accuracy disparities between the two groups and therefore improves fairness.
## 3 Algorithmic Details
We present our algorithms for generating counterfactual samples and for computing CIF.
### Generating Counterfactual Samples
To compute the fairness influence based on Eqn. 9, we need to first generate the corresponding counterfactual sample \(\hat{z}_{i}^{tr}(c^{\prime})=(\hat{x}_{i}(c^{\prime}),\hat{y}_{i}(c^{\prime}),\hat{a}_{i}(c^{\prime}),\hat{c}_{i}=c^{\prime})\) when intervening on concept \(C\) to \(c^{\prime}\).
Figure 2: Illustration of the effect of intervening sensitive attribute \(A\) as rebalancing data distribution.
Theoretically, generating the counterfactual examples requires knowing the causal graphs, but we use a set of practical algorithms to approximate them.
**Intervening Label \(Y\).** Since there is no variable in training data dependent on \(Y\) (Figure 1d), we can simply change the sample's label to the target label \(\hat{y}_{i}\) and keep other attributes unchanged, _i.e._\(\hat{z}_{i}^{tr}(\hat{y}_{i})=(x_{i},\hat{y}_{i},a_{i},\hat{c}_{i}=\hat{y}_{i})\).
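Because no other training-data variable depends on \(Y\), this intervention needs no generative model; a trivial sketch (names are ours) is the following.

```python
# Label intervention: keep (x, a) fixed and swap the label to the target value.
def intervene_label(sample, y_new):
    x, y, a, c = sample          # original z_i = (x_i, y_i, a_i, c_i)
    return (x, y_new, a, y_new)  # counterfactual z_hat with c_hat = y_new
```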
**Intervening Sensitive Attribute \(A\).** When we intervene on a sample's \(A\), both its \(X\) and \(Y\) need to change (Figure 1b). This is the same as asking, _e.g._ in a loan application, "How would a female applicant's profile (_i.e._\(x_{i}\)) and the loan decision (_i.e._\(y_{i}\)) change, had she been a male (_i.e._\(a_{i}=\hat{a}_{i}\))?" Inspired by [11], we train a W-GAN [6] with _optimal transport mapping_[58] to generate _in-distribution_ counterfactual samples for \(x_{i}\) as if \(x_{i}\) belongs to a different \(a_{i}\). To do so, we need to map the distribution of \(X\) from \(A=a\) to \(A=a^{\prime}\). We first partition the training samples' features into two groups: \(X|A=a\) and \(X|A=a^{\prime}\). Then we train a W-GAN with the generator \(G_{a\to a^{\prime}}\) as the approximated optimal transport mapping from \(X|A=a\) to \(X|A=a^{\prime}\), and the discriminator \(D_{a\to a^{\prime}}\) ensures the mapped samples \(G_{a\to a^{\prime}}(X)\) and the real samples \(X|A=a^{\prime}\) are indistinguishable. The training objectives are the following:
Footnote 2: We need the counterfactual samples to be in-distribution rather than out-of-distribution because the change between the counterfactual sample and the original sample must be large enough to impact the fairness measure. We tried counterfactual examples [59] that impose minimal change to the original sample, and they do not work well in mitigation because the fairness influence values they induce are too small. Other approaches, such as data generation via a causal graph, only work on synthetic data.
\[\ell_{G_{a\to a^{\prime}}}=\frac{1}{n}\Big(\sum_{x\in X|A=a}D(G(x))+\lambda\cdot\sum_{x\in X|A=a}c(x,G(x))\Big) \tag{10}\] \[\ell_{D_{a\to a^{\prime}}}=\frac{1}{n}\Big(\sum_{x^{\prime}\in X|A=a^{\prime}}D(x^{\prime})-\sum_{x\in X|A=a}D(G(x))\Big)\]
where \(n\) is the number of training samples and \(\lambda\) is the weight balancing the conventional W-GAN generator loss (_i.e._ the first term in \(\ell_{G_{a\to a^{\prime}}}\)) against the transport cost function \(c(\cdot)\) (_i.e._ the \(\ell_{2}\) norm in our case), which ensures the mapped samples are not too far from the original distribution.
After we train the W-GAN on the training data, we can use the trained generator \(G_{a\to a^{\prime}}\) to map a sample \(x_{i}\) to its counterfactual version \(\hat{x}_{i}=G_{a_{i}\to\hat{a}_{i}}(x_{i})\). In addition, once we have the counterfactual features, we can use the original model to predict the corresponding counterfactual label (_i.e._ following the causal link \(X\to Y\) in Figure 1a). The resulting counterfactual sample is \(\hat{z}_{i}^{tr}(\hat{a}_{i})=(\hat{x}_{i},h_{\hat{\theta}}(\hat{x}_{i}),\hat{ a}_{i},\hat{c}_{i}=\hat{a}_{i})\).
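For concreteness, the transport mapping can be sketched in a few lines of PyTorch. This is a minimal illustration of the objectives in Eqn. 10, not our exact implementation: the MLP architectures, optimiser settings, and the weight clipping used to enforce the W-GAN Lipschitz constraint are illustrative assumptions.

```python
# Minimal sketch of the W-GAN transport mapping G_{a -> a'} (Eqn. 10).
# Assumes tabular features as float tensors; architectures and
# hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

def mlp(d_in, d_out, width=64):
    return nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, d_out))

def train_transport_gan(X_a, X_ap, lam=1.0, steps=2000, clip=0.01):
    """Map features X|A=a onto the distribution of X|A=a'."""
    d = X_a.shape[1]
    G, D = mlp(d, d), mlp(d, 1)   # generator (OT map) and critic
    opt_G = torch.optim.RMSprop(G.parameters(), lr=5e-5)
    opt_D = torch.optim.RMSprop(D.parameters(), lr=5e-5)
    for _ in range(steps):
        # Critic loss with the sign convention of Eqn. 10; weight
        # clipping enforces the Lipschitz constraint of the W-GAN [6].
        loss_D = D(X_ap).mean() - D(G(X_a).detach()).mean()
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()
        for p in D.parameters():
            p.data.clamp_(-clip, clip)
        # Generator loss: fool the critic while paying the l2 transport
        # cost c(x, G(x)) that keeps counterfactuals in-distribution.
        Gx = G(X_a)
        loss_G = D(Gx).mean() + lam * ((Gx - X_a) ** 2).sum(dim=1).sqrt().mean()
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return G  # counterfactual features: x_hat = G(x)
```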
**Intervening Feature \(X\).** In image data, assume there exists an image-level attribute \(C=attr(X)\), _e.g._ young or old in facial images; intervening on \(X\) means transforming the image (_i.e._ all pixel values in \(X\)) as if it belonged to a different \(C\). In tabular data, \(C\) is one of the features in \(X\), and when \(C\) is changed, all other features in \(X\) need to change accordingly. In both cases, similar to intervening on \(A\), we train a W-GAN to learn the mapping from the group \(X|C=c\) to \(X|C=c^{\prime}\); the resulting generator is \(G_{c\to c^{\prime}}\) and the generated counterfactual feature is \(\hat{x}_{i}=G_{c_{i}\to\hat{c}_{i}}(x_{i})\). Similarly, since the causal path \(X\to Y\) exists in Figure 1c, we also use the original model's predicted label as the counterfactual label. The resulting counterfactual sample is \(\hat{z}_{i}^{tr}(\hat{c}_{i})=(\hat{x}_{i},h_{\hat{\theta}}(\hat{x}_{i}),a_{i},\hat{c}_{i})\).
**Removal.** Removing is simply setting the counterfactual sample to be null, _i.e._\(\hat{z}_{i}^{tr}(c^{\prime})=\varnothing\).
### Computing Influence
Following [33], we use the Hessian vector product (HVP) to compute the product of the second and third terms in Eqn. 9 together. Let \(v:=\Big{(}\nabla\ell(z_{i}^{tr};\hat{\theta})-\nabla\ell(\hat{z}_{i}^{tr}(c^{\prime});\hat{\theta})\Big{)}\); we can compute \(H^{-1}v\) recursively [4]:
\[\hat{H}_{r}^{-1}v=v+(I-\hat{H}_{0})\hat{H}_{r-1}^{-1}v \tag{11}\]
where \(\hat{H}_{0}\) is the Hessian matrix approximated on random batches. Let \(t\) be the final recursive iteration, then the final CIF is \(\text{infl}(D_{val},\hat{\theta}_{i,c^{\prime}})\approx-\nabla_{\theta}\ell_{ \text{fair}}(\hat{\theta})^{\intercal}\hat{H}_{t}^{-1}v\), where \(\ell_{\text{fair}}(\hat{\theta})\) is the surrogate loss of fairness measure (_e.g._ Eqn. 4, 18 or 22).
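A sketch of this computation, assuming a PyTorch model, is given below; the damping/scaling needed for the recursion to converge (the loss must be scaled so the Hessian's top eigenvalue is below one) and the batch handling are simplified relative to practical implementations.

```python
# Sketch of the recursive inverse-HVP of Eqn. 11, computed with double
# backpropagation; assumes the loss is scaled so the recursion converges.
import torch

def hvp(loss, params, v):
    """Hessian-vector product H v for a flat vector v."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    return torch.cat([h.reshape(-1)
                      for h in torch.autograd.grad(flat @ v, params)])

def inverse_hvp(loss_fn, batches, params, v, t=100):
    """Iterate H_r^{-1} v = v + (I - H_0) H_{r-1}^{-1} v for t steps,
    re-estimating the Hessian on a random batch each iteration."""
    est = v.clone()
    for r in range(t):
        est = v + est - hvp(loss_fn(batches[r % len(batches)]), params, est)
    return est

# Final CIF: infl = -grad(l_fair) @ inverse_hvp(..., v), with v the
# gradient difference between original and counterfactual samples.
```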
## 4 Experiments
We present a series of experiments to validate the effectiveness of CIF in explaining and mitigating model unfairness, detecting biased/poisoned samples, and recommending resampling to balance representation.
### Setup
We test CIF on 4 datasets: synthetic, COMPAS [5], Adult [35], and CelebA [46]. We report results on three group fairness metrics (DP, EOP, and EO, see Table 1 in Appendix B for the definition). The detailed settings are the following:
* **Synthetic**: We generate synthetic data with the assumed causal graphs in Figure 1, and therefore we have the ground-truth counterfactual samples. See Appendix D.1 for the dataset generation process. Model: logistic regression.
* **COMPAS**: Recidivism prediction data (we use the preprocessed tabular data from IBM's AIF360 toolkit [10]). Feature \(X\): tabular data. Label \(Y\): recidivism within two years (binary). Sensitive attribute \(A\) (removed from feature \(X\)): race (white or non-white). Model: logistic regression. When intervening \(X\), we choose to flip the binary feature (age \(>45\) or not) in \(X\).
* **Adult**: Income prediction data (we use the preprocessed tabular data from IBM's AIF360 toolkit [10]). Feature \(X\): tabular data. Label \(Y\): if income \(>50K\) or not. Sensitive attribute \(A\) (removed from feature \(X\)): sex (male or female). Model: logistic regression. When intervening \(X\), we choose to flip the binary feature race (white or non-white) in \(X\).
* **CelebA**: Facial image dataset. Feature \(X\): facial images. Label \(Y\): attractive or not (binary). Sensitive attribute \(A\): gender (male and female). Model: ResNet18 [30]. When intervening \(X\), we choose to flip the binary image-level label "Young."
### Mitigation Performance
We test CIF-based mitigation by first computing CIF values for all training samples, then replacing the samples with the highest CIF values by their corresponding generated counterfactual samples, and finally retraining the model. Figures 3-5 show the fairness performance after retraining. We observe that all three fairness measures improve significantly after following CIF's mitigation recommendations. See Figures 10-12 in Appendix D.3 for the corresponding model accuracy.
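The mitigation procedure itself is a short loop once the CIF scores and counterfactuals are available; the following sketch fixes the interface with placeholder arguments (the scoring, generation, and retraining routines are those described in Section 3).

```python
# Sketch of CIF-based mitigation: replace the top-k highest-CIF samples
# with their counterfactuals (None encodes 'removal') and retrain.
def cif_mitigate(train_set, cif_scores, counterfactuals, k, retrain):
    top = set(sorted(range(len(train_set)),
                     key=lambda i: cif_scores[i], reverse=True)[:k])
    repaired = [counterfactuals[i] if i in top else z
                for i, z in enumerate(train_set)]
    return retrain([z for z in repaired if z is not None])
```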
We summarize our observations: (1) Intervening on \(Y\) proves highly effective on real-world data but not on synthetic data. We conjecture that this is because we control the synthetic data to be cleanly labeled, which is not the case for the real-world datasets. We later confirm this observation by showing the effectiveness of our approach in detecting noisy labels. (2) Intervening on \(A\) is helpful in most cases, especially for DP, which is closely related to the demographic variable \(A\). (3) We set the size of the synthetic dataset to be small (1,000 samples) to show that simply removing training samples is not always a good strategy, particularly on a small dataset where the model suffers significantly from losing training samples.
**Fairness-utility Tradeoff.** We report the fairness-utility tradeoffs of our mitigation on COMPAS, together with the in-processing mitigation [3] in Figure 6. Our mitigation is comparable to [3]; sometimes we can achieve better fairness given a similar level of accuracy (_e.g._ when accuracy is \(\sim 60\%\)).
**Distribution of CIF Values.** We show the distribution of influence values computed on COMPAS corresponding to the three fairness metrics in Appendix D.5, Figure 15. Intervening on \(Y\) has the highest influence values compared to the other types of intervention. This is because we change the value of \(Y\) directly in this operation, which is more "unnatural" than generating "natural" counterfactual examples with the W-GAN (intervening \(X\) and \(A\)) or using the model-predicted value of \(Y\) (intervening \(X\)). The implication is that mislabelling has a larger impact on fairness than corrupted features or incorrect group membership, which is consistent with our theoretical analysis in Section 2.4. Practitioners should therefore be particularly cautious about mislabelling, which can hurt fairness significantly, _e.g._ when samples from an unprivileged group that should be labeled favorable end up labeled unfavorable.
Figure 3: CIF-based mitigation performance with fairness measure Demographic Parity (DP).
Figure 4: CIF-based mitigation performance with fairness measure Equality of Opportunity (EOP).
Figure 5: CIF-based mitigation performance with fairness measure Equality of Odds (EO).
### Additional Applications of CIF
We provide three examples of additional applications that can be derived from our CIF framework.
**Fixing Mislabeling.** We flip training labels \(Y\) in the Adult dataset to artificially increase the model's unfairness. Following [60], we add group-dependent label noise, _i.e._ the probability of flipping a sample's \(Y\) depends on its \(A\), to enlarge the fairness gap. See Appendix D.6 for the experimental details. We then compute the \(Y\)-intervened CIF for each sample and flag the samples with the top CIF values. In Figure 7, we report the precision of our CIF-based detection, and the mitigation performance when we flip the detected samples' labels and retrain the model. Our detection flags the incorrect labels that are the known source of the unfairness with high precision (compared to randomly flagging the same percentage), and correcting the detected labels improves the model's fairness.
**Defending against Poisoning Attacks.** We demonstrate another application: defending models against fairness poisoning attacks. To generate poisoned training samples that cause the model's unfairness, we select samples to poison using the same group- and label-dependent probabilities as in the previous application. In addition to flipping the samples' labels, we also set the target feature (_i.e._ race in Adult) to a fixed value (_i.e._ white) regardless of the original feature value. An attack that modifies a sample's feature to a fixed value and changes its label is known as a backdoor attack [28, 42, 64], a special type of poisoning attack. After the poisoning, all fairness measures become worse (see Appendix D.6 for more details). For detection, we compute the \(X\)-intervened CIF on the poisoned feature and flag samples with high CIF values. For mitigation, if we flag a sample as poisoned, we remove it from the training set and retrain the model. Figure 8 shows the precision of our detection and the mitigation performance after removal. We observe high precision and a reasonably good fairness improvement.
**Resampling Imbalanced Representations.** To create an extremely imbalanced representation in the training set, in Adult we upsample the positive samples in the privileged group (_i.e._ male) by 200%, further increasing the percentage of positive samples that belong to the privileged group, so the training samples are overwhelmingly represented by the privileged group. The resulting fairness becomes worse (see Appendix D.6 for more details). We then compute the \(A\)-intervened CIF and replace the high-influence samples with their counterfactual samples (_i.e._ adding counterfactual samples to the unprivileged group and reducing samples from the privileged group). In Figure 9, we report the percentage of high-influence samples that belong to the privileged group (_i.e._ the extent to which CIF recommends data balancing) and the mitigation performance. The high-influence samples are almost all from the privileged group, as expected, and converting them to counterfactual samples as if they came from the unprivileged group (_i.e._ effectively recollecting and resampling the training distribution) improves fairness.
## 5 Related Work
**Influence Function.** The goal of the influence function is to quantify the impact of training data on a model's output. [33] brought the idea of training data influence to the attention of the research community and demonstrated its power in a variety of applications. Later works have aimed to improve the efficiency of computing influence functions; for example, TracIn [49] proposes a first-order solution that leverages the training gradients of the samples, together with a neural tangent kernel approach for speeding up this task. Other works have explored the computation of group influence [9], the robustness of the influence function [8], and its application to explainable AI [43] and other settings such as graph networks [18].
**Influence Function for Fairness.** Our work is closely related to recent discussions on quantifying training data's influence on a model's fairness properties. [61] computes the influence of training data on fairness when removing a certain set of training samples. [38] discusses a soft version of removal and also computes the optimal "removal weights" for each sample to improve fairness. [52] leverages the computed influence to perform a post-hoc model update that improves fairness. Note that these works consider the fairness effect of removing or reweighing training samples. Our work targets a more flexible and powerful definition of influence: by introducing the idea of concepts and generating counterfactual samples, it gives practitioners a wider scope of understanding and enables a wider range of potential applications.
**Data Repairing for Fairness.** Our work is also related to work on repairing data to improve fairness. [36, 37] discuss the possibility of reweighing training data to improve fairness. [69] proposes a "reprogramming" framework that modifies the features of training data. [45] explores the possibility of resampling labels to improve the fairness of training. Other works study the robustness of models w.r.t. fairness [62, 19, 40]. Another line of research repairs training data through pre-processing [13, 14, 32, 24], synthetic fair data [53, 31, 67, 57], and data augmentation [54, 21].
## 6 Conclusions and Limitations
We propose _Concept Influence for Fairness_ (CIF), which generalizes the definition of the influence function for fairness from the effects of removing or reweighing training samples to a broader range of dimensions related to the training data's properties. The main idea is to consider the effects of intervening on a certain _concept_ of the training data, a more flexible framework that helps practitioners understand unfairness with a wider scope and leads to more potential downstream applications.
We point out two limitations: (1) CIF needs to generate counterfactual samples w.r.t. different concepts, which can be computationally expensive, and (2) in CIF-based mitigation, it can be non-trivial to determine the optimal number of training samples to intervene on so as to maximally improve fairness.
|
2309.17075 | Hydrodynamical simulations of the galaxy population: enduring successes
and outstanding challenges | We review the progress in modelling the galaxy population in hydrodynamical
simulations of the Lambda-CDM cosmogony. State-of-the-art simulations now
broadly reproduce the observed spatial clustering of galaxies, the
distributions of key characteristics such as mass, size and star formation
rate, and scaling relations connecting diverse properties to mass. Such
improvements engender confidence in the insight drawn from simulations. Many
important outcomes however, particularly the properties of circumgalactic gas,
are sensitive to the details of the subgrid models used to approximate the
macroscopic effects of unresolved physics, such as feedback processes. We
compare the outcomes of leading simulation suites with observations and with
each other, to identify the enduring successes they have cultivated and the
outstanding challenges to be tackled with the next generation of models. Our
key conclusions are: 1) Realistic galaxies can be reproduced by calibrating the
ill-constrained parameters of subgrid feedback models. Feedback is dominated by
stars and by black holes in low mass and high mass galaxies, respectively; 2)
Adjusting or disabling the physical processes implemented in simulations can
elucidate their impact on observables, but outcomes can be degenerate; 3)
Similar galaxy populations can emerge in simulations with dissimilar subgrid
feedback implementations. However, these models generally predict markedly
different gas flow rates into, and out of, galaxies and their haloes. CGM
observations are thus a promising means of breaking this degeneracy and guiding
the development of new feedback models. | Robert A. Crain, Freeke van de Voort | 2023-09-29T09:10:48Z | http://arxiv.org/abs/2309.17075v1 | # Hydrodynamical simulations of the galaxy population: enduring successes and outstanding challenges
###### Abstract
We review the progress in modelling the galaxy population in hydrodynamical simulations of the \(\Lambda\)CDM cosmogony. State-of-the-art simulations now broadly reproduce the observed spatial clustering of galaxies, the distributions of key characteristics such as mass, size and star formation rate, and scaling relations connecting diverse properties to mass. Such improvements engender confidence in the insight drawn from simulations. Many important outcomes however, particularly the properties of circumgalactic gas, are sensitive to the details of the subgrid models used to approximate the macroscopic effects of unresolved physics, such as feedback processes. We compare the outcomes of leading simulation suites with observations and with each other, to identify the enduring successes they have cultivated and the outstanding challenges to be tackled with the next generation of models. Our key conclusions are:
* Realistic galaxies can be reproduced by calibrating the ill-constrained parameters of subgrid feedback models. Feedback is dominated by stars and by black holes in low mass and high mass galaxies, respectively.
* Adjusting or disabling the physical processes implemented in simulations can elucidate their impact on observables, but outcomes can be degenerate.
* Similar galaxy populations can emerge in simulations with dissimilar subgrid feedback implementations. However, these models generally predict markedly different gas flow rates into, and out of, galaxies and their haloes. CGM observations are thus a promising means of breaking this degeneracy and guiding the development of new feedback models.
###### Contents
* 1 Introduction
* 2 Methods
* 2.1 Initial & boundary conditions
* 2.2 Gravitational & hydrodynamical evolution
* 2.3 Subgrid methods
* 2.4 Calibration of subgrid feedback models
* 2.5 Verification and convergence
* 3 Key properties of simulated galaxy populations
* 3.1 Galaxy stellar mass function
* 3.2 Size and morphology
* 3.3 Galaxy clustering
* 3.4 Star formation histories
* 3.5 Galaxy colours
* 4 Galaxy scaling relations
* 4.1 Supermassive black holes
* 4.2 The star-forming main sequence
* 4.3 The Tully-Fisher relation
* 4.4 The mass - metallicity relations
* 4.5 Cold gas in galaxies
* 5 Cosmic gas
* 5.1 Absorption system statistics
* 5.2 Physical properties of the circumgalactic medium
* 5.3 Gas inflows & outflows
* 5.4 Halo baryon fractions
* 6 The influence of environment
* 6.1 Satellite galaxies: Stripping & Starvation
* 6.2 Environmental effects: beyond halo mass
* 7 Future outlook
* 8 Summary
## 1 Introduction
The present-day galaxy population exhibits a remarkable diversity of characteristics, such as masses, star formation rates, morphologies, nuclear activity, and gas and dust content. The population's spatial distribution is highly-structured and heterogeneous: galaxies can be found in isolation in low density environments, or can comprise the population of rich clusters. Observations of the distant cosmos reveal that both the properties and spatial distribution of the galaxy population have evolved markedly over nearly 14 billion years of cosmic history. Reconciliation of this panoply of observed characteristics with a comprehensive theory of galaxy formation is a challenge at the frontier of the natural sciences.
Owing to the non-linearity of the collapse, hierarchical assembly, and relaxation of protogalactic structure, and the complexity of the myriad physical processes that influence galaxies, direct numerical simulation of the evolution of representative cosmological volumes is in principle the most desirable method of approaching the challenge. In recent years, this approach has emerged as the foremost means of interpreting observations, particularly those from highly-multiplexed galaxy surveys, and of seeking a clearer understanding of
the origin of galaxy properties. The widespread adoption of hydrodynamical simulations of representative cosmological volumes has been driven primarily by major improvements in their correspondence with the observed galaxy population, coupled with improved access to the simulation data, a development catalysed by the public release of data from major simulation campaigns (e.g. Nelson et al. 2015, 2019, McAlpine et al. 2016, Villaescusa-Navarro et al. 2023).
It is easy to now take for granted the availability of these versatile and realistic models. However, although simulations of representative volumes have yielded good agreement with observations of diffuse intergalactic gas (e.g. as traced by the Lyman-\(\alpha\) forest) for over two decades (Theuns et al. 1998, Dave et al. 1999), reproducing the properties and spatial distribution of galaxies and their star-forming gas reservoirs has proven a more stubborn challenge. For many years, simulated galaxies formed far too many stars, particularly at early cosmic epochs when the cosmic inflow rate is high, thus ending up too massive, too compact, and with too little angular momentum (e.g. Navarro & Benz 1991, Navarro et al. 1995, Sommer-Larsen et al. 1999, Navarro & Steinmetz 2000). Unsurprisingly then, they also exhibited unrealistic surface density and rotation velocity profiles (Abadi et al. 2003a,b). Simulations of representative cosmic volumes yielded galaxy stellar mass functions (GSMFs) with the wrong shape and normalisation, generally yielding too many galaxies at fixed stellar mass (e.g. Crain et al. 2009, Lackner et al. 2012, Khandai et al. 2015).
Mitigation of this 'overcooling' of gas into stars, and the concomitant spurious transfer of angular momentum from gas to dark matter (DM), was demonstrated early in the history of hydrodynamical simulations of galaxies via the inclusion of gas heating mechanisms (e.g. Katz & Gunn 1991, Mihos & Hernquist 1994, Navarro & Steinmetz 1997, Weil et al. 1998). The inclusion of energetic 'feedback' mechanisms, as a means to regulate gas cooling and star formation, has fostered major improvements in the realism of simulated galaxies, for example enabling the realisation of individual disc galaxies with encouragingly realistic surface density and rotation profiles (e.g. Governato et al. 2004, Okamoto et al. 2005, Governato et al. 2007, Guedes et al. 2011).
However, the macroscopic efficiencies of feedback mechanisms are governed by microphysics acting on spatial scales several orders of magnitude below the resolution scale of galaxy population simulations (e.g. Orlando et al. 2005), precluding their calculation _ab initio_. These processes (and others) are therefore treated with simplified subgrid models (see Section 2.3), which approximate the effects of unresolved processes and couple them to numerically-resolved scales, thereby producing an 'effective' model of galaxy formation. In the absence of authoritative empirical constraints on how microphysics influences macroscopic scales (though see Lopez et al. 2011, Rosen et al. 2014), the subgrid implementations of feedback in popular simulation codes that have emerged are diverse, and can produce conspicuously dissimilar outcomes when applied to identical initial conditions (see Scannapieco et al. 2012, and references therein). The dramatic variation of outcomes that emerge from controlled suites of simulations in which (only) the subgrid implementation of feedback is changed, and/or their parameters are varied systematically over plausible ranges, highlights that the influence of feedback is the most important systematic uncertainty in galaxy formation modelling (Oppenheimer et al. 2010, Schaye et al. 2010, Vogelsberger et al. 2013, Kim et al. 2014).
The most productive strategy that has emerged in response to this uncertainty is to calibrate the parameters of subgrid feedback models (see Section 2.4), with the aim of reproducing judiciously-chosen observable characteristics of the galaxy population. Clearly,
the characteristics used for the calibration cannot be considered as predictions of the simulation, but other properties can be considered as outcomes stemming from the implemented physical processes, as long as they are not compromised by the simplifications of the subgrid modelling. This approach has been used, to varying degrees, by several flagship-scale simulations of the galaxy population, including Illustris (Vogelsberger et al., 2014, Genel et al., 2014, Nelson et al., 2015), Magneticum (Hirschmann et al., 2014), EAGLE (Schaye et al., 2015, Crain et al., 2015, McAlpine et al., 2016), BlueTides (Feng et al., 2016), Romulus (Tremmel et al., 2017), IllustrisTNG (Pillepich et al., 2018, Nelson et al., 2018, Springel et al., 2018), FABLE (Henden et al., 2018) and SIMBA (Dave et al., 2019). An example of the dark matter, gas, and stellar light distributions that emerge from simulating a Milky Way-mass dark matter halo with a modern galaxy formation model is shown in Figure 1. The central and satellite galaxies visible have a clear gaseous and dark matter component as well. Further substructure is present in the dark matter and gas without a counterpart in either of the other components.
The key advantage of the calibration approach is that it steers simulations toward the production of a broadly realistic simulated galaxy population. The galaxies interact self-consistently with the intergalactic medium (IGM), enabling the address of diverse lines of enquiry. Several of the aforementioned projects have demonstrated, for example, that it is possible to reproduce the present-day galaxy stellar mass function, from the mass scale of dwarf galaxies to that of central group galaxies, with an accuracy comparable to the systematic uncertainty on the observational measurement. A subset of the models also consider diagnostics inferred from X-ray observations of galaxy groups/clusters, in order to ensure reasonable reproduction of the properties of intragroup/intracluster gas. Simulations of realistic galaxies also enable additional model components to be 'bolted on', either on-the-fly or in post-processing, for example to follow the formation and evolution of globular clusters (Pfeffer et al., 2018) or to predict rates of gamma ray bursts (Metha & Trenti 2020). The realism and versatility of the simulations have resulted in a remarkable scientific impact: a conservative estimate based on searches using the NASA Astrophysics Data System is that data and/or data products from these models feature in over one thousand astrophysics research articles published since 2014.

Figure 1: An illustrative example of the level of detail with which modern cosmological simulations are able to reproduce the structure of galaxies similar in mass (\(M_{halo}\sim 10^{12}\) M\({}_{\odot}\)) to the Milky Way, based on IllustrisTNG. From left to right, the panels show the dark matter surface density, the gas surface density, and a three colour image of the stellar surface brightness from the \(U\), \(B\) and \(K\) bands. Each image has a 140 kpc field of view. The central galaxy is gas-rich, star-forming and kinematically disc-dominated, with a significant bulge component. It is accompanied by a gas-rich satellite galaxy and dark matter substructure without an associated baryonic component. The gas density shows clumpy and filamentary structure not present in the dark matter and stars.
It is imperative to remember that much of galaxy formation and evolution modelling remains distantly removed from fundamental, ab initio theory. There is no guarantee that the reproduction of a particular observable represents a unique solution, i.e. different sub-grid models may yield similarly-successful outcomes. It is possible that the approximations necessitated by subgrid models may deliver success only at a particular resolution, or only at the expense of a failure elsewhere. Simulations are therefore often better suited to offering qualitative insight rather than quantitative predictions: some of the most instructive outcomes from simulation suites have stemmed from varying the efficiency of physical mechanisms to isolate their influence (and hence indicate _why_ a simulation reproduces a particular observable), or from a failure to reproduce particular observational measurements and so illuminate a fundamental shortcoming of the implemented physics, or even the adopted cosmogony.
The chief objective of this review is therefore to offer a critical assessment of the improved understanding of the formation and evolution of the galaxy population fostered by the current generation of state-of-the-art hydrodynamical (or magneto-hydrodynamical) simulations of the \(\Lambda\)-cold dark matter (\(\Lambda\)CDM) cosmogony. To this end, we highlight the key successes of the simulations that we believe will endure, and the shortcomings or absences of consensus that present outstanding challenges to be addressed by future models. Where possible, we offer candid explanations for the origin of shortcomings, which we hope will be particularly helpful to non-specialists and new practitioners. We focus on simulations that follow reasonably representative cosmological volumes (\(L\simeq 100\) comoving Mpc, hereafter cMpc) at fixed resolution, and thus yield simulated _populations_ of galaxies. These simulations are complementary to zoom simulations of individual galaxies, but we focus on the former because in general it is simpler to compare their outcomes with observations and to characterise their convergence behaviour (Section 2.5), and because one cannot confuse trends due to resolution with those due to galaxy or halo mass, which is a danger when analysing suites of zoom simulations that maximise the resolution at each mass scale. A major advantage of this type of simulation is the diversity of the lines of enquiry that they enable, so we necessarily restrict ourselves to reviewing their more fundamental outcomes.
We explore results primarily (but not exclusively) from the EAGLE, Horizon-AGN (Dubois et al., 2012; Kaviraj et al., 2017), IllustrisTNG and SIMBA projects, which are now somewhat mature simulation suites that have been studied in detail (key numerical details of these simulations are given in Table 1). The flagship simulation of each project follows the evolution of a cosmological volume of \(L\sim 100\,\mathrm{cMpc}\) with baryonic mass (spatial) resolution of \(\sim 10^{6-7}\,\mathrm{M}_{\odot}\) (\(\sim 1\) proper \(\,\mathrm{kpc}\), hereafter \(\,\mathrm{pkpc}\), in the ISM), and yields a galaxy population that is, to a greater or lesser degree, broadly realistic. They span the range of hydrodynamics solvers widely in use [modern smoothed particle hydrodynamics (SPH), adaptive mesh refinement (AMR), and hybrid Lagrangian-Eulerian approaches; see Section 2], and adopt a diverse range of subgrid implementations of baryonic physics, each differing from the others in at least several major aspects. Of these, only IllustrisTNG solves the equations of magneto-hydrodynamics; the others use pure hydrodynamics. EAGLE and IllustrisTNG are the most well-studied and readily-accessible of these simulations, and consequently feature more frequently in this review. Our focus on these simulation suites
is to provide illustrative examples, and should not be misinterpreted as implicit criticism of other simulation campaigns, nor dismissal of their successes.
The review is structured as follows. We discuss the methods used in the execution, calibration and analysis of galaxy population simulations in Section 2. We review in Sections 3 and 4 the key properties of simulated galaxy populations, and scaling relations connecting diverse properties of simulated galaxies to their stellar mass, respectively. Section 5 focuses on the properties of the gaseous environments of simulated galaxies, and Section 6 examines environmental influences. We discuss likely future directions for the discipline in Section 7, and provide a brief summary of our conclusions in Section 8.
## 2 Methods
### Initial & boundary conditions
Cosmological simulations begin from initial conditions that specify the fluctuations in the density field at an epoch at which there is no significant non-linearity, generally \(z\gtrsim 100\). From this initial state, the equations of motion can be integrated forwards into the non-linear regime numerically. The initial matter power spectrum (which specifies the density contrast relative to the mean density as a function of spatial scale) is usually expressed as the product of an initial spectrum (with random phases) resulting from inflation, and a transfer function (which can be calculated with a Boltzmann solver such as CAMB; Lewis & Challinor 2011) that represents the subsequent linear evolution of each mode. Cosmological simulators are therefore in the fortunate position of having initial conditions that are well constrained by observations of the cosmic microwave background (CMB) radiation at \(z\approx 1100\).
**Table 1: Details of simulations used for illustrative examples**

| Simulation | Hydro method | Boxsize [cMpc] | DM particles per dimension | Baryonic mass resolution [M\({}_{\odot}\)] | Baryonic spatial resolution\({}^{a}\) [pkpc] |
| --- | --- | --- | --- | --- | --- |
| EAGLE\({}^{b}\) | Modern SPH | 100.0 | 1504 | \(1.9\times 10^{6}\) | 0.70 |
| Horizon-AGN | AMR | 142.0 | 1024 | \(1.0\times 10^{7}\) | 1.00 |
| IllustrisTNG\({}^{c}\) | Moving mesh | 110.7 | 1820 | \(1.4\times 10^{6}\) | 0.19 |
| SIMBA\({}^{d}\) | MFM | 147.0 | 1024 | \(1.8\times 10^{7}\) | 0.74 |

Generating Gaussian random fields with a specified power spectrum, a process first detailed by Efstathiou et al. (1985), has evolved into a specialist discipline. We only briefly summarise the process here and encourage those with a particular interest to read the work of, for example, Jenkins (2010, 2013), Hahn & Abel (2011), and Hahn et al. (2021). The process comprises two key stages to realise the appropriate density fluctuations: the creation of a uniform particle distribution throughout the simulation volume, followed by the application of displacements to the positions and velocities of the particles or, alternatively, small adjustments to the masses of the particles. The unperturbed particle distribution is usually constructed by tiling a cubic grid or a 'glass' distribution (White 1994). Random particle distributions are unsuitable because they exhibit a white noise power spectrum that, even in the absence of the intended displacements, fosters rapidly-growing non-linear structure. The displacements are calculated with linear or low-order perturbation theory. Periodic boundary conditions are applied to opposing faces of the volume, ensuring that its mean density remains fixed and that no artificial boundaries are imposed.
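As a concrete (and heavily simplified) illustration of the first stage, a Gaussian random density field with random phases and a prescribed power spectrum can be realised on a grid as follows; the grid size and toy power-law P(k) stand in for the CAMB-derived spectrum, and production IC codes (cited above) implement far more machinery.

```python
# Schematic numpy realisation of a Gaussian random density contrast field
# with a prescribed power spectrum; the power law P(k) is a toy stand-in.
import numpy as np

def gaussian_random_field(n=128, boxsize=100.0, power=lambda k: k**-2.0):
    kfreq = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)
    k[0, 0, 0] = 1.0                       # avoid division by zero at k=0
    # White noise in Fourier space scaled by sqrt(P(k)): random phases and
    # Gaussian amplitudes, as for the inflationary initial spectrum.
    delta_k = np.fft.fftn(np.random.normal(size=(n, n, n))) * np.sqrt(power(k))
    delta_k[0, 0, 0] = 0.0                 # enforce zero mean contrast
    return np.real(np.fft.ifftn(delta_k))

delta = gaussian_random_field()            # density contrast on the grid
# Particle displacements then follow from low-order perturbation theory
# applied to the potential sourced by delta (e.g. Zel'dovich approximation).
```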
The finite size of the simulation domain imposes an upper limit to the spatial scale of the fluctuations that can be sampled within it (correspondingly, there is a minimum wavenumber of \(k=2\pi/L\)). As large scale fluctuations seed the formation of rare features in the cosmic large-scale structure, such as galaxy clusters and large voids, the emergent space density of such features is underestimated (Bagla & Prasad 2006, Reed et al. 2007). Volumes of \(L\simeq 100\,\mathrm{cMpc}\) are too small to realise galaxy clusters (whose present-day space density is \(<10^{-6}\,\mathrm{Mpc}^{-3}\)), such that detailed examination with (magneto-)hydrodynamical simulations of the galaxy population _within_ them requires the use of zoomed initial conditions (e.g. Bahe et al. 2017, Cui et al. 2018, Tremmel et al. 2019). The scarcity of rare features in finite simulation volumes also makes them sensitive to cosmic variance, as different random realisations of the initial density field can result in statistically significant differences in the emergent large-scale structure.
### Gravitational & hydrodynamical evolution
Modelling the formation of the galaxy population entails solving the partial differential equations that govern the temporal evolution of the cosmic matter and radiation fields. Simulations of the galaxy population apply numerical techniques to solve the equations governing the gravitational evolution of matter, the hydrodynamical evolution of gas and, in some cases, the interaction of gas with evolving radiation and magnetic fields. Other physical processes are treated with subgrid models. Since the key numerical techniques are documented, and their advantages and shortcomings discussed, in detail elsewhere (see e.g. Springel 2010b, Price 2012, Teyssier 2015), we only briefly discuss them. We focus primarily on a discussion of subgrid methods (Section 2.3), because these dominate systematic uncertainties at the resolution of the simulations examined here.
Dark matter is most commonly treated as a collisionless fluid (but for self-interacting treatments, see e.g. Dave et al. 2001, Robertson et al. 2017), whose evolution in the continuum limit is described by the collisionless Boltzmann equation (CBE), under the influence of the gravitational potential given by Poisson's equation. The potential is assumed to be Newtonian, because velocities on resolved scales are non-relativistic. The high dimensionality of the CBE necessitates solving the coupled equations using a finite set of \(N\)-body tracer particles that sample the fluid's phase space distribution. The \(\mathcal{O}(N^{2})\) scaling of the computational cost of solving Poisson's equation (stemming from the long-range nature of gravity, which requires consideration of \(N-1\) contributions to the potential at each particle) can be reduced to a scaling on the order of \(\mathcal{O}(N\log N)\) by approximating the contribution from distant particles. This is achieved via multipole expansion (e.g. Barnes & Hut 1986, Carrier et al. 1988), and/or by mapping the tracer distribution to a mesh and solving in Fourier space using fast transform methods (e.g. Hockney & Eastwood 1981). To prevent the unphysical scattering of close particle pairs, the gravitational force is softened on small scales using a kernel function, for which forms of varying complexity have been proposed (e.g. Monaghan 1985, Wendland 1995). It is common to adopt a softening scale that is fixed in comoving units (e.g. \(\simeq 1/25\) of the mean interparticle separation) and limited to a maximum proper size to ensure that the internal structure of dark matter haloes can be
resolved at late cosmic epochs (Power et al. 2003).
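The softened pairwise force is easily illustrated with a direct-summation sketch; Plummer softening is used below as the simplest stand-in for the spline kernels cited above, and the \(\mathcal{O}(N^{2})\) double loop is precisely what tree and mesh methods accelerate.

```python
# Direct-sum N-body accelerations with Plummer softening (code units).
import numpy as np

def accelerations(pos, mass, eps=0.01, G=1.0):
    """pos: (N, 3) positions; mass: (N,); eps: softening length."""
    dx = pos[None, :, :] - pos[:, None, :]        # x_j - x_i, shape (N,N,3)
    inv_r3 = ((dx**2).sum(axis=-1) + eps**2)**-1.5
    np.fill_diagonal(inv_r3, 0.0)                 # exclude self-force
    return G * (dx * (mass[None, :] * inv_r3)[:, :, None]).sum(axis=1)
```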
Cosmic gas is assumed to be ideal, collisional, inviscid, and non-conducting, enabling its dynamics to be described by the Euler equations rather than the more general Navier-Stokes equations. Traditionally, the equations have been solved via two distinct approaches to discretising the fluid: either in volume (the Eulerian approach used by mesh-based schemes) or in mass (the Lagrangian approach used by particle-based schemes). Each approach has well-advertised shortcomings that contribute to inconsistent results when applied to relatively simple cosmological structure formation problems involving non-radiative hydrodynamics (e.g. Frenk et al. 1999, Sembolini et al. 2016). Eulerian methods are the de facto standard for many computational fluid dynamics problems, but the dynamic range needed to resolve galaxies in cosmological volumes demands the added complexity of adaptively-refined meshes (AMR, e.g. Abel et al. 2002), which may still fail to adequately capture the gravitational collapse of low-contrast fluctuations at early times (O'Shea et al. 2005) and cause over-mixing of gas with differing entropies, e.g. in the cores of galaxy clusters (Wadsley et al. 2008, Mitchell et al. 2009). The inherent Galilean non-invariance of mesh methods (Tasker et al. 2008) is particularly undesirable because the relative velocities of galaxies are generally much greater than the sound speed of the gas bound to them.
SPH (Lucy 1977, Gingold & Monaghan 1977) samples the fluid with tracer particles, naturally adapting the resolution within overdense structures. SPH is hence well suited to following the hierarchical growth of structure, and enables the self-gravity of the gas to be treated in an identical fashion to the dark matter, but requires the inclusion of an artificial viscosity term to capture shocks (Monaghan 1997), which are resolved poorly. Traditional SPH implementations suffer from multivalued particle pressure at contact discontinuities, which creates unphysical surface tension and inhibits phase mixing (e.g. Agertz et al. 2007), though this problem can be mitigated via the use of various adaptations (Ritchie & Thomas 2001, Price 2008, Hopkins 2013, Wadsley et al. 2017, Borrow et al. 2022b), which have collectively become known as 'modern' or 'corrected' SPH.
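The basic SPH operation, a kernel-weighted density estimate, can be sketched as follows; the cubic spline is the traditional kernel choice, and the fixed smoothing length and brute-force neighbour loop are simplifications (real codes adapt \(h\) and use tree-based neighbour searches).

```python
# Bare-bones SPH density estimate with the 3D cubic spline (M4) kernel.
import numpy as np

def w_cubic_spline(r, h):
    """Cubic spline kernel with compact support 2h, normalised in 3D."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                            np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def sph_density(pos, mass, h):
    """rho_i = sum_j m_j W(|x_i - x_j|, h), for a fixed smoothing length."""
    dx = pos[None, :, :] - pos[:, None, :]
    r = np.sqrt((dx**2).sum(axis=-1))
    return (mass[None, :] * w_cubic_spline(r, h)).sum(axis=1)
```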
In the last decade, several software packages have emerged that treat cosmic gas with sophisticated schemes that seek to capture the advantages of both Lagrangian and Eulerian approaches, i.e. continuous adaptability of resolution and geometry of the (magneto-)hydrodynamics calculation, Galilean invariance, accurate treatment of fluid mixing, and high-fidelity shock capturing without the use of artificial viscosity. These schemes (sometimes referred to as 'arbitrary Lagrangian-Eulerian', or ALE) typically use a mesh that deforms and moves with the fluid flow ('moving mesh', e.g. Springel 2010a, Vandenbroucke & De Rijcke 2016), or solve the Riemann problem without a mesh (e.g. meshless finite volume, MFV, or meshless finite mass, MFM; Hopkins 2015) (though note that MFM is not strictly an ALE method because it uses resolution elements with fixed mass). These approaches have proven successful for particular problems, but simultaneously realising the benefits of Eulerian and Lagrangian methods generally incurs a marked increase in computational cost and memory footprint.
Non-standard refinement criteria have been used with both AMR and moving mesh schemes, basing control of the resolution of the hydrodynamics calculation on conditions other than the fluid density. This has enabled examination of fluid flows in greater detail, for example, around supermassive black holes (SMBHs, Curtis & Sijacki 2015) or in the gaseous haloes around galaxies (the circumgalactic medium (CGM), Hummels et al. 2019, Peeples et al. 2019, van de Voort et al. 2019).
### Subgrid methods
The need to uniformly sample representative cosmic volumes restricts the feasible resolution of galaxy population simulations. Let us consider a brief example: assuming \(\Omega_{\rm b}=0.05\) and \(H_{0}=70\) km s\({}^{-1}\)Mpc\({}^{-1}\), uniformly sampling the gas fluid in a cubic volume of \(L=100\) cMpc with tracers of mass \(10^{5}\,{\rm M}_{\odot}\) (comparable to the stellar mass of ultra-faint dwarf galaxies) requires \(\simeq 68\) billion fluid elements (\(N=4082^{3}\)). Simulations including a broad suite of baryonic physics can require \(\sim 1\)kB per fluid element, incurring a total memory footprint for the baryonic part of the calculation of \(\simeq 68\)TB. Assuming an equal number of \(N\)-body particles for the dark matter, at 100 bytes per particle, brings the footprint to \(\simeq 75\)TB. With current high-performance computing facilities typically having 2-4GB of memory per core, our example would require execution on between \(20,000\) and \(40,000\) cores. Load balancing relatively high-resolution simulations at this scale remains extremely challenging, despite the development of sophisticated schemes for this purpose (see e.g. Menon et al. 2015, Schaller et al. 2016, Weinberger et al. 2020), largely because of the extreme dynamic range of the timestep hierarchy. As a result, calculations of this scale need to occupy these large core counts for prohibitively long periods (often several months). The challenge of accommodating the germane physical processes into simulations is illustrated schematically by Figure 2, which shows their characteristic length scales, and highlights that subgrid models remain a critical component of simulations of the galaxy population for the foreseeable future. We review the key physical processes treated with subgrid methods below.
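For concreteness, the arithmetic of the cost estimate above can be reproduced directly (using the standard value \(\rho_{\rm crit}\simeq 2.775\times 10^{11}\,h^{2}\,{\rm M}_{\odot}\,{\rm cMpc}^{-3}\); the per-element memory costs are those assumed in the text).

```python
# Reproducing the back-of-the-envelope cost estimate in the text.
Omega_b, h = 0.05, 0.7
rho_crit = 2.775e11 * h**2                 # [Msun / cMpc^3]
M_gas = Omega_b * rho_crit * 100.0**3      # baryonic mass in the box [Msun]
N = M_gas / 1e5                            # fluid elements of 1e5 Msun
print(f"N ~ {N:.2e} ~ {round(N ** (1 / 3))}^3")        # ~ 4082^3
mem_TB = (N * 1000 + N * 100) / 1e12       # 1 kB per gas element + 100 B DM
print(f"memory ~ {mem_TB:.0f} TB, ~{mem_TB * 1e3 / 4:.0f}-"
      f"{mem_TB * 1e3 / 2:.0f} cores at 2-4 GB/core")  # ~ 75 TB
```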
#### 2.3.1 Radiative processes in cosmic gas
Radiative processes enable cosmic gas to dissipate its internal energy. In the absence of an incident radiation field, the ionisation balance and cooling rate of diffuse cosmic gas is governed by two-body processes (e.g. collisional excitation and ionization, collisional recombination, and free-free emission). The cooling rate due to these processes scales as the square of the gas density, and the contribution of each elemental species scales linearly with its abundance, enabling convenient tabulation of collisional ionisation equilibrium (CIE) cooling rates as a function of temperature and composition (e.g. Dalgarno & McCray 1972). CIE rates (plus inverse Compton cooling due to the CMB) for gas of primordial composition were adopted by the first generation of hydrodynamical simulations of galaxies (e.g. Katz et al. 1996), and remain in use in a limited number of modern simulations [e.g. MassiveBlack-II (Khandai et al. 2015), Romulus (Tremmel et al. 2015)]. CIE rates for metal enriched gases can differ from primordial rates by an order of magnitude (e.g. Boehringer & Hensler 1989, Sutherland & Dopita 1993) and are more widely used. However, it should be noted that uncertainties in nucleosynthetic yields (see Section 2.3.2) propagate into uncertainties in gas cooling rates. Metal cooling allows enriched ionised gas to cool below \(10^{4}\,{\rm K}\), though in many simulations such cooling is effectively disabled (at least for dense gas, \(n_{\rm H}\gtrsim 0.1\) cm\({}^{-3}\)) by the use of an effective equation of state or pressure floor (see Section 2.3.3). CIE rates for enriched gases (with abundance ratios similar to those of the solar neighborhood) are used by, for example, the Horizon-AGN and BlueTides (Feng et al. 2016) simulations.
Cosmic radiation fields can 'overionise' gas relative to CIE, reducing the cooling rate or even yielding radiative heating (e.g. Efstathiou 1992, Gnedin & Hollon 2012). The most dramatic example is the epoch of reionisation (EoR), when intergalactic neutral hydrogen (H i) is (re-)ionised by radiation from the first galaxies and active galactic nuclei (AGN). The accurate treatment of this process in simulations, which is necessary to interpret a diverse range of observable properties of the first generations of galaxies and intervening IGM, requires the inclusion of explicit, and computationally-expensive, radiative transfer calculations. Simulations of the galaxy population instead generally adopt a temporally-evolving metagalactic UV/X-ray background (UVB) radiation field (for popular parametrisations see Haardt & Madau 1996, 2001, 2012, Faucher-Giguere et al. 2009), switched on at the 'instantaneous' reionisation redshift inferred from the Thomson scattering optical depth of the CMB. The field is assumed to be spatially uniform, because the clustering scale of the photon sources is shorter than the mean free path of the photons in the diffuse IGM (e.g. Zuo 1992), though this approximation is less applicable close to galaxies. Cooling rates are then computed assuming the gas to be optically thin and in ionisation equilibrium, using spectral synthesis codes such as cloudy (Ferland et al. 2017).
The photoionisation rate scales linearly with density, so radiative cooling rates considering photoionisation and CIE must be tabulated as a function of density, temperature, redshift and composition (e.g. Smith et al. 2008). The contribution of each species is additive, enabling 'element-by-element' tabulation and affording the flexibility to model cooling in gases with abundance ratios that deviate from those of the solar neighborhood.
Figure 2: Schematic illustration of the extreme dynamic range of the length scales of the physical processes influencing the formation and evolution of galaxies. Cosmological, hydrodynamical simulations are able to follow cosmic structure formation up to \(\mathcal{O}\)(100 Mpc) scales and have to resort to subgrid models on small scales. Future simulations of the galaxy population will push both of these boundaries, increasing their dynamic range, but the need for subgrid models will remain for years to come.
Wiersma et al. (2009a) provide such a tabulation, for the set of 11 most-important radiative coolants (H, He, C, N, O, Ne, Mg, Si, S, Ca and Fe) exposed to the Haardt & Madau (2001) UVB model. These tables are used by the EAGLE, Illustris/Illustris TNG, FABLE and Magneticum simulation suites, though only EAGLE and Magneticum exploit the element-by-element nature of the tables; the other suites scale the cooling rate as a function of metallicity, assuming solar neighborhood abundance ratios.
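Schematically, an element-by-element lookup amounts to interpolating per-element tables and summing abundance-weighted contributions, as sketched below; the table contents are random placeholders at fixed redshift, not the actual Wiersma et al. (2009a) rates.

```python
# Illustrative element-by-element cooling-rate lookup; the tabulated
# values here are random placeholders, not real rates.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

logT = np.linspace(2.0, 9.0, 71)       # log10 T [K]
lognH = np.linspace(-8.0, 1.0, 46)     # log10 n_H [cm^-3]
elements = ["H+He", "C", "N", "O", "Ne", "Mg", "Si", "S", "Ca", "Fe"]
tables = {el: RegularGridInterpolator((logT, lognH),
                                      np.random.rand(71, 46))
          for el in elements}

def net_cooling_rate(logT_gas, lognH_gas, abundance):
    """Sum per-element rates, each scaled by the abundance relative to
    the value assumed when the table was computed."""
    pt = np.array([[logT_gas, lognH_gas]])
    return sum(abundance[el] * float(tables[el](pt)[0]) for el in elements)
```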
As the column density of cosmic gas approaches a value characteristic of Lyman-limit systems (\(N_{\rm HI}\gtrsim 10^{17}{\rm cm}^{-2}\)), it becomes optically thick, attenuating the incident radiation flux and increasing the cooling rate relative to the optically thin case. The corresponding volume density of gas for which self-shielding is important is \(n_{\rm H}\gtrsim 10^{-2}\,{\rm cm}^{-3}\) (e.g. Furlanetto et al. 2005), characteristic of the interstellar medium (ISM). The reduction of the UVB photoionisation rate in self-shielded gas can be approximated using fitting functions calibrated against full radiation transport calculations (e.g. Gnedin et al. 2009, Rahmati et al. 2013). This approximation is applied on-the-fly (for \(z<6\)) by the Illustris and Illustris TNG simulations. The self-shielding approximation of Rahmati et al. (2013) is also used by the Grackle library (last described by Smith et al. 2017), which computes cooling rates for interstellar gas using a primordial chemistry reaction network, in tandem with tabulated rates for metal species in the optically-thin regime. The primordial network also accounts for deviations from ionisation equilibrium. This library is used by the SIMBA simulations.
Local radiation fields in the ISM can dominate over the UVB, which further complicates the calculation of cooling rates (e.g. Schirber et al. 2004, Miralda-Escude 2005). Molecules and dust grains within the ISM attenuate the incident radiation spectrum (in dissimilar ways), and also contribute to the cooling rate (e.g. Omukai et al. 2005). Ploeckinger & Schaye (2020) provide cooling tables for gas, shielded by itself and dust grains, in ionisation equilibrium with radiation from both a spatially-uniform, temporally-evolving UVB and an interstellar radiation field of variable intensity, but with spectral shape fixed to that of the Milky Way (Black 1987).
#### 2.3.2 Element abundance evolution
Besides influencing radiative cooling rates, element abundance patterns in cosmic gas and stars encode a wealth of astrophysical information. Elements heavier than lithium ('metals') were produced by nucleosynthetic processes within stars and supernovae (SNe) (and in some cases, kilonovae), with some fraction of the synthesised elements returned to the local ISM as ejecta. The enriched gas can then be incorporated into future generations of stars or transported into the CGM or IGM by galaxy-scale outflows. When galaxies fall into more massive haloes, such as groups and clusters of galaxies, their enriched ISM can be stripped by the ram pressure it experiences from the CGM or intracluster medium (ICM).
Modern schemes for distributing and transporting the elements synthesised by stellar populations ('chemodynamics') remain similar to the first generation of implementations (e.g. Theis et al. 1992, Steinmetz & Mueller 1994, Carraro et al. 1998). Stellar particles, each assumed to represent a mono-age simple stellar population (SSP), donate enriched mass to neighboring fluid tracers. Elements then share the destiny of the tracer to which they are donated, enabling their transport and dispersal by gas flows, and incorporation into subsequent generations of stars, in a fashion consistent with the adopted hydrodynamics scheme. Early schemes focused on enrichment by Type II SNe (SNeII), whose progenitors have short lifetimes (\(\lesssim 30\)Myr, comparable to the sound-crossing time of fluid elements representing the diffuse ISM) and whose nucleosynthetic products can therefore be distributed immediately upon the formation of the SSP (the 'instantaneous recycling approximation'). Although the total metallicity is generally dominated by elements synthesised by SNeII, modelling the abundance of elements such as iron, carbon and nitrogen requires consideration of element release by Type Ia SNe (SNeIa) and long-lived stars that experience an asymptotic giant branch (AGB) phase. Modern simulations therefore generally adopt subgrid models that follow the timed release of individual elements from multiple nucleosynthetic channels, e.g. Oppenheimer & Dave (2008), used by SIMBA; Tornatore et al. (2007), used by Magneticum; and Wiersma et al. (2009b), used by EAGLE and Illustris/IllustrisTNG.
The progenitors of AGB stars and SNeII are, respectively, intermediate (\(0.8\lesssim M\lesssim 8\,{\rm M}_{\odot}\)) and high (\(M\gtrsim 8\,{\rm M}_{\odot}\)) mass stars. An initial mass function (IMF) must therefore be specified in order to set the relative number of these progenitors (it is also required to set the energetics of stellar feedback, see Section 2.3.5). It is typically assumed to follow the form deduced from solar neighborhood number counts (e.g. Chabrier 2003), with the lower mass cut-off being the hydrogen-burning mass limit (\(\simeq 0.07-0.09\,{\rm M}_{\odot}\)) and the upper mass cut-off reflecting the (more uncertain) maximum observed mass of stars. Typically limits of \(0.1\,{\rm M}_{\odot}\) and \(100\,{\rm M}_{\odot}\) are adopted. Uncertainties in the form and limits of the IMF translate directly into uncertainty on the cosmic metal budget. Mass loss during the AGB phase occurs in a brief phase at the end of the star's lifetime. Lifetimes are challenging to constrain observationally and are generally inferred from stellar evolution models (e.g. Romano et al. 2005). The models concur that lifetimes are a strongly-decreasing function of stellar mass, with some inferring a weak dependence on metallicity. The intermediate mass progenitors of AGB stars have lifetimes of \(10^{8}-10^{10}\,{\rm yr}\), which are long compared to the sound crossing time of ISM fluid elements (and hence the timesteps on which dense gas is advanced), but are in general shorter than, or comparable to, the dynamical timescales of galaxies. The mass dependence of stellar lifetimes therefore has a tangible influence on element release.
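As a worked example of how the IMF sets these numbers, integrating a Chabrier (2003)-like IMF over the limits quoted above yields roughly 0.01 SNII progenitors per solar mass of stars formed; the lognormal-plus-power-law coefficients below follow Chabrier (2003), with the overall normalisation fixed by requiring unit total mass.

```python
# Counting SNII progenitors (M > 8 Msun) per unit mass of stars formed,
# for a Chabrier (2003)-like IMF between 0.1 and 100 Msun.
import numpy as np
from scipy.integrate import quad

def imf(m):
    """Proportional to dN/dm: lognormal below 1 Msun, power law above."""
    if m < 1.0:
        return (0.158 / m) * np.exp(-(np.log10(m) - np.log10(0.079))**2
                                    / (2.0 * 0.69**2))
    return 0.0443 * m**-2.3            # continuous match at 1 Msun

A = 1.0 / quad(lambda m: m * imf(m), 0.1, 100.0)[0]   # unit total mass
n_snii = A * quad(imf, 8.0, 100.0)[0]
print(f"~{n_snii:.3f} SNII progenitors per Msun formed")   # ~0.012
```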
SNeIa are thermonuclear explosions resulting from binary star evolution. Their progenitors remain ill-constrained, with the most plausible cases being the accretion of mass onto a Chandrasekhar mass white dwarf from a non-degenerate companion, or the merger of two white dwarfs (e.g. Maoz et al. 2014). The SNIa rate resulting from either scenario is thus complicated by the ill-constrained properties of binaries, e.g. their mass fraction, mass function, and initial separation, however the involvement of at least one white dwarf dictates that the binary progenitor is typically old (\(\gtrsim 10^{9}\,{\rm yr}\)). Forward modelling the SNIa rate as a function of an SSP's age is therefore subject to many uncertainties (e.g. Greggio & Renzini 1983, Greggio 2005), fostering the alternative approach of assuming an empirical delay function for the SSP's SNIa rate, which integrates to unity at \(t=\infty\) and whose free parameters can be calibrated against cosmic SNIa observations (see e.g. Mannucci et al. 2006, Wiersma et al. 2009b).
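A toy example of the empirical delay-function approach is sketched below; the \(t^{-1}\) form and time limits are illustrative choices (observationally motivated, e.g. by Maoz et al. 2014) rather than the specific parametrisations of the works cited above.

```python
# Toy SNIa delay-time distribution: t^-1 between the earliest white dwarf
# formation time and a Hubble time, normalised to integrate to unity.
import numpy as np

t_min, t_max = 4e7, 1.38e10            # [yr]

def snia_dtd(t):
    """Delay function (per yr); multiply by the calibrated number of
    SNeIa per unit stellar mass to obtain an SSP's SNIa rate."""
    norm = 1.0 / np.log(t_max / t_min)
    return np.where((t >= t_min) & (t <= t_max), norm / t, 0.0)

# Fraction of an SSP's SNeIa exploding within 1 Gyr of formation:
frac_1gyr = np.log(1e9 / t_min) / np.log(t_max / t_min)   # ~ 0.55
```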
The nucleosynthetic yields of SNeII, SNeIa, and AGB stars represent a significant systematic uncertainty for chemodynamical modelling. The uncertainties stem from the complexity of the physics involved, such as rotation in SNII progenitors, or the efficiency of convective envelope burning in AGB stars, the post-explosion time at which the yields are quoted (synthesised isotopes can have radioactive decay timescales shorter than the expansion timescale of the ejecta), or inconsistencies in the assumed mass ranges of progenitors. Nucleosynthetic calculations remain challenging to reconcile with the observed abundances of stars in the Galaxy, often motivating ad-hoc and element-specific rescaling factors (at
the factor \(\simeq 2\) level) to the resulting theoretical yields (e.g. Francois et al. 2004, Portinari et al. 2004). As such, the absolute element abundances predicted by simulations must be considered uncertain by at least a similar factor.
Mixing and diffusion are further significant sources of uncertainty for chemodynamical evolution. Some degree of overmixing is generally expected in mesh-based simulations, because fluids are implicitly mixed on scales smaller than that of the smallest cells, but excellent numerical convergence behaviour can be demonstrated with moving mesh treatments of hydrodynamics (van de Voort et al. 2020). Conversely, SPH (particularly traditional schemes) tends to underestimate mixing because of its inability to resolve dynamical instabilities (e.g. Agertz et al. 2007). In the absence of an explicit diffusion treatment, metal mixing in SPH simulations is further underestimated because metals are 'stuck' to particles, resulting in poor sampling of fluids with a small but non-zero metallicity. The problem can be mitigated by the inclusion of a subgrid diffusion treatment (e.g. Wadsley et al. 2008, Greif et al. 2009), but the appropriate coefficients are unclear (e.g. Garnier et al. 2009). Wiersma et al. (2009b) argue that a simpler strategy to mitigate the metal sampling problem in SPH simulations is to use kernel-smoothed abundances. Irrespective of the hydro solver, undermixing can be exacerbated if, as is commonplace, sources of small-scale turbulence are neglected.
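To illustrate how such a subgrid treatment operates, the sketch below combines a Smagorinsky-like turbulent diffusivity, \(D=C|S|h^{2}\), with a single explicit pairwise-exchange update of particle metallicities. Equal particle masses are assumed so that the symmetric exchange manifestly conserves metal mass, and, as noted above, the coefficient \(C\) is uncertain; this is a schematic sketch rather than the scheme of any named code.

```python
import numpy as np

def smagorinsky_diffusivity(shear_norm, h, C=0.1):
    """Subgrid turbulent diffusivity D = C * |S| * h^2, with |S| the norm
    of the (trace-free) velocity shear tensor and h the local resolution
    scale; the coefficient C is illustrative and poorly constrained."""
    return C * shear_norm * h**2

def diffuse_metals(Z, pairs, D, dt):
    """One explicit diffusion step for particle metallicities Z, assuming
    equal-mass particles so that the symmetric pairwise exchange conserves
    total metal mass. pairs: iterable of (i, j, w_ij), where the symmetric,
    kernel-derived weights w_ij carry the geometric 1/distance^2 factors."""
    dZ = np.zeros_like(Z)
    for i, j, w in pairs:
        flux = D * w * (Z[j] - Z[i]) * dt
        dZ[i] += flux
        dZ[j] -= flux
    return Z + dZ
```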
#### 2.3.3 Subgrid models of the ISM and star formation
Perturbations in a self-gravitating medium can grow only if their wavelength exceeds the Jeans (1928) length. Accurate numerical modelling of gravitationally collapsing systems therefore requires that the Jeans scales are adequately resolved: in hydrodynamical settings, this translates into the need to resolve pressure forces on the scale at which self-gravity begins to dominate. In Lagrangian simulations the condition is most naturally expressed in terms of the ratio of the particle mass and the Jeans mass (Bate & Burkert 1997), and in Eulerian counterparts in terms of the ratio of the cell size and the Jeans length (Truelove et al. 1997).
The thermogravitational collapse of the warm, diffuse phase of the ISM (\(T\sim 10^{4}\,\)K, \(n_{\rm H}\sim 10^{-1}\,\)cm\({}^{-3}\)) to the cold, dense phase (\(T\lesssim 10^{2}\,\)K, \(n_{\rm H}\gtrsim 10^{2}\,\)cm\({}^{-3}\)) occurs quickly (McKee & Ostriker 1977). Simulations that seek to resolve the molecular ISM are therefore faced with demanding resolution requirements. The Jeans scales depend on both the density and temperature of the gas (\(M_{\rm J}\propto n_{\rm H}^{-1/2}T^{3/2}\), \(L_{\rm J}\propto n_{\rm H}^{-1/2}T^{1/2}\)), so an increase of (at least) three decades in density and a decrease of (at least) two decades in temperature corresponds to reductions in the Jeans mass from \(M_{\rm J}\sim 10^{7}\,\)M\({}_{\odot}\) to \(M_{\rm J}\sim 10^{2}\,\)M\({}_{\odot}\), and the Jeans length from \(L_{\rm J}\sim 1\,\)kpc to \(L_{\rm J}\sim 1\,\)pc. As noted earlier, achieving such high resolution in representative cosmological volumes requires the use of very short timesteps and incurs a very large memory footprint, necessitating execution with large core counts that cannot be effectively exploited by the current generation of simulation codes. The pressure structure of the multiphase ISM therefore cannot be accurately resolved, typically requiring that a subgrid model is either used to predict the pressure structure as a function of the gas density (e.g. Yepes et al. 1997, Springel & Hernquist 2003), or to impose the structure 'by hand' as a pressure floor (e.g. Machacek et al. 2001, Robertson & Kravtsov 2008) or as an equation of state (e.g. Schaye & Dalla Vecchia 2008).
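These scalings are straightforward to verify. The sketch below evaluates the Jeans length and mass (isothermal sound speed, cgs constants) for representative warm and cold phases; note that published Jeans masses differ by order-unity prefactors depending on convention (e.g. whether the mass is that of a sphere of diameter or radius \(L_{\rm J}\)), so only the orders of magnitude, and the ratios between phases, should be read from the output.

```python
import numpy as np

G, K_B, M_H = 6.674e-8, 1.381e-16, 1.673e-24  # cgs
MSUN, PC = 1.989e33, 3.086e18

def jeans_scales(T, n_H, mu=1.22, X=0.75):
    """Jeans length [pc] and mass [Msun] for gas at temperature T [K] and
    hydrogen number density n_H [cm^-3], using the isothermal sound speed;
    mu and X are typical neutral-gas values, and the O(1) prefactors are
    convention dependent."""
    rho = n_H * M_H / X                      # mass density [g cm^-3]
    cs2 = K_B * T / (mu * M_H)               # isothermal sound speed squared
    L_J = np.sqrt(np.pi * cs2 / (G * rho))   # Jeans length [cm]
    M_J = (np.pi / 6.0) * rho * L_J**3       # mass of a sphere of diameter L_J
    return L_J / PC, M_J / MSUN

for label, T, n_H in [("warm", 1e4, 0.1), ("cold", 1e2, 1e2)]:
    L, M = jeans_scales(T, n_H)
    print(f"{label} phase: L_J ~ {L:8.1f} pc, M_J ~ {M:.1e} Msun")
```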
In the absence of a numerical treatment of the cold ISM, star formation must be implemented with models whose efficiency is calibrated to reproduce observed scaling relations averaged over suitably large spatial scales (\(\sim 1\,\)kpc). The most commonly adopted form is that of a Schmidt (1959) law, for which the star formation rate (SFR) density (\(\dot{\rho}_{\star}\)) scales
linearly with gas density (\(\rho_{\rm g}\)) over a dynamical time (\(t_{\rm dyn}\)): \(\dot{\rho}_{*}\propto\epsilon_{*}\rho_{\rm g}/t_{\rm dyn}\). Here, \(\epsilon_{*}\) is the efficiency per dynamical time which is calibrated so that, operating in tandem with the density threshold above which star formation is allowed to proceed, the simulation reproduces the observed Kennicutt-Schmidt (KS) star formation law (Kennicutt, 1998). Since the latter is based on surface densities rather than volume densities, the calibration depends on the assumed equation of state. Schaye & Dalla Vecchia (2008) therefore introduced an alternative scheme that, under the assumption of vertical hydrostatic equilibrium in the ISM, enables the KS law to be expressed as a pressure law, eliminating the dependence of the star formation law on the equation of state and replacing the calibrated parameters with observables. Alternatively, Feldmann et al. (2023) presented results from the FIREbox simulation of a (small, \(L=22.1\,\)cMpc) cosmological volume at \(z=0\), whose star formation model (Hopkins et al., 2018) does not impose an efficiency per free-fall time, instead aiming to recover the observed effective efficiency via the influence of the model's implemented ISM and feedback physics.
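A minimal sketch of how such a volumetric law is commonly discretised: gas elements above the density threshold are converted wholesale into star particles with a probability chosen so that the expected SFR matches \(\epsilon_{*}\rho_{\rm g}/t_{\rm dyn}\) (the local free-fall time stands in for \(t_{\rm dyn}\) here). The efficiency and threshold values are illustrative placeholders for parameters that are calibrated as described above.

```python
import numpy as np

G_CGS = 6.674e-8  # cm^3 g^-1 s^-2

def star_formation_step(rho, dt, eps=0.01, rho_thresh=1.6e-25, rng=None):
    """Stochastic Schmidt-law star formation for gas elements of density
    rho [g cm^-3] over a timestep dt [s]. Eligible elements are converted
    wholesale with probability p = 1 - exp(-eps * dt / t_ff), so that the
    expected SFR matches mdot = eps * m_gas / t_ff; eps and rho_thresh
    (roughly n_H ~ 0.1 cm^-3) are illustrative, calibrated in practice."""
    rng = rng or np.random.default_rng()
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho))  # local free-fall time
    p = np.where(rho > rho_thresh, 1.0 - np.exp(-eps * dt / t_ff), 0.0)
    return rng.random(rho.shape) < p  # boolean mask: convert to star particle
```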
#### 2.3.4 SMBH seeding and growth
SMBHs are represented by collisionless sink particles, subject only to gravity. They are usually seeded at the centres of haloes (which do not already have a seed) identified on-the-fly by periodically applying a group-finder algorithm to the dark matter particle distribution (e.g. Di Matteo et al., 2008), or in high density gas whose properties satisfy a number of criteria (e.g. Dubois et al., 2012). There is no consensus concerning the formation mechanism of SMBHs (see e.g. Volonteri et al., 2021), but popular theories posit that their formation mass is below the resolution limit of simulations of the galaxy population, requiring that sink particles carry a subgrid SMBH mass (used by subgrid routines concerning SMBH evolution) in addition to their dynamical mass. Once the former exceeds the latter, the subgrid and dynamical masses grow in concert. Sink particles with masses less than or comparable to the mass of neighboring resolution elements do not experience a realistic dynamical friction force, so it is common to 'pin' them to the halo's centre of potential, or migrate them towards it. This practice has been shown to have significant consequences for the evolution of simulated galaxies (Bahe et al., 2022).
SMBHs grow via gas accretion and mergers with other SMBHs. The ambient gas accretion rate, \(\dot{m}_{\rm accr}\), is generally assumed to be at the minimum of the Eddington and Bondi-Hoyle-Lyttleton rates (e.g. Bondi & Hoyle, 1944), or a modified version of the latter, though recent models also explicitly consider the influence of the angular momentum of the gas (e.g. Angles-Alcazar et al., 2017). The Bondi rate is estimated from the sink particle's ambient gas properties, necessarily on spatial scales that are much larger than accretion discs, such that the simulations treat the physics of accretion onto SMBHs in a very simplistic fashion (see e.g. Shlosman et al., 1990). The SMBH grows at a rate \(\dot{m}_{\rm BH}=\dot{m}_{\rm accr}(1-\epsilon_{\rm r})\), where \(\epsilon_{\rm r}\) is the SMBH's radiative efficiency, generally assumed to follow the mean value for radiatively-efficient accretion onto a Schwarzschild BH, \(\epsilon_{\rm r}=0.1\) (Shakura & Sunyaev, 1973). Older simulations based on the popular model of Springel et al. (2005) boosted the Bondi rate (usually by a factor of \(\alpha=100\)) to compensate for the underestimate of the gas density near the Bondi radius, though modern simulations tend to apply either a density-dependent correction as advocated by Booth & Schaye (2009), or no correction because it is generally only important early in the SMBH's growth history, making it degenerate with the choice of seed mass. A more sophisticated approach possible in mesh-based simulations is to refine the resolution of the simulation in the vicinity of SMBHs to ensure the Bondi radius is resolved (Curtis & Sijacki, 2015).
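The accretion logic described above amounts to only a few lines. The sketch below (cgs units) caps an optionally boosted Bondi-Hoyle-Lyttleton estimate at the Eddington rate; it is a schematic of the common recipe, not the implementation of any named simulation.

```python
import numpy as np

G, C_LIGHT = 6.674e-8, 2.998e10        # gravitational constant, speed of light
M_P, SIGMA_T = 1.673e-24, 6.652e-25    # proton mass, Thomson cross-section

def accretion_rate(m_bh, rho, c_s, v_rel, eps_r=0.1, alpha=1.0):
    """Ambient SMBH gas accretion rate [g/s]: the minimum of an (optionally
    boosted, by alpha) Bondi-Hoyle-Lyttleton estimate, computed from the
    ambient density rho, sound speed c_s and relative velocity v_rel, and
    the Eddington rate. All inputs in cgs."""
    mdot_bondi = alpha * 4 * np.pi * G**2 * m_bh**2 * rho / (c_s**2 + v_rel**2)**1.5
    mdot_edd = 4 * np.pi * G * m_bh * M_P / (eps_r * SIGMA_T * C_LIGHT)
    return min(mdot_bondi, mdot_edd)

# The SMBH itself then grows at mdot_bh = (1 - eps_r) * accretion_rate(...),
# the remaining fraction eps_r being radiated away.
```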
#### 2.3.5 Feedback processes
The ratio of galaxies' stellar mass to halo mass, as inferred for example by sub-halo abundance matching (e.g. Conroy et al. 2006), exhibits a characteristic peak at a halo mass of \(\sim 10^{12}\,\mathrm{M}_{\odot}\). The efficiency with which haloes convert their cosmic 'share' of gas into stars decreases towards lesser and greater halo mass scales. This is widely interpreted as a signal that galaxy growth is primarily regulated (or even quenched) by separate feedback mechanisms in these two regimes: the formation and evolution of stellar populations in low-mass galaxies, and accretion onto SMBHs in massive galaxies. The injection of energy by these mechanisms occurs on numerically unresolved scales, so simulations must approximate their macroscopic effects with subgrid models.
Stellar populations inject energy and momentum into the ISM via stellar winds, radiation and SNe, and thus potentially disrupt star-forming molecular clouds, create turbulence, and drive gas out of galaxy discs. The simplest subgrid treatment sums the energy liberated at each timestep by a stellar particle (representing an SSP), and injects it thermally by raising the internal energy of the particle's neighboring fluid elements. However, because these elements are orders of magnitude more massive than SNe ejecta, the injected energy (canonically \(E_{\mathrm{SN}}\sim 10^{51}\,\mathrm{erg/SN}\)) is only sufficient to heat individual fluid elements to \(\sim 10^{5}\,\mathrm{K}\). In this regime the gas cooling time is short compared to its sound crossing time, leading to catastrophic artificial losses that preclude the formation of an adiabatic, energy-conserving blast wave (e.g. Katz et al. 1996, Dalla Vecchia & Schaye 2008, Creasey et al. 2011).
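The arithmetic behind this bottleneck is easy to reproduce. The sketch below dumps the lifetime SN energy of an SSP into a typical neighbourhood of gas elements and returns the resulting temperature increment; the assumed values (one SN per \(100\,\mathrm{M}_{\odot}\) formed, \(E_{\rm SN}=10^{51}\,\mathrm{erg}\), fully ionised gas with \(\mu\simeq 0.6\)) are illustrative.

```python
K_B, M_H = 1.381e-16, 1.673e-24        # cgs
MU, GAMMA = 0.6, 5.0 / 3.0             # ionised gas, monatomic adiabatic index
MSUN, E_SN = 1.989e33, 1.0e51          # g, erg per SN

def heating_increment(m_ssp_msun, m_heated_msun, n_sn_per_msun=0.01):
    """Temperature rise [K] from thermally dumping an SSP's total SN energy
    into a gas mass m_heated_msun; illustrative assumptions throughout."""
    e_tot = m_ssp_msun * n_sn_per_msun * E_SN        # total SN energy [erg]
    du = e_tot / (m_heated_msun * MSUN)              # specific energy [erg/g]
    return (GAMMA - 1.0) * MU * M_H * du / K_B

# An SSP of 1e6 Msun heating ~50 neighbouring elements of 1e6 Msun each:
print(f"dT ~ {heating_increment(1e6, 50 * 1e6):.1e} K")  # of order 1e5 K
```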
Popular solutions are to temporarily disable the cooling of gas, either by hand (e.g. Gerritsen & Icke 1997, Stinson et al. 2006) or using an additional subgrid model to account for unresolved ISM physics (Keller et al. 2014); to heat gas particles stochastically by temperature increments \(\Delta T\gg 10^{5}\,\mathrm{K}\) (Kay et al. 2003, Dalla Vecchia & Schaye 2012); or to inject (some of) the energy in kinetic form (e.g. Navarro & White 1993, Springel & Hernquist 2003, Dubois & Teyssier 2008). Each approach has pros and cons but, for coupling efficiencies of order unity, they all in principle enable (low-mass) galaxies to drive outflows and achieve self-regulated growth. The kinetic approach affords the freedom to specify explicitly both the initial velocity and mass loading (\(\eta=\dot{M}_{\mathrm{outflow}}/\dot{M}_{\star}\)) of the wind, enabling calibration of the wind model against the outcomes of higher resolution zoom simulations with more sophisticated ISM models, an approach adopted by SIMBA. The popular Springel & Hernquist (2003) implementation, which has been used in adapted form by many simulation suites including Illustris and IllustrisTNG, temporarily decouples 'kicked' particles from the hydrodynamics scheme, in principle aiding numerical convergence (see Section 2.5) but precluding interactions between winds and the ISM (Dalla Vecchia & Schaye 2008).
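For the kinetic approach, the freedom to choose the wind velocity and mass loading is constrained by the available energy budget: per unit stellar mass formed, \(\eta v_{\rm w}^{2}/2=f\,e_{\rm SN}\), where \(f\) is the coupling efficiency and \(e_{\rm SN}\) the SN energy released per unit stellar mass. The sketch below evaluates this trade-off for illustrative values.

```python
MSUN, E_SN = 1.989e33, 1.0e51  # g, erg per SN

def mass_loading(v_wind_kms, f_couple=1.0, n_sn_per_msun=0.01):
    """Energy-driven mass loading eta = Mdot_outflow / Mdot_star for a
    kinetic wind: eta * v_w^2 / 2 = f * e_SN per unit stellar mass formed.
    The coupling fraction and SN statistics are illustrative values."""
    e_sn = n_sn_per_msun * E_SN / MSUN   # erg per gram of stars formed
    v_w = v_wind_kms * 1.0e5             # cm/s
    return 2.0 * f_couple * e_sn / v_w**2

print(f"eta(v_w = 500 km/s) = {mass_loading(500.0):.1f}")  # ~4
```

At fixed energy budget a slower wind carries more mass (\(\eta\propto v_{\rm w}^{-2}\)), which is precisely the freedom exploited when calibrating kinetic wind models.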
The physical mechanism by which AGN feedback couples to the ISM remains poorly understood, with a number of channels being plausible, such as radiation pressure on free electrons and/or dust grains, or very high velocity jets. Despite this absence of consensus, there is ample observational evidence to indicate that AGN drive large-scale, high-velocity outflows of ionized and molecular gas (e.g. Maiolino et al. 2012, Cicone et al. 2014). Subgrid AGN feedback models generally assume that some fraction, \(\epsilon_{\mathrm{f}}\), of the radiated luminosity of an SMBH accretion disc couples to the surrounding ISM: the AGN feedback energy \(E_{\mathrm{AGN}}=\epsilon_{\mathrm{f}}\epsilon_{\mathrm{r}}\dot{M}_{\mathrm{ BH}}c^{2}\), where \(\dot{M}_{\mathrm{BH}}\) is the accretion rate and \(c\) the speed of light (e.g. Springel et al. 2005). As with stellar feedback, the injected energy can be coupled to the numerically-modelled gas fluid in thermal and/or kinetic form. The simplest approach assumes that the coupling efficiency is fixed, such that the energy injection rate is proportional to the
accretion rate. However, motivated by the observation that high-velocity jets are more typically associated with accretion onto SMBHs at small fractions of the Eddington rate (Churazov et al. 2005, Merloni & Heinz 2008, Best & Heckman 2012), some models adopt a relatively low (high) value of \(\epsilon_{\rm f}\) when the SMBH is accreting at a high (low) fraction of the Eddington rate (e.g. Sijacki et al. 2007, Dubois et al. 2012, Weinberger et al. 2017, Dave et al. 2019). Since the Eddington rate depends linearly on the mass of the SMBH, this approach can act like a switch, whereby efficient feedback is triggered once the SMBH reaches a threshold mass.
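A minimal sketch of this two-mode coupling: the feedback power follows \(\dot{E}_{\rm AGN}=\epsilon_{\rm f}\epsilon_{\rm r}\dot{M}_{\rm BH}c^{2}\), with \(\epsilon_{\rm f}\) switching on the Eddington ratio. The efficiency values and switching threshold below are illustrative, not those of any named simulation.

```python
C_LIGHT = 2.998e10  # speed of light [cm/s]

def agn_feedback_power(mdot_accr, mdot_edd, eps_r=0.1,
                       eps_f_quasar=0.015, eps_f_radio=0.15,
                       ratio_switch=0.01):
    """AGN feedback power [erg/s] with a two-mode coupling efficiency:
    a low eps_f during rapid ('quasar mode') accretion and a higher
    eps_f at low Eddington ratios ('radio/jet mode'). Accretion rates
    in g/s; all parameter values are illustrative."""
    eddington_ratio = mdot_accr / mdot_edd
    eps_f = eps_f_quasar if eddington_ratio > ratio_switch else eps_f_radio
    return eps_f * eps_r * mdot_accr * C_LIGHT**2
```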
### Calibration of subgrid feedback models
The numerical values of the parameters governing subgrid models can, in some cases, be chosen by reference to observables or theoretical arguments. In general though, the appropriate values are not known a priori and may be resolution dependent. The efficiencies of feedback processes are the most salient example: the microphysics are not well understood theoretically, and observations do not authoritatively characterise outflow properties on the scales at which subgrid models recouple to the hydrodynamics scheme (see e.g. Chisholm et al. 2016a,b). Moreover, changing the resolution of a simulation often changes the frequency of individual feedback events, and the energy, mass and momentum they each inject. Even if over some timescale the same total energy (for example) is injected, the macroscopic efficiency of feedback can depend on this intermittency because the numerical losses will differ.
It is therefore increasingly common practice to calibrate the parameters of subgrid feedback models, so that simulations broadly reproduce key properties of the galaxy population. Depending on the sensitivity of the subgrid models to resolution, the parameters may require recalibration if the resolution is changed. The choice of calibration diagnostics is somewhat arbitrary, but clearly they should be well-characterised by observations on scales resolved by the simulations, and should be sensitive to the parameter(s) requiring calibration. Most often, the stellar mass of galaxies is used (either via the observed GSMF, or the inferred stellar-to-halo mass ratio) to calibrate the efficiency of feedback associated with stellar evolution, and the mass of central SMBHs at fixed galaxy stellar mass to calibrate the efficiency of feedback associated with their growth. Reproduction of these quantities alone does not guarantee a realistic galaxy population, so complementary observables may also be used. For example, by also considering the sizes of disc galaxies, the EAGLE simulations more accurately reproduced many important galaxy scaling relations (Crain et al. 2015). IllustrisTNG further considers the cosmic SFR density and halo gas fractions. SIMBA's feedback efficiencies were tuned only against the GSMF and the \(M_{\rm BH}-M_{*}\) relation, the latter via the accretion efficiency rather than the AGN feedback efficiency. Horizon-AGN's stellar feedback efficiency was not explicitly calibrated but inferred from the Starburst99 (Leitherer et al. 1999) spectrophotometric model. Its AGN feedback model was, in common with EAGLE and IllustrisTNG, calibrated with reference to the \(M_{\rm BH}-M_{*}\) relation. As noted in Section 1, care must be taken not to interpret, as predictions, emergent properties that are closely related to the calibration diagnostics, nor those properties significantly compromised by the simplifications of the subgrid modelling.
Figure 3 shows the influence on the GSMF of adjusting the subgrid stellar feedback model in calibrated simulations. The left panel shows the influence of using stellar feedback efficiencies (effectively a coefficient specifying the fraction of \(E_{\rm SN}\) that couples to the ISM)
for EAGLE's stochastic thermal heating treatment, that differ from the Reference model by factors of 0.5 (weak feedback, WeakFB) and 2 (strong feedback, StrongFB). The shifts follow from galaxies of a fixed stellar mass becoming associated with less (more) massive dark matter haloes in the WeakFB (StrongFB) model. Because the dark matter halo mass function is steep, even a small change to the stellar - halo mass relation significantly alters the space density of galaxies at fixed stellar mass. The right panel shows the influence in IllustrisTNG of disabling the scaling of the energy of its kinetically-driven stellar winds as a function of metallicity (No Z-dep wind energy), and separately, the scaling of the velocity of these winds as a function of redshift (No z-dep wind velocity). The energy scaling (effectively a mass loading scaling at fixed velocity) suppresses the growth of low-mass galaxies, and acts in a fashion similar to that of the EAGLE feedback efficiency scaling in the left panel. The velocity scaling prevents kinetically-driven winds from stalling in relatively massive galaxies (see also Crain et al. 2009, Oppenheimer et al. 2010).
In analogy to calibrating the stellar feedback efficiency, a similar procedure is often used to calibrate the efficiency of AGN feedback, whereby the value of \(\epsilon_{\rm f}\) is adjusted to achieve broad reproduction of the observed relation between the stellar mass of galaxies and the mass of their central SMBH. Booth & Schaye (2010) show that so long as AGN feedback is numerically efficient and well sampled, stellar masses are insensitive to extreme variations of \(\epsilon_{\rm f}\), as SMBHs compensate by growing to the mass that enables them to inject the feedback energy needed to (self-)regulate gas inflow. As such, \(\epsilon_{\rm f}\) primarily sets the normalisation of the SMBH - galaxy mass scaling relation.

Figure 3: An illustration of the influence on the GSMF of adjusting the subgrid stellar feedback model in calibrated simulations. The left panel, adapted from Crain et al. (2015), shows the response of the present-day GSMF of EAGLE realised in an \(L=25\) cMpc volume to changing the adopted stellar feedback efficiency (which scales linearly with the injected energy) by factors of 0.5 (WeakFB, green curve) and 2.0 (StrongFB, red curve). The right panel, adapted from Pillepich et al. (2018b), shows the response of the present-day GSMF of IllustrisTNG realised in an \(L=37\) cMpc volume to disabling the metallicity dependence of the injected energy (No Z-dep wind energy, green curve) and to disabling the redshift dependence of the wind injection velocity (No z-dep wind velocity, red curve).
### Verification and convergence
Numerical simulations, irrespective of their purpose or application, are simplified approximations of real phenomena or systems. It is therefore necessary to verify their outcomes to ensure they are fit for purpose. For many terrestrial applications, simulations can be validated against experimental data (e.g. computational fluid dynamics simulations are often confronted with wind tunnel measurements). Clearly, cosmological simulations can appeal to no such testbed. The performance of hydrodynamics solvers can be examined using a set of idealised tests (e.g. shock tubes, point explosions, vortex problems, fluid instability tests), but being a strongly non-linear process resulting from the interplay of complex physical processes, galaxy formation offers no simple tests with analytic or known solutions that can be used for validation. A laborious but illuminating strategy is therefore to apply multiple models to the same problem, enabling an assessment of the consensus between the models (e.g. Scannapieco et al. 2012, Kim et al. 2014, Cui et al. 2018).
Predictive power demands that the outcomes of simulations are robust to changes of various aspects, such as changes in resolution and the size of the simulation domain, a characteristic often referred to as 'convergence'. Unhelpfully, the physics of galaxy formation does not readily lend itself to achieving converged results. As noted in Section 2.1, changes to the size of the simulation volume influence the diversity of the environments and halo population that can be realised by the simulations, and the mass and spatial scales for which cosmic variance becomes important. They can also impact volume-integrated properties, such as the global star formation rate density.
Resolution convergence is particularly challenging to achieve, owing to the dominant role played by physical processes taking place at, or below, the numerical resolution scale (see Figure 2). In simple terms, a higher resolution simulation can resolve smaller galaxy/halo progenitors, enabling them to be followed at earlier cosmic epochs. They can also resolve higher gas densities, influencing (net) radiative cooling rates and, in some cases, interfacing with subgrid models at a different spatial scale. As remarked in Section 2.3.5, subgrid models can be designed to mitigate this resolution sensitivity, for instance by temporarily decoupling gas from hydrodynamical forces or radiative cooling, and/or by using the generally better-converged properties of the local dark matter distribution, rather than those of the gas, as inputs. However, these choices introduce their own drawbacks, moving the modelling philosophy closer to the phenomenological approach of semi-analytic modelling. As noted in Section 2.4, changing the resolution can also incur more subtle consequences, such as altering the intermittency and numerical efficiency of feedback events.
It is therefore good practice to compare the properties of galaxy populations that emerge from simulations when the volume (or boxsize) and resolution are varied (ideally individually). Such tests are straightforward and inexpensive to conduct when using smaller volumes and lower resolutions. To mitigate the computational cost of testing for convergence at higher resolution, it is common to conduct high-resolution simulations in a smaller simulation volume, and to run a partner simulation of the same volume at the fiducial resolution to control for boxsize effects. With such a suite of simulations, it is possible to make authoritative assessments of the degree to which the properties of the galaxy population are robust to volume and resolution changes. It is however important to remain vigilant to the possibility that the appearance of a result being converged may be contingent on the
particular implementation of one or more subgrid models.
Many subgrid models employ stochastic treatments of continuous processes, such as the conversion of gas particles or cells into stellar particles, or injecting energy kinetically and/or thermally into the gas through feedback processes. A number of recent studies have examined the impact of this subgrid stochasticity on emergent macroscopic properties, in simulations of individual galaxies (Keller et al. 2019, Davies et al. 2021, 2022) and periodic volumes (Genel et al. 2019, Borrow et al. 2022a), finding that small differences due to stochasticity can propagate into significant differences in present-day properties, even for galaxies resolved with many thousands of particles. The systematic uncertainty due to stochasticity is more significant in smaller samples of galaxies, i.e. zooms and small periodic volumes, and needs to be considered when interpreting results, including when analysing small simulations for the purposes of model calibration. In such cases, the uncertainty can be mitigated with brute force, by running multiple realisations of the same simulation using different random number seeds.
## 3 Key properties of simulated galaxy populations
In this section we examine the degree to which state-of-the-art hydrodynamical simulations reproduce key properties of the galaxy population. Clearly the definition of 'key' is somewhat arbitrary, and opinions will differ between practitioners, but we focus here on diagnostics that are sufficiently important to be widely considered as validation tests. Marked failure of a simulation to reproduce these diagnostics will undermine confidence in the broader conclusions one may draw from it. In some cases these properties are used as calibration diagnostics, so should not be considered predictions.
### Galaxy stellar mass function
The \(z\simeq 0\) GSMF is the observational diagnostic most frequently used to calibrate the parameters of subgrid feedback models. Simulations of \(L\sim 100\,\mathrm{cMpc}\) and baryonic mass resolution \(m_{\mathrm{g}}\sim 10^{6}\,\mathrm{M_{\odot}}\) adequately resolve and sample the GSMF over the range \(8\lesssim\log_{10}(M_{*}/\mathrm{M_{\odot}})\lesssim 11\), thus demanding realistic stellar masses for galaxies with space densities that differ by at least two decades. The present-day GSMFs of EAGLE, Horizon-AGN, IllustrisTNG and SIMBA are shown in Figure 4 together with measurements from the Sloan Digital Sky Survey (SDSS) and Galaxy and Mass Assembly (GAMA) surveys. This highlights that it has proven possible to reproduce the GSMF over much of this stellar mass range in simulations of the galaxy population, in some cases with an accuracy better than or comparable to the \(\sim 0.3\,\mathrm{dex}\) systematic uncertainty on stellar masses inferred using population synthesis models (e.g. Conroy et al. 2009). Although the EAGLE, IllustrisTNG and SIMBA simulations were calibrated to achieve this reproduction, success was not guaranteed because their subgrid models afford only limited freedom. For comparison, the figure also shows the GSMF of the flagship \(L=22.1\,\mathrm{cMpc}\) FIREbox simulation, in which the parameters governing stellar feedback processes (this simulation does not model AGN feedback) were not calibrated against properties of the galaxy population. FIREbox and Horizon-AGN overproduce the space density of galaxies, especially for those with stellar mass \(9\lesssim\log_{10}(M_{*}/\mathrm{M_{\odot}})\lesssim 10\).
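For reference, constructing a GSMF from a simulation catalogue is straightforward: histogram the stellar masses in logarithmic mass bins and normalise by the comoving volume and bin width, giving \(\phi(M)\) in \(\mathrm{cMpc^{-3}\,dex^{-1}}\). The bin scheme below is illustrative.

```python
import numpy as np

def gsmf(mstar, volume_cmpc3, bins=None):
    """Galaxy stellar mass function phi = dn / dlog10(M) [cMpc^-3 dex^-1]
    from a catalogue of stellar masses mstar [Msun] drawn from a periodic
    volume of size volume_cmpc3 [cMpc^3]. A minimal sketch."""
    if bins is None:
        bins = np.linspace(8.0, 12.0, 21)  # illustrative log10(M*/Msun) edges
    counts, edges = np.histogram(np.log10(mstar), bins=bins)
    phi = counts / (volume_cmpc3 * np.diff(edges))
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, phi
```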
A common aspect of models with realistic GSMFs is the operation of efficient feedback at all mass scales. Simulations that do not reproduce the observed GSMF commonly (but
not exclusively, e.g. Pakmor et al. 2023) form too many galaxies at fixed stellar mass. As noted in Section 2.4, this is a consequence of feedback failing to adequately regulate galaxy growth and thus allowing galaxies to form in low-mass dark matter haloes with too high space density. Failure to regulate galaxy growth can be due to the implemented feedback being unintentionally inefficient for numerical reasons (e.g. poor sampling of energy injection events, or injection of too little energy per feedback event), because a feedback mechanism (e.g. AGN) is not included, or because a comprehensive search of the model's plausible many-dimensional parameter space is too computationally expensive. The latter is a major barrier to exhaustive calibration, and has motivated the adoption of Gaussian process emulation to accelerate the procedure (Kugel & Borrow, 2022).
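A minimal illustration of the emulation idea, using scikit-learn's Gaussian process regressor: a modest number of calibration simulations supply training pairs of subgrid parameter vectors and a misfit statistic (e.g. against the observed GSMF), after which the emulator can be evaluated cheaply across the whole parameter space. The training data below are synthetic stand-ins, not outputs of any real simulation suite.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training set: 32 calibration runs over two normalised subgrid
# parameters (e.g. feedback efficiency, wind velocity) with a scalar misfit.
theta = np.random.default_rng(0).uniform(0.0, 1.0, size=(32, 2))
misfit = ((theta - 0.5)**2).sum(axis=1)  # synthetic stand-in for simulation output

emulator = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                    normalize_y=True)
emulator.fit(theta, misfit)

# Evaluate the emulated misfit densely to locate promising parameter regions
# before committing to further full simulations.
grid = np.random.default_rng(1).uniform(0.0, 1.0, size=(10_000, 2))
print("emulated best-fit parameters:", grid[np.argmin(emulator.predict(grid))])
```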
Reproduction of the GSMF at \(z\simeq 0\) alone is insufficient to ensure that simulations also recover its evolution with cosmic time, because plausible feedback models can yield unrealistic star formation histories (SFHs) (Crain et al., 2015). However, the use of
complementary calibration diagnostics (e.g. galaxy sizes, or the cosmic SFR density) enables SFHs to be constrained sufficiently to reproduce GSMFs as early as \(z\simeq 7\), to an accuracy consistent with the systematic uncertainties on observationally-inferred masses (e.g. Furlong et al. 2015, Pillepich et al. 2018a, Dave et al. 2019). Simulations of \(L\sim 100\) cMpc are ill-suited for confrontation with the demographics of very high-redshift galaxies derived from early _James Webb Space Telescope_ (JWST) imaging (e.g. Labbe et al. 2023), as they do not sample the rare fluctuations in the density field that likely seed the formation of the bright, rare sources preferentially detected by these observations (Kannan et al. 2023).
### Size and morphology
Reproduction of galaxies with realistic sizes has long been viewed as a prominent challenge for cosmological simulations, owing primarily to the recognition that overcooling can lead to spurious angular momentum transfer from cold gas and stars to dark matter (Katz & Gunn 1991), and thus to the formation of galaxies that are too compact as well as too massive. Clearly, ensuring that galaxies form with broadly realistic stellar masses is then a necessary condition for reproduction of realistic sizes.
However, the original Illustris simulation yielded present-day galaxies too _large_ by a factor of \(\simeq 2\)(Snyder et al. 2015), highlighting that realistic masses are not a sufficient condition for realistic sizes. Using EAGLE, Crain et al. (2015) demonstrated that the overcooling and angular momentum problems are somewhat separable: simulated galaxies must not only form a realistic mass of stars, their SFH must also be broadly realistic, so that stars form from natal gas with the correct angular momentum distribution. For this reason, the sizes of present-day disc galaxies were explicitly considered when calibrating the parameters of EAGLE's subgrid stellar feedback model. Sizes were also considered by IllustrisTNG during the development of their model of stellar winds.
Figure 5 shows the galaxy size - stellar mass relations of, separately, star-forming and passive galaxies in EAGLE (from Furlong et al. 2017) and IllustrisTNG (from Genel et al. 2018), at \(z=0\) (panel a) and \(z=2\) (panel b). In both simulations, galaxy size is defined as the stellar half-mass radius, \(R_{50}\). Filled symbols with error bars are the observational size measurements of galaxies at the corresponding redshifts, from van der Wel et al. (2014). EAGLE and IllustrisTNG broadly reproduce the observed size - mass relations of star-forming and passive galaxies, and the evolution of these relations, from smaller sizes at fixed stellar mass, from \(z=2\). The simulations differ from the observations by \(\simeq 0.1\) dex at low redshift and \(\simeq 0.3\) dex at early times, and thus compare much more favourably with the observations than was the case for prior simulations (e.g. McCarthy et al. 2012).
Interestingly, both Horizon-AGN (see Dubois et al. 2016) and SIMBA (Dave et al. 2019) exhibit size - mass relations for which, at fixed mass, passive galaxies are larger than star-forming counterparts. In both cases, the relation for star-forming galaxies is in broad agreement with observational measurements, indicating that their passive galaxies are too large. Although in EAGLE and IllustrisTNG star-forming galaxies tend to be larger than passive counterparts, as is observed, the sizes of both galaxy types are similar at low mass (likely due to the poor sampling of SFHs), indicating that passive galaxies in these simulations are also too extended. This may be a consequence of spurious and prolonged dynamical heating of their stellar particles by more massive dark matter particles (Ludlow et al. 2019); if so, this problem can be mitigated via the adoption of baryonic and dark matter particles of similar mass (e.g. as per Tremmel et al. 2015), at the expense of a greater computational
cost and memory footprint.
An outcome seen in both EAGLE and IllustrisTNG is a correlation, at fixed stellar mass, between the size and the specific SFR of galaxies. Interestingly, in IllustrisTNG the trend is not seen when one considers only star-forming galaxies, and the overall trend reflects the changing relative fractions of (extended) star-forming and (compact) passive galaxies, from low to high stellar mass. In contrast, massive (\(M_{*}\gtrsim 10^{9.5}\,\mathrm{M}_{\odot}\)) star-forming galaxies in EAGLE exhibit a clear correlation between size and specific SFR.
A related issue is whether simulated galaxy populations reproduce the diversity of observed morphologies (and kinematics). EAGLE, Horizon-AGN, and IllustrisTNG reproduce galaxies with the characteristic early- and late-type morphologies of the Hubble Sequence (see Dubois et al. 2014, Schaye et al. 2015, Huertas-Company et al. 2019), and the correspondence of morphology (and/or kinematics) with the position of galaxies in the colour - magnitude diagram or its proxies (Correa et al. 2017). This is not an outcome that the simulations were (or could meaningfully be) calibrated to reproduce and, in common with sizes, largely follows from the build up of angular momentum in intergalactic gas due to large-scale tidal torques, the dynamical coherence of gas as it is accreted onto haloes and galaxies, the preferential removal of low-angular momentum gas in feedback-driven outflows, and the transfer of angular momentum from stars to the dark matter as discs are disrupted by gravitational instabilities and mergers. Detailed analysis of the simulations reveals correlations between the morphology of galaxies and properties of their host haloes, such as angular momentum (e.g. Zavala et al. 2016, Yang et al. 2021), flattening (Thob et al. 2019), and assembly time (Davies et al. 2020, 2021). However, it should be borne in mind that the simulations can reproduce observed morphologies only in a relatively broad sense, owing to their internal structure being too smooth and their vertical scale heights too large (see Section 2.3.3). It will be fascinating to assess the degree to which future simulations, incorporating detailed models of the multiphase ISM, are able to recover the detailed internal structure and kinematics of the galaxy population.

Figure 5: Galaxy sizes as a function of stellar mass for star-forming (blue curves) and passive (red curves) galaxies in EAGLE (solid curves, data from Furlong et al. 2017) and IllustrisTNG (dashed curves, data from Genel et al. 2018), at \(z=0\) (left panel) and \(z=2\) (right panel). Filled symbols with error bars denote the observational size measurements of van der Wel et al. (2014). Both simulations broadly reproduce the observed size – mass relations of both galaxy types, and their evolution over \(\simeq 10\,\mathrm{Gyr}\), though at late times both simulations overestimate the size of low-mass passive galaxies.
### Galaxy clustering
Accurate reproduction of observed clustering statistics, which connect galaxies to the properties of their dark matter environment, represents an important validation test of galaxy population simulations, particularly because clustering properties are not used as calibration diagnostics. Authoritative predictions also provide a means to stress-test (semi-)analytic methodologies (Chaves-Montero et al., 2016; Guo et al., 2016), widely used to generate predictions for the large, mildly non-linear scales probed by cosmological surveys.
Springel et al. (2018) showed that IllustrisTNG (both TNG100 and its lower resolution, \(L=302.6\) cMpc counterpart TNG300) broadly reproduces, on scales of \(1\lesssim r\lesssim 10\,h^{-1}\,\mathrm{Mpc}\) and as a function of stellar mass and colour, the clustering of galaxies revealed in the low-redshift cosmos by SDSS observations. Similarly, Artale et al. (2017) examined the clustering of EAGLE galaxies on scales of \(1\lesssim r\lesssim 6\,\mathrm{Mpc}\) as a function of mass and colour, and found it to be broadly consistent with that measured from the GAMA survey, with the exception of low-mass (and poorly resolved) red galaxies, which cluster too strongly. Crain et al. (2017) showed that the clustering of gas-rich galaxies in EAGLE is in excellent agreement with that inferred from 21cm surveys, and that the dependence of clustering on galaxy colour (at fixed stellar mass) also manifests as a dependence on atomic hydrogen fraction.
### Star formation histories
The evolution of the cosmic SFR density (i.e. the volumetric SFR) is a fundamental observable whose measurement is a forefront goal in observational astronomy. Its precise characterisation is challenging owing to selection effects and observational systematic uncertainties (see e.g. Madau & Dickinson, 2014; Behroozi et al., 2019), but it remains a natural validation benchmark for simulations of representative cosmic volumes.
Solid, dot-dashed, dashed, and dotted curves in Figure 6, corresponding to the left y-axis, show the evolution of the cosmic SFR density of EAGLE, Horizon-AGN, IllustrisTNG, and SIMBA, respectively. The simulations that broadly reproduce the present-day GSMF yield similar SFR density histories (at the factor \(\simeq 2\) level). The integral of the cosmic SFR density (over cosmic time) differs from the integral of the GSMF (over mass) only due to mass loss from stellar evolution, and a small correction due to the use of apertures to measure the stellar mass of galaxies. In the context of observational measurements, a factor of 2 is non-negligible, being comparable to the long-standing tension between the observed cosmic stellar mass density and the mass loss-corrected integral of the cosmic SFR density, though this apparent discrepancy may be resolved by the use of more sophisticated panchromatic spectral energy distribution models (Leja et al., 2019).
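This consistency check is simple to reproduce. The sketch below integrates the Madau & Dickinson (2014) fitting function for the cosmic SFR density over cosmic time, via \({\rm d}t/{\rm d}z=1/[(1+z)H(z)]\) with astropy's Planck15 cosmology, and applies an assumed return fraction \(R\simeq 0.4\); the 0.63 rescaling from the Salpeter to the Chabrier IMF is likewise an assumed conversion factor.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck15 as cosmo

def psi_md14(z):
    """Madau & Dickinson (2014) fit to the cosmic SFR density
    [Msun yr^-1 cMpc^-3], rescaled by an assumed factor of 0.63 from
    the Salpeter to the Chabrier (2003) IMF."""
    return 0.63 * 0.015 * (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

z = np.linspace(0.0, 10.0, 2001)
dt_dz = 1.0 / ((1 + z) * cosmo.H(z).to(u.yr**-1).value)   # yr per unit redshift
y = psi_md14(z) * dt_dz
rho_formed = (0.5 * (y[1:] + y[:-1]) * np.diff(z)).sum()  # trapezoid rule
R = 0.4  # assumed mass fraction returned by stellar evolution (Chabrier IMF)
print(f"rho_star(z=0) ~ {(1 - R) * rho_formed:.2e} Msun / cMpc^3")
```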
The symbols in Figure 6, relative to the right y-axis, show the average SFH of central galaxies hosted by present-day haloes with mass \(M_{\mathrm{halo}}\simeq 10^{12}\,\mathrm{M}_{\odot}\), broadly similar to the halo mass of the Milky Way, in EAGLE (crosses) and IllustrisTNG (stars). These histories peak at similar epochs with similar SFRs, but diverge for \(t\gtrsim 5\,\mathrm{Gyr}\) (\(z\lesssim 1.3\)), declining more in EAGLE than in IllustrisTNG. Clearly, model-to-model differences at a particular
halo mass scale are not simple rescalings of the difference in cosmic SFR density, and instead reflect the complicated mass dependence of the physical processes treated by the simulations. This mass dependence is also manifest as subtle but significant differences in the shape of the GSMF realised by the simulations, such that the decline of the typical Milky Way-like SFH in EAGLE is likely related to the simulation 'undershooting' the space density of galaxies at the knee of the GSMF. The SFHs of galaxies in simulations of the galaxy population are also impacted by the simplified subgrid treatment of the ISM (Section 2.3.3), which yields artificially smooth gas distributions and, by extension, artificially smooth SFHs. This hinders the use of SFHs from the simulation to interpret observations (e.g. Sparre et al., 2017) and may have significant consequences for galaxy evolution (Pontzen & Governato, 2012).
### Galaxy colours
Galaxy surveys have revealed a remarkable bimodality in the colour distribution of galaxies. Star-forming galaxies appear blue, whereas passive (or 'quiescent') galaxies are redder, owing to their lack of short-lived blue stars. There remains much ongoing debate concerning which processes regulate and quench star formation, with such processes usually categorised as internal or external/environmental. We focus here on colour bimodality in relatively massive (and predominantly central) galaxies, and defer discussion of the evolution of satellite galaxies embedded in larger host haloes to Section 6.1.

Figure 6: The coloured curves show the cosmic SFR density history of EAGLE, Horizon-AGN (adapted from Kaviraj et al., 2017), IllustrisTNG and SIMBA (adapted from Dave et al., 2019) (colours per legend). The solid black curve denotes the fit to observations compiled by Madau & Dickinson (2014), converted to the Chabrier (2003) IMF. Coloured symbols show the typical SFH (per the right y-axis) in EAGLE (crosses) and IllustrisTNG (stars) of central galaxies hosted by present-day haloes with mass \(M_{\rm halo}\simeq 10^{12}\,{\rm M_{\odot}}\), broadly similar to that of the Milky Way. The cosmic SFR densities in modern simulations agree with each other to within a factor of a few and the shapes are fairly similar, decreasing substantially over the last 10 Gyr. However, the star formation history of a subset of galaxies can show larger variations, as shown by the average SFHs of central galaxies in Milky Way-mass haloes, which diverge towards the present day for EAGLE and IllustrisTNG.
Semi-analytic models first highlighted that the emergence of colour bimodality is a natural consequence of the use of energetic feedback in massive galaxies as a means to shape the high-mass end of the present-day GSMF away from the power-law distribution of dark matter haloes (e.g. Kang et al. 2005). To date, this shaping is achieved in all realistic state-of-the-art simulations via the onset of efficient AGN feedback as the dominant regulation mechanism in haloes of mass \(M_{\rm halo}\gtrsim 10^{12}\,{\rm M}_{\odot}\). Because these simulations are usually calibrated to reproduce the GSMF, it is not unreasonable to expect that they yield bimodal galaxy colours. However, accurate reproduction of the observed distribution of galaxies in the colour - mass plane also relies on the simulations yielding realistic SFHs and metallicities, because these properties also influence the observable properties of galaxies and are hence inputs to the population synthesis models used to translate physical properties into observables. One must also apply reasonable corrections for obscuration and attenuation by dust.
EAGLE (Trayford et al. 2015, 2017) and IllustrisTNG (Nelson et al. 2018b) have been shown to reproduce the key features of the colour - mass plane recovered from galaxy surveys. Having satisfied this validation step, the simulations enable the origin of bimodality to be examined in detail, and the sensitivity of the colour - mass distribution to details of the models to be explored. EAGLE and IllustrisTNG present the consensus that bimodality is a direct consequence of the onset of AGN feedback. The presence in EAGLE of blue galaxies brighter than those observed likely indicates that its AGN feedback does not adequately quench some fraction of massive galaxies (Trayford et al. 2015). Similarly, the greater fraction of red discs and blue spheroids in IllustrisTNG compared to observations may indicate that efficient AGN feedback is triggered in the wrong objects or at the wrong time (Rodriguez-Gomez et al. 2019); this is a problem whose resolution may require that AGN triggering is more sensitive to mergers (Bustamante & Springel 2019). Donnari et al. (2021) compare the closely related fraction of quenched galaxies as a function of stellar mass for a number of state-of-the-art simulations, finding qualitative but not quantitative agreement for central galaxies.
## 4 Galaxy scaling relations
We next examine the degree to which well-known galaxy scaling relations are reproduced by state-of-the-art simulations of the galaxy population. The form of, and scatter about, these scaling relations encodes valuable information concerning the physics of galaxy evolution, but extracting insight from observed relations often requires the interpretive power of simulations. Confidence in the insight obtained is naturally greater if multiple simulations present a consensus for the origin of particular relations. As with Section 3, we caution that some of the scalings presented here are popularly used as calibration diagnostics, and cannot be treated as predictions.
### Supermassive black holes
The primacy of AGN feedback as the mechanism by which state-of-the-art simulations regulate and quench star formation in massive galaxies, renders the relationship between the mass of galaxies and that of their central SMBH of particular importance. Although
AGN feedback is not thought to dominate in low-mass galaxies, the latter are the sites in which SMBHs are seeded. SMBHs grow in concert with their host galaxy, yielding a power-law scaling between their masses. Whether this relation is causal is the subject of energetic debate, as it may for example reflect a natural outcome of hierarchical assembly (Jahnke & Maccio, 2011), though it has been argued that this explanation is difficult to reconcile with the masses of SMBHs in dwarf galaxies (King & Nealon, 2021).
The left panel of Figure 7, adapted from Habouzit et al. (2021), shows the relation between the mass of the SMBHs and their host galaxies in EAGLE, Horizon-AGN, IllustrisTNG and SIMBA. The four simulations are broadly consistent with observed SMBH masses for \(M_{\star}\gtrsim 10^{10.5}\,\mathrm{M}_{\odot}\). In this regime the simulations exhibit scaling relations with similar slope and, as remarked in Section 2.4, the normalisation of the relation largely reflects differences in the calibration/choice of the subgrid AGN efficiency parameter, \(\epsilon_{\mathrm{f}}\). There is poorer consensus at lower galaxy masses, a regime in which the scatter in observed masses is also large (e.g. Kormendy & Ho, 2013). In this regime, the growth of SMBHs is likely regulated by stellar feedback (e.g. Dubois et al., 2015; Bower et al., 2017), and the growth of simulated SMBHs is sensitive to degenerate details such as the seed mass, the subgrid accretion model, and subgrid treatments of unresolved dynamics (e.g. Bahe et al., 2022).
The right panel of Figure 7 shows the present-day SMBH scaling relation in more detail for IllustrisTNG, with symbols coloured by the SFR of the host galaxy. In massive galaxies (\(M_{\star}\gtrsim 10^{10}\mathrm{M}_{\odot}\)), there is a strong negative correlation, at fixed stellar mass, between SMBH mass and SFR, indicative of the regulation of galaxy SFRs by AGN-driven outflows. Importantly, there is a qualitative consensus in state-of-the-art simulations for this particular correlation. However, as shown by Habouzit et al. (2021, see their Fig 4), the simulations differ significantly in terms of the quantitative influence of AGN feedback on star formation regulation.
Figure 7: Left: the characteristic present-day SMBH mass (\(M_{\mathrm{BH}}\)) as a function of stellar mass (\(M_{\star}\)) for EAGLE, Horizon-AGN, IllustrisTNG and SIMBA, adapted from Habouzit et al. (2021) Right: as left, but showing the distribution of present-day central galaxies in IllustrisTNG, coloured by their average SFR. For galaxies with \(10^{10}\lesssim M_{\star}\lesssim 10^{11}\,\mathrm{M}_{\odot}\), the SFR is lower in galaxies with more massive black holes, because they have experienced stronger AGN-driven outflows.
### The star-forming main sequence
Star-forming galaxies exhibit a tight correlation between their SFR and stellar mass; this relation has become popularly known as the 'star-forming main sequence' (SFMS, e.g. Noeske et al. 2007), whose normalisation increases with increasing redshift. The ubiquity of the relation has led to the adoption of a more refined definition of what constitutes a passive galaxy, shifting from a canonical threshold in specific SFR (\(\dot{M}_{\star}/M_{\star}\simeq 10^{-11}\,{\rm yr}^{-1}\)) to some number of decades below the median of the SFMS, thus naturally accounting for the mass and redshift dependence of the characteristic SFR of star-forming galaxies.
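The sketch below makes this refined definition concrete: galaxies are flagged as passive when their SFR falls more than a chosen number of decades below the running median SFR of the SFMS at their stellar mass, with the median re-computed once so that the passive population does not drag it down. The binning and threshold are illustrative choices.

```python
import numpy as np

def passive_mask(mstar, sfr, n_dex_below=1.0, bins=20):
    """Flag galaxies as passive if their SFR lies more than n_dex_below
    decades under the running median SFR of star-forming galaxies at
    their stellar mass; a minimal sketch with illustrative choices."""
    logm = np.log10(mstar)
    logsfr = np.log10(np.clip(sfr, 1e-12, None))  # floor protects log of zero SFR
    edges = np.linspace(logm.min(), logm.max(), bins + 1)
    idx = np.clip(np.digitize(logm, edges) - 1, 0, bins - 1)
    passive = np.zeros(len(logm), dtype=bool)
    for _ in range(2):  # second pass excludes the passive population
        med = np.array([np.median(logsfr[(idx == b) & ~passive])
                        if np.any((idx == b) & ~passive) else -np.inf
                        for b in range(bins)])
        passive = logsfr < med[idx] - n_dex_below
    return passive
```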
State-of-the-art simulations broadly reproduce the present-day SFMS, elucidating its origin and that of the scatter about it. Analysing the SFMS in IllustrisTNG, Donnari et al. (2019) found that its normalisation is mildly sensitive to both the timescale over, and aperture within which, the SFR is measured, and that its intrinsic scatter (roughly at the factor of two level) is largely insensitive to stellar mass. However, they caution that the redshift evolution of the SFMS is not as strong as inferred from observations, a shortcoming in common with prior generations of simulations (e.g. Dave 2008). Matthee & Schaye (2019) examined the SFMS in EAGLE, finding a mild mass dependence. They showed that scatter in the present-day SFMS is due to both fluctuations on short time scales (\(\lesssim 2\) Gyr) associated with the physics of self-regulated gas flows, and long time scale (\(\sim 10\) Gyr) variations due to differences in halo formation histories. Related consequences of the short duration fluctuations are correlations at fixed stellar mass between the SFR and the outflow velocity of winds (Nelson et al. 2019), and between the SFR and the mass of cold gas that is (or will soon become) available for star formation (e.g. Lagos et al. 2016, Appleby et al. 2020), the latter being readily corroborated by observations (Saintonge & Catinella 2022).
### The Tully-Fisher relation
The Tully & Fisher (1977) relation is a well-known scaling relation connecting the asymptotic rotation velocity of disc-dominated galaxies and their luminosity (or mass). The strongest correlation is recovered when considering the total baryonic mass of galaxies (McGaugh et al. 2000). A similar relation exists for elliptical galaxies, between their central velocity dispersion and their mass (or luminosity; Faber & Jackson 1976). These relations are the traditional means by which the properties of galaxies were linked to those of their haloes, because galaxy dynamics (in standard cold dark matter cosmogonies) are primarily governed by the structure of their haloes.
The failure of prior generations of simulations to reproduce the observed rotation profiles of individual galaxies (summarised succinctly by Scannapieco et al. 2012), largely as a consequence of overcooling, translated into a failure of simulations of galaxy populations to reproduce the high-mass end of the Tully-Fisher relation (e.g. McCarthy et al. 2012). The inclusion of AGN feedback and the calibration of feedback models in state-of-the-art simulations results in a more realistic stellar mass - halo mass relation and prevents artificial contraction of the halo due to the excessive condensation of stars (e.g. Schaller et al. 2015). As a result, these simulations broadly reproduce the observed Tully-Fisher relation (e.g. Vogelsberger et al. 2014, Ferrero et al. 2017, Sales et al. 2017), without having been explicitly calibrated to do so.
### The mass - metallicity relations
Feedback-driven outflows transport a fraction of the heavy elements synthesised in galaxies into the CGM (Tumlinson et al., 2011), and even beyond to the IGM (Aguirre et al., 2001). This establishes a relationship between the mass of a galaxy and the characteristic metallicities of its gas and stars (Larson, 1974). This 'mass - metallicity relation' was revealed in detail by the advent of highly-multiplexed spectroscopic surveys such as SDSS, which enabled both the gas-phase metallicity (Tremonti et al., 2004), and that of the stars (Gallazzi et al., 2005), to be measured for 10s-100s of thousands of galaxies. These studies thus reveal that a greater fraction of the metals synthesised by low-mass galaxies, relative to more massive counterparts, are transported into their gaseous environments. These trends are broadly reproduced by EAGLE (Schaye et al., 2015), Horizon-AGN (Dubois et al., 2014), IllustrisTNG (Torrey et al., 2019) and SIMBA (Dave et al., 2019), and are a natural consequence of the mass dependence of the macroscopic mass loading of outflows that emerges from their subgrid models for feedback (Beckmann et al., 2017; Nelson et al., 2019; Mitchell et al., 2020, see also Muratov et al., 2015; Christensen et al., 2016).
The observations reveal significant scatter in metallicity at fixed stellar mass, and other properties have been shown to correlate with this scatter, such as star formation rate (the 'fundamental metallicity relation'); this behaviour is reproduced by the simulations (e.g. Lagos et al., 2016; Dave et al., 2019; Torrey et al., 2019). De Rossi et al. (2017) showed using EAGLE that one might expect a complex mass dependence of the correlation of metallicity with gas fraction and star formation rate. In low mass galaxies, lower metallicities correspond to higher gas fractions, higher star formation rates and the presence of young stellar populations, whereas at higher mass scales lower metallicities correspond to gas poor, quiescent galaxies. They interpreted this inversion as a consequence of the switch from stellar feedback-based regulation in low-mass galaxies, to AGN-dominated regulation.
### Cold gas in galaxies
Forging a holistic view of galaxy evolution requires an understanding of the evolution of their gaseous components, which can span many decades in density, temperature and ionisation fraction. Recovery of these properties is a prerequisite for the use of the simulations to elucidate the cycling of baryons into, and out of, galaxies (e.g. Tumlinson et al., 2017; Peroux & Howk, 2020), which is driven by the interplay of large-scale cosmic structure evolution and the complex physics of galaxy formation.
Scaling relations relating the mass of cold, dense gas (primarily atomic and molecular hydrogen) can be readily constructed from observations of galaxies in the low-redshift cosmos (see e.g. Saintonge and Catinella, 2022). Such relations present an appealing benchmark, but because large simulations of the galaxy population do not in general employ an explicit radiation transport scheme, they cannot model the balance of molecular, atomic, and ionised hydrogen on-the-fly in a self-consistent fashion. The partitioning is therefore achieved as a post-processing step based on empirical relationships and/or theoretical models.
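As an example of such post-processing, the sketch below applies the empirical pressure-based partition of Blitz & Rosolowsky (2006), one of the relations commonly used for this purpose, to split cold hydrogen into atomic and molecular phases; the quoted \(P_{0}\) and \(\alpha\) are the published best-fit values, and their use on simulation gas elements is an illustrative simplification.

```python
def molecular_fraction(p_over_kb, p0_over_kb=4.3e4, alpha=0.92):
    """Molecular-to-atomic partition of cold hydrogen via the empirical
    pressure relation of Blitz & Rosolowsky (2006):
    Rmol = Sigma_H2 / Sigma_HI = (P / P0)^alpha.
    p_over_kb is the midplane pressure P/k_B [K cm^-3]; returns the
    molecular fraction f_H2 = H2 / (H2 + HI)."""
    rmol = (p_over_kb / p0_over_kb) ** alpha
    return rmol / (1.0 + rmol)

# e.g. a gas element at P/k_B = 1e4 K cm^-3:
print(f"f_H2 = {molecular_fraction(1e4):.2f}")  # ~0.21
```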
Building on prior examinations of EAGLE (Lagos et al., 2015; Bahe et al., 2016; Crain et al., 2017) and IllustrisTNG (Diemer et al., 2018; Stevens et al., 2019), Dave et al. (2020) compared the cold gas mass functions and scaling relations of these simulations with those of SIMBA. The comparison revealed a broad qualitative consensus, but also a number of significant quantitative differences, such as in the characteristic break scale of the H i mass function and the direction in which this quantity shifts with increasing redshift. None of the
simulations were found to universally reproduce the key cold gas observational diagnostics, with each exhibiting a different conflict with the data, despite all three simulations broadly reproducing the present-day GSMF and SFMS. This signals that the combined action of the subgrid treatments of interstellar gas, star formation, and feedback in these simulations is too simplistic to reproduce holistically the cold gas and stellar properties of present-day galaxies.
## 5 Cosmic gas
The explicit modelling of cosmic gas flows and their interaction with galaxies is a major selling point of hydrodynamical simulations, relative to simplified methods such as semi-analytic modelling. The inclusion of energetic feedback processes in hydrodynamical simulations has indicated that a complex and intimate connection likely exists between the life cycles of galaxies and their gaseous environments, for which there is a growing body of corroborating observational evidence (e.g. Peroux & Howk 2020). Besides yielding fascinating insight in its own right, study of the gaseous cosmos is therefore now widely acknowledged as a complementary and profitable means of understanding galaxy formation and evolution, particularly in regard to constraining the physics of feedback and metal transport. In this section we review key properties of cosmic gas, on the scale of the cosmic large-scale structure (the IGM) and within galaxy haloes (the CGM/ICM), that emerge from state-of-the-art simulations of the galaxy population. The confrontation of these outcomes with observational measurements represents a strong test of the simulations, because such properties are less frequently considered than stellar properties when calibrating subgrid feedback models.
### Absorption system statistics
The low-redshift column density distribution functions of intermediate- and high-ionisation state metal ions in EAGLE and IllustrisTNG have been presented in a number of studies (e.g. Schaye et al. 2015, Nelson et al. 2018a, Wijers et al. 2019). These demonstrate that the simulations are mostly compatible with observational measurements of the column density distribution function of absorption systems seen in quasar spectra, which exhibit significant scatter at fixed column density. Differences between the simulations are limited to the relatively high column regime (\(N\gtrsim 10^{15}\)cm\({}^{-2}\)), in which EAGLE yields too few O vi absorbers, and IllustrisTNG yields too many. In both EAGLE and IllustrisTNG, the abundance of O vi absorbers increases slightly with increasing resolution. Wijers et al. (2019) demonstrated that the abundance of high column density O vii and O viii absorbers is influenced by AGN feedback in a non-trivial fashion: AGN-driven outflows increase the metallicity of absorbers by transporting oxygen into halo outskirts, but reduce the characteristic density of the absorbing gas. However, the close correspondence of the distribution functions of O vi, O vii and O viii in both simulations indicates that these diagnostics are, perhaps surprisingly, only mildly sensitive to the choice of hydrodynamics solver and details of how AGN and stellar feedback processes are implemented.
A related diagnostic is the covering fraction of absorption systems in the vicinity of galaxies. Prior generations of simulations failed to reproduce the high covering fraction of high column density (\(N\gtrsim 10^{17}\)cm\({}^{-2}\)) neutral hydrogen absorbers associated with star-forming galaxies at \(z\simeq 2-3\)(e.g. Fumagalli et al. 2014). Using EAGLE, Rahmati et al. (2015)
demonstrated that the H i covering fraction is largely determined by the supply of neutral hydrogen into haloes, which is governed primarily by gravitational infall, and only weakly influenced by feedback processes. However, feedback crucially governs the relationship between galaxy stellar mass and halo mass, such that recovering realistic covering fractions for simulated galaxies of a fixed stellar mass requires that they form in realistic environments. The greater feedback efficiency of state-of-the-art simulations relative to earlier generations of simulations ensures that \(z\simeq 2-3\) star-forming galaxies form in more massive haloes, which exhibit radial H i covering fraction profiles consistent with those observed.
### Physical properties of the circumgalactic medium
The physical properties of the CGM have long been viewed as a powerful means of constraining the physics of galaxy formation, because the CGM interfaces galaxies with their incoming supply of fuel for star formation and their feedback-driven outflows of enriched gas. A salient example is the opportunity to elucidate how outflows operate: the stellar masses of galaxies (for example) are largely agnostic to whether cosmic gas inflows are regulated by violent, intermittent episodes of feedback or a gentler and more continuous process, but the physical conditions of the CGM are expected to differ strongly between these cases (e.g. van de Voort & Schaye 2012).
Using EAGLE, Oppenheimer et al. (2018) examined the physical conditions traced by low-ionisation state absorption systems in the CGM of galaxies with mass comparable to that of the Milky Way, to assist the interpretation of results from the COS (Cosmic Origins Spectrograph)-Halos survey (Werk et al. 2013, 2014). They recovered column densities for most ions within a factor of \(\simeq 2\) of those reported by COS-Halos. They found little correlation of absorber column densities with the specific SFR of central galaxies, but a significant decline of column densities at larger impact parameters, both findings being consistent with the observations. Their analysis elucidated that these low-ionisation state absorption systems trace cool (\(T\sim 10^{4}\,\mathrm{K}\)) clumps of gas close to galaxy discs (\(\lesssim 100\,\mathrm{kpc}\)), distinct from the warmer (\(T\simeq 10^{5.5}\,\mathrm{K}\)) and more diffuse gas traced by O vi absorbers, usually found at \(\gtrsim 150\,\mathrm{kpc}\). This suggests that low-ionisation state absorbers likely trace enriched gas that is reaccreting onto galaxies.
Perhaps unsurprisingly, comparison of similar CGM diagnostics from different simulations reveals areas of both consensus and discrepancy. Peroux et al. (2020) demonstrate that the gas mass fluxes and metallicities as a function of the azimuthal angle about star-forming galaxies are qualitatively similar in EAGLE and the high-resolution \(L=51.7\,\mathrm{cMpc}\) TNG50 simulation of the IllustrisTNG suite. Both simulations exhibit a strong azimuthal dependence such that the major axis is dominated by relatively metal-poor inflows and the minor axis by enriched outflows. They also broadly reproduce the dichotomy of O vi column densities associated with star-forming and passive galaxies revealed by COS-Halos (Tumlinson et al. 2011). However, the origin of the relationship differs between the simulations. Based on analysis of EAGLE, Oppenheimer et al. (2016) argue that passive galaxies exhibit a deficit of O vi because they preferentially reside in more massive haloes (in which oxygen tends to be more strongly ionised) than star-forming galaxies of comparable stellar mass. Nelson et al. (2018) report a more direct connection between star formation and the transport of oxygen-enriched gas into the CGM by AGN feedback in IllustrisTNG. Such differences may be just the tip of the iceberg, as the inclusion of physical processes not treated by either EAGLE or IllustrisTNG, for example feedback by cosmic rays, may
induce significant changes to the CGM (Ji et al. 2020, Butsky et al. 2022).
### Gas inflows & outflows
The ISM and CGM are essentially gas reservoirs bound to galaxy haloes, whose mass evolves in response to a shifting balance between, primarily, galaxy- and halo-scale inflows and outflows, with generally minor contributions from star formation, stellar mass loss and, in some cases, external environmental processes. In Figure 8 we show the halo mass-dependent present-day inflow and outflow rates as a function of galactocentric radius in EAGLE (solid curves) and IllustrisTNG (dashed curves). Black curves correspond to haloes of mass \(10^{11.5}\) M\({}_{\odot}\), blue curves to haloes of \(10^{12}\) M\({}_{\odot}\), and red curves to haloes of \(10^{12.5}\) M\({}_{\odot}\). Comparison to the present-day SFRs of the central galaxies hosted by \(10^{12}\) M\({}_{\odot}\) haloes shown in Figure 6 highlights that the inflow and outflow rates are higher than the SFRs for both simulations.
Subgrid feedback models aim to approximate the impact of unresolved feedback processes on resolved scales. The flow rates presented in Figure 8 illustrate that, despite the subgrid treatments in EAGLE and IllustrisTNG yielding broadly similar present-day galaxy populations, the radial transport of gas through their haloes differs significantly (see also Mitchell et al. 2020). For example, inflow rates in IllustrisTNG haloes are fairly constant as a function of galactocentric radius, whereas those in EAGLE decrease more strongly towards the halo centre. Outflow rates in the two simulations are reasonably similar for the more massive haloes (\(10^{12.5}\) M\({}_{\odot}\)) and at large radii, whereas inflow rates are different by a factor of 2 in the low and high halo mass range. Within \(\simeq 50\) kpc of the centre of the less massive haloes, the simulations differ by a factor of 3 or more.
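For concreteness, radial flow rates like those in Figure 8 are commonly estimated by summing the radial mass flux of gas elements within thin spherical shells (the measurements of Mitchell et al. 2020 are of this type). The following is a minimal sketch of such an estimator, assuming NumPy arrays of particle masses, galactocentric radii, and radial velocities taken from a snapshot; the function and argument names are illustrative, not any simulation's actual API.

```python
import numpy as np

def mass_flow_rates(mass, radius, v_radial, r_shell, rel_width=0.1):
    """Instantaneous mass inflow/outflow rates through a spherical shell.

    mass     : particle masses [Msun]
    radius   : galactocentric radii [kpc]
    v_radial : radial velocities [kpc/Gyr]; positive values are outflowing
    r_shell  : shell radius [kpc]; shell thickness is rel_width * r_shell

    Returns (inflow_rate, outflow_rate) in Msun/Gyr, both positive.
    """
    dr = rel_width * r_shell
    in_shell = np.abs(radius - r_shell) < 0.5 * dr
    # Each particle in the shell contributes a mass flux m * v_r / dr
    flux = mass[in_shell] * v_radial[in_shell] / dr
    inflow = -flux[flux < 0.0].sum()
    outflow = flux[flux > 0.0].sum()
    return inflow, outflow
```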
Differing rates of inflow and outflow lead to differing halo baryon fractions, with high (net) inflow rates at the virial radius (as seen for low mass haloes in IllustrisTNG) being conducive to a high baryon fraction (see Figure 9) if sustained over a significant fraction
Figure 8: Inflow (left panel) and outflow (right panel) rates as a function of galactocentric radius for \(M_{\rm halo}=10^{11.5}\) M\({}_{\odot}\) (black curves), \(10^{12}\) M\({}_{\odot}\) (blue curves), and \(10^{12.5}\) M\({}_{\odot}\) (red curves) in EAGLE (solid curves) and IllustrisTNG (dashed curves) at \(z=0\). Even though both simulations produce similarly realistic galaxies, the flow of gas into and out of galaxies and their haloes can be substantially different. Additionally, the magnitude and sign of these differences depend on halo mass.
of cosmic time. In the absence of feedback and large-scale outflows, the gas accretion rate at the virial radius is similar to the product of the cosmic baryon fraction, (\(\Omega_{\rm b}/\Omega_{\rm m}\)), and the dark matter accretion rate, but large-scale feedback-driven outflows can significantly suppress this accretion rate and so also act as a form of 'preventative' feedback (e.g. Wright et al. 2020) that inhibits intergalactic gas from reaching the CGM and the ISM. The extent to which this happens is likely to depend strongly on the subgrid implementation of the feedback, and the differing flow rates shown in Figure 8 indicate that state-of-the-art simulations do not present a consensus.
### Halo baryon fractions
The baryon fraction of haloes, i.e. the ratio of the baryonic and total masses within the virial radius, is a closely related diagnostic to halo gas flows. In the absence of radiative physics (cooling and feedback), simulations indicate that baryon fractions should be close to the cosmic average baryon fraction of \(\Omega_{\rm b}/\Omega_{\rm m}\approx 0.16\)(Crain et al. 2007). Feedback-driven outflows, however, can markedly reduce the baryon fraction. Figure 9 shows the median halo CGM mass fractions as a function of halo mass for EAGLE, IllustrisTNG, SIMBA and Romulus (Tremmel et al. 2017). We include the latter here as it features a markedly different scheme for accretion onto SMBHs.
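To make the quantity plotted in Figure 9 explicit, the normalised CGM gas fraction is computed per halo roughly as below; a minimal sketch, with the numbers quoted in the text used purely for illustration.

```python
OMEGA_B_OVER_OMEGA_M = 0.16  # cosmic average baryon fraction

def normalised_cgm_fraction(m_cgm, m_halo):
    """CGM gas mass over halo mass, normalised by Omega_b / Omega_m.

    A value of 1 would mean the CGM alone holds the halo's full cosmic
    allotment of baryons; EAGLE's median at M_halo = 1e12 Msun is ~0.2,
    whereas IllustrisTNG's exceeds 0.5.
    """
    return (m_cgm / m_halo) / OMEGA_B_OVER_OMEGA_M

print(normalised_cgm_fraction(m_cgm=3.2e10, m_halo=1e12))  # -> 0.2
```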
The baryon fractions of the massive haloes that host rich galaxy groups and clusters are reasonably well constrained (Vikhlinin et al. 2005, Sun et al. 2009), and it is now well established that, owing to differences in the subgrid implementations of feedback processes, accurate reproduction of the GSMF offers no guarantee of accurately recovering realistic baryon fractions on these scales (e.g. Schaye et al. 2015, Haider et al. 2016). However, adjustment of the feedback implementations, particularly for AGN, enables the correspondence with observed baryon fractions to be improved (Schaye et al. 2015, Weinberger et al. 2017, McCarthy et al. 2017). In less massive haloes, gas fractions are largely unconstrained and, as shown in Figure 9, the median relations that emerge from state-of-the-art
Figure 9: The gas fraction in the CGM (the CGM mass divided by the mass of the halo, normalised by the cosmic average baryon fraction) as a function of halo mass. The left panel shows the median for EAGLE, IllustrisTNG, SIMBA (data provided by R. Dave) and Romulus (data provided by M. Tremmel). The full distribution for IllustrisTNG is shown in the right panel, colour coded by the average logarithmic difference in SMBH mass as compared to the median SMBH mass at the same halo mass. Haloes with more massive black holes have lower gas fractions for \(M_{\rm halo}\lesssim 10^{12.5}\) M\({}_{\odot}\), because these haloes have experienced more feedback from AGN.
simulations are strikingly dissimilar. As first highlighted by Davies et al. (2020), the median present-day CGM mass fraction at \(M_{\rm halo}=10^{12}\) M\({}_{\odot}\) is \(\simeq 0.2\) in EAGLE, but \(>0.5\) in IllustrisTNG.
Despite the dissimilar trends between CGM mass fraction and halo mass in EAGLE and IllustrisTNG, Davies et al. (2020) showed that in both simulations the scatter in CGM mass at fixed halo mass correlates strongly with the formation redshift of the halo (negatively), the mass of the central galaxy's SMBH (negatively), and the central galaxy's SFR (positively). The negative correlation with SMBH mass is shown for IllustrisTNG in the right panel of Figure 9, coloured by the average deviation in logarithmic SMBH mass relative to the median value. Davies et al. (2020) interpreted these correlations as an indication that early growth enables the SMBH to begin delivering efficient AGN feedback sooner, expelling more gas from the halo and extending the radiative cooling timescale of the remaining CGM. The reduced cooling efficiency inhibits replenishment of the ISM as it is consumed by star formation (or ejected by the associated stellar feedback), reducing the SFR or even quenching the galaxy. Similar trends have also recently been found in SIMBA (Sorini et al. 2022), and this qualitative consensus highlights that an intimate co-evolution of galaxies and their CGM is a major prediction of state-of-the-art simulations.
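The correlations identified by Davies et al. (2020) concern scatter at fixed halo mass, i.e. residuals about the running median. A minimal sketch of that style of analysis follows, on synthetic data constructed to carry a built-in anticorrelation; this illustrates the method, not the authors' actual code or data.

```python
import numpy as np
from scipy.stats import spearmanr

def residuals_about_median(x, y, bin_edges):
    """Return y minus the median of y in the bin of x each point falls in."""
    idx = np.digitize(x, bin_edges)
    medians = np.array([np.median(y[idx == i]) if np.any(idx == i) else np.nan
                        for i in range(len(bin_edges) + 1)])
    return y - medians[idx]

rng = np.random.default_rng(42)
log_mhalo = rng.uniform(11.5, 12.8, 2000)
log_mbh = 1.2 * log_mhalo - 7.0 + rng.normal(0.0, 0.3, 2000)
# CGM mass anticorrelates with the SMBH-mass residual, by construction
log_mcgm = (0.9 * log_mhalo + 0.5
            - 0.8 * (log_mbh - (1.2 * log_mhalo - 7.0))
            + rng.normal(0.0, 0.1, 2000))

edges = np.arange(11.5, 12.9, 0.2)
d_cgm = residuals_about_median(log_mhalo, log_mcgm, edges)
d_bh = residuals_about_median(log_mhalo, log_mbh, edges)
print(spearmanr(d_cgm, d_bh))  # strongly negative, as reported in both simulations
```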
## 6 The influence of environment
Environment, in the context of galaxy evolution, can refer to a variety of important physical processes and effects. The cosmological environment governs the accretion rate onto haloes, which can be spatially inhomogeneous because of the filamentary structure of the cosmic web, as well as the rate of mergers. This profoundly impacts the growth of galaxies in combination with smaller scale processes, such as galactic winds. The large-scale structure and its inhomogeneity can lead to differences between cosmological and idealized simulations, though we will not further discuss this here.
The environment of a galaxy can also refer to whether or not it lives at the minimum of its host halo's potential well, where the gas densities and cooling rates are highest. Satellite galaxies once resided at the centres of their own haloes, but have since fallen into more massive haloes and are thus offset from the centre of the potential. Back-splash galaxies or flyby galaxies are those that were satellites in the past but have left the more massive halo they were previously in. They may therefore seem not to live in extreme environments, but can still have been strongly affected in the past. Note that there is no well-defined halo boundary, and environmental effects driven by the halo can extend beyond the virial radius. When galaxies experience such effects before entering the more massive halo, this is also sometimes referred to as pre-processing.
### Satellite galaxies: Stripping & Starvation
Satellite galaxies make up a large proportion of the galaxy population and therefore must be included if we are to understand galaxy formation fully. Satellites make up more than 40 per cent of the population below a stellar mass of \(3\times 10^{10}\) M\({}_{\odot}\), though this decreases to less than 20 per cent above \(M_{*}=10^{11}\) M\({}_{\odot}\). Although we often think about processes involved in the formation of central galaxies and satellite galaxies separately, each satellite spent its early life (possibly most of its life) as a central and is therefore also shaped by the physical processes relevant for central galaxy formation. However, after becoming satellites,
these galaxies evolve differently from central galaxies, because they experience various environmental effects, such as ram pressure and tidal stripping, as they move through the halo of their more massive companion. Note that a fraction of the galaxy population considered central galaxies are back-splash galaxies and have thus previously experienced stronger environmental effects.
Ram pressure is exerted on a body of gas when it moves through a medium with a different velocity. It scales with the density of the surrounding medium and the square of the velocity difference. Ram pressure only directly affects gas, whereas tidal effects are purely gravitational and thus affect all components of a satellite, i.e. dark matter, gas, and stars. When the ram pressure is high enough to remove the ISM of the satellite galaxy, this is typically referred to as ram pressure stripping (e.g. Gunn & Gott 1972, Simpson et al. 2018). With the ISM removed, the satellite can no longer form stars, so this results in relatively fast quenching of star formation. When only the gas in the halo of the satellite is removed, the galaxy can continue forming stars from its ISM until it is depleted. This is often referred to as starvation or strangulation and the quenching of star formation is much slower (e.g. van de Voort et al. 2017, Wright et al. 2022). In both cases, the satellite will not be able to replenish its ISM as it no longer has its own CGM acting as a reservoir from which it can accrete fresh gas.
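The textbook stripping condition balances the ram pressure \(P_{\rm ram} = \rho\,\Delta v^{2}\) against the gravitational restoring force per unit area of the disc, \(2\pi G\,\Sigma_{\star}\Sigma_{\rm gas}\) (Gunn & Gott 1972). A minimal sketch of this criterion, with illustrative fiducial values chosen here for demonstration only:

```python
import numpy as np

G = 4.301e-6  # gravitational constant [kpc Msun^-1 (km/s)^2]

def ram_pressure(rho_ambient, dv):
    """P_ram = rho * dv^2, in Msun kpc^-3 (km/s)^2."""
    return rho_ambient * dv ** 2

def restoring_pressure(sigma_star, sigma_gas):
    """Disc restoring force per unit area, 2 pi G Sigma_star Sigma_gas,
    for surface densities in Msun/kpc^2 (same units as ram_pressure)."""
    return 2.0 * np.pi * G * sigma_star * sigma_gas

# A satellite outer disc (10 and 5 Msun/pc^2 in stars and gas) moving at
# 500 km/s through a hot halo of ~1e4 Msun/kpc^3 is stripped:
stripped = (ram_pressure(rho_ambient=1e4, dv=500.0)
            > restoring_pressure(sigma_star=1e7, sigma_gas=5e6))
print(stripped)  # True
```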
Modern cosmological simulations have sufficient resolution to capture these stripping processes, though resolution effects can make a quantitative difference, because galaxies that are less well resolved are more easily disrupted (e.g. Yun et al. 2019). If simulations were able to fully model the multiphase ISM, it is likely that the cold, dense gas would be less easily stripped than in current simulation suites. Wright et al. (2022) find that starvation and ram pressure stripping of the ISM contribute a similar amount to the reduction of the SFR in satellite galaxies. Some evidence has been found that ram pressure stripping is stronger along the central galaxy's major axis than along its minor axis, possibly because the density in the polar direction is reduced due to feedback-driven outflows (Martin-Navarro et al. 2021).
Quenching, or the suppression of star formation, is a complicated and unsolved problem, as discussed before in Section 3.5. Quenching in satellites depends on internal feedback processes as well as external stripping. Simulations have struggled to quantitatively reproduce observed quenched fractions at all stellar masses. There is general agreement that the majority of satellites are quenched, with the quenched fraction increasing towards lower masses (Bahe et al. 2017, Donnari et al. 2021). However, observations seem to show the opposite trend and very low quenched fractions for \(10^{9}<M_{\star}<10^{10}\,\mathrm{M}_{\odot}\)(Davies et al. 2019). For an unbiased comparison, it will be necessary to process the simulations in the same way as is done for the observational measurements.
### Environmental effects: beyond halo mass
The environment of a galaxy is dominated by properties of the dark matter-dominated halo it lives in (e.g. Crain et al. 2009). Although straightforward to determine in simulations, halo mass is difficult to measure observationally. Instead, there are a variety of observational methods that can be used to quantify the environments of galaxies, such as distance to the N-th nearest neighbour. These often-used environmental indicators correlate strongly with the halo mass (Haas et al. 2012, Marasco et al. 2016). Many of the differences identified based on environmental indicators exist because they are sensitive to host halo mass, and
satellite galaxies behave differently from central galaxies, as discussed in Section 6.1.
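As a concrete example of such an indicator, the projected N-th nearest-neighbour surface density \(\Sigma_{N} = N/(\pi d_{N}^{2})\) can be computed per galaxy with a KD-tree. A minimal sketch, using synthetic galaxy positions:

```python
import numpy as np
from scipy.spatial import cKDTree

def nth_neighbour_density(xy, n=5):
    """Sigma_n = n / (pi * d_n^2), with d_n the distance to the
    n-th nearest neighbour of each galaxy (projected positions)."""
    tree = cKDTree(xy)
    # k = n + 1 because each point's nearest "neighbour" is itself
    dist, _ = tree.query(xy, k=n + 1)
    return n / (np.pi * dist[:, -1] ** 2)

rng = np.random.default_rng(7)
positions = rng.uniform(0.0, 100.0, size=(1000, 2))  # a 100 x 100 Mpc patch
sigma5 = nth_neighbour_density(positions)
```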
In addition to a dependence of galaxy formation on halo mass, the larger scale environment the halo lives in can also influence the evolution of embedded galaxies to a certain extent. To study environmental effects that go beyond the halo mass of the system, it is important to remove the halo mass dependence and characterise the large-scale environment. This usually requires a large-scale structure finder, both in observations and simulations, or visual classification. Rosas-Guevara et al. (2022) build a void catalogue and compare galaxies living in those voids to those residing in the cosmic web. Overall, the differences are fairly minor, e.g. average stellar masses can change by up to about 30 per cent. They find that this large-scale environment affects low-mass galaxies differently from high-mass galaxies, which likely indicates that there are a variety of processes at play. The galaxy mass in low-mass haloes is lower in voids, potentially due to a combination of lower accretion rates and lower merger rates, whereas it is higher in massive haloes, potentially because of reduced black hole growth and weaker AGN feedback in void galaxies.
Ram-pressure stripping, as discussed in Section 6.1, is not limited to taking place inside galaxy haloes. The filaments of the cosmic web can also harbour strong accretion shocks and provide ram pressure, which can result in stripping of the gas in and around dwarf galaxies that pass through (Benitez-Llambay et al. 2013, Pasha et al. 2022, Herzog et al. 2022). The ambient temperature of the gas in these filaments can be higher than the virial temperatures of dwarf galaxies, which means that they are not able to accrete fresh gas and thus remain gas-deficient. These galaxies have lower star formation rates and lower ISM masses than galaxies not affected by stripping in the cosmic web, yet they can appear quite isolated.
## 7 Future outlook
The level of realism achieved by state-of-the-art cosmological hydrodynamical simulations of the galaxy population has advanced dramatically in the last decade. Despite this success, the outcomes are particularly sensitive to subgrid implementations of feedback processes. It is thus reasonable to argue that this success suffices only to establish that the basic sketch of galaxy formation theory within the \(\Lambda\)CDM cosmogony is plausible. The development of simulations that can be used to stress test a truly comprehensive theory of galaxy formation and evolution will require confrontation of the devil in the detail. Simulations of the galaxy population do not accurately reproduce the internal and vertical structure of galaxies, and their reliance on subgrid methods to approximate the influence of unresolved physics leaves lingering degeneracies that diminish predictive power. Moreover, the simulations neglect physical processes known to be significant in certain regimes. From our suite of examples, only IllustrisTNG models magnetic fields, which could impact cosmic gas flows and thus affect the evolution of galaxies (Pillepich et al. 2018a, van de Voort et al. 2021). The omission of magnetic fields also precludes realistic modelling of the influence of cosmic rays, which, owing to their energy density in the ISM being comparable with the thermal and magnetic pressures, likely influence galaxy-wide outflows (Uhlig et al. 2012). Similarly, thermal conduction influences the structure of the ICM and cooling onto massive galaxies (Carilli & Taylor 2002) but is generally neglected. Modelling the UVB as a spatially-uniform radiation field that 'switches on' at a fixed redshift is a particularly crude approximation for the evolution of galaxies during the EoR, when local sources of ionizing radiation can dominate the regulation of galaxy growth (Wise & Cen 2009, Trebitsch et al. 2017, Katz
et al. 2020). Locally-varying radiation fields, such as those produced by flickering AGN, can also overionise gas and lead to cooling rates being overestimated by orders of magnitude if equilibrium conditions are assumed (Vasiliev 2011, Richings et al. 2014).
It is natural to consider how the field will build on the foundation provided by the current state-of-the-art generation of hydrodynamic simulations of the galaxy population. There are several routes by which we envisage that progress will be made, beyond the usual pursuit of superior resolution (which is needed to achieve converged results in the CGM, e.g. van de Voort et al. 2019). Despite remarkable recent demonstrations of simulation codes running on hundreds of thousands of compute cores (e.g. Schaller et al. 2016, Pakmor et al. 2023), subgrid models will remain a necessary component of simulations of the galaxy population for some years to come, precluding holistic, quantitative predictive power. There is, however, much scope to develop more detailed implementations, and to interface them with the numerical calculation on shorter spatial scales, enabling more detailed confrontations with observations and increasing the diversity of the lines of enquiry for which the simulations can offer authoritative predictions. Moreover, the ill-constrained parameters of the subgrid models used by leading simulations have been calibrated manually, by performing a small set of parameter-spanning simulations, analysing their outputs, and choosing updated parameters for the subsequent set based on the practitioner's intuition for the response of the galaxy population to the adjustment. Statistical methods exist to formalise this process and ensure that the plausible parameter space is efficiently explored. There is also much to be gained not only from improving the simulations themselves, but also from improving the techniques used to analyse their outcomes.
The development and testing of new or improved treatments of physical processes, whether implemented numerically or subgrid, is usually pioneered using simulations that adopt idealised or zoomed cosmological initial conditions. At fixed resolution, such simulations are markedly less expensive than simulations of representative volumes or, for a fixed number of computational core hours, they allow galaxies to be evolved at much higher resolution. Such simulations have been used to pursue more detailed treatments of the multiphase ISM that require fewer assumptions, in particular relating to the star formation efficiency of dense gas (e.g. Hopkins et al. 2014, Semenov et al. 2016, Kim & Ostriker 2017). Feldmann et al. (2023, see also Fig. 4) showed recently that the use of these more detailed models in a periodic cosmological volume does not guarantee the emergence of realistic GSMF, even in the low stellar mass regime for which the included physical processes are expected to dominate galaxy regulation. A number of suites of zoom simulations focusing on dwarf galaxies have achieved sufficiently high resolution to model individual SNe explosions within a multiphase ISM (e.g. Wheeler et al. 2019, Agertz et al. 2020, Gutcke et al. 2022). Detailed study of these processes in idealized or zoom simulations paves the way towards development of coarse-grained descriptions that, although still approximate, can be more realistic than the simple (and diverse) treatments of stellar feedback used by today's galaxy population simulations. An extreme interpretation of this methodology is to treat outflows exclusively with a phenomenological approach, even on numerically-resolved scales (e.g. Huang et al. 2020). Similarly, the multiphase structure of the CGM is not resolved in cosmological simulations, specifically the cool gas and the interfaces between the cool and hot CGM (McCourt et al. 2018, Fielding et al. 2020, e.g.), see also Faucher-Giguere & Oh, this volume. Accurately treating the cool and intermediate temperature gas may require developing subgrid models within the CGM.
The recent discovery with JWST of multiple galaxies with spectroscopically-confirmed
redshifts \(z>10\)(Curtis-Lake et al., 2023) highlights the urgent need to model the EoR galaxy population with explicit radiation hydrodynamics (RHD). Zoom simulations have proven useful for examination of the internal structure of individual galaxies in this regime (e.g. Pallottini et al., 2017) but suffer from potential selection biases and preclude examination of the influence of galaxies on the IGM as the latter undergoes a global phase transition. High-resolution RHD simulations of representative volumes remain extremely challenging, but Borrow et al. (2023) were able to show using simulations of small periodic volumes that the use of a spatially-uniform UVB fosters significant inaccuracies in the space density and internal properties of low-mass simulated galaxies in the EoR, and results in the density and temperature structure of the IGM being too uniform.
In recent years, machine learning has steadily grown in influence in nearly every aspect of astrophysics (see review by Smith & Geach, 2023). Machine learning techniques have emerged as an effective means of elucidating formally the complex relationships between the properties of simulated galaxies and those of their host dark matter haloes (Icaza-Lizaola et al., 2021, Piotrowska et al., 2022). The CAMELS (Cosmology and Astrophysics with Machine Learning Simulations) project (Villaescusa-Navarro et al., 2021) uses many simulations evolved with variations of the IllustrisTNG and SIMBA models, varying both cosmological and subgrid parameters, as a basis for the application of machine learning techniques. Jo et al. (2022) use the suite as a testbed for calibrating model parameters using neural networks trained as emulators. Interestingly, they found that the emulators could identify parameter sets that accurately reproduce a GSMF drawn from the input simulations chosen as a 'target observable', but none that reproduce a real observationally-inferred GSMF. It is unclear whether this signals a problem with the method, inconsistencies in the observationally-inferred GSMF, or a genuine limitation of the physics implemented within the input simulations. If the latter, this methodology may prove a particularly effective means of guiding the development of more sophisticated subgrid models.
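To illustrate the emulator idea in miniature: one trains a regressor on an ensemble of simulations mapping subgrid parameters to a summary statistic, then searches the emulated parameter space against a target. The sketch below uses a synthetic one-parameter 'simulation' and a Gaussian-process regressor; nothing here reproduces the CAMELS setup or Jo et al.'s networks.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_simulation(feedback_eff):
    """Synthetic stand-in mapping one subgrid parameter to the GSMF
    amplitude (dex) in three stellar-mass bins."""
    return np.array([-2.0, -2.8, -4.0]) - (feedback_eff - 0.5) * np.array([0.8, 0.4, 0.15])

# Train the emulator on a small parameter-spanning ensemble
theta = np.linspace(0.1, 1.0, 10).reshape(-1, 1)
gsmf = np.vstack([toy_simulation(t) for t in theta.ravel()])
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(theta, gsmf)

# Calibrate: find the parameter whose emulated GSMF best matches a target
target = toy_simulation(0.37)
grid = np.linspace(0.1, 1.0, 1000).reshape(-1, 1)
misfit = ((emulator.predict(grid) - target) ** 2).sum(axis=1)
print(grid[np.argmin(misfit)][0])  # close to the true value, 0.37
```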
## 8 Summary
Cosmological, hydrodynamical simulations have catalysed significant strides in our understanding of the physics that governs the formation and evolution of galaxies. The much improved realism of the current generation of state-of-the-art simulations, relative to their predecessors, has engendered greater confidence in the insight obtained from the confrontation of simulations with observational data, and has diversified the lines of enquiry for which such comparisons are useful.
A crucial lesson learned from some of the first simulations of galaxies is that their evolution is strongly regulated by energetic feedback processes. Modern, realistic simulations model (at minimum) feedback associated with the formation and evolution of stellar populations (which dominates in low mass galaxies), and that associated with the accretion of gas onto SMBHs (which dominates in massive galaxies). However, it remains the case (and will do so for some years to come) that this modelling is achieved in an approximate fashion using subgrid models, whose governing parameters are ill-constrained. The greater realism of the current generation of simulations has therefore followed from the pragmatic approach of calibrating these parameters.
Simulations calibrated to reproduce the stellar masses and sizes of galaxies also reproduce many well-known observed scaling relations, e.g. the Tully-Fisher relation and the SFMS. This is often because they concern related properties, or because reproduction of the
relation relies primarily on ensuring that galaxies of a fixed stellar mass are associated with dark matter haloes of the correct total mass and hence exhibit (for example) realistic space densities and cosmic matter inflow rates. Other scaling relations, particularly those related to the baryon cycle in haloes such as the mass - metallicity relation, the baryon fraction - halo mass relation, and galaxy - absorber statistics, are more sensitive to the details of the adopted subgrid models. Quantitative agreement between the simulations and observations for these relations tends to require that the simulations be calibrated specifically to achieve it. The quantities in these relations also tend to be those for which numerical convergence is most challenging to achieve. Perhaps unsurprisingly then, there is a conspicuous absence of inter-simulation consensus in this regime.
Different subgrid prescriptions therefore appear able to reproduce many galaxy properties equally well, whilst yielding predictions for other properties (often those only weakly constrained by observations) that are markedly different. As we highlight in the summary boxes below, the current generation of state-of-the-art simulations of the galaxy population has therefore fostered important and enduring successes, but many outstanding challenges remain. The simulations provide a sound foundation from which to pursue the more sophisticated models needed to tackle these challenges and further our understanding of the galaxy population. This pursuit will of course also be driven by future observational discoveries, and more detailed characterisation of the CGM is likely to prove amongst the most fruitful avenues for constraining, in particular, the macroscopic effects of energetic feedback.
Finally, we remark that the public data release (with detailed accompanying documentation) of major simulation campaigns, while being a major undertaking for the simulation teams, has proven a tremendously successful exercise. It has enabled their use by many researchers who were not involved with the development of the simulations, and many astronomers without prior specialism in simulations, allowing more thorough exploitation and analysis of the simulations and more diverse comparisons with observations. We energetically encourage the developers of future simulation campaigns to follow suit.
SUMMARY POINTS
* By regulating star formation with plausible quantities of energetic feedback associated with the formation of stars and the growth of black holes, the current generation of state-of-the-art cosmological hydrodynamical simulations form a galaxy population with broadly realistic stellar masses and sizes.
* Although the more realistic of these simulations were calibrated against present-day galaxy masses and sizes, the evolution of these quantities was not, and the observed evolutionary trends are also broadly reproduced.
* Simulated galaxy populations exhibit the diversity of present-day morphologies exemplified by the Hubble Sequence, as a natural outcome of the diversity of galaxy assembly histories and the intrinsic properties of their host haloes.
* The observed clustering of galaxies, as a function of stellar mass, galaxy colour, and atomic gas content, is reproduced by simulations on the scales they are able to reliably sample and adequately resolve.
* The simulations broadly reproduce the stellar mass - halo mass relation, which has been shown to emerge primarily in response to gas expulsion in the low-mass regime, and throttling of cooling from the CGM onto the ISM in more massive haloes, due to AGN feedback.
* The simulations also illuminate the origin of scatter about scaling relations, with key examples being the scatter in star formation rate, and in metallicity, at fixed stellar mass. Both can be explained in terms of the balance of gas flows into and out of galaxies, star formation, and black hole growth.
* Once armed with a realistic model, examination of partner simulations in which model components are adjusted or toggled has proven an effective approach to illuminating the sensitivity of galaxy properties and observables to physical processes.
## Future Issues
* Simulations of the galaxy population still rely on simplified subgrid models to treat unresolved physical processes, including feedback associated with the formation of stars and the growth of black holes. Simulations thus remain distantly removed from ab initio theory.
* Physical processes treated with subgrid models dominate the systematic uncertainty on the properties of simulated galaxies. Since different subgrid models can produce broadly realistic simulated galaxy populations, significant degeneracies remain between state-of-the-art simulation suites and predictive power is limited.
* Even in simulations whose subgrid models were designed to minimise their sensitivity to resolution, the convergence behaviour of some physical properties (particularly those related to dense gas) can be poor, eroding confidence in outcomes.
* The detail of simulations of the galaxy population remains relatively poor: the internal and vertical structure of galaxy discs is unrealistic owing to the simplistic subgrid modelling of interstellar gas.
* There is relatively little quantitative inter-simulation consensus concerning the properties of gas flows and the CGM, which are particularly sensitive to the implementation of feedback processes.
## Disclosure statement
The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
## Acknowledgements
We extend our gratitude to the many colleagues and collaborators with whom we have enjoyed illuminating discussions that have helped to shape this review. We thank Romeel Dave, Jon Davies, Alex Hill, Rudiger Pakmor, and Michael Tremmel for providing data used in the figures, and the EAGLE and IllustrisTNG teams for making their data publicly accessible. We are particularly grateful for Debora Sijacki's contribution to the early phase of assembling this review, and Tim Davis, Matthieu Schaller, and Renske Smit for valuable discussions. We thank Roselyn Lowe-Webb and Luis Ho, respectively our production and scientific editors at ARA&A, for their patience and encouragement, and Glenda Mahoney
for assistance with figures, in particular for creating Figure 2. The authors are supported by Royal Society University Research Fellowships.
|
2301.13431 | Breaking Out of the Ivory Tower: A Large-scale Analysis of Patent Citations to HCI Research | What is the impact of human-computer interaction research on industry? While it is impossible to track all research impact pathways, the growing literature on translational research impact measurement offers patent citations as one measure of how industry recognizes and draws on research in its inventions. In this paper, we perform a large-scale measurement study primarily of 70,000 patent citations to premier HCI research venues, tracing how HCI research is cited in United States patents over the last 30 years. We observe that 20.1% of papers from these venues, including 60--80% of papers at UIST and 13% of papers in a broader dataset of SIGCHI-sponsored venues overall, are cited by patents -- far greater than premier venues in science overall (9.7%) and NLP (11%). However, the time lag between a patent and its paper citations is long (10.5 years) and getting longer, suggesting that HCI research and practice may not be efficiently connected. | Hancheng Cao, Yujie Lu, Yuting Deng, Daniel A. McFarland, Michael S. Bernstein | 2023-01-31T05:56:59Z | http://arxiv.org/abs/2301.13431v1 | # Breaking Out of the Ivory Tower: A Large-scale Analysis of Patent Citations to HCI Research
###### Abstract.
What is the impact of human-computer interaction research on industry? While it is impossible to track all research impact pathways, the growing literature on translational research impact measurement offers patent citations as one measure of how industry recognizes and draws on research in its inventions. In this paper, we perform a large-scale measurement study primarily of 70,000 patent citations to premier HCI research venues, tracing how HCI research is cited in United States patents over the last 30 years. We observe that 20.1% of papers from these venues, including 60-80% of papers at UIST and 13% of papers in a broader dataset of SIGCHI-sponsored venues overall, are cited by patents--far greater than premier venues in science overall (9.7%) and NLP (11%). However, the time lag between a patent and its paper citations is long (10.5 years) and getting longer, suggesting that HCI research and practice may not be efficiently connected.
Industry impact, technology transfer, translational science, patent, citation analysis
As successes and painful failures become easier to point to, it becomes more and more urgent that we also assess broader patterns.
To fill this gap, we draw on methods from the growing measurement literature on innovation in translational sciences (Friedman et al., 2016; Goyal et al., 2017; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018), where patent citations to research have been regarded as a valuable proxy of the impact that science has on industrial practice. While a patent citation to a research paper does not directly guarantee industry impact, it reveals one potential pathway through which industrial inventors are aware of and recognize research articles: a necessary but not sufficient step towards industry impact.2 Work using this approach has revealed the relevance of research and practice across science (Friedman et al., 2016), mapped the translation landscape in bio-medicine (Goyal et al., 2018; Goyal et al., 2018), and demonstrated that referencing science in the invention is associated with greater practical value (Goyal et al., 2018).
Footnote 2: We discuss and reflect further on the use of patent citations to science for studying the industry impact of research in Section 3.1 and Section 5.3
Leveraging the modern analysis approaches from this line of work (Goyal et al., 2018; Goyal et al., 2018), we report the first large-scale quantitative analysis of how HCI research is (and is not) being cited by patents. In doing so, we focus on one possible route of industry impact through HCI research: patents. There are many types of contributions in HCI--design patterns, behavioral results and theory among many others--and a patent lens focuses us only on styles of contribution that are considered prior art for patents, often systems and interaction contributions. Specifically, we draw on data from Microsoft Academic Graph, Semantic Scholar, the United States Patent and Trademark Office (USPTO), and linkages between them (Goyal et al., 2018; Goyal et al., 2018). This dataset enables us to study research papers from four premier venues in HCI, including CHI, CSCW, UIST, and UbiComp, and then replicate across all 20 SIGCHI-sponsored venues that appear in Microsoft Academic Graph, tracing how those research papers are cited in patent documents from the 1980s through 2018. We study the institutes involved in the process, leverage citation analysis to measure the number and proportion of papers cited by patents over time, and measure the length of time it takes before a paper is recognized by patents. We further conduct textual analysis to understand the topics that are likely to be cited in patents, and compare how patent-cited research differs from its non-patent-cited counterparts.
We observe that: (1) HCI research has been cited extensively by patents -- overall 20% of papers from CHI, CSCW, UIST and UbiComp, and 13.4% of SIGCHI sponsored venues, are patent-cited, including a surprising 60-80% of UIST papers over a twenty year period, higher than 1.5% of science overall and 7.7% of biomedicine; (2) The patent-paper time lag is long (on average 10.5 years) and is getting longer, such that citations from academic HCI research have dropped off by the time a paper receives patent attention; and (3) Within HCI research, there is substantial heterogeneity in patent citations across topics, for example, interaction and input techniques research are especially likely to be referenced by patents while theory, social and experience design research are not. This analysis provides the first quantitative survey of the HCI technology transfer landscape. While acknowledging potential limitations of patent citation as a method, we conclude that HCI has had a considerable impact on industry and is finding more relevance to practice than most disciplines in science. Yet, it takes a long time for innovations in academia to be recognized and taken up by industry, corroborating the "long nose" theory on HCI innovation (Goyal et al., 2018; Goyal et al., 2018).
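The core quantities behind these findings reduce to simple aggregations over the patent-to-paper citation table. As a schematic example, the patent-paper time lag can be computed as below; the column names are hypothetical, and this is not the paper's released analysis code.

```python
import pandas as pd

# One row per (patent, paper) citation link
links = pd.DataFrame({
    "patent_id": ["US-A", "US-A", "US-B", "US-C"],
    "patent_year": [2015, 2015, 2018, 2012],
    "paper_year": [2003, 2006, 2010, 2001],
})

links["lag"] = links["patent_year"] - links["paper_year"]
print(links["lag"].mean())                          # average lag in years
print(links.groupby("patent_year")["lag"].mean())   # is the lag growing over time?
```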
The contributions of this paper are as follows:
* We introduce measuring patent citations to science as a novel method to study research-practice relationships in HCI. This provides quantitative evidence that complements qualitative evidence in existing HCI literature. We release our analyzed dataset to enable future analysis.3 Footnote 3: Available dataset at: [https://doi.org/10.7910/DVN/QMSSIG](https://doi.org/10.7910/DVN/QMSSIG).
* We present the first large-scale, empirical study measuring the translational, longitudinal landscape of HCI research from paper to patent inventions with comparisons to other fields. This allows us to better understand and evaluate how HCI as an applied field is or is not finding connections to practice.
* Our work contributes to reflections and recommendations for the HCI community to better foster a translational environment and recognize impacts beyond academia.
## 2. Background and Related Work
In this section, we position our work in the literature on industry impact, the HCI research-practice divide, and bibliometric analysis in HCI.
### Industry impact
Industry impact is often achieved through technology transfer, which refers to the transmission of knowledge generated by an individual, the university, government agencies, or any institution capable of generating knowledge, to another person or organization, in an attempt to transform inventions and scientific outcomes into new products and services that benefit society (Goyal et al., 2018). Government and funding agencies (e.g., in the United States, NSF and NIH) increasingly seek to nurture "translational research" to facilitate industry impact from basic research so as to generate greater applied value and promote technology advances (Shi et al., 2018; Shi et al., 2018), and prior research has shown that inventions referring to high-quality research are more likely to be inventions of great value (Goyal et al., 2018; Goyal et al., 2018).
Prior research has sought to identify when, where, and how scientific research influences industry invention (Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018). There, patent citations to science have been widely used as a proxy for studying technology transfer from research to practice despite noise, as they are one of the only available large-scale records of the knowledge flow from research to practice that demonstrate the initial awareness and recognition of research in industrial inventions. For instance, Tijssen (Tijssen, 2018) revealed through patent-paper citations how Dutch-authored research papers influence inventions. Likewise, Ahmadpoor and Jones (Friedman et al., 2016) studied 4.8 million US patents and how they link to 32 million research articles, finding that over half of patents cite back to a research article and that patents and papers are on average 2-4 degrees separated from the other domain, providing some insight into the interplay between patents and prior research. Jefferson et al. (Jefferson et al., 2018) and Manjunath et al. (Manjunath et al., 2018) used patent citations to science data, measuring and reporting statistics describing how research in biomedicine turns into inventions. Liaw et al. (Liaw et al., 2018) proposed a method to rank academic journals that utilizes non-patent references in patent documents to evaluate their practical
impact. Other works used patent citations to science to study the strategy of inventors (e.g. deep search vs. wider scope search) and how the strategy relates to technology impacts and organization performance (Bouquet et al., 2017; Kudritz et al., 2018; Kudritz et al., 2019). To facilitate further studies on how inventions rely on basic science, Marx and Fuegi (Marx and Fuegi, 2017; Kudritz et al., 2019) linked and disambiguated patent citations to science, connecting the USPTO dataset and Microsoft Academic Graph.4
Footnote 4: We leverage this particular dataset in our analysis.
We build on this rich social science literature by studying the industry impact of HCI research, leveraging and extending its methods (Kudritz et al., 2019).
### From HCI research to practice
HCI is a field that emphasizes the design and the use of computer technology, especially interfaces between people and computers. HCI research implements, demonstrates, and tests new technologies through prototyping and end-user feedback (Kurz and Fuegi, 2017), and most HCI work includes 'design implications' sections aiming to translate research insights into more practical outcomes. The applied nature of HCI has led to the community's long-standing interest in industry impact, with many publications and panel discussions at conferences aimed at facilitating better technology transfer (Hernandez et al., 2017; Kudritz et al., 2019; Kudritz et al., 2019). One line of the literature primarily focuses on the many barriers HCI faces in translating research insights into industrial practice (Kudritz et al., 2019; Kudritz et al., 2019), while another line of literature speaks to the considerable impact that HCI research has had or could have on the industry (Kudritz et al., 2019; Kudritz et al., 2019; Kudritz et al., 2019).
Many papers argue that despite the insights that HCI research can offer to practitioners, HCI research findings are rarely used in industry (Kudritz et al., 2019): that there has been an "immense" research-practice gap that is "real and frustrating" (Kudritz et al., 2019), that "HCI researchers and HCI practitioners work in relatively separate spheres of influence" (Kudritz et al., 2019), and that "attendees at venues like ACM CHI often lament that no HCI research ever goes into product" (Kudritz et al., 2019). Colusso et al. (2019) interviewed design practitioners so as to understand why they do not use academic research and why and how they use other resources in their work, presenting a detailed catalog of barriers that inhibit the use of academic resources in industry, such as the content being hard to read, hard to find, and not actionable. Chilana et al. (Chilana et al., 2019) stated that the distinct goals of HCI research and product development may result in a research-practice gap, that the users who are the major focus of the user-centered design approach in HCI research are generally not the buyers of HCI products, and that to make a research-to-product transition one has to switch from being user-centered to adoption-centered. Furthermore, prior work (Kudritz et al., 2019; Kudritz et al., 2019) suggested that HCI researchers usually lack the knowledge, resources, connections, experience, interest, or time to pursue technology transfer. Other work has shown similar results demonstrating a research-practice gap in HCI (Kudritz et al., 2019; Kudritz et al., 2019).
Prior research has discussed potential approaches to address the research-practice gap. For instance, Velt et al. (Velt et al., 2017) identified two key dimensions of the research-practice gap - general theory vs. particular artifacts, and academic HCI research vs. professional UX design practice - and discussed the benefits of translation led by researchers, by practitioners, or co-produced by both as boundary objects. Colusso et al. (2019) proposed a continuum translational science model for HCI that consists of three steps: basic research, applied research, and design practice. Shneiderman (Shneiderman, 2017) wrote a book proposing principles to better blend science, engineering and design to achieve innovations and breakthroughs. Other work discusses the challenges and lessons learned from the specific translation of HCI research to practice (Kudritz et al., 2019; Kudritz et al., 2019).
Meanwhile, another line of work argues that HCI research could have considerable impact on industrial practice despite the barriers. Harrison argues that "HCI is at the vanguard of innovation and has repeatedly influenced industry [...] HCI research has a much greater impact in identifying opportunities in the first place, establishing the science and methods, building a shared vision, and developing a pipeline of human talent" (Kudritz et al., 2019). Likewise, Myers et al. (Myers et al., 2019) wrote "There is no question that research in the area of user interface software tools has had an enormous impact on the current practice of software development. Virtually all applications today are built using window managers, toolkits, and interface builders that have their roots in the research of the 70's, 80's, and 90's." Shneiderman's work (Shneiderman, 2017) further stated that "The remarkably rapid dissemination of HCI research has brought profound changes that enrich people's lives", while also providing a tire-tracks diagram showing how HCI research on subjects such as hypertext, direct manipulation, etc. turned into product innovations by industry. Similarly, product innovations over the years mirror the early ideas of canonical HCI visions (Kudritz et al., 2019; Kudritz et al., 2019). Other research detailed successful cases of tech transfer, such as the translation of the multi-touch interface from research into the Apple iPhone and Microsoft Surface, while highlighting a long time lag between initial research and commercialization, which can be 20 years or more (Kudritz et al., 2019; Kudritz et al., 2019; Kudritz et al., 2019).
This prior work guides us to the following research questions:
**RQ1**: _What_ is the impact of HCI research on patents? How much HCI research is cited in patents?
**RQ2**: _When_ is the impact of HCI research on patents? How long does that impact take?
**RQ3**: _Where_ is the impact of HCI research on patents? Which topics of research are especially likely or unlikely to diffuse?
**RQ4**: _Who_ is involved in the process of recognizing HCI research on patents? Which institutions produce such work, and which consume it?
The rich qualitative insights derived from case studies, fieldwork, interviews, and personal experience open an opportunity for complementary work that engages in quantitative, longitudinal analysis that directly measures how HCI research gets recognized in industry inventions and technologies. We believe that such a viewpoint might systematically detail the translation landscape of HCI as a field.
### Bibliometrics and HCI
As an important area of computing and information science, HCI has featured several projects (e.g., (Kudritz et al., 2019; Kudritz et al., 2019)) that quantitatively understand the structure and evolution of the field through the study of writing and citation patterns, known as bibliometrics (Shneiderman, 2017).
One commonly used bibliometric method is an analysis of a large-scale citation network, which leverages the increasingly available citation data from publishers such as Web of Science and Microsoft Academic Graph and their associated metadata of the scientific
publications (e.g. institutes, authors), and even textual analysis (e.g. topic modeling, keyword extraction) of the scientific publications, so as to gain insights into patterns behind the diffusion of scientific ideas (Zhou et al., 2017; Zhang et al., 2018), research productivity (Mateja et al., 2018; Zhang et al., 2018), and identify potential ethical and social issues in science (Kouramaditis and Hussain, 2018; Kouramaditis and Hussain, 2018). For instance, Koumaditis and Hussain (2018) leveraged citation data from 962 HCI publications and revealed that HCI research can be categorized into major themes of design, data management, user interaction, psychology, and cognition, and they identified more recent trends in HCI in the workplace, sensors, and wearables. Likewise, Kay (Kaye, 2018) reported "some statistical analyses of CHI", including author counts, gender analysis, and representations of repeat authors so as to motivate discussions on the preferred state of CHI. Bartneck and Hu (Bartneck and Hu, 2018) reveal that only a small percent of countries account for the majority of CHI proceedings, and present a ranking of countries and organizations based on their H-index of CHI proceedings. Correia et al. (Correia et al., 2018) used 1713 CSCW publications and characterized top CSCW papers, citation patterns, prominent institutes as well as frequent topics, highlighting the fact that CSCW is influenced primarily by a few highly recognized scientists and papers. The authors further quantitatively explored the relationship between collaboration types and citations, paper frequency, etc. (Correia et al., 2018). Similar types of analysis have also been done on more regional HCI conferences (Kouramaditis and Hussain, 2018; Zhou et al., 2017; Zhang et al., 2018; Zhang et al., 2018) as well as studying subcommunities in HCI (Kouramaditis and Hussain, 2018; Zhang et al., 2018; Zhang et al., 2018).
Visual analytics is another approach used to help understand HCI's evolution. For instance, Lee et al. (Lee et al., 2018) proposed a system PaperLens to reveal trends, connections, and activity of 23 years of the CHI conference proceedings. Matejka et al. (Mateja et al., 2018) proposed an interactive visualization that highlights family trees of CHI and UIST papers. Henry et al. (Henry et al., 2018) presented a visual exploration of four HCI conferences. They showed that the years when a given conference was most selective are not correlated with those that produced its most highly referenced articles and that influential authors have distinct patterns of collaboration.
To the best of our knowledge, there have been no analyses leveraging quantitative methods to study recognition of HCI research beyond academia as we present in this article. In contrast with prior work, we leverage large-scale patent citations to quantify the impact of HCI research in practice.
## 3. Method
In this section, we describe the method we used to study the impact of HCI research papers in practice using patent citations to science.
### Patent citations as a pathway to study industry impact of research papers
We leverage patent citations to research as a proxy to study the influence of HCI research on industrial practice at scale. While a patent citation to a research paper does not directly mean industry impact, it reveals one important potential pathway from research to practice, where industrial inventors become aware of and recognize research articles, which is often a necessary but not sufficient step towards producing industry impact. Alongside studying other forms of influence, such as design processes (e.g., usability testing, heuristic evaluation), design patterns, and open source software (e.g., d3, Vega), patent citations to science could help us piece together the translational landscape in HCI. This method is widely used in the innovation literature (e.g., (Bartneck and Hu, 2018; Zhou et al., 2017; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018)). Patent citations to research are considered valuable signals indicating the influence of research on the industry, signals that "reflect genuine links between science and technology" (Zhou et al., 2018), and "appear to be a substantive if noisy indicator of the role of specific, prior scientific advances" (Bartneck and Hu, 2018). While citations between research articles capture research influence (Zhou et al., 2018), patent-to-research citations capture "how basic research influences commercialization and thus provides a complementary measure of impact" (Zhang et al., 2018). Such data has been used extensively to measure knowledge spillovers from academia and government to industry (Bartneck and Hu, 2018; Zhou et al., 2018; Zhang et al., 2018).
The rationale behind the validity of this approach is that in patented inventions, inventors are obliged to disclose _any_ "prior art" related to their invention, i.e., "all information known to that individual to be material to patentability",5 including materials that the inventors leveraged in the invention process, or other material similar to the focal invention, in order to distinguish it. The prior art includes both references to prior patents and references to non-patent literature, such as academic articles. Patent citations are an important part of a patent, as missing prior art (either prior patents or non-patent literature) could raise legal issues. Apart from citations provided by inventors, patent examiners who review patents for approval or rejection also add references they think are of relevance to ensure the legitimacy of the patent.
Footnote 5: [https://www.uspto.gov/web/offices/pac/mpep/mpep-2000.pdf](https://www.uspto.gov/web/offices/pac/mpep/mpep-2000.pdf)
Prior work has validated this method. Nagaoka et al. (Nagaoka et al., 2018) surveyed 843 inventors, finding that patent citations to science are indeed important linkages to science, despite possible errors of over- and under-inclusion. Callaert et al. (Callaert et al., 2018) interviewed 36 inventors and report that 44% of patent citations to science are considered "important" or "very important", and another 34% are "background" citations. Based on the rich literature in this space, we conclude that patent citations to science can be used as a reliable data source to measure the recognition of HCI research efforts in inventions, thus providing a valuable proxy of HCI research impact in the industry. Of course, there is no perfect approach for studying industry impact: we discuss and reflect on the limitations of our method in detail in Section 5.3. It is especially important to bear in mind that there are multiple translational gaps in HCI research (Koumaditis and Hussain, 2018), and we are only studying one important step in the process with regard to patents, where certain types of contribution, such as theory, are likely to be undervalued along this dimension.
Empirically, we find support for the validity of using patent citations to research as a proxy of impact in industry. We manually checked the patent reference lists of a number of patents. As shown in Figure 1, the highly-cited patent by Apple Inc., "Mode-based graphical user interfaces for touch sensitive input devices" (cited 1,898 times),6 cites closely related CHI research papers on multi-touch, such as "A Multi-Touch Three Dimensional Touch-sensitive Tablet", which is a case of technology transfer discussed by Buxton (Bartneck and Hu, 2018). The even more widely cited Apple Inc. patent (cited 4,018 times), "Method and apparatus for integrating manual input",7 also makes reference to several relevant HCI papers. These cases motivate
us to leverage patent citations as a signal indicating the invention's recognition of research.
### Dataset
To study how HCI papers are recognized by patents, we required a citation graph from patent to research, and the metadata (e.g., author name, affiliation, publication year, title, venue) from both the paper side and patent side. The data preparation pipeline is composed of three steps: 1) Prepare metadata of papers and patents, and the citation graph from patents to research, 2) Select papers from the venues of interest and clean the data, and 3) Link the clean metadata based on the citation graph. This pipeline could be applied to other research communities, or other venues within SIGCHI, by selecting other venues of interest.
_Patent citation to science that connects USPTO to Microsoft Academic Graph._ To capture references from patents to HCI research papers, we drew on a public dataset (Sang et al., 2018; Wang et al., 2018). This state-of-the-art dataset connects each patent reference in the USPTO (1947-2020) to academic papers (1800-2020) from the Microsoft Academic Graph by matching unstructured front-page and in-text references in patents to published papers using a disambiguation matching method, resulting in 22 million patent citations to research papers (known as the Patent Citation Science dataset).8 In their papers, the dataset creators verified the quality of the dataset through manual checking and error analysis. We captured the reference type (e.g., from applicant, from examiner, unknown), whether the reference appears in-text or on the front page, the time between paper publication and the citing patent application, and whether a patent citation is a self-citation to a research paper by one of the patent authors. A paper-patent pair is considered self-cited when there is an overlap between the inventors of the patent and the authors of the cited scientific papers.
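To make the self-citation definition concrete, the sketch below flags a citation pair by name overlap. This is a minimal illustration, not the dataset's actual matching logic (which relies on a disambiguation method); the function names and sample names are hypothetical.

```python
# Hedged sketch (not the dataset's actual disambiguation pipeline): flag a
# patent-to-paper citation as a self-citation when the patent's inventors
# overlap with the cited paper's authors. A real pipeline would match on
# disambiguated person IDs rather than raw name strings.

def normalize(name: str) -> str:
    """Crude name normalization: lowercase and keep alphanumerics/spaces."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ").strip()

def is_self_citation(inventors: list[str], authors: list[str]) -> bool:
    """True if any patent inventor also appears among the cited paper's authors."""
    return bool({normalize(n) for n in inventors} & {normalize(n) for n in authors})

# Hypothetical example:
print(is_self_citation(["Jane Q. Inventor"], ["Jane Q. Inventor", "A. Coauthor"]))  # True
```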
Footnote 8: Specifically, we used the patent-to-article citations of Version v37 (Jul 19, 2022) at Zenodo: [http://relianceonscience.org](http://relianceonscience.org)
_Microsoft Academic Graph Metadata._ The Microsoft Academic Graph is a heterogeneous graph that provides scientific publication records, citation relationships, and information on authors, institutions, journals, conferences, and fields of study. We leveraged the public Microsoft Academic Graph dataset provided at the Zenodo Reliance on Science project site9 to extract information on academic publications, e.g., title, author, author affiliation, and year.
Footnote 9: [http://relianceonscience.org](http://relianceonscience.org)
_USPTO metadata._ We leveraged US patent data from the United States Patent and Trademark Office (USPTO)10 to represent technological inventions. Patents have similar fields as academic publications, e.g., title, abstract, inventor, assignee, and year.
Footnote 10: [https://patentsview.org/download/data-download-tables](https://patentsview.org/download/data-download-tables)
_Semantic Scholar (abstract, citation)._ The abstract information of papers and their authors' academic influence (e.g., number of published papers, citation count) are missing or hard to process in the original Microsoft Academic Graph metadata.11 To further expand the data on authors, papers, citations, and venues, we utilize the Semantic Scholar Academic Graph API,12 which fills in this data.
Footnote 11: [https://docs.microsoft.com/en-us/academic-services/graph/resources-faq](https://docs.microsoft.com/en-us/academic-services/graph/resources-faq)
The details of the data we utilize can be found in Appendix A.
### Data Preprocessing
_Venue selection._ In our analysis, we primarily considered four impactful Human-Computer Interaction (HCI) venues: the ACM CHI Conference on Human Factors in Computing Systems (CHI), ACM Conference On Computer-Supported Cooperative Work And Social Computing (CSCW), ACM Symposium on User Interface Software and Technology (UIST), and International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).13 For a broader footprint of HCI research, we created a second dataset of SIGCHI sponsored venues14 -- a total of all 20 SIGCHI sponsored venues15 that appear in the Microsoft Academic Graph, which covers not only large, premier venues such as CHI, but also smaller, more specialized venues such as MobileHCI and CHI PLAY. We used this second set as more representative of the overall field of HCI, to further validate our findings and compare with overall patterns reported in other fields of science in a fairer way16.
Footnote 12: [https://www.semanticscholar.org/product/api](https://www.semanticscholar.org/product/api)
_Data Cleaning._ We further conducted data cleaning on the four chosen venues by looking up papers in Semantic Scholar rather than Microsoft Academic Graph. We found that the Microsoft Academic Graph metadata sometimes wrongly classifies venues such as "Brazil Symposium on Human Factors in Computing Systems" as "CHI". To solve this issue, we filtered out irrelevant papers by manually checking the full venue names in the venue column from Semantic Scholar, which proves to be of better quality. We then applied this filtering process to all the paper and patent-citation-to-science files by joining over the paper id.
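A minimal sketch of this filtering step follows; the DataFrame and column names (`paper_id`, `venue_full_name`) are hypothetical stand-ins, and the actual whitelist was built by manual inspection.

```python
# Hedged sketch of the venue-cleaning step, assuming a pandas DataFrame with
# hypothetical columns `paper_id` and `venue_full_name` from Semantic Scholar.
# Rows whose full venue name is not on an accepted whitelist are dropped,
# removing look-alikes mislabeled as "CHI".
import pandas as pd

papers = pd.DataFrame({
    "paper_id": [1, 2, 3],
    "venue_full_name": [
        "Conference on Human Factors in Computing Systems",
        "Brazil Symposium on Human Factors in Computing Systems",
        "User Interface Software and Technology",
    ],
})

ACCEPTED = {  # illustrative whitelist; built by manual inspection in practice
    "Conference on Human Factors in Computing Systems",
    "User Interface Software and Technology",
}
clean = papers[papers["venue_full_name"].isin(ACCEPTED)]

# The same paper_id filter is then joined onto the citation files, e.g.:
# citations = citations[citations["paper_id"].isin(clean["paper_id"])]
print(clean)
```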
_Data Linking._ In order to better combine the paper and patent information for analysis, we linked the patent data, Microsoft Academic Graph data, and Semantic Scholar data via the Patent Citation Science dataset.17 The joined data after 2019 has incomplete or little coverage; thus we focus our analysis on HCI research papers, and patents that cite HCI papers, before 2019.
Footnote 13: Starting in 2017, the UbiComp conference's main technical track consists of papers published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), which we captured in our data.
_Final Data Statistics._ Our final data for analysis includes 23,432 papers from the four chosen venues, with 16,014 from CHI, 3,084 from CSCW, 1,746 from UIST, and 2,588 from UbiComp across 1980 to 2018. Within these papers, we captured 69,900 citation records from patent to science, with 42,676 from CHI, 5,900 from CSCW, 17,040 from UIST, and 4,284 from UbiComp, which are associated with 30,660 patents. The broader SIGCHI sponsored venue data include 57,385 papers in total (41% are papers from the four premier
venues), 83,793 citation records (51% are citations made to the four premier venues), and are associated with 36,024 patents in total (85% of patents cited papers from the four premier venues).

Figure 1. Patents are obliged to cite prior art, including prior patents and non-patent literature (e.g. research articles). Here, a patent by Apple Inc., "Mode-based Graphical User Interfaces for Touch Sensitive Input Devices" [36], cites relevant HCI papers, including "ActiveClick: Tactile Feedback for Touch Panels", "A Multi-Touch Three Dimensional Touch-sensitive Tablet", a mis-named citation to Ken Hinckley ("Kinkley et al."), and many other references to HCI research.
Note that for all chosen venues, our data includes not only main conference papers but also extended abstracts, posters, and other forms of publications. We did not attempt to filter our analysis down to main conference papers only, given the difficulty of classifying them and the challenge of fuzzy matching based on venue name (e.g. in our dataset, many posters are not explicitly labeled as poster publications and are hard to differentiate from main conference papers).
We release our dataset at: [https://doi.org/10.7910/DVN/QM851G](https://doi.org/10.7910/DVN/QM851G).
## 4. Results
### RQ1: What is the impact of HCI research on patents?
We first study the quantity of HCI papers that are later recognized by patents and present a table of top papers cited by patents.
_Proportion of papers that get cited by patents._ To assess the extent of HCI research being recognized in patents, we first calculated the aggregated proportion of papers at our four premier HCI venues, and at SIGCHI sponsored venues overall, that were cited by patents. We found that 20.1% of papers in the four venues, and 13.4% of papers from SIGCHI sponsored venues overall, are recognized by patents. This rate is much higher than the proportion of science cited by patents overall (approximately 1.5% [(51)]), and the prominent-journal paper patent rate (9.7% across multiple scientific fields [(8)]). The rate is also much higher than that of bio-medicine in general, a field with a rich tradition of emphasizing translational science, which is at 7.7% [(50)]. We replicated our analysis on premier venues in other areas of Computer Science by comparing the premier HCI venue patent rate (20.1%) with the premier-venue patent rates of other subfields, finding that AI patent rates (as measured through AAAI and IJCAI, two of the largest and premier AI conferences) are 5%, Natural Language Processing patent rates (as measured through ACL, EMNLP, and NAACL, three of the largest and premier NLP conferences) are 11%, and Computer Vision patent rates (as measured through CVPR, ECCV, and ICCV, three of the largest and premier computer vision conferences) are 25%. Two-proportion z-tests further confirm the significance of the differences in percentages, with \(z=51.1\), 23.9, and \(-13.1\) (\(p<.001\)) when comparing the premier HCI venue patent rate with the patent rates of premier venues in AI, Natural Language Processing, and Computer Vision respectively. Taken together, these results suggest that HCI's impact through patent citations is higher than science overall, biomedicine, AI, and NLP, and roughly on par with Computer Vision, an area of intense industry interest.
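The z-test itself is standard; a minimal re-computation sketch using `statsmodels` is below. The raw counts are illustrative placeholders reconstructed from the reported percentages, not the paper's underlying data.

```python
# Hedged sketch of the two-proportion z-test comparing patent citation rates.
# Counts are illustrative placeholders derived from the reported rates.
from statsmodels.stats.proportion import proportions_ztest

n_hci, n_ai = 23_432, 40_000           # n_ai is a hypothetical AI paper count
cited_hci = round(0.201 * n_hci)       # 20.1% of premier-HCI papers patent-cited
cited_ai = round(0.05 * n_ai)          # 5% for premier AI venues

z, p = proportions_ztest([cited_hci, cited_ai], [n_hci, n_ai])
print(f"z = {z:.1f}, p = {p:.3g}")     # a large positive z favors the HCI rate
```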
Are research citations in patents truly central to the patents, or are they thrown in just to satisfy a patent examiner? To answer this question, we leverage a distinction between in-text citations and front-page citations in patents. This distinction allows us to more directly measure the impact of HCI research in patents. In-text patent citations to science, as suggested by prior work [(8; 52)], are more likely to "capture the scientific articles upon which the scientists truly relied upon for inspiration" and "have the potential to more accurately represent the sources of scientific inspiration upon which the inventors actually drew in the invention process" since they "tend to be supplied by the inventors themselves", in contrast to "legally binding" front-page citations, which "tend to be carefully reviewed (and sometimes added) by patent attorneys" [(52)]. We find that 4.1% of papers in our chosen four venues have been cited in-text by patents, whereas the proportion of in-text patent citations to science is 2.3% for SIGCHI sponsored venues and 1.4% for science overall. This result further replicates our finding that HCI research appears to have real impact, surprisingly even more so than many other fields.
Investigating temporal patterns, we plot the total number of HCI research papers published per year in each of the four venues, shown in red in Figure 2. HCI research has grown rapidly over the past 38 years at all four venues, especially at CHI: from 74 papers in 1982 to 1200 in 2018. This growth is particularly pronounced within the last 10 years. We then counted the total number of HCI papers cited by patents by the publication year of the paper, and calculated the ratio between the number of HCI papers cited and the total number of HCI papers accepted in a particular year by each venue (blue line in Figure 2). The citation ratios start climbing around 1990 and persist since then (Figure 2),18 with several conferences observing a third to a half of their papers cited by patents. At UIST in particular, the patent citation ratio reaches 60% - 80% from 1990 - 2010.
Footnote 18: We removed years where conferences did not meet from our analysis and smoothed the curve, e.g. CSCW was only held every other year until 2010.
The citation ratio decreased after 2015. One possible explanation is that the time lag between patent and paper is long, e.g., it might take a decade for a paper to start gathering patent citations, and papers published since 2015 are still too young by this metric. This time lag will be further discussed in Section 4.2. In other words, the data are right censored, i.e., more recent papers have not been fully recognized by patents captured in our dataset. As such, we expect a higher proportion of HCI papers overall will eventually be referenced by patents.
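A minimal sketch of how the per-venue, per-year citation ratio in Figure 2 can be computed follows; the DataFrame columns (`venue`, `year`, `patent_cited`) are hypothetical stand-ins for the linked dataset's fields.

```python
# Hedged sketch of the per-venue, per-year citation ratio plotted in Figure 2.
# Column names (`venue`, `year`, `patent_cited`) are hypothetical stand-ins.
import pandas as pd

papers = pd.DataFrame({
    "paper_id": [1, 2, 3, 4],
    "venue": ["CHI", "CHI", "UIST", "UIST"],
    "year": [1995, 1995, 1995, 1996],
    "patent_cited": [True, False, True, False],  # cited by >= 1 patent
})
ratio = (papers.groupby(["venue", "year"])["patent_cited"]
               .mean()               # mean of a boolean = proportion cited
               .rename("cited_ratio"))
print(ratio)
```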
_Increasing citations to HCI research in patents._ A total of 30,660 patents cite research in the four chosen venues, and 36,024 patents cite research from SIGCHI sponsored venues overall. This raw volume began increasing after 2000 (Figure 3), and has more than quintupled since 2000 at CHI, from around 175 patents per year in 2000 to over 1000 per year in 2014. However, the number of patents plateaus and even decreases a bit in more recent years, e.g. patents begin citing less and less CSCW research starting in 2014. This could be a result of changes on the demand side, e.g., the industry is less interested in novel social computing applications, or on the supply side, e.g., HCI publishing more papers that are not intended to be as industry-relevant. More evidence is needed to derive the mechanisms behind this result, which is beyond the scope of our current work.
_Top cited papers by patents in HCI._ We further examined the HCI papers that were cited the most by patents by each venue (Table 1). Papers highly cited by patents also tend to be highly cited by research. The papers most highly cited by patents are primarily
systems work, e.g., building a new system or proposing a new design. This result parallels the earlier observation that UIST has the highest rate of papers cited by patents, since UIST is particularly targeted at new interfaces, software, and technologies. Most papers in this list were published prior to 2005; however, the majority of the patents that cited HCI papers came after 2005, again indicating the potentially long time lag between paper publication and patent citation discussed in Section 4.2.
_Highly-cited papers in academia are more likely to be recognized by patents._ Moreover, we investigated how academic impact
Figure 2. Left: the number of papers published by each conference per year (red) and the number of papers published in that year that were later cited by at least one patent (blue), at ACM CHI, CSCW, UbiComp, and UIST. Right: a substantial proportion of HCI papers are recognized by patents, e.g. 60% - 80% of UIST papers from 1990 - 2010 are recognized by patents.
relates to patent impacts, measured by the paper's number of citations from other academic papers (academic citation count) and the number of citations from patents (patent citation count). Figure 4 shows the academic citation count for both papers recognized by patents and papers not recognized by patents over time. Patent-cited papers have higher paper citations (average academic citation count 117.1) than non-patent-cited papers (average academic citation count 27.9), a difference that is significant via an unpaired t-test (\(p<.001\)), Cohen's D=0.58.
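For concreteness, this comparison can be reproduced along the following lines with `scipy`; the samples below are synthetic stand-ins drawn only so the snippet runs, and the pooled-SD Cohen's d shown is one common variant, not necessarily the exact estimator used here.

```python
# Hedged sketch of the patent-cited vs non-patent-cited comparison: a Welch
# unpaired t-test plus a pooled-SD Cohen's d. Samples are synthetic stand-ins
# drawn only so the snippet runs; they are not the paper's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
patent_cited = rng.poisson(117, size=500).astype(float)
non_patent_cited = rng.poisson(28, size=2000).astype(float)

t, p = stats.ttest_ind(patent_cited, non_patent_cited, equal_var=False)

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation (one common variant)."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

print(f"t = {t:.1f}, p = {p:.3g}, d = {cohens_d(patent_cited, non_patent_cited):.2f}")
```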
We further conducted zero-inflated negative binomial regressions19 of patent citation count on paper citation count in CHI, CSCW, UIST, and UbiComp, and obtained regression coefficients of 0.0233, 0.0172, 0.0316, and 0.0175 respectively (\(p<.001\)). The coefficients indicate that highly-cited papers in academia are indeed more likely to be cited by patents. Such a relationship is especially salient at UIST.
Footnote 19: Zero-inflated negative binomial regression is ideal for modelling count-based dependent variables with zeroes, which corresponds to our data where a significant proportion of HCI papers get no patent citation.
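A sketch of such a regression with `statsmodels` is given below. The data is simulated purely so the snippet runs; only the model family (a zero-inflated NB2 with a constant inflation term) reflects the analysis described above, and the paper's exact specification may differ.

```python
# Hedged sketch of a zero-inflated negative binomial (NB2) regression of
# patent citation count on academic citation count. The data below is
# simulated only so the snippet runs; the coefficients reported in the text
# come from the real dataset, and the exact specification may differ.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(1)
n = 3000
paper_citations = rng.gamma(2.0, 20.0, size=n)          # synthetic predictor
rate = 0.5 * np.exp(0.02 * paper_citations)
patent_citations = np.where(rng.random(n) < 0.6, 0,     # excess zeros
                            rng.poisson(rate))

X = sm.add_constant(paper_citations)                    # intercept + slope
model = ZeroInflatedNegativeBinomialP(patent_citations, X,
                                      exog_infl=np.ones((n, 1)), p=2)
result = model.fit(disp=0, maxiter=500)
print(result.params)  # the slope on paper citations plays the role of the
                      # ~0.02 coefficients reported above
```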
### RQ2: When is the impact of HCI research on patents?
How long does it take for patents to recognize papers? To examine this question, we investigated the time lag between patent and paper.
_The time lag between patent and paper is long and getting longer._ To measure how long it takes for an HCI paper to be recognized by patents, for each patent we computed the time lag between the issue date of the patent and the publication dates of all papers it cited from our four chosen venues. We measured the lag from the patent backward rather than from the paper forward because we cannot know whether a paper that has not yet been cited will eventually receive a citation, but we can know how far back a patent's citations reach.
In the four premier HCI venues, the average patent-paper lag is 10.5 years (\(\sigma=6.8\) years), indicating that patents on average reference HCI research papers published 10.5 years before the patent filing date, though there is significant variance in the time lag.
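Computationally, the backward lag is a per-citation difference that is then aggregated per patent; a minimal sketch with hypothetical column names follows.

```python
# Hedged sketch of the backward time-lag measurement: per citation record,
# the lag is the patent's issue year minus the cited paper's publication
# year; statistics are then aggregated per patent. Column names are
# hypothetical stand-ins for the linked dataset's fields.
import pandas as pd

cites = pd.DataFrame({
    "patent_id": ["P1", "P1", "P2"],
    "patent_year": [2012, 2012, 2015],
    "paper_year": [1998, 2006, 2005],
})
cites["lag"] = cites["patent_year"] - cites["paper_year"]

per_patent = cites.groupby("patent_id")["lag"].agg(
    median_lag="median",      # per-patent median lag (cf. Figure 5a)
    most_recent_lag="min",    # distance to the freshest cited paper (cf. Figure 5b)
)
print(per_patent)
print(cites["lag"].mean())    # dataset-wide average (10.5 years in the text)
```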
We then studied how the time lag varies over time by aggregating the patent-paper time lag at the individual patent level. As Figure 5a) shows, the median difference between the time the cited paper is published and the time the paper is cited by the patent grows from 1989 to 2014 for all the venues, from around 5 years to around \(10-15\) years. However, since 2014, this trend bifurcates among venues: the time lag for CSCW increases to over 15 years, while UbiComp's decreases to about 10 years in 2017. We also noticed that all venues have nearly indistinguishable trends except UbiComp, whose time lag is about 3 years lower than the other venues'. In recent years, CSCW takes the longest time to be recognized by patents, while UIST and UbiComp take a shorter time, which could be explained by the fact that more system-driven works are likely to diffuse more quickly into practice.
We also examined the time lag between the patent and its most recent cited paper (Figure 5b), testing how recent the freshest research is that patents draw on. These general trends are consistent with the median time lag. Again, the difference between the time its most recent cited paper was published and the time it is patented also becomes larger from 1989 to 2011 for all the venues, from less than 5 years to around 10 years. This increase gradually slowed down, leading to a slight decrease in more recent years.
Patent citations also come from different sources: some are added by the applicants/inventors, while others are added by patent examiners. The dataset we used provides a breakdown of reference types, including applicant/inventor-added, examiner-added, other, and unknown types. References added by patent examiners are generally more recent (average time lag: 6 years) than those added by inventors (average: 11.8 years), although similar trends of long and increasing time lags are still observed.
Figure 3. Left: over 1000 patents cite CHI papers each year after 2014. The number of patents citing HCI research began rising after 2000 and has more than quintupled since then. Right: the number of patents citing SIGCHI sponsored venues follows a similar trend, as a large proportion (85%) made references to the four premier venues.
All results here indicate that patents mostly cite old research, and are citing increasingly older research, which holds true across venues and reference types. This conclusion is largely identical to what is found in science in general [52]. We replicated our analysis on other areas of Computer Science in the same way as in Section 4.1, finding that the time lags between patents and their referenced papers for AI, Natural Language Processing, and Computer Vision are 17 years, 13 years, and 10 years respectively, suggesting similar patterns across subfields of Computer Science.
_HCI research has moved on by the time a paper receives patent attention._ Has the HCI community left an idea behind by the time industry gets interested? Concerns circulate that HCI has a reputation for trend following and jumping to new shiny areas every few years [12; 32]. Are patent-cited papers still receiving academic interest by the time they start receiving patent citations? To answer this question, for all papers from the four chosen venues that eventually get cited by patents in our dataset, we compare (a) the time lag between the publication year of the paper and the issue year of the first patent that cites the research paper (_first patent citation lag_), and (b) the time lag between the publication year of the paper and the paper's "peak citation year" when the research paper gets the most academic citations (_peak citation lag_).

Table 1. Top CHI, CSCW, UIST, and UbiComp papers cited by patents. The majority of them are highly-cited papers in academia whose major contribution is a system.

| Title | Patent Citations | Paper Citations | Year Published |
| --- | --- | --- | --- |
| **CHI** | | | |
| A multi-touch three dimensional touch-sensitive tablet | 708 | 231 | 1985 |
| PaperLink: a technique for hyperlinking from real paper to electronic content | 200 | 134 | 1997 |
| Bringing order to the Web: automatically categorizing search results | 196 | 486 | 2000 |
| A study in two-handed input | 175 | 544 | 1986 |
| Generalized fisheye views | 175 | 2180 | 1986 |
| SmartSkin: an infrastructure for freehand manipulation on interactive surfaces | 166 | 770 | 2002 |
| AppLens and LaunchTile: two designs for one-handed thumb use on small devices | 159 | 133 | 2005 |
| Active click: tactile feedback for touch panels | 156 | 195 | 2001 |
| Finding others online: reputation systems for social online spaces | 153 | 100 | 2002 |
| Applying electric field sensing to human-computer interfaces | 142 | 272 | 1995 |
| **CSCW** | | | |
| GroupLens: an open architecture for collaborative filtering of netnews | 185 | 5771 | 1994 |
| WebSplitter: a unified XML framework for multi-device collaborative Web browsing | 166 | 186 | 2000 |
| Blogging as a social activity, or, would you let 900 million people read your diary? | 121 | 584 | 2004 |
| MMConf: an infrastructure for building shared multimedia applications | 106 | 313 | 1990 |
| An experiment in integrated multimedia conferencing | 103 | 157 | 1986 |
| Collaboration using multiple PDAs connected to a PC | 94 | 391 | 1998 |
| Interaction and outeraction: instant messaging in action | 90 | 1225 | 2000 |
| Providing presence cues to telephone users | 83 | 177 | 2000 |
| Design of a multi-media vehicle for social browsing | 72 | 331 | 1988 |
| Distributed multiparty desktop conferencing system: MERMAID | 69 | 153 | 1990 |
| **UIST** | | | |
| Sensing techniques for mobile interaction | 254 | 592 | 2000 |
| The world through the computer: computer augmented interaction with real-world environments | 227 | 487 | 1995 |
| HoloWall: designing a finger, hand, body, and object sensitive wall | 197 | 243 | 1997 |
| A survey of design issues in spatial input | 166 | 417 | 1994 |
| Tilting operations for small screen interfaces | 158 | 412 | 1996 |
| Multi-finger and whole hand gestural interaction techniques for multi-user tabletop displays | 156 | 527 | 2003 |
| DiamondTouch: a multi-user touch technology | 153 | 1336 | 2001 |
| The document lens | 135 | 416 | 1993 |
| The DigitalDesk calculator: tangible manipulation on a desk top display | 132 | 324 | 1991 |
| Pad++: a zooming graphical interface for exploring alternate interface physics | 131 | 754 | 1994 |
| **UbiComp** | | | |
| Validated caloric expenditure estimation using a single body-worn sensor | 113 | 83 | 2009 |
| InfoScope: Link from Real World to Digital Information Space | 67 | 34 | 2001 |
| Self-Mapping in 802.11 Location Systems | 63 | 130 | 2005 |
| The NearMe Wireless Proximity Server | 62 | 162 | 2004 |
| Predestination: Inferring Destinations from Partial Trajectories | 51 | 498 | 2006 |
| UbiTable: Impromptu Face-to-Face Collaboration on Horizontal Interactive Surfaces | 40 | 261 | 2003 |
| Accurate GSM Indoor Localization | 37 | 537 | 2005 |
| Very Low-Cost Sensing and Communication Using Bidirectional LEDs | 34 | 157 | 2003 |
| Particle Filters for Location Estimation in Ubiquitous Computing: A Case Study | 33 | 254 | 2004 |
| PowerLine Positioning: A Practical Sub-Room-Level Indoor Location System for Domestic Use | 31 | 152 | 2006 |
Peak citation lag averages 5.74 years in our dataset, compared with 7.48 years for the first patent citation lag.20 A paired t-test confirms that the difference between these two lags is significant, \(t(3740)=18.3\) (\(p<.001\)), Cohen's D=0.38. This result supports the concern that HCI's focus shifts to other topics by the time industry takes up an idea.
Footnote 20: The first patent citation lag is lower than the patent backward citation lag reported earlier (10.5 years) due to right censoring, i.e. recent patent-cited papers are biased towards short lags since those with long lags have not yet been observed in the dataset. The peak citation lag has similar issues. If we allow papers enough time to accrue patent citations, e.g. focus the analysis on papers published before 2000 (cutoff year), we get an average first patent citation lag of 10.4 years (thus replicating the prior results) and a peak citation lag of 7.5 years. We varied the cutoff year, and found that on average the first patent citation lag is always longer than the peak citation lag, which suggests the robustness of our finding.
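The two lags and the paired test can be sketched as follows; the input years are toy values, whereas the real analysis pairs both lags per paper across the full dataset.

```python
# Hedged sketch of the first-patent-citation lag vs peak-citation lag and the
# paired t-test; the years below are toy values, not the paper's data.
import numpy as np
from scipy import stats

pub_year = np.array([1995, 2000, 2003])
first_patent_year = np.array([2004, 2008, 2010])   # first citing patent
peak_citation_year = np.array([2001, 2005, 2008])  # peak academic-citation year

first_patent_lag = first_patent_year - pub_year
peak_citation_lag = peak_citation_year - pub_year

t, p = stats.ttest_rel(first_patent_lag, peak_citation_lag)
print(first_patent_lag.mean(), peak_citation_lag.mean(), t, p)
```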
_Self-citation tends to be faster._ One exception to this temporal pattern is that self-citation patents have a shorter patent-paper time lag. Since 2008, the time lag for non-self-cite patents increased
Figure 4. Papers cited by patents receive more academic citations in HCI.
Figure 5. The time lag between patent and paper is long and getting longer across venues.
rapidly and was above 14.6 years in 2018, while that of self-cite patents remained below 6.3 years, which suggests that papers are transferred into patents faster by the authors themselves than by others.
### RQ3: Where is the impact of HCI research on patents?
Which HCI research topics are the focus of industry activity? To answer this question, we compare non-patent-cited HCI papers to patent-cited HCI papers in the four chosen venues via Latent Dirichlet Allocation (LDA), a classic method of topic modeling (Beng et al., 2018). LDA automatically discovers topics within documents, where each topic is represented as a probability distribution over words. Each document can in turn be represented as a probability distribution over the different topics.
We concatenated each paper title with its abstract (if available) to represent its contents. Similarly, we concatenated each patent title with its abstract (if available) to represent the patent's contents. We then tokenized the text corpora into unigrams and bigrams, filtered out terms that appear fewer than 5 times in the corpus, removed English stop words, and ran LDA. We varied the number of topics and aligned on seven topics, which yielded the highest-quality topics. Figure 6 reports the result. Through checking representative documents and word clusters with HCI experts, we titled each topic: topic 0 relates to patent terms, topic 1 is on modalities, topic 2 is on system interaction, topic 3 is on evaluations, topic 4 is on theory, topic 5 is on social and experience design, and topic 6 is on input techniques.
We then computed the topic distribution for each document (paper or patent) in our corpus, and aggregated the topic distributions of all documents within a specific year that belong to a certain document category (patents, patent-cited papers, or non-patent-cited papers), so as to estimate the number of documents belonging to a particular topic for that document category in a particular year. In the first row of Figure 7, we plot the topic distribution for patent-cited HCI papers (left), non-patent-cited HCI papers (middle), and patents (right), i.e., how many documents belong to topic X in year Y. The second row of Figure 7 normalizes this topic distribution, i.e., the proportion of topic X in year Y for a specific document category, to better illustrate the distribution pattern.
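A compact sketch of this pipeline with scikit-learn is shown below; the toy corpus and variable names are placeholders, and the original analysis may have used a different LDA implementation.

```python
# Hedged sketch of the topic-modeling pipeline: concatenate title + abstract,
# tokenize into unigrams and bigrams with English stop words removed, drop
# rare terms (min_df is a document-frequency threshold approximating the
# "fewer than 5 occurrences" filter), fit a 7-topic LDA, and read off each
# document's topic distribution. The corpus here is a toy placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "A multi-touch tablet. We present a touch sensitive input device.",
    "Instant messaging in action. A study of workplace communication.",
] * 20  # tiny repeated corpus just so the snippet runs end to end

vec = CountVectorizer(ngram_range=(1, 2), stop_words="english", min_df=5)
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=7, random_state=0)
doc_topics = lda.fit_transform(X)   # rows: documents, columns: topic weights
print(doc_topics.shape, doc_topics[0].round(2))
```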
As can be observed from Figure 7, system interaction has dominated the patent-cited HCI papers over time, indicating that system-oriented research has been of considerable importance among patent-cited HCI research. From 1980 to 2000, about 40% of patent-cited HCI papers are related to system interaction. After 2000, the percentage of system interaction decreased to about 20% but began expanding again in 2015. We also observed that input techniques have expanded significantly over time, reaching nearly 20% after 2015. Evaluations have also grown in general and contribute about 20% of all patent-cited HCI papers.
In comparison, the topic distribution of non-patent-cited papers shows a very different pattern. The results mirror the methodological plurality of HCI, where not all contribution types have an industry impact. Theory work is highly visible in non-patent-cited HCI papers over time, though its proportion has gradually decreased from about 40% before 2000 to about 20% in 2018. Social and experience design has grown significantly, from nearly 0 percent in 1980 to about 20% in 2018, indicating that behavior-oriented research has been of considerable importance among non-patent-cited HCI papers. Evaluations and system interaction contributed about half of all non-patent-cited HCI papers in 1980, but this percentage has decreased to about 30% in 2018. Through unpaired t-tests, we further verify that there exist statistically significant differences between the topic distributions of patent-cited and non-patent-cited papers: there is a higher proportion of theory (\(p<.001\)) and social & experience design (\(p<.05\)) work, and a lower proportion of system interaction (\(p<.001\)) and modalities (\(p<.001\)) work, in non-patent-cited HCI papers compared to their patent-cited counterparts. We emphasize that this is not a negative outcome for theory, behavioral, and other research that does not produce artifacts, as such work has impact through other channels, or could influence patents in an indirect way (Kang et al., 2018).
Additionally, the variation of the patents' topic distribution over time is not consistent with that of the papers. Since 1990, patent topics have been dominated by input techniques,21 which first expand from 1990 to 1993, then slightly shrink from 1993 to 2010, and expand again after 2010. In 2018, about 40% of the patents that cite HCI research papers are on input techniques. We also observed this growth in patent-cited HCI papers, but it is not as pronounced.
Footnote 21: We exclude the 'patent terms' topic from the analysis, as it reflects generic language use in patents.
### RQ4: Who is involved in the process of recognizing HCI research on patents?
Last, we investigate, through the four premier HCI venues, which institutions are most likely to develop patents that recognize HCI research, and which institutions conduct the HCI research most cited by patents. Such analysis is important because it identifies the role of different stakeholders within the technology translation landscape (Kang et al., 2018).
_Apple, Microsoft, IBM, but no longer Xerox: top institutes citing HCI research._ We examined the top patent assignees (the entity that holds the property rights to the patent, e.g. a firm) that cite HCI research. The top patent assignees are dominated by companies: Apple, Microsoft, and International Business Machines Corporation (IBM) are the top three companies granted the highest number of HCI-citing patents in the dataset. Others rise and fall over time. See Appendix C for more details.
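The assignee ranking itself reduces to counting distinct citing patents per assignee; a minimal sketch with hypothetical column names:

```python
# Hedged sketch of the assignee ranking: count distinct HCI-citing patents
# per assignee. Column names are hypothetical stand-ins.
import pandas as pd

patents = pd.DataFrame({
    "patent_id": ["P1", "P2", "P3", "P4"],
    "assignee": ["Apple Inc.", "Microsoft", "Apple Inc.", "IBM"],
})
top_assignees = (patents.drop_duplicates("patent_id")
                        .groupby("assignee").size()
                        .sort_values(ascending=False))
print(top_assignees.head(10))
```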
_PARC, CMU, MIT: top institutes that publish patent-cited research._ We assessed the institutes that published the most patent-cited HCI papers across the years. As Figure 8 shows, in contrast to the top patent assignees, which are dominated by companies, the top institutes publishing patent-cited HCI papers are a combination of universities and companies. Top universities include Carnegie Mellon University, Massachusetts Institute of Technology, the University of California, and the University of Washington. Top companies publishing patent-cited HCI papers include Xerox Palo Alto Research Center and Microsoft. The ratio of patent-cited papers among all HCI papers dropped significantly, from nearly half before 2005 to less than 30% for most institutes after 2005, due to
the fact that the total number of HCI papers grew significantly, and to the right-censoring issue.
Overall, 35.5% of Microsoft's papers, 31.0% of IBM's, and 65.1% of Xerox's were cited by patents. In comparison, universities have a lower rate of papers cited by patents, e.g. 25.2%, 15.3%, and 26.9% of papers were recognized for Carnegie Mellon University, the University of California system, and MIT respectively. This indicates that, among the institutes publishing the most HCI
Figure 6. Topics were identified through a Latent Dirichlet Allocation (LDA) analysis of the combined paper-patent corpus.
Figure 7. The first row shows the breakdowns of papers across 7 topics in HCI over time. The second row depicts the percentage of each topic in terms of paper number. Three columns depict “topic distribution of patent-cited HCI papers”, “topic distribution of non-patent cited HCI papers” and “topic distribution of patents” respectively. System Interaction dominates the patent-cited HCI papers while Theory dominates the non-patent cited HCI papers and Input Techniques dominate patents over time.
papers, those from industry have a higher proportion of papers recognized by patents. However, the difference between industry and universities becomes smaller when removing self-citing patents.
_Self-citation._ We also explored the degree of self-citation. We find that 13.9% of patents self-cite the inventors' own research. Although the number of self-citing patents is growing, the percentage of self-citations among all HCI patent citations has decreased from around 20% to 5% in recent years. This suggests that while the HCI field is expanding, the number of researchers directly referring to their own research in patents is not growing at the same rate. Most of the self-citations also come from industry, with Microsoft and Xerox constituting 34.8% and 11.2% of total self-citations respectively. Self-citation from academia is much less common.
**Summary of conclusions:** Through our analysis, we find that HCI research has had a significant impact on patents, with an increasing number of patents recognizing research in CHI, CSCW, UIST, and UbiComp. Patents are more likely to refer to systems-oriented and highly-cited research in academia. However, the time lag between patent and paper is long (>10 years) and getting longer, suggesting HCI research and practice may be inefficiently connected. We further verify the robustness of our main findings through two additional analyses, which we report in Appendix D.
## 5. Discussion
In this section, we discuss the implications of our findings:
### The patent-research relevance landscape in HCI
By combining the findings from our large-scale analyses with that of prior qualitative evidence established by literature (e.g. case studies (K
_Issues with the current HCI translation into patents:_ As argued by Bill Buxton in 'the long nose of innovation' (Bill et al., 2015), the bulk of innovation takes place over a long period: the mouse was first built in 1965 by William English and Doug Engelbart, but was only popularized in the 1990s when Microsoft released a large-scale commercial mouse; multitouch was published in 1985, but took 22 years to become a product. Our analysis further demonstrates that even the initial step of having research recognized in a patent, which may come well before there is an actual product, takes considerable time. In fact, the ubiquity of long delays between research and practice, and thus the lack of immediate industry impact after the publication of a research paper, could be one underlying reason why many papers on HCI translation argue that HCI lacks practical impact (Bill et al., 2015; Kiefer et al., 2016; Kiefer et al., 2016). Furthermore, our analysis demonstrates that the time lag between patent and research is getting longer over time, indicating that the translation process in HCI may be becoming more inefficient. This result is in line with a general trend across science (average over time: 14.4 years), with a reported average patent-citation-to-science time lag of about 8 years in the 1990s, rising to about 15 years in 2018 (Kiefer et al., 2016). The specific reasons for the (increasing) time lag need further work. We also show that the HCI community often leaves an idea behind by the time industry gets interested, as a paper's peak citation lag is generally shorter than the paper's first patent citation lag. This indicates that, given the long time lag, HCI research has moved on and is exploring new emerging technologies that are not yet reliable enough, cheap enough, power-efficient enough, or accurate enough for the industry. This supports the observation that HCI research often plays "the time machine game",23 where it fast-forwards into the future by acquiring early versions of emerging technology (e.g., VR, AR, multi-touch, AI) and exploring interactive applications of that technology. Unless HCI is directly working on reducing those barriers to industry entry for a technology, HCI research cannot directly shorten the time lag: it is simply painting a compelling vision of the future before that future arrives.
Footnote 23: A term attributed to Jeff Pierce, formerly a research manager at IBM Research and faculty member at Georgia Tech.
### How could the HCI community do better to facilitate technology transfer and industrial impact?
_Encourage communications and collaborations across academia and industry._ Through our analysis, we have found that even though research articles from both academia and industry are recognized by patents, the proportion of academic papers recognized by patents is much lower. While this could mean that industry research papers are by themselves more applied than research papers from academia, or that industry has more internal incentives to have its research patented24, it could also be a sign that practitioners are not fully aware of some application-oriented advances in academia, and that information diffusion between academia and practice is inefficient (Bill et al., 2015).
Footnote 24: Microsoft Research, for example, would award decorative ”patent cubes” to researchers for each new patent they co-authored, which researchers would often stack into decorative pyramids and display in their offices
Our work thus echoes calls for a more inclusive and translation-friendly environment (Bill et al., 2015; Kiefer et al., 2016; Kiefer et al., 2016; Kiefer et al., 2016): both academia and industry should 1) better recognize the importance of technology translation rather than considering translation irrelevant, 2) establish more communication and collaboration channels to engage people, e.g. SIGGRAPH-style Emerging Tech festivals where academic researchers show their published HCI work to an applied audience, and encouraging researchers to serve in advisory roles in industry, and 3) include more HCI materials in Computer Science curricula at universities to get 'future practitioners' more familiar with HCI research ideas, and thus prepare them as translational developers who are more likely to bridge academia and industry (Kiefer et al., 2016).
_Encourage self-driven technology transfer._ Self-driven technology transfer (e.g. patents recognizing one's own paper) generally happens much faster than technology transfer in general. Intuitively, self-driven transfer does not encounter many of the same communication and information diffusion barriers. Self-driven technology transfer could also potentially solve many of the 'recognition' issues in the translational process, as discussed in prior work (Kiefer et al., 2016). However, as shown in our analysis, though the amount of self-driven technology transfer in HCI is going up over time, it is not on par with the rate of increase of research articles. While not all researchers should actively engage in technology transfer, more steps could be taken on the academic side to encourage self-driven technology transfer so that translation can happen more efficiently, e.g. by better supporting and recognizing attempts to self-translate one's own research through legal apparatuses and funding support. Meanwhile, we want to emphasize that while there are benefits to self-driven transfer, it may currently not distribute opportunities equally. For instance, in the life sciences (Kiefer et al., 2016), women faculty members patent at about 40% of the rate of men. It would be important to identify and mitigate these potential issues so as to ensure an inclusive technology transfer environment. Relatedly, as suggested by prior work (Kiefer et al., 2016), there exist multiple translational gaps in HCI, and basic researchers should also be encouraged to engage more with applied researchers and do more systems work, which would eventually help translate HCI research insights into industry impact.
_Recognizing translational work in HCI._ More broadly, our work echoes prior work on the need to recognize translational efforts in HCI. For instance, when allocating funding or considering researcher promotion, impact in industry could be taken into consideration as a separate metric alongside impact within academia. Our work points to a potential way to quantify one important pathway of HCI research's impact on industry, through analyzing patent-to-science citation data.
_Impact signals._ Prior approaches to quantifying research impact mostly focus on impact within academia through bibliometric analysis. However, no quantitative metric fully captures the complexities of our world. Could the h-index be fruitfully complemented with other information (a "patent relevance" p-index?)? While our analysis shows that impact in academia and impact in patents correlate, we also find that papers with high patent citations do not necessarily have high paper citations: in one extreme case, the most
patent-cited paper in our dataset, "A multi-touch three-dimensional touch-sensitive tablet" (Krishnan et al., 2017), is more popular in the patent world than in academia. If evaluations primarily consider the academic impacts of such research work, the work's value may have been underestimated. As one potential pathway to industry impacts that are relatively easy to scale, patents provide a potential signal to more holistically evaluate research.
Of course, patent relevance, or practice relevance in general25, is not the sole metric of scientific value, and research and researchers should not be judged based on a single metric, e.g. to receive funding or get a promotion. Thus, our work should not be interpreted as stating that non-patent-cited research represents any sort of failure. There are many, many examples of influential HCI research that is not patented (or even patentable). For instance, our work shows that system-building or application-oriented HCI research is more likely to find relevance in patents than design-oriented or behavioral research. This is not an indication that application-oriented research is more valuable: 1) there could be indirect influence of other types of work on application-oriented research, e.g. applied research drawing inspiration from behavioral work, as suggested by the translational science model in HCI (Krishnan et al., 2017), which we seek to address in future work; and 2) it is equally important to maintain a diversity of research ideas, which has been shown to facilitate greater innovation for science in general (Krishnan et al., 2017). If measuring this kind of impact is desirable, we will require new methods, such as multi-hop influence over citation networks (Bogorian et al., 2017), linguistic concept diffusion (Krishnan et al., 2017), or diffusion from papers to the public or media (Krishnan et al., 2017; Krishnan et al., 2017).
Footnote 25: Though arguably it is much harder to quantify other forms of practice relevance, e.g. how research influences design patterns and open source software.
### Limitations and Future Work
Patent citations to research are only a proxy signal of industry impact, which is otherwise hard to quantify. They are only one among many potential pathways (e.g. open source software, design patterns) to industry impact. First, not all patents turn into products or practices, so they may not be actual "industry impact" instances (false positives). Many other factors, such as assignee strategy and resources, can influence this process. Even if a patent does end up as a product, most of the time the patent will not be valuable or impactful, with 97% of all patents never recouping the cost of filing them26. However, the fact that inventors decide to go through the long and expensive process of filing a patent to protect their intellectual property does indicate that they consider their invention to have at least some potential relevance to the practice domain, which can be regarded as an intentional act aiming at industry impact or technology transfer.
Footnote 26: [https://www.forbes.com/sites/stephenkey/2017/11/13/in-todays-market-do-patents-even-matter/](https://www.forbes.com/sites/stephenkey/2017/11/13/in-todays-market-do-patents-even-matter/)
Second, industry impact can happen even when no patenting process is involved (false negatives), which is not uncommon in software (Krishnan et al., 2017): startups launch products without patents from time to time, which is quite different from the innovation landscape of more traditional fields; design processes (e.g., usability testing, heuristic evaluation), design patterns, and open source software (e.g., d3, Vega-Lite) also have significant industry impact that is not reflected through patents. As such, our analysis of patent citations to HCI research papers could differ from the actual translation landscape: the patent dataset could introduce both false positives and false negatives, e.g., even if a patent cites an HCI paper, it may never be taken up in practice as a product, while an actual product influenced by unpatentable HCI research will not be observed or measured through our current approach.
Despite all the shortcomings of patent citation to science, the availability and scale of the dataset make it a rare lens in the innovation literature to enable conclusions on the research-practice relationship at scale (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). In our work, in addition to building on these methods from the innovation literature, we tied our analysis to qualitative evidence discussed in prior works so as to validate our findings.
In future work, we plan to 1) involve more qualitative evidence (e.g. interviewing inventors about their motivations for citing HCI research) to further validate our findings, and 2) take more steps to quantify how HCI research turns into _valuable_ inventions, e.g. by using patent citations to other patents as a proxy of patent value, which correlates well with other metrics of patent value, e.g. whether they are renewed to full term and whether they get licensed (Krishnan et al., 2017; Krishnan et al., 2017).
Our work also currently focuses mostly on measuring industry relevance at the paper level, which may not necessarily be the principal unit of knowledge: for example, several papers on the same idea can get cited by patents. While we have made preliminary attempts to analyze the topics prevalent in patents, patent-cited research papers, and non-patent-cited research papers, future work could better study, at the concept level, which specific research ideas are transferred into inventions, either through keywords provided by the authors (which are unfortunately not available in our current dataset) or through natural language processing approaches such as phrase mining (Krishnan et al., 2017), which may help track the transfer of innovations at a finer-grained level.
Other limitations include: (1) our dataset is focused on United States patents, which limits our cultural context and generalizability, though arguably a significant proportion of inventors/organizations using (and pushing) HCI research in practice are US-based (Krishnan et al., 2017); (2) while we descriptively discuss findings on the role of academic impact (Section 4.1), topic (Section 4.3), and institutes/actors (Section 4.4) in relation to patent impact, we do not have causal evidence or analysis of the mechanisms that cause some papers to have more industry relevance, which is an important topic we seek to address in future work; and (3) if there are recent trends in the last 5-10 years that have changed these patterns, it is still too early to see their impact.
## 6. Conclusions
In this work, drawing inspiration from the innovation literature, we quantitatively study one important pathway from HCI research to industry impact by conducting a large-scale analysis of how patent documents from the USPTO refer to research articles in CHI, CSCW, UIST, UbiComp, and other SIGCHI sponsored venues. We contribute to the literature by measuring the extent to which HCI research has been featured in patent citations, finding a high proportion of papers referenced in patents. Patents are more likely to refer to systems-oriented and highly-cited research in HCI. However, we also reveal potential translation issues: HCI research and practice may not be
efficiently coupled, since the time lag between paper and patent is long and getting longer. Our work not only demonstrates the potential of patent-citation-to-science data as a powerful tool to study the industry impact of HCI research, but also points to suggestions for the HCI community to better facilitate translation from research to practice.
## Acknowledgments
The authors thank Yian Yin for helpful suggestions on polishing the work, and Mary Czerwinski, Bongshin Lee, Lucy Lu Wang, James Zou, Shumin Zhai and many others for insightful discussions. Hancheng Cao was supported by Stanford Interdisciplinary Graduate Fellowship.
|
2309.13213 | The LHCb ultra-fast simulation option, Lamarr: design and validation | Detailed detector simulation is the major consumer of CPU resources at LHCb,
having used more than 90% of the total computing budget during Run 2 of the
Large Hadron Collider at CERN. As data is collected by the upgraded LHCb
detector during Run 3 of the LHC, larger requests for simulated data samples
are necessary, and will far exceed the pledged resources of the experiment,
even with existing fast simulation options. An evolution of technologies and
techniques to produce simulated samples is mandatory to meet the upcoming needs
of analysis to interpret signal versus background and measure efficiencies. In
this context, we propose Lamarr, a Gaudi-based framework designed to offer the
fastest solution for the simulation of the LHCb detector. Lamarr consists of a
pipeline of modules parameterizing both the detector response and the
reconstruction algorithms of the LHCb experiment. Most of the parameterizations
are made of Deep Generative Models and Gradient Boosted Decision Trees trained
on simulated samples or alternatively, where possible, on real data. Embedding
Lamarr in the general LHCb Gauss Simulation framework allows combining its
execution with any of the available generators in a seamless way. Lamarr has
been validated by comparing key reconstructed quantities with Detailed
Simulation. Good agreement of the simulated distributions is obtained with
two-order-of-magnitude speed-up of the simulation phase. | Lucio Anderlini, Matteo Barbetti, Simone Capelli, Gloria Corti, Adam Davis, Denis Derkach, Nikita Kazeev, Artem Maevskiy, Maurizio Martinelli, Sergei Mokonenko, Benedetto Gianluca Siddi, Zehua Xu | 2023-09-22T23:21:27Z | http://arxiv.org/abs/2309.13213v1 | # The LHCb ultra-fast simulation option, Lamarr
###### Abstract
Detailed detector simulation is the major consumer of CPU resources at LHCb, having used more than 90% of the total computing budget during Run 2 of the Large Hadron Collider at CERN. As data is collected by the upgraded LHCb detector during Run 3 of the LHC, larger requests for simulated data samples are necessary, and will far exceed the pledged resources of the experiment, even with existing fast simulation options. An evolution of technologies and techniques to produce simulated samples is mandatory to meet the upcoming needs of analysis to interpret signal versus background and measure efficiencies. In this context, we propose Lamarr, a Gaudi-based framework designed to offer the fastest solution for the simulation of the LHCb detector. Lamarr consists of a pipeline of modules parameterizing both the detector response and the reconstruction algorithms of the LHCb experiment. Most of the parameterizations are made of Deep Generative Models and Gradient Boosted Decision Trees trained on simulated samples or alternatively, where possible, on real data. Embedding Lamarr in the general LHCb Gauss Simulation framework allows combining its execution with any of the available generators in a seamless way. Lamarr has been validated by comparing key reconstructed quantities with Detailed Simulation. Good agreement of the simulated distributions is obtained with two-order-of-magnitude speed-up of the simulation phase.
## 1 Introduction
The LHCb experiment [1] was originally designed to study rare decays of particles containing \(b\) and \(c\) quarks produced at the Large Hadron Collider (LHC). The LHCb detector is a single-arm forward spectrometer covering the pseudorapidity range \(2<\eta<5\), which includes a Tracking system and a Particle Identification (PID) system [2]. The Tracking system provides high-precision measurements of the momentum \(p\) of charged particles and of the position of primary vertices. Different types of charged hadrons are separated using the response of two ring-imaging Cherenkov (RICH) detectors. Photons, electrons and hadrons are identified by the calorimeter system, which relies on an electromagnetic calorimeter (ECAL) and a hadron calorimeter (HCAL). Finally, a dedicated system named MUON identifies muons using alternating layers of iron and multi-wire proportional chambers. The RICH, calorimeter and MUON detectors are part of the PID system.
Interpreting signal, rejecting background contributions and performing efficiency studies requires a full understanding of the data sample, from the high-energy collisions to the set of physics processes responsible for the high-level detector response. Such studies greatly benefit from the use of simulated samples. At LHCb, the simulation production mainly relies on the Gauss framework [3], which implements the generation and simulation phases and is based on the Gaudi processing framework [4]. The high-energy collisions and all the physics processes that produce the set of particles (e.g., muons, pions, kaons or protons) able to traverse the LHCb spectrometer are simulated during the _generation phase_ using software like Pythia8[5] and EvtGen[6]. The radiation-matter interactions between the detector materials and the traversing particles are reproduced during the _simulation phase_, which computes the energy deposited in the active volumes and relies on the Geant4 toolkit [7]. A separate application then converts the energy deposits into raw data compatible with the real data collected by LHCb.
The simulation of all the physics events occurring within the detector is the major consumer of CPU resources at LHCb, having used more than 90% of the total computing budget during LHC Run 2. The upgraded version of the experiment is designed to collect one-order-of-magnitude larger data samples during Run 3. Meeting the upcoming and future requests for simulated samples is not sustainable if relying only on the traditional _detailed simulation_. For this reason, the LHCb Collaboration is devoting great effort to modernizing the simulation software stack through the novel experiment-independent framework Gaussino1[8; 9], on which a newer version of Gauss will be built, and to developing faster simulation options, some of which are also powered by machine learning algorithms [10; 11; 12; 13].
Footnote 1: Visit [https://gaussino.docs.cern.ch](https://gaussino.docs.cern.ch) for additional details.
## 2 Fast simulation VS. ultra-fast simulation
Simulating all the physics processes of interest for LHCb is extremely expensive in terms of computing resources, especially the Geant4-based step, which is the major CPU consumer. Speeding up the computation of the energy deposits or, more generally, of the detector response is mandatory to satisfy the demand for simulations expected for Run 3 and beyond. This is a problem shared across the High Energy Physics (HEP) community, which is facing it collectively, notably by exploiting the latest achievements in Computer Science and adapting _deep generative models_ to parameterize the low-level response of the various experiments [14; 15; 16]. The literature refers to these strategies with the term _fast simulation_. Fast simulations share their data processing scheme and the reconstruction step with the detailed simulation (as depicted in Figure 1), and have proven capable of reducing the computing cost of a simulated sample by up to a factor of 20.
To meet the upcoming and future requests for simulated samples, the LHCb Collaboration is also considering a more radical approach based on the so-called _ultra-fast simulation_ paradigm. In this case, the aim is to directly reproduce the high-level response of the detector, relying on a set of parameterizations developed to transform generator-level particle information into reconstructed physics objects, as schematically represented in Figure 1 (bottom). Such parameterizations can still be built using generative models, like _Generative Adversarial Networks_ (GAN), which have proven to succeed in reproducing the high-level response of the LHCb detector [17] and to offer reliable synthetic simulated samples [18]. Following pioneering studies on the ultra-fast simulation of the electromagnetic calorimeter based on GANs [19], the CMS Collaboration has recently started developing a full-scope ultra-fast simulation based on _Normalizing Flows_, named FlashSim[20].
## 3 Lamarr: the LHCb ultra-fast simulation framework
Lamarr[12, 13] is the official ultra-fast simulation framework for LHCb, offering the fastest available options for simulation. Originating as an attempt at an LHCb-customized version of Delphes[21, 22], Lamarr is now an independent project retaining only the inspiration of its modular layout from Delphes. In particular, the Lamarr framework consists of a pipeline of modular parameterizations, most of which are based on machine learning algorithms, designed to take as input the particles generated by the physics generators and to provide as output the high-level response of the various LHCb sub-detectors.
The Lamarr pipeline can be logically split into two separate chains according to the charge of the generated particles. Charged particles are expected to leave a mark in the Tracking system, which Lamarr characterizes in terms of acceptance, efficiency and resolution as described in Section 3.1. The reconstructed tracking variables are then used to compute the response of the PID system for a set of traversing charged particles (muons, pions, kaons or protons), as detailed in Section 3.2. In the case of neutral particles (e.g., photons), the calorimeters play a key role and, since multiple photons can contribute to the energy of a single calorimetric cluster, parameterizing particle-to-particle correlation effects is of major relevance. The solutions under investigation are reported in Section 3.3. The Lamarr pipelines described above are shown in Figure 2.
### Tracking system
One of the aims of the LHCb Tracking system is to measure the momentum \(p\) of charged particles (i.e., electrons, muons, pions, kaons and protons), exploiting the deflection of their trajectories due to the dipole magnet located between the tracking detectors. Hence, the first step of the _charged chain_ reported in Figure 2 is the propagation of the particles provided by the physics generators through the magnetic field. Lamarr parameterizes the particle trajectories as two rectilinear segments with a single deflection (inversely proportional to the transverse momentum \(p_{T}\)), implementing the so-called _single \(p_{T}\) kick_ approximation.
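A minimal sketch of such a propagation step is given below; the deflection constant and detector geometry are placeholder values chosen for illustration, not the LHCb ones.

```python
def propagate_single_pt_kick(x0, y0, tx, ty, q, pt,
                             z_magnet, z_end, kick_const=1.2):
    """Propagate a track as two straight segments with one deflection
    at the magnet bending plane (single-pT-kick approximation).

    x0, y0     : positions at z = 0 [mm]
    tx, ty     : slopes dx/dz and dy/dz before the magnet
    q          : charge in units of e
    pt         : transverse momentum [GeV]
    kick_const : placeholder field-integral constant [GeV]
    """
    # straight line up to the magnet bending plane
    x_mag = x0 + tx * z_magnet
    y_mag = y0 + ty * z_magnet
    # single horizontal kick, inversely proportional to pT
    tx_after = tx + q * kick_const / pt
    # straight line from the magnet to the end plane
    x_end = x_mag + tx_after * (z_end - z_magnet)
    y_end = y_mag + ty * (z_end - z_magnet)
    return x_end, y_end, tx_after, ty

x, y, tx_out, _ = propagate_single_pt_kick(0.0, 0.0, 0.05, 0.02, q=+1,
                                           pt=10.0, z_magnet=5000.0,
                                           z_end=9000.0)
print(f"x = {x:.1f} mm, y = {y:.1f} mm, slope after kick = {tx_out:.4f}")
```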
Figure 1: Schematic representation of the data processing flow in the _detailed_ (top), _fast_ (center) and _ultra-fast_ (bottom) simulation paradigms.

The next step is to select the subset of tracks that fall within the LHCb geometrical acceptance and have a chance of being reconstructed. To this end, Lamarr uses _Gradient Boosted Decision Trees_ (GBDT) trained to learn the fraction of candidates in the acceptance as a function of the kinematic information provided by the physics generators. Given a generated track in acceptance, we ask whether it will be reconstructed and, if so, which tracking detectors are involved in the reconstruction procedure. Lamarr statistically infers such information, namely the tracking efficiency, relying on _neural networks_ trained to perform a multi-class classification according to the track kinematics. A major effort is ongoing to improve the performance of the efficiency model depending on the type of track and the particle species (i.e., electrons, muons or hadrons).
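As an illustration of the acceptance step, the following sketch trains a scikit-learn GBDT on toy generator-level kinematics and then accepts tracks stochastically with the predicted probability; the feature set and the toy labels are assumptions made for illustration, not the actual Lamarr training data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Toy generator-level kinematics; the acceptance label is faked with a
# simple pseudorapidity window plus noise, purely for illustration.
n = 10_000
p = rng.exponential(20.0, n) + 2.0       # momentum [GeV]
eta = rng.uniform(0.0, 6.0, n)           # pseudorapidity
pt = p / np.cosh(eta)                    # transverse momentum [GeV]
phi = rng.uniform(-np.pi, np.pi, n)
in_acc = ((eta > 2.0) & (eta < 5.0) & (rng.random(n) > 0.1)).astype(int)

X = np.column_stack([p, pt, eta, phi])
X_tr, X_te, y_tr, y_te = train_test_split(X, in_acc, random_state=0)

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X_tr, y_tr)

# At simulation time, each generated track is kept stochastically with
# the predicted probability, reproducing the acceptance fraction.
p_acc = clf.predict_proba(X_te)[:, 1]
accepted = rng.random(len(p_acc)) < p_acc
print(f"test-sample acceptance: {accepted.mean():.3f}")
```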
At this point, Lamarr has at its disposal the subset of generated particles that can be considered as reconstructed tracks, but their kinematic and geometric properties are still identical to those provided by the physics generators. The smearing of these features, mimicking the effect of the reconstruction, is achieved using GANs. Driven by a _binary cross-entropy_ loss function and powered by _skip connections_, these GANs succeed in describing the resolution effects due to, for example, multiple scattering phenomena, relying only on the generator-level track kinematics as input conditions. A similar GAN-based architecture is used to provide the correlation matrix obtained from the Kalman filter adopted in the reconstruction algorithm to define the position, slope and curvature of each track.
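A minimal Keras sketch of a conditional generator of this kind is shown below; the layer sizes are arbitrary, and the skip connections are implemented here by re-injecting the conditioning features at every block, which is one possible reading of the architecture rather than the exact Lamarr model.

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM, N_COND, N_OUT = 64, 4, 5  # noise size; conditions; outputs

def make_generator():
    """Conditional generator: generator-level kinematics enter as
    conditions, the output mimics the smeared (reconstructed) track
    parameters. All sizes are illustrative."""
    noise = layers.Input(shape=(LATENT_DIM,), name="noise")
    cond = layers.Input(shape=(N_COND,), name="gen_level_kinematics")
    x = layers.Concatenate()([noise, cond])
    for units in (128, 128, 128):
        h = layers.Dense(units, activation="relu")(x)
        # skip connection: re-inject the conditions at every block
        x = layers.Concatenate()([h, cond])
    out = layers.Dense(N_OUT, name="smeared_track")(x)
    return tf.keras.Model([noise, cond], out)

generator = make_generator()
generator.summary()
```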
Stacking the parameterizations described above, Lamarr is able to provide the high-level response of the LHCb Tracking system. The resulting reconstructed quantities can be further processed using the LHCb analysis software to combine the parameterized tracks into decay candidates as depicted by the green slot in Figure 1 (bottom).
### Particle identification system
To accomplish the LHCb physics program, a high-performance PID system is crucial, since it allows discriminating between the various particle species that traverse the detector. Lamarr provides parameterizations for the majority of the charged particles for which the PID detectors are relevant (i.e., muons, pions, kaons and protons). Specialized parameterizations for electrons, encoding the multiple-scattering and Bremsstrahlung-emission contributions in the interaction with the detector materials, are planned as a future development.
Identifying these particle species mainly involves the RICH and MUON detectors, while the role played by the calorimeters is minor. In general, we expect the response of the PID system to depend only on the species of the traversing particle, its kinematics, and the detector occupancy. According to these dependencies, Lamarr provides the high-level response of both detectors using properly conditioned GAN-based models [11, 18]. The particle species is given by the physics generators, the kinematic information results from the Lamarr Tracking modules, and the detector occupancy is described by the total number of tracks traversing the detector.
Figure 2: Scheme of the Lamarr modular pipeline. According to the charge of the particle provided by the physics generator, two sets of parameterizations are defined: the charged particles are passed through the Tracking and PID models, while the neutral ones follow a different path where the calorimeter modeling plays a key role.
In real data, the combination of the responses from the RICH detectors, the calorimeters, the MUON system and a binary muon-identification criterion implemented via FPGA and named isMuon makes it possible to compute the higher-level response of the PID system, referred to as the GlobalPID variables. The parameterization of the GlobalPID variables still relies on conditioned GANs, taking as additional inputs the outputs of the RichGAN and MuonGAN models. The binary output of a neural-network-based implementation of isMuon is used as a further input feature, while no explicit calorimeter contribution is defined, leaving the missing-information problem to the generator _latent space_.
GAN-based models, driven by a _Wasserstein distance_ loss function and trained using a Lipschitz-constrained discriminator [23], succeed in describing the high-level response of the RICH and MUON systems. Chaining together different GANs, Lamarr is also able to provide the higher-level response of the LHCb PID system, injecting an implicit contribution from the calorimeters.
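The Lipschitz constraint on the discriminator can be enforced in several ways; the sketch below uses a gradient penalty, a standard technique that may differ in detail from the recipe of Ref. [23], and assumes a Keras critic model taking the sample and its conditions as inputs.

```python
import tensorflow as tf

def wgan_gp_critic_loss(critic, real, fake, cond, gp_weight=10.0):
    """Wasserstein critic loss with a gradient penalty enforcing an
    approximate 1-Lipschitz constraint on the discriminator."""
    eps = tf.random.uniform([tf.shape(real)[0], 1], 0.0, 1.0)
    interp = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(interp)
        score = critic([interp, cond], training=True)
    grads = tape.gradient(score, interp)
    grad_norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=-1) + 1e-12)
    gradient_penalty = tf.reduce_mean(tf.square(grad_norm - 1.0))
    wasserstein = tf.reduce_mean(critic([fake, cond], training=True)) \
        - tf.reduce_mean(critic([real, cond], training=True))
    return wasserstein + gp_weight * gradient_penalty
```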
### Electromagnetic calorimeter
Providing a parameterization for electrons requires describing the response of the LHCb ECAL detector to Bremsstrahlung photons. Since it involves a multitude of secondary particles, the detailed simulation of the calorimeter system is the most computationally expensive step in the simulation pipeline. This is a problem shared across the HEP community, which is investing great effort in tuning deep generative models to properly parameterize the energy deposited in the calorimeter cells [10, 14, 15, 16]. Such studies belong to the fast-simulation paradigm, which aims to reduce the use of Geant4 by providing models for the low-level response of the various experiments.
The current version of Lamarr provides a simplified parameterization of the LHCb calorimeter, designed for detector studies and based on a fast-simulation approach. Having information at the calorimeter-cell level requires running reconstruction algorithms to obtain analysis-level quantities, which may become rather CPU-expensive for high-multiplicity events. In addition, since non-physical strategies are used to simulate the energy deposits (as is the case for GANs), there is no guarantee that the reconstruction software stack can correctly reproduce the expected distributions of the high-level variables [24]. Hence, the Lamarr project is actively working to provide an ultra-fast solution for the ECAL detector.
Reproducing the calorimeter high-level response is a non-trivial task, since traditional generative models rely on the hypothesis that an unambiguous relation exists between the generated particle and the reconstructed object2. However, the presence of merged \(\pi^{0}\) decays and Bremsstrahlung photons may lead to \(n\) generated particles being responsible for \(m\) reconstructed objects (in general with \(n\neq m\)). A strategy to face this particle-to-particle correlation problem can be built using techniques designed in the context of Language Modeling, describing the calorimeter simulation as a _translation problem_. To this end, _Graph Neural Network_ (GNN) [25] and _Transformer_[26] models are currently under investigation.
Footnote 2: To a first approximation, the responses of the Tracking and PID systems satisfy this condition.
Both models are designed to process a sequence of \(n\) generated photons and infer the kinematics of a sequence of \(m\) reconstructed clusters. The non-trivial correlations between any particles of the source sequence (photons) and the target one (clusters) rely on the _attention mechanism_[26, 27]. To improve the quality of the resulting parameterizations, the training of both the GNN- and Transformer-based models is driven by an adversarial procedure (similarly to what occurs for GANs). The discriminator is currently implemented through a _Deep Sets_ model [28], while further studies are ongoing to replace it with a second Transformer [29]. Considering the complexity of the problem, the preliminary results are promising, as depicted in Figure 3, where the joint action of the Transformer model and the adversarial training procedure succeeds in deriving the energy distribution on the ECAL face. The center of the calorimeter has no active material, since it is used to host the LHC beam pipe. It should be pointed out that no constraints are applied to the model output to reproduce such conditions, and that the empty space shown in Figure 3 (right) is the result of the adversarial training procedure.
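As a rough illustration of the translation-problem framing, the toy Keras sketch below maps a padded set of generated photons to a fixed number of cluster slots via self- and cross-attention; all dimensions, the fixed-size padding and the input cluster queries are simplifying assumptions, not the architecture under study.

```python
import tensorflow as tf
from tensorflow.keras import layers

D_MODEL, N_HEADS, N_FEAT = 64, 4, 3  # per-particle features: (E, x, y)

def make_photon_to_cluster_model(max_photons=16, max_clusters=8):
    """Toy encoder-decoder mapping a (padded) set of generated photons
    to a set of reconstructed clusters."""
    photons = layers.Input(shape=(max_photons, N_FEAT))
    # cluster query slots; passed as an input here for simplicity,
    # they could instead be learned embeddings
    queries = layers.Input(shape=(max_clusters, D_MODEL))
    enc = layers.Dense(D_MODEL)(photons)
    # self-attention among photons captures photon-photon correlations
    enc = layers.MultiHeadAttention(N_HEADS, D_MODEL)(enc, enc) + enc
    # cross-attention: each cluster slot attends to all photons
    dec = layers.MultiHeadAttention(N_HEADS, D_MODEL)(queries, enc)
    clusters = layers.Dense(N_FEAT)(dec)  # predicted (E, x, y) per cluster
    return tf.keras.Model([photons, queries], clusters)

model = make_photon_to_cluster_model()
model.summary()
```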
## 4 Validation campaign and timing performance
The ultra-fast philosophy at the base of the Lamarr framework is being validated by comparing the distributions obtained from machine-learnt models trained on detailed simulation and the ones resulting from standard simulation strategies. In particular, we will briefly discuss the validation studies performed for the charged particles pipeline using simulated \(\Lambda_{b}^{0}\to\Lambda_{c}^{+}\mu^{-}\bar{\nu}_{\mu}\) decays with \(\Lambda_{c}^{+}\to pK^{-}\pi^{+}\). The semileptonic nature of the \(\Lambda_{b}^{0}\) decay requires an interface with dedicated generators, in this case EvtGen. Deeply studied by LHCb, this decay channel includes in its final state the four charged particle species parameterized in the current version of Lamarr, namely muons, pions, kaons and protons.
The validation of the Lamarr Tracking modules is depicted in Figure 4 (left) where the agreement between the \(\Lambda_{c}^{+}\) invariant mass distribution resulting from the ultra-fast paradigm and the one obtained from detailed simulation proves that the decay dynamics is well reproduced and the resolution effects correctly parameterized. To show the good performance of the Lamarr PID models, a comparison between the selection efficiencies for a tight requirement on a multivariate proton classifier is shown in Figure 4 (right).
Comparing the CPU time spent per event by a Geant4-based production of \(\Lambda_{b}^{0}\to\Lambda_{c}^{+}\mu^{-}\bar{\nu}_{\mu}\) samples with the one needed by Lamarr, we estimate a two-order-of-magnitude CPU reduction for the simulation phase alone. Interestingly, since the generation of \(b\)-baryons is exceptionally expensive, Pythia8 becomes the major consumer of CPU resources in the ultra-fast paradigm. A further speed-up can be reached by reducing the generation cost, for example using a _Particle Gun_ that simulates the signal particles directly, without going through the high-energy collisions, which are not needed since Lamarr parameterizes the detector occupancy. Even in these physics-simplified settings, the ultra-fast philosophy succeeds in reproducing the distributions obtained from detailed simulation [12].
Figure 3: Distribution of the \((x,y)\)-position of the reconstructed clusters on the LHCb ECAL face for a \(2000\times 1500\) mm\({}^{2}\) frame placed around the center. The geometrical information is combined with the energy signature by properly weighting each bin entry. The result obtained from detailed simulation is reported on the left, while the prediction of an adversarially trained Transformer model is shown on the right. The corresponding LHCB-FIGURE is in preparation.
## 5 Integration with the LHCb simulation framework
To be integrated within the LHCb software stack, the parameterizations provided by Lamarr need to be queried from a C++ application, running in the Gaudi framework. Traditional deployment strategies were found to lead to unacceptably large overheads due to the presence of different multi-threading schedulers and context switching issues. Hence, a custom deployment strategy was preferred: models trained with scikit-learn and Keras are converted into compatible C code using the scikinC toolkit [30], and then distributed through the LHCb Computing Grid via the CERN VM file-system (cvmfs) [31].
The modular layout of Lamarr enables a variety of studies and developments on the single parameterizations, providing a unique and shared infrastructure for validation and performance measurements. While crucial for applications within LHCb, the integration with Gaudi and Gauss makes the adoption of Lamarr unappealing for researchers outside of the LHCb community. The SQLamarr package3 aims to mitigate this problem, providing a stand-alone ultra-fast simulation framework with minimal dependencies. Based on SQLite3, SQLamarr provides a set of classes and functions for loading data from physics generators and defining pipelines from compiled models. An integration between SQLamarr and Gaussino is currently under investigation, with the aim of providing ultra-fast parameterizations following the experiment-independent philosophy of the newest LHCb simulation framework, named Gauss-on-Gaussino4[8; 9].
Footnote 3: Visit [https://lamarrsim.github.io/SQLamarr](https://lamarrsim.github.io/SQLamarr) for additional details.
Footnote 4: Visit [https://lhcb-gauss.docs.cern.ch/Futurev5](https://lhcb-gauss.docs.cern.ch/Futurev5) for additional details.
Figure 4: Validation plots for \(\Lambda_{b}^{0}\to\Lambda_{c}^{+}\mu^{-}\bar{\nu}_{\mu}\) decays with \(\Lambda_{c}^{+}\to pK^{-}\pi^{+}\) simulated with Pythia8, EvtGen and Lamarr (orange markers) and compared with detailed simulation samples relying on Pythia8, EvtGen and Geant4 (cyan shaded histogram). Reproduced from LHCB-FIGURE-2022-014.

## 6 Conclusion

An evolution of the LHCb software stack and of the simulation techniques is mandatory to meet the upcoming and future demand for simulated samples expected for Run 3 and beyond. Ultra-fast solutions will play a key role in reducing the pressure on pledged CPU resources, without unreasonably compromising the description of the uncertainties introduced in the detection and reconstruction phases. Such techniques, powered by deep generative models, are provided to LHCb via the novel Lamarr framework. Well integrated with the physics generators within the Gauss framework, Lamarr delivers two pipelines according to the charge of the generated particle. The statistical models for the Tracking and the charged PID systems have been deployed and validated with satisfactory results on \(\Lambda_{b}^{0}\to\Lambda_{c}^{+}\mu^{-}\bar{\nu}_{\mu}\) decays. Several models are currently under investigation for the neutral pipeline, where the translation-problem approach offers a viable solution to the particle-to-particle correlation problem. Further development of the integration between Lamarr and the LHCb simulation framework is one of the major ongoing activities to put the former into production and make its parameterizations available to the HEP community.
## Acknowledgements
This work is partially supported by ICSC - Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing, funded by European Union - NextGenerationEU.
|
2309.05471 | Electron and photon energy calibration with the ATLAS detector using LHC
Run 2 data | This paper presents the electron and photon energy calibration obtained with
the ATLAS detector using 140 fb$^{-1}$ of LHC proton-proton collision data
recorded at $\sqrt{s}=13$ TeV between 2015 and 2018. Methods for the
measurement of electron and photon energies are outlined, along with the
current knowledge of the passive material in front of the ATLAS electromagnetic
calorimeter. The energy calibration steps are discussed in detail, with
emphasis on the improvements introduced in this paper. The absolute energy
scale is set using a large sample of $Z$-boson decays into electron-positron
pairs, and its residual dependence on the electron energy is used for the first
time to further constrain systematic uncertainties. The achieved calibration
uncertainties are typically 0.05% for electrons from resonant $Z$-boson decays,
0.4% at $E_\text{T}\sim 10$ GeV, and 0.3% at $E_\text{T}\sim 1$ TeV; for
photons at $E_\text{T}\sim 60$ GeV, they are 0.2% on average. This is more than
twice as precise as the previous calibration. The new energy calibration is
validated using $J/\psi \to ee$ and radiative $Z$-boson decays. | ATLAS Collaboration | 2023-09-11T14:12:47Z | http://arxiv.org/abs/2309.05471v2 | # Electron and photon energy calibration with the ATLAS detector using LHC Run 2 data
###### Abstract
This paper presents the electron and photon energy calibration obtained with the ATLAS detector using 140 fb\({}^{-1}\) of LHC proton-proton collision data recorded at \(\sqrt{s}=13\) TeV between 2015 and 2018. Methods for the measurement of electron and photon energies are outlined, along with the current knowledge of the passive material in front of the ATLAS electromagnetic calorimeter. The energy calibration steps are discussed in detail, with emphasis on the improvements introduced in this paper. The absolute energy scale is set using a large sample of \(Z\)-boson decays into electron-positron pairs, and its residual dependence on the electron energy is used for the first time to further constrain systematic uncertainties. The achieved calibration uncertainties are typically 0.05% for electrons from resonant \(Z\)-boson decays, 0.4% at \(E_{\rm T}\sim 10\) GeV, and 0.3% at \(E_{\rm T}\sim 1\) TeV; for photons at \(E_{\rm T}\sim 60\) GeV, they are 0.2% on average. This is more than twice as precise as the previous calibration. The new energy calibration is validated using \(J/\psi\to ee\) and radiative \(Z\)-boson decays.
###### Contents
* 1 Introduction
* 2 Electron and photon reconstruction with the ATLAS detector
* 2.1 The ATLAS detector
* 2.2 Energy measurement, electron and photon reconstruction and identification
* 3 Collision data and simulation
* 3.1 Dataset
* 3.2 Simulation samples
* 3.3 Passive material model
* 4 Overview of the calibration procedure
* 5 Effects on the uniformity and stability of the energy response
* 5.1 Uniformity
* 5.2 Stability
* 5.3 ADC non-linearity correction
* 5.4 Energy response in high and medium gain
* 6 Intercalibration of the EM calorimeter layers
* 6.1 Presampler energy scale
* 6.2 Intercalibration of the first and second calorimeter layers
* 7 Determination of the energy scale and resolution with \(Z\to ee\) events
* 8 Photon-specific calibration
* 8.1 Modelling of the photon reconstruction classification
* 8.2 Out-of-cluster energy leakage mis-modelling
* 9 Electron and photon energy scale uncertainties
* 10 Energy linearity and constraints on the calibration uncertainties
* 10.1 Energy linearity measurement
* 10.2 Constraints on the calibration systematic uncertainties
* 11 Calibration cross-checks
* 11.1 Checks using \(J/\psi\to ee\) events
* 11.2 Checks using \(Z\to\ell\ell\gamma\) events
* 12 Conclusion
## 1 Introduction
During the 2015-2018 data-taking period (Run 2) of the Large Hadron Collider at CERN, the ATLAS experiment accumulated a large sample of proton-proton collisions at \(\sqrt{s}=13\) TeV, corresponding to an integrated luminosity of 140 fb\({}^{-1}\). Such a sample provides significant opportunities for improvements in detector performance and calibration precision, further exploration of the Standard Model and searches for new physics. Optimal energy reconstruction and calibration of the electromagnetic calorimeter is necessary for all analyses involving electrons and photons, and especially for precise measurements of the masses and properties of the Higgs, \(W\) and \(Z\) bosons. The present paper describes the calorimeter energy calibration using the Run 2 data sample.
The calibration scheme comprises several steps: a simulation-based optimization of the energy measurement for electrons and photons, corrections for observed differences between data and simulation, calibration of the layers of the calorimeter and a final adjustment of the global energy scale using the abundant sample of electron-positron pairs from \(Z\)-boson decays. The resulting calibration corrections are validated using electrons from \(J/\psi\) decays and photons from radiative \(Z\)-boson decays.
The procedure applied to the full Run 2 dataset is similar to the one in Refs. [1; 2]. Compared to the previous publication, the methodology has been updated in order to reduce the impact of the dominant sources of uncertainty: the offline reconstruction of electrons and photons in the calorimeter moved from a clustering algorithm that produced fixed-size clusters to one producing variable-size 'superclusters' [3]; muons are now used for the presampler calibration, instead of electrons and photons; the layer intercalibration is now obtained by combining scales extracted using both electrons and muons; finally, dedicated data allowed further studies of the intercalibration of the high- and medium-gain electronics readouts. For the first time, the resulting calibration uncertainties are further constrained using a precise measurement of the energy dependence, or linearity, of the calorimeter response.
The calibration steps are discussed in the following, with special focus where improved methods have been utilized. Section 2 briefly describes the ATLAS detector and summarizes the electron and photon reconstruction algorithms applied in this analysis. Section 3 describes the data and simulated event samples used for the studies, as well as the present knowledge of the passive material in front of the calorimeter. Section 4 gives an overview of the calibration procedure and details the changes relative to the previous procedure. Section 5 enumerates corrections applied to the data to balance geometrical inhomogeneities, and to account for residual non-linearity in the electronics readout. This section also details an improved analysis of the transition between the high- and medium-gain electronics readouts. Section 6 describes the calibration of the calorimeter layers, namely constraints on the presampler energy scale and the relative responses of the first and second compartments. Section 7 combines the results of these studies to extract the final adjustment of the global energy scale. Calibration corrections specific to photons are described in Section 8. Calibration uncertainties at this stage are discussed in Section 9. The linearity of the electron energy scale is studied in Section 10. Finally, the validity of the energy calibration is established using independent samples of electron-positron pairs from \(J/\psi\) decays, and photons from radiative \(Z\)-boson decays, as discussed in Section 11.
## 2 Electron and photon reconstruction with the ATLAS detector
### The ATLAS detector
The ATLAS experiment [4] at the LHC is a multipurpose detector with cylindrical geometry1 covering almost \(4\pi\) in solid angle. Closest to the collision point, ATLAS is instrumented with an inner tracking detector (ID) covering the pseudorapidity range of \(|\eta|<2.5\) and consisting of a silicon pixel detector, including the insertable B-layer installed as a new innermost layer before Run 2 [5, 6], followed by a silicon strip detector (SCT), and a transition radiation tracker (TRT) in the region \(|\eta|<2.0\). The ID is surrounded by a superconducting solenoid producing an axial magnetic field of 2 T such that the ensemble enables efficient reconstruction of the tracks and momenta of charged particles, measurement of primary and secondary vertices, discrimination between electrons and pions, and reconstruction of photon conversions in ID material at radii up to 800 mm.
Footnote 1: ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the \(z\)-axis along the beam pipe. The \(x\)-axis points from the IP to the centre of the LHC ring, and the \(y\)-axis points upward. Cylindrical coordinates \((r,\phi)\) are used in the transverse plane, \(\phi\) being the azimuthal angle around the \(z\)-axis. The pseudorapidity is defined in terms of the polar angle \(\theta\) as \(\eta=-\ln\tan(\theta/2)\). The transverse energy is defined as \(E_{\rm T}=E/\cosh\eta\).
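For convenience, the coordinate conventions of this footnote translate into the following short helper functions (a trivial sketch; the 30-degree example is arbitrary).

```python
import math

def pseudorapidity(theta: float) -> float:
    """eta = -ln tan(theta / 2), with theta the polar angle in radians."""
    return -math.log(math.tan(theta / 2.0))

def transverse_energy(energy: float, eta: float) -> float:
    """E_T = E / cosh(eta)."""
    return energy / math.cosh(eta)

# Example: a 100 GeV deposit at theta = 30 degrees
eta = pseudorapidity(math.radians(30.0))
print(f"eta = {eta:.3f}, E_T = {transverse_energy(100.0, eta):.1f} GeV")
```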
The solenoid surrounding the ID is encompassed by an electromagnetic (EM) calorimeter, which consists of lead absorbers folded in an accordion geometry and immersed in a liquid-argon (LAr) bath. The EM calorimeter is divided into three regions, each contained in a separate cryostat: the barrel section (EMB) covering the central pseudorapidity region \(|\eta|<1.475\) and two endcaps (EMEC) covering the acceptance regions \(1.375<|\eta|<3.2\). The EMB and EMEC are segmented longitudinally into three (two) layers within \(|\eta|<2.5\) (\(2.5<|\eta|<3.2\)) with variable cell sizes, such that the direction of photon showers can be measured. The first layer (Layer 1) spans the regions \(|\eta|<1.45\) and \(1.5<|\eta|<2.4\), and has a thickness between three and five radiation lengths (\(X_{0}\)), depending on \(\eta\), and cells with a fine segmentation of \(0.003\times 0.1\) in \(\Delta\eta\times\Delta\phi\) in the EMB, providing excellent discrimination between single photon showers and the showers of two nearly collinear photons from high-momentum pion decay. The second layer (Layer 2), with a cell granularity of \(0.025\times 0.025\) in \(\Delta\eta\times\Delta\phi\), has a thickness between 17 \(X_{0}\) and 20 \(X_{0}\) and collects most of the energy deposited in the calorimeter by electron and photon showers. A third layer (Layer 3) with a thickness of 2-10 \(X_{0}\) and a coarser granularity of \(0.05\times 0.025\) in \(\Delta\eta\times\Delta\phi\) is used to collect the energy tails of very energetic showers. A thin presampler (PS) layer, placed in front of the accordion layers and covering the region \(|\eta|<1.8\), is used to correct for energy losses upstream of the calorimeter. This detector consists of a 1 cm (0.5 cm) active LAr layer in the barrel (endcap) region with a coarse granularity of \(0.025\times 0.1\) in \(\Delta\eta\times\Delta\phi\). Scintillators are placed between the barrel and endcap cryostats (\(1.37<|\eta|<1.52\)) to improve the energy measurement in this region.
The EM calorimeter is surrounded by an iron/scintillator hadron calorimeter in the region \(|\eta|<1.7\). In the endcap regions, copper/LAr calorimeters are used up to \(|\eta|=3.2\). Energy measurements at higher \(|\eta|\), up to \(|\eta|=4.9\), are made using a combination of forward copper/LAr and tungsten/LAr modules placed inside the endcap cryostats with the EMEC. Muons are accurately measured and identified up to \(|\eta|=2.7\) by a muon spectrometer located behind the calorimeters. It consists of three air-core superconducting toroids with eight coils each, precision tracking chambers, and fast chambers for triggering up to \(|\eta|=2.4\).
A two-level trigger system is used to select events. The first-level trigger is implemented in hardware and uses a subset of the detector information to accept events at a rate below 100 kHz. This is followed by a software-based high-level trigger that reduces the accepted event rate to 1 kHz on average depending on the data-taking conditions.
### Energy measurement, electron and photon reconstruction and identification
The current generated in an EM calorimeter cell by ionizing particles is collected, amplified, and shaped to reduce the impact of out-of-time showers, especially in high instantaneous luminosity conditions [7]. The signal is sampled at 40 MHz and digitized by a 12-bit analogue-to-digital converter (ADC) in three different electronics readout gains, high, medium and low, in the front-end boards. The signal observed in the sample matching the trigger time defines which gain to use for the readout. Four digitized samples (two before the sample with highest energy, and one after) are sent to the back-end electronics. The energy deposited in the calorimeter cell is estimated through an optimal filtering procedure applied to the four samples after pedestal subtraction [8], corrected by factors describing the conversion from ADC count to current and from current to energy. The pedestal, ADC-to-current conversion and signal shape of all calorimeter cells are derived from specific electronics calibration data. The pile-up dependence of the resulting energy value is corrected for using the measured instantaneous luminosity for the considered bunch crossings.
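A minimal sketch of the amplitude reconstruction by optimal filtering is given below; the coefficients, pedestal and ADC-to-energy factor are placeholders, not real LAr calibration constants.

```python
import numpy as np

def optimal_filter_amplitude(samples, pedestal, coeffs):
    """Amplitude estimate from digitized samples via optimal filtering:
    A = sum_i a_i * (s_i - pedestal). The coefficients a_i are derived
    from the known pulse shape and noise autocorrelation; the values
    below are illustrative only."""
    s = np.asarray(samples, dtype=float) - pedestal
    return float(np.dot(coeffs, s))

# Four samples around the peak (ADC counts), placeholder calibration:
samples = [1080.0, 1650.0, 1420.0, 1010.0]
pedestal = 1000.0
a = [-0.10, 0.85, 0.35, -0.05]   # illustrative OF coefficients
adc_to_mev = 9.0                 # illustrative ADC-to-energy factor
energy = optimal_filter_amplitude(samples, pedestal, a) * adc_to_mev
print(f"cell energy ~ {energy:.0f} MeV")
```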
Electrons and photons are reconstructed from energy deposits in the cells, using a dynamic, variable cluster-size algorithm to form superclusters [3, 9], allowing the recovery of energy from bremsstrahlung photons or from electrons from photon conversions. In this method an electron candidate is identified as a supercluster matching a track reconstructed in the ID. If a match is found, the track is re-fitted to account for bremsstrahlung. Superclusters without a matching track in the ID define unconverted-photon candidates. Converted-photon candidates are defined as a cluster matching a track that originates from a conversion vertex. The fraction of photons that convert varies from 20% in the central region to 65% in the endcaps.
Selection criteria are applied after reconstruction to identify genuine electrons while rejecting a large fraction of fake electrons. Depending on the desired background rejection factor and the specific needs of each analysis, four operating points, called Very Loose, Loose, Medium and Tight, are optimized for electrons in bins of \(\eta\) and \(E_{\mathrm{T}}\). The criteria were chosen by using a likelihood discriminant based on a number of track and cluster properties for which probability distributions are derived from electron and pion candidates in data. The identification efficiency of the selection is measured using electrons from \(Z\)-boson and \(J/\psi\) decays. Descriptions of the methods, the used samples and the obtained results are given in Refs. [3, 10, 11].
Prompt photons are identified using two selection criteria, Loose and Tight, which are based on the EM shower shapes. The working points are defined in bins of \(|\eta|\) and, in the case of the Tight selection, also in bins of \(E_{\mathrm{T}}\). The Loose criterion is independent of the conversion status of the photon. The Tight selection uses shower shape information from the first calorimeter layer and it is optimized separately for the two cases, accounting for the opening angle of the \(e^{+}e^{-}\) pair in the magnetic field, which may impact the response for converted photons. The identification efficiency is measured using distinct data samples: inclusive photon production, photons from radiative \(Z\)-boson decays, and electrons from \(Z\)-boson decays after modifying their shower shapes to resemble those of photons. The corresponding analyses are described in detail in Refs. [3, 11, 12].
## 3 Collision data and simulation
### Dataset
The analyses described in this paper use the full \(pp\) collision dataset recorded by ATLAS between 2015 and 2018 with the LHC operating at a centre-of-mass energy of \(\sqrt{s}=13\) TeV and 25 ns bunch spacing. The sample corresponds to an integrated luminosity of 140 fb\({}^{-1}\) after quality cuts [13, 14]; the mean number of interactions per bunch crossing, \(\langle\mu\rangle\), was on average 13, 25 and 37 for the 2015, 2016 and 2017-2018 data, respectively. Special samples, called 'low-\(\mu\)' samples in the following, were recorded in 2017 and 2018 at low instantaneous luminosity, with \(\langle\mu\rangle\sim 2\); after applying data-quality requirements, the corresponding integrated luminosity amounts to 340 pb\({}^{-1}\).
The measurements of the electromagnetic energy response use a large sample of \(Z\to ee\) events selected with single-electron and dielectron triggers [15]. The dielectron high-level triggers use a transverse energy threshold ranging from 12 GeV (2015) to 17 or 24 GeV (2016-2018) and a Loose (2015) or Very Loose (2016-2018) identification criterion. The single-electron high-level trigger has a transverse energy threshold ranging from 24 GeV in 2015 and most of 2016 to 26 GeV at the end of 2016 and during 2017 and 2018; it applies Tight identification and loose track-based isolation criteria [10]. The offline selection for the energy calibration measurement requires two electrons satisfying Medium identification, loose isolation and \(E_{\mathrm{T}}>27\) GeV, resulting in \(\sim\)57 million \(Z\to ee\) candidate events.
A sample of \(J/\psi\to ee\) events with at least two electron candidates with \(E_{\mathrm{T}}>5\) GeV and \(|\eta|<2.4\) is used to validate the electron energy scale at low \(E_{\mathrm{T}}\). It was collected using dedicated prescaled dielectron triggers with asymmetric \(E_{\mathrm{T}}\) thresholds ranging from 4 to 14 GeV. The sample contains \(\sim\)260 000 events.
Samples of \(Z\to\ell\ell\gamma\) events (\(\ell=e,\mu\)), used to validate the photon energy scale, were selected with the same triggers as for the \(Z\to ee\) sample for the electron channel and with single-muon or dimuon triggers [16] for the muon channel. The high-level dimuon (single-muon) trigger's transverse momentum threshold was 14 (26) GeV; a loose track-based isolation criterion was applied in the high-level single-muon trigger. The \(\mu\mu\gamma\) (\(ee\gamma\)) samples, after requiring two muons (electrons) with Medium identification [17], transverse momentum \(p_{\mathrm{T}}>15\) GeV (18 GeV) and one tightly identified and loosely isolated photon with \(E_{\mathrm{T}}>15\) GeV, contain \(\sim\)210 000 (\(\sim\)100 000) events.
### Simulation samples
Large Monte Carlo (MC) samples of \(Z\to\ell\ell\) and \(W\to\ell\nu\) events were simulated at next-to-leading order (NLO) in QCD using Powheg[18] and the Pythia 8[19] parton shower model. The CT10 [20] parton distribution function (PDF) set was used in the matrix element calculation. The AZNLO set of tuned parameters [21] and the CTEQ6L1 [22] PDF set were used in the modelling of non-perturbative effects. Photos++ 3.52 [23] was used for QED emissions from electroweak vertices and charged leptons.
Both non-prompt (originating from \(b\)-hadron decays) and prompt (not originating from \(b\)-hadron decays) \(J/\psi\to ee\) samples were generated using Pythia 8. The A14 set of tuned parameters [24] was used together with the CTEQ6L1 PDF set, and EvtGen 1.2.0 [25] was used to model \(b\)- and \(c\)-hadron decays.
Samples of \(Z\to\ell\ell\gamma\) events with photon \(E_{\mathrm{T}}>10\) GeV were generated with Sherpa 2.2.4 [26] using QCD leading-order matrix elements with up to three additional partons in the final state. The NNPDF3.0nnlo
PDF set was used in conjunction with the dedicated parton shower tuning developed by the Sherpa authors.
The energy resolution of the new reconstruction algorithm was optimized using samples of 40 million single-electron and single-photon events simulated without pile-up. Their transverse energy distribution covers the range from 1 GeV to 3 TeV. Smaller samples with a flat \(\langle\mu\rangle\) spectrum between 0 and 60 were also simulated to assess the performance as a function of \(\langle\mu\rangle\).
An extensive software suite [27] is used in data simulation, in the reconstruction and analysis of real and simulated data, in detector operations, and in the trigger and data acquisition systems of the experiment. The generated events were processed through the full ATLAS detector simulation [28] based on Geant4[29]. The MC events were simulated with additional interactions in the same or neighbouring bunch crossings to match the pile-up conditions during LHC operations. The overlaid \(pp\) collisions were generated with the soft QCD processes of Pythia 8 using the A3 set of tuned parameters [30] and the NNPDF2.3lo PDF [31]. Although this set of tuned parameters improves the modelling of minimum-bias data relative to the A2 set [32] used previously, it overestimates the hadronic activity measured using charged-particle tracks by roughly 3%. Simulated events were weighted to reproduce the distribution of the average number of interactions per bunch crossing in data, scaled down by a factor 1.03.
### Passive material model
Measurements of electron and photon energies are affected by the passive material in front of the EM calorimeter. The simulation-based energy calibration accounts for this effect, but any differences between the simulated detector model and the actual detector produce discrepancies between the energy responses. For electrons of \(E_{\mathrm{T}}\approx 40\) GeV, such discrepancies are absorbed through the \(Z\)-based energy calibration, but biases remain that depend on the particle type (electron, unconverted or converted photon) and energy. The model used for the passive material and its associated uncertainties was derived from Run 1 and partial Run 2 data [1, 10], and is summarized briefly below.
The model of the ID is based on measurements performed during its construction [4], leading to a 5% uncertainty in the amount of material. In Run 2, a 10% uncertainty is assigned to the material of the insertable B-layer, and 25% to that of the inner-detector service patch panel [33], affecting the high-\(|\eta|\) region.
The material between the ID and the LAr calorimeter was probed [1] using the longitudinal development of electron and photon showers. The ratio of the energies deposited in the first and second accordion layers was found to be sensitive to the total amount of material traversed by these particles before entering the calorimeter. After calibration of the layers' energy response, this ratio was measured with a relative precision of about 1% in the barrel, and about 2% in the endcaps, leading to a determination of the passive material with a typical precision of 5%\(X_{0}\) for \(|\eta|<1.3\), 10%\(X_{0}\) for \(1.6<|\eta|<2.1\), and up to 20%\(X_{0}\) for \(2.1<|\eta|<2.4\). The impact of the passive material uncertainties on the energy measurement was parameterized using simulation as a function of the particle type, \(E_{\mathrm{T}}\), and \(|\eta|\). The corresponding calibration uncertainties mostly affect electrons at low \(E_{\mathrm{T}}\) and unconverted photons, and typically reach 1%-2%. No measurement was performed in the barrel-endcap transition regions, so these regions have larger uncertainties.
The resulting material model is presented in Figure 1(a), which summarizes the passive material between the interaction point and the calorimeter in the nominal simulation. The measured passive material is compared with the simulation and its uncertainties in Figure 1(b).
## 4 Overview of the calibration procedure
The different steps in the procedure used to calibrate the energy response for electrons and photons described in this paper are illustrated in Figure 2, and summarized below.
The energy of an electron or photon candidate is built from the energy of a cluster of cells in the electromagnetic calorimeter. The measurement of electron and photon energies is optimized using a simulation-based boosted-decision-tree regression algorithm, combining energy deposits belonging to the reconstructed supercluster, in the presampler and in the calorimeter layers. The optimization is performed separately for electrons, converted photons and unconverted photons, taking into account the particle position. It is the same as the one used in Ref. [3] and the methodology is discussed in detail in Ref. [2].
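A toy version of such a regression is sketched below, with scikit-learn in place of the actual BDT implementation; the features, layer fractions and smearings are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(1)

# Toy inputs mimicking the regression features: presampler and accordion
# layer energies plus the cluster position (all illustrative).
n = 20_000
e_true = rng.uniform(10.0, 200.0, n)       # true energy [GeV]
f0, f1, f2 = 0.02, 0.25, 0.70              # toy layer fractions
e_ps = f0 * e_true * rng.normal(1.0, 0.20, n)
e_l1 = f1 * e_true * rng.normal(1.0, 0.05, n)
e_l2 = f2 * e_true * rng.normal(1.0, 0.03, n)
eta = rng.uniform(-1.4, 1.4, n)

X = np.column_stack([e_ps, e_l1, e_l2, eta])
e_raw = e_ps + e_l1 + e_l2
# Regress the correction factor E_true / E_raw rather than E_true itself,
# which keeps the target close to unity across the energy range.
reg = HistGradientBoostingRegressor(max_iter=300)
reg.fit(X, e_true / e_raw)

e_calib = reg.predict(X) * e_raw
print(f"median response: {np.median(e_calib / e_true):.4f}")
```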
Since the EM calorimeter is segmented in depth, the longitudinal layers should be calibrated separately to provide a correct description of the calorimeter response as a function of \(E_{\mathrm{T}}\) (step 1). After these corrections, the simulation-based calibration is applied identically to the cluster energies reconstructed from collision data and simulated event samples (step 2).
Figure 1: (a) Amount of material traversed by a particle, in units of radiation lengths \(X/X_{0}\), as a function of \(|\eta|\) in the nominal simulation. (b) Measured difference between the data and the nominal simulation of the detector material up to the first layer of the EM calorimeter. The data points are obtained using the longitudinal shower profile of electrons from \(Z\)-boson decays in partial Run 2 data, and have negligible statistical uncertainties. The blue band summarizes the material uncertainties, relative to the nominal simulation, as determined in Run 1 calibration studies. The green band includes additional uncertainties related to the introduction of new material into the detector for Run 2, and covers potential mis-modelling of the insertable B-layer and a modified inner-detector patch panel (PP0). At high \(|\eta|\), these uncertainties are dominated by a 50% uncertainty in the simulated PP0 material.

A set of additional corrections is applied to data to account for response variations not included in the simulation in specific detector regions, e.g. regions with non-optimal high voltage, azimuthal non-uniformities, or biases associated with the liquid-argon calorimeter's electronics calibration (step 3). The stability of the calorimeter response as a function of azimuth, time and pile-up is also studied.
A final adjustment of the calorimeter response is derived from samples of \(Z\to ee\) events, so that the peak of the \(Z\) resonance reconstructed in data coincides with that in the simulation. The response corrections are applied to the data. Using the same event samples, it is found that the resolution in data is slightly worse than that in simulation, and appropriate corrections are derived and applied to the simulation to match the data (steps 4, 5). The passive material model and the intercalibration corrections carry uncertainties that affect the energy calibration with a specific dependence on the particle \(E_{\mathrm{T}}\). Measuring the residual energy dependence of the energy scale thus allows further adjustments of the calibration model, and provides additional constraints on the associated uncertainties. High-precision measurements, such as those measuring the masses of the Higgs and \(W\) bosons, will profit significantly from these uncertainty reductions.
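In practice, an in-situ procedure of this kind amounts to per-\(\eta\)-bin scale factors \(\alpha\) applied to data and an effective constant term \(c\) added to the simulation; a minimal sketch follows, with purely illustrative values of \(\alpha\) and \(c\).

```python
import numpy as np

rng = np.random.default_rng(7)

def correct_data_energy(e_data, alpha):
    """Remove the measured scale offset alpha in a given eta bin:
    E_corr = E_data / (1 + alpha)."""
    return e_data / (1.0 + alpha)

def smear_mc_energy(e_mc, c):
    """Add an effective constant term c to the simulation so its
    resolution matches data: E' = E_mc * (1 + c * N(0, 1))."""
    return e_mc * (1.0 + c * rng.normal(0.0, 1.0, size=np.shape(e_mc)))

# Illustrative values only (real alpha and c are per-eta-bin measurements):
e_data = np.full(5, 45.0)   # electron energies [GeV]
e_mc = np.full(5, 45.0)
print(correct_data_energy(e_data, alpha=0.002))
print(smear_mc_energy(e_mc, c=0.01))
```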
The calibration factors extracted from \(Z\to ee\) events are assumed to reflect the intrinsic response of the calorimeter, and are thus applied identically to electrons and photons. Nevertheless, photon-specific corrections are needed to account for differences in the lateral development of electron and photon showers (step 6). Finally, the calibration chain is validated in data with low-\(E_{\mathrm{T}}\) electron candidates from \(J/\psi\to ee\) decays, and with photon candidates from radiative \(Z\)-boson decays (step 7).
## 5 Effects on the uniformity and stability of the energy response
This section discusses the stability of the calorimeter energy response as a function of azimuthal angle, time, and pile-up. The dependence of the energy reconstruction on the readout electronics (ADC calibration and readout gain) is also discussed. Not all effects are modelled in the simulation. Corrections are defined for the ADC calibration and the azimuthal non-uniformity, while a systematic uncertainty is assigned for the observed dependence of the energy measurement on the readout gain.
Figure 2: Schematic overview of the electron and photon energy calibration procedure in ATLAS.
### Uniformity
Gravity-induced mechanical deformations of the calorimeter cause variations in the size of the liquid-argon gaps between the absorbers as a function of azimuthal angle. The resulting energy response variations are at the level of 0.1%-0.2% in the barrel, and up to 1% in the endcaps. Energy corrections are derived from the modulations of the response as a function of \(\phi\), separately in six intervals of absolute pseudorapidity (0-0.6, 0.6-1.0, 1.0-1.37, 1.37-1.55, 1.55-1.82, 1.82-2.47). In each \(|\eta|\) interval, the relative response is defined from profiles of \(E_{\mathrm{T}}/\langle E_{\mathrm{T}}\rangle\) in \(Z\)-boson decays, where \(E_{\mathrm{T}}\) is the electron transverse energy at a given \(\phi\) value, and \(\langle E_{\mathrm{T}}\rangle\) is the average over \(\phi\).
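The derivation of the correction can be sketched as follows. This is a minimal illustration for a single \(|\eta|\) interval, assuming per-electron arrays of \(E_{\mathrm{T}}\) and \(\phi\) from \(Z\to ee\) candidates; the function name, interface and bin count are chosen for illustration only.

```python
import numpy as np

def phi_uniformity_correction(et, phi, n_phi_bins=64):
    """Derive a multiplicative energy correction from the phi modulation of
    the electron ET response in a single |eta| interval. The relative
    response is the profile of ET/<ET> versus phi; dividing it out leaves
    the phi-averaged response (approximately) unchanged."""
    et, phi = np.asarray(et, dtype=float), np.asarray(phi, dtype=float)
    bins = np.linspace(-np.pi, np.pi, n_phi_bins + 1)
    idx = np.clip(np.digitize(phi, bins) - 1, 0, n_phi_bins - 1)
    # relative response profile: <ET>(phi) / <ET>
    response = np.array([et[idx == b].mean() for b in range(n_phi_bins)]) / et.mean()
    return bins, 1.0 / response

# usage sketch:
# bins, corr = phi_uniformity_correction(et, phi)
# et_corrected = et * corr[np.clip(np.digitize(phi, bins) - 1, 0, len(corr) - 1)]
```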
The effect of this correction is illustrated in Figure 3. While the \(\phi\)-averaged energy response is unchanged by construction, the better uniformity is expected to yield a small improvement in the overall energy resolution. In practice, the resolution's constant term, discussed in Section 7, is reduced by 5%-10% in the endcaps, where the correction is most significant.
### Stability
The stability of the calorimeter response is studied using the reconstructed peak position of the dielectron mass distribution, \(m_{ee}/\langle m_{ee}\rangle\), in \(Z\to ee\) candidate events. This is illustrated in Figure 4(a) for the data taken between 2015 and 2018. Stability at the level of 0.05% is observed over the full data-taking period.
Figure 4(b) shows \(m_{ee}/\langle m_{ee}\rangle\) as a function of the average number of interactions per bunch crossing for the data collected between 2015 and 2018. The bipolar shaping of the calorimeter signals [7] protects the energy measurement against pile-up fluctuations, and after correcting for bunch-to-bunch variations of the instantaneous luminosity, the residual dependence of the energy scale on \(\langle\mu\rangle\) is below 0.1%. The small increase in energy observed in data is consistent with the MC expectation over most of the \(\langle\mu\rangle\) range and is related to the new dynamical clustering used for the energy measurement [3]. The high-\(\langle\mu\rangle\) region mostly reflects data taken in 2017, and the discrepancies observed in this region justify the extraction of dedicated energy-scale corrections for each data-taking year. More details are given in Section 7.
The stability of the response as a function of the number of reconstructed collision vertices (\(N_{\mathrm{vtx}}\)) [34] is shown in Figure 4(c). Classifying events according to \(N_{\mathrm{vtx}}\), related to the number of interactions in the specific bunch crossing, biases the pile-up activity of colliding bunches relative to the average. In this case the compensation of the pile-up contributions to the reconstructed energy by the bipolar shaping becomes imperfect, giving rise to the observed slope. The description of this effect in the simulation is accurate to 0.1% for \(\left<\mu\right><25\), rising to 0.5% at high \(N_{\mathrm{vtx}}\). The larger discrepancy at high \(N_{\mathrm{vtx}}\) reflects the effect observed for \(\left<\mu\right>\).

Figure 3: Electron energy response as a function of electron azimuth, in the endcaps, before (red dots) and after (black dots) the \(\phi\)-uniformity energy correction. This correction was computed from 2015+2016 data and applied to the full Run 2 sample. The observed non-zero residuals have no impact on the final result.
### ADC non-linearity correction
The energy reconstruction in a LAr calorimeter cell is based on a linear conversion from ADC counts to current, and an additional factor converting current into energy. In practice, the energy response of each cell is determined during dedicated electronics calibration runs, parameterizing the relation between the injected current and the measured ADC count using a linear function. The fits to the calibration data show non-zero residuals, caused by intrinsic non-linear behaviour of the electronics. This is illustrated in Figure 5 for an example cell and a calibration run performed in medium gain (MG). The residuals deviate from a linear fit by about 0.3%; comparable deviations are observed in high-gain (HG) calibration runs.
Figure 4: Relative variation of the peak position of the reconstructed dielectron mass distribution in \(Z\to ee\) events as a function of (a) time, (b) the average number of pile-up interactions, \(\left<\mu\right>\), and (c) the number of reconstructed collision vertices, \(N_{\mathrm{vtx}}\).
The implications of this non-linearity are twofold. First, the residuals of the ADC-to-current correction contribute to the non-linearity of measurements of electron and photon energies, and must therefore be accounted for in view of the linearity analysis in Section 10. A correction is implemented, based on fits of a fifth-order polynomial to the residuals, separately for each cell of the calorimeter, as exemplified in Figure 5. The parameterized residual is then added to the cell energy estimate. The impact on electron or photon cluster energies is estimated by repeating this procedure for all cells belonging to the cluster, and computing the modified cluster energy. The final correction is built such that it does not modify the cluster energies of electrons at \(E_{\mathrm{T}}=40\) GeV, where the global energy scale is set (see Section 7). Results of this procedure for electrons, unconverted photons and converted photons are illustrated in Figure 6. Cluster energies are increased by about 0.4% at low \(E_{\mathrm{T}}\), and decreased by about 0.2% at high \(E_{\mathrm{T}}\), with a moderate dependence on particle type and pseudorapidity. A relative uncertainty of 30% is assigned to this correction.
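The per-cell construction of this correction can be sketched as follows; this is a minimal illustration assuming arrays of injected currents (in DAC units) and measured ADC counts from an electronics calibration run, with names and interfaces chosen for illustration.

```python
import numpy as np

def adc_residual_correction(dac_injected, adc_measured):
    """Per-cell sketch of the ADC non-linearity correction: fit the linear
    ADC-to-current relation from calibration-run data, then parameterize the
    fit residuals (in DAC units) with a fifth-order polynomial in ADC counts."""
    dac_injected = np.asarray(dac_injected, dtype=float)
    adc_measured = np.asarray(adc_measured, dtype=float)
    slope, intercept = np.polyfit(adc_measured, dac_injected, 1)
    residuals = dac_injected - (slope * adc_measured + intercept)
    poly = np.polynomial.Polynomial.fit(adc_measured, residuals, deg=5)

    def corrected_current(adc):
        # linear conversion plus the parameterized residual added back
        return slope * adc + intercept + poly(adc)

    return corrected_current
```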
In addition, non-linearities in the ADC-to-current conversion affect comparisons of the energy response in different readout gains. For a given energy, the measured ADC count and the residuals of the current conversion depend on the gain in which the cell is recorded, so that a direct comparison of the energy response in HG and MG is difficult to interpret. The ADC non-linearity correction described above removes this bias. The energy measurement in high and medium gains can then be compared directly, as described in the next section.
### Energy response in high and medium gain
Non-linearity in the cell energy measurement introduces a dependence of the energy response on the energy of the reconstructed particle. As discussed above, the linearity of the readout electronics is better than a few per mille in each of the three gains used to digitize the calorimeter signals. In the standard configuration, the HG readout is used for the majority of cells in clusters from electrons in \(Z\to ee\) decays, especially in the barrel, where the transition to medium gain occurs for a cell energy of about 25 GeV, for cells in the second layer. The relative calibration of the different readout gains is assumed to be perfect in the simulation, but less well understood in data [1, 2]. To study the accuracy of this relative calibration, data recorded in 2017 and 2018 under special conditions are used, with an integrated luminosity of about 0.3 fb\({}^{-1}\). For these data, the threshold to switch from HG to MG readout for the cells in the second layer was lowered, typically by a factor of three, such that almost all electrons from \(Z\)-boson decays have at least the highest-energy cell in Layer 2 recorded in MG.

Figure 5: Output of a calibration run performed in medium gain, for an example cell in the second layer of the EM calorimeter. The relation between injected current, in DAC (digital-to-analogue converter) units, and ADC counts is assumed to be linear (top panel). The evolution of the residuals (in DAC units) between the measurements and the linear fit as a function of ADC counts (bottom panel) is parameterized with a fifth-order polynomial and used to correct the cell energy.
The reconstructed dielectron invariant mass distribution in the special runs is compared with the distribution observed in 1.5 fb\({}^{-1}\) of data with the standard gain transition, and recorded around the same time. The procedure described in Section 5.3 is used to calibrate the ADC-to-current conversion function in both gains. An example comparison is shown in Figure 7(a) for electrons with \(|\eta|<0.8\). The invariant mass distributions built in standard runs are parameterized with a sum of three Gaussian functions in \((i,j)\) categories, where \(i\) and \(j\) label the five \(|\eta|\) regions\({}^{2}\) where the electrons of the pair are reconstructed. The same shapes are used to describe the corresponding distributions for the data taken in the special runs, but with the means and widths of the Gaussian functions modified by a multiplicative factor \(\sqrt{(1+\alpha_{\mathrm{G},i})(1+\alpha_{\mathrm{G},j})}\), where \(\alpha_{\mathrm{G},i}\) is the energy scale difference between the two datasets in the \(|\eta|\) region \(i\). These \(\alpha_{\mathrm{G},i}\) parameters are then extracted from a simultaneous fit of all \((i,j)\) regions. The measured values of \(\alpha_{\mathrm{G}}\) are shown in Figure 7(b) as a function of \(|\eta|\). The uncertainties shown are statistical only; systematic uncertainties are negligible for this study. For perfectly intercalibrated HG and MG readouts, \(\alpha_{\mathrm{G}}=0\). Instead, small but significant differences are observed, especially for \(0.8<|\eta|<1.37\). Compared to the previous analysis of this effect in Ref. [2], \(\alpha_{\mathrm{G}}\) is found to be smaller by about a factor of two; this change is mostly driven by the improved ADC calibration using the residual correction discussed in Section 5.3.
Footnote 2: The definition of these regions is motivated by the calorimeter geometry.
The relative difference between the energy responses in HG and MG can be written as a function of the particle type, \(E_{\mathrm{T}}\) and \(\eta\) as follows:
\[\frac{\Delta E}{E}=\alpha_{\mathrm{G}}(\eta)\cdot\frac{1}{\delta_{Z}(\eta)} \cdot\delta_{\mathrm{G}}^{e,\gamma}(\eta,E_{\mathrm{T}}), \tag{1}\]
where

* \(\delta_{Z}(\eta)\) quantifies the fractional change in energy for electrons from \(Z\)-boson decays between the data with lower and standard thresholds for a given change in the energy recorded in MG. This sensitivity factor is about 0.3 to 0.4 (0.2 to 0.25) in the barrel (endcap) calorimeter. It takes into account the fact that only a fraction of the electron energy is recorded in MG in the special-settings data taking, while in data recorded with normal settings some cells in the second layer can be read out in MG. This is particularly true for the endcaps, where the electron energies are larger.

* \(\delta_{\mathrm{G}}^{e,\gamma}(\eta,E_{\mathrm{T}})\) quantifies, for a given particle, the fractional change in total energy for a given change in the energy recorded in MG, for standard gain thresholds. It is estimated using simulated single-particle samples, and is close to zero up to \(E_{\mathrm{T}}=40\) GeV for electrons, and \(E_{\mathrm{T}}=60\) GeV for photons, and rises to reach an asymptotic value of about 0.8 for \(E_{\mathrm{T}}\) above a few hundred GeV.

Figure 6: Relative cluster energy correction as a function of \(E_{\mathrm{T}}\), for (a) \(|\eta|<0.8\) and (b) \(1.8<|\eta|<2.4\) for electrons (black), unconverted photons (red) and converted photons (blue). For each particle type, the size of the envelope reflects the dependence of the correction on \(\eta\), at a given \(E_{\mathrm{T}}\).
The gain dependence of the energy response, \(\Delta E/E\), is considered as a systematic uncertainty in the energy measurement; the size of the uncertainty is defined as the full size of the observed dependence. The effect increases with energy, and typically reaches 0.1% in the barrel, and 0.4% in the endcaps, with low dependence on the particle type.
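As a numerical illustration of Eq. (1), the following uses assumed barrel-like magnitudes consistent with the values quoted above; these are illustrative inputs, not measured results.

```python
# Illustrative evaluation of Eq. (1) with assumed barrel-like values:
# alpha_G ~ 0.05%, delta_Z ~ 0.35, and delta_G ~ 0.8 for a high-ET electron.
alpha_G, delta_Z, delta_G = 5e-4, 0.35, 0.8
dE_over_E = alpha_G / delta_Z * delta_G
print(f"Delta E / E = {dE_over_E:.2%}")  # ~0.11%, the per-mille level quoted for the barrel
```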
## 6 Intercalibration of the EM calorimeter layers
### Presampler energy scale
The presampler energy scale \(\alpha_{\mathrm{PS}}\) is defined as the ratio of the presampler energies in data and simulation. The analysis presented in Refs. [1; 2] used samples of electrons and photons to measure \(\alpha_{\mathrm{PS}}\). However, the interplay between \(\alpha_{\mathrm{PS}}\) and the amount of material in front of the presampler produces a strong correlation between the corresponding uncertainties. In this paper, the determination of \(\alpha_{\mathrm{PS}}\) is performed using muon candidates selected in the low-\(\mu\) data sample. The muon energy deposits are insensitive to the material in front of the presampler, and therefore provide a direct measurement of the presampler energies. A high-purity sample of \(W\to\mu\nu\) and \(Z\to\mu\mu\) events is selected by requiring one or two isolated muons, with additional criteria for the transverse energy and mass, and in the \(Z\)-boson case, the dimuon mass.
Figure 7: (a) Example dielectron invariant mass distributions, one from events collected in special runs and the other from standard runs, and (b) corresponding energy scale factors and their statistical uncertainties as a function of \(|\eta|\).
Although the signal-to-noise ratio for a typical energy deposit from a muon in a presampler cell is rather low, the near absence of pile-up in low-\(\mu\) data ensures a significant measurement, provided the sample is large enough. Figure 8 shows examples of muon energy deposits in data and MC samples. The noise distributions, as measured by the energy deposits in the neighbouring cells located at \(\Delta\eta=\pm 0.025\) from the crossed cells, are also shown for the simulation. Only the peak position in data and simulation matters for this study; differences in the width, as observed in the endcaps, have no impact. The mean energy deposits measured in a cell are \(\sim 45\), \(100\) and \(75\) MeV at \(\eta\sim 0\), \(1.2\) and \(1.6\), respectively.
The mean values of these distributions are extracted within the interval \([-1.6,1.6]\) GeV for data and simulation, and \(\alpha_{\rm PS}\) is determined from their ratio. The measurement is performed in nine pseudorapidity bins corresponding to the size of the presampler modules (\(\Delta\eta=0.2\) up to \(|\eta|=1.4\), \(\Delta\eta=0.12\) in the last barrel module and \(\Delta\eta=0.3\) in the endcaps).
Systematic uncertainties in the measurements of \(\alpha_{\rm PS}\) are estimated as the envelope of the difference between the nominal \(\alpha_{\rm PS}\) values and alternative measurements obtained by 1) varying the muon candidate selection, i.e. selecting candidates crossing a cell close to its centre; 2) varying the presampler energy definition, i.e. using a cluster of three cells in \(\eta\) instead of a single cell; 3) subtracting a pedestal energy estimated from neighbouring cells in \(\phi\); and 4) varying the interval used to determine the mean of the \(E_{\rm PS}\) distributions. These changes contribute equally to the total systematic uncertainty. At a given \(|\eta|\), the values of \(\alpha_{\rm PS}\) are found to be compatible for positive and negative pseudorapidity, and are averaged. The results are shown in Figure 9. The statistical uncertainties are at the percent level, while the systematic uncertainties vary between \(2\%\) and \(4\%\) depending on \(|\eta|\). The results obtained here using muons are compatible with the earlier electron- and photon-based measurements.
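The extraction and its envelope systematic can be sketched as follows, assuming arrays of per-cell presampler energies (in GeV) for data and simulation; names and interfaces are illustrative.

```python
import numpy as np

def alpha_ps(e_ps_data, e_ps_mc, window=(-1.6, 1.6)):
    """Sketch of the presampler scale: ratio of mean muon energy deposits in
    data and simulation, with means computed inside a fixed window (GeV) to
    limit sensitivity to the tails of the distributions."""
    def windowed_mean(e):
        e = np.asarray(e, dtype=float)
        return e[(e > window[0]) & (e < window[1])].mean()
    return windowed_mean(e_ps_data) / windowed_mean(e_ps_mc)

def envelope_systematic(nominal, alternatives):
    # envelope of the differences to the alternative measurements
    # (selection, clustering, pedestal subtraction, averaging window)
    return max(abs(a - nominal) for a in alternatives)
```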
### Intercalibration of the first and second calorimeter layers
The intercalibration of the first and second calorimeter layers is paramount in controlling the linearity of the electron and photon responses. Unlike in previous iterations, both the muon and electron candidates from \(Z\)-boson decays are used here to measure the relative calibration of the two layers.
Figure 8: Distributions of muon energy deposits in presampler cells traversed by muon tracks, in two \(|\eta|\) regions: (a) barrel and (b) endcap. The corresponding distributions for the neighbouring cells in \(\eta\) are shown for the simulation by the dashed histograms.
#### 6.2.1 Intercalibration using muons
The measurements using muon candidates from \(Z\)-boson decays closely follow the analysis in Ref. [2]. The intercalibration factor is defined as \(\alpha_{12}=(\langle E_{1}\rangle/\langle E_{2}\rangle)^{\rm data}/(\langle E_{1}\rangle/\langle E_{2}\rangle)^{\rm MC}\), where \(\langle E_{i}\rangle\) is the mean value of the distribution of the energy deposited in layer \(i\). Two methods differing in the estimation of \(\langle E_{i}\rangle\) are considered: the _most probable value_ method extracts it from a fit of a Landau distribution convolved with a noise template. The _truncated mean_ (TM) method uses the mean of the distribution computed over a restricted window to minimize the sensitivity to the tails. In both methods, these quantities are measured in intervals of \(\langle\mu\rangle\) and the final \(\alpha_{12}\) measurement is obtained by extrapolating linearly to \(\langle\mu\rangle=0\). The extrapolation parameters are determined from the measurements in the interval \(\langle\mu\rangle\in[10,40]\). In the TM method, the extrapolation was validated by comparing the nominal results with the \(\alpha_{12}\) value estimated with the low-\(\mu\) data sample. This is illustrated for \(0.3<|\eta|<0.4\) in Figure 10(a); a comparison of the measurements using the standard data sample and the low-\(\mu\) one is shown in Figure 10(b). For both methods, calibration uncertainties are derived by varying the muon selection cuts and the fitting or averaging ranges, as described in Ref. [2]. Agreement between the two measurements is excellent and well within the total uncertainty of the nominal result. The total uncertainty in the muon measurements varies from about 0.7% in the barrel to about 2% in the endcaps.
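The pile-up extrapolation of the TM method can be sketched as follows, assuming per-\(\langle\mu\rangle\)-bin arrays of the layer-energy ratios in data and simulation; the interface is illustrative.

```python
import numpy as np

def alpha12_at_zero_mu(mu, ratio_data, ratio_mc, fit_range=(10.0, 40.0)):
    """Sketch of the TM-method extrapolation: compute
    alpha_12 = (<E1>/<E2>)_data / (<E1>/<E2>)_MC per <mu> bin, fit a straight
    line over the stated <mu> interval, and evaluate it at <mu> = 0."""
    mu = np.asarray(mu, dtype=float)
    alpha = np.asarray(ratio_data, dtype=float) / np.asarray(ratio_mc, dtype=float)
    sel = (mu >= fit_range[0]) & (mu <= fit_range[1])
    slope, intercept = np.polyfit(mu[sel], alpha[sel], 1)
    return intercept  # value of the linear fit at <mu> = 0
```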
#### 6.2.2 Intercalibration using electrons
In the previous analyses [1; 2], electron probes were used only as a cross-check of the layer intercalibration performed with muons. From the few discrepancies between electron and muon correction factors observed in some pseudorapidity regions, systematic uncertainties were derived and applied to the muon result. The method first described in Ref. [1] was revisited with the full Run 2 data sample in order to better constrain the layer intercalibration using electrons, and to combine the electron-based measurement with the result obtained from muons.
The dependence of the energy response on the depth of the EM shower allows a direct extraction of \(\alpha_{12}\). This parameter is determined in 25 bins of \(|\eta|\in[0,2.4]\), using the ratio of the measured energy \(E\) and momentum \(p\) of electron candidates, \(E/p\), and the distribution of the invariant mass of electron pairs, \(m_{ee}\). The \(E/p\) distribution is affected by bremsstrahlung from interactions between the electrons and the material in front of the calorimeter, and is modelled using a Crystal Ball distribution [35] in the range \(0.9<E/p<1.3\). For this method, only the region \(|\eta|<1.37\) is considered since the ID momentum resolution deteriorates rapidly at large \(|\eta|\). The invariant mass distribution is fitted in each (\(|\eta|,E_{1}/E_{2}\)) bin\({}^{3}\), in the range \([80,100]\) GeV, using the convolution of a Crystal Ball function and a Breit-Wigner distribution, while the small background in data is modelled by a second-order Chebyshev polynomial. In both the \(E/p\) and \(m_{ee}\) methods, the most probable value of the fitted Crystal Ball function is used as the estimator. Examples of fits to the dielectron invariant mass and the \(E/p\) distributions are illustrated in Figure 11.

Figure 9: Measured presampler energy scale \(\alpha_{\rm PS}\) as a function of \(|\eta|\). The error bars represent the statistical uncertainty and the yellow band shows the total uncertainty.
Footnote 3: Each \(m_{ee}\) value is considered in the two relevant (\(|\eta|,E_{1}/E_{2}\)) bins, where \(E_{i}\) is the cluster energy contained in Layer \(i\).
In a given \(|\eta|\) bin, the dependence of the \(E/p\) and \(m_{ee}\) estimators on \(E_{1}/E_{2}\) is determined in data and in simulation. For a perfect layer intercalibration, the ratio of the estimator in data and the nominal simulation is expected to be constant, and any mis-calibration induces a slope. A constant data-to-simulation ratio is recovered by rescaling \(E_{1}\) in data and recomputing \(m_{ee}\) and \(E/p\) accordingly, adjusting \(\alpha_{12}\) to minimize the deviation of the ratio from a horizontal line. Figures 12(a) and 12(b) show the dependence of \(m_{ee}\) and \(E/p\) on \(E_{1}/E_{2}\) for data and simulation, for \(|\eta|\in[0.1,0.2]\) (\(m_{ee}\)) and \(|\eta|\in[1.0,1.1]\) (\(E/p\)). Data-to-simulation ratios are shown in the bottom panels. For \(|\eta|\in[0.1,0.2]\), the uncorrected ratio is compatible with a constant, indicating a good layer intercalibration, while for \(|\eta|\in[1.0,1.1]\), a constant ratio is recovered after rescaling \(E_{1}\) by \(0.97\). The measurements of \(\alpha_{12}\) using \(m_{ee}\) or \(E/p\) are compatible in the whole \(|\eta|\) range.
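The flattening procedure can be sketched as follows; `estimator_data_rescaled` is an assumed helper that recomputes the per-bin data estimator (\(m_{ee}\) or \(E/p\)) after scaling \(E_{1}\), and the search bounds are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_e1_scale(estimator_data_rescaled, estimator_mc):
    """Sketch of the electron-based layer intercalibration: find the E1
    rescaling factor that makes the data/MC ratio of the estimator flat
    versus E1/E2. `estimator_data_rescaled(a)` returns the array of per-bin
    data estimator values after scaling E1 by a (assumed helper)."""
    estimator_mc = np.asarray(estimator_mc, dtype=float)

    def flatness(a):
        ratio = estimator_data_rescaled(a) / estimator_mc
        # deviation of the data/MC ratio from a horizontal line
        return np.sum((ratio - ratio.mean()) ** 2)

    return minimize_scalar(flatness, bounds=(0.9, 1.1), method="bounded").x
```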
The following sources of systematic uncertainty are considered:
* The effect of uncertainties in the amount of material is estimated from simulations in which the passive material in the barrel cryostat, the SCT and the endcaps of the TRT is increased according to the uncertainties described in Section 3.3. The distorted MC samples are used as pseudo-data, and the deviation of \(\alpha_{12}\) from 1 defines the associated uncertainty. As can be seen in the ratio panels of Figure 12, the method is rather insensitive to passive material variations (the constant ratio of the distorted MC to nominal MC values in the range \(|\eta|<1.35\) is also observed in finer \(|\eta|\) bins). The uncertainty ranges from \(\sim\)0.5% at low \(|\eta|\) to \(\sim\)5% for \(|\eta|\in[1.4,1.8]\).

Figure 10: (a) Evolution of the truncated mean of the muon energy deposit distribution for \(0.3<|\eta|<0.4\), for the first (dots) and second (triangles) calorimeter layers, in data (open symbols) and in simulation (full symbols), as a function of the average number of pile-up interactions per bunch crossing \(\langle\mu\rangle\). The values obtained from the low-\(\mu\) samples are also shown, together with the values extracted from a MC sample without pile-up. The lines show the result of linear fits to the points for \(\langle\mu\rangle\in[10,40]\) and the dotted lines show the extrapolation to lower and higher \(\langle\mu\rangle\). (b) Comparison of the evolution with \(|\eta|\) of \(\alpha_{12}\) obtained from extrapolation at \(\langle\mu\rangle=0\), for the low-\(\mu\) and standard data samples, for the TM method. The yellow band shows the total uncertainty of the nominal results, obtained from the standard dataset. The error bars on the data points are statistical only.
* The intervals used to fit the energy response are modified to \([70,100]\) GeV or \([84,98]\) GeV for the \(m_{ee}\) method, and to \([0.95,1.5]\) for the \(E/p\) method. The difference between the \(\alpha_{12}\) values measured in the nominal and alternative intervals is used as an uncertainty.
* The uncertainty in the presampler energy scale also impacts the measurement. The corresponding uncertainty in \(\alpha_{12}\) can be as large as 1% at the end of the barrel for the \(E/p\) method.

Figure 11: Distributions of (a) the dielectron invariant mass and (b) electron \(E/p\) for electrons with \(0.1<|\eta|<0.2\) and \(0.1<E_{1}/E_{2}<0.2\). The fitted functions used to estimate the most probable value are superimposed.

Figure 12: Evolution of (a) \(m_{ee}\) and (b) \(E/p\) estimators as a function of \(E_{1}/E_{2}\) for the data (full dots), the data with a scaling of \(E_{1}\) (red squares), and the nominal simulation (open squares). The \(m_{ee}\) and \(E/p\) figures show values for electrons in the pseudorapidity bins \(0.1<|\eta|<0.2\) and \(1.0<|\eta|<1.1\), respectively. The bottom panels show the ratio to the nominal simulation. In addition, the evolution for a simulation with additional material is also shown. For this sample, the range \(|\eta|<1.35\) is used.
* Variations of the electron identification working point, the residual bias of the method, and the limited size of the \(Z\to ee\) sample contribute with much smaller uncertainties.
The precision of the \(\alpha_{12}\) measurement with electrons ranges from 0.7% to 2% in the barrel, and from 1.5% to 6.2% in the endcap calorimeters. In the barrel-endcap transition regions and in the first half of the endcap calorimeters, the distributions in data and simulation differ significantly, regardless of \(\alpha_{12}\), leading to poor convergence of the minimization procedure and increased uncertainties.
#### 6.2.3 Combination
The two muon measurements and the two electron measurements are combined using the BLUE prescription [36]. In each channel, the statistical correlation between the two methods is ignored, as systematic uncertainties dominate; the latter are found to be largely uncorrelated as a function of \(|\eta|\). First, the muon and electron measurements are combined separately. The results are shown as the blue and red open dots in Figure 13(a). The uncertainty in the combined muon measurement includes the systematic uncertainty induced by the extrapolation of this calibration to electron and photon showers [1]. This uncertainty accounts for the uncertainties in the simulation of the ionization current induced by muons, and how these uncertainties affect EM showers. These two combined results are then combined to provide the final measurement, also shown in Figure 13(a). When the \(\chi^{2}\) of the combination is larger than one, the combined uncertainties are rescaled by a factor \(\sqrt{\chi^{2}}\). The total uncertainty is presented in Figure 13(b), and varies from 0.6% in the central part of the barrel to 3% at \(|\eta|\sim 2.4\). The inclusion of the electron measurement allows the uncertainty to be reduced by a factor of \(\sim\)1.8 in the first half of the barrel. In the endcaps, the combined uncertainties are dominated by the differences between the electron and muon results.
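For a single combined quantity, the BLUE prescription reduces to inverse-covariance weighting; the following is a minimal sketch with illustrative inputs.

```python
import numpy as np

def blue_combine(measurements, covariance):
    """Minimal BLUE sketch for combining correlated measurements of one
    quantity: weights w = C^-1 1 / (1^T C^-1 1) minimize the combined
    variance while keeping the estimate unbiased."""
    y = np.asarray(measurements, dtype=float)
    cinv = np.linalg.inv(np.asarray(covariance, dtype=float))
    ones = np.ones_like(y)
    norm = ones @ cinv @ ones
    weights = cinv @ ones / norm  # BLUE weights, summing to 1
    return weights @ y, np.sqrt(1.0 / norm)

# e.g. two alpha_12 results with uncorrelated uncertainties (illustrative numbers):
# val, err = blue_combine([0.99, 1.01], np.diag([0.01**2, 0.02**2]))
```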
Figure 13: (a) Relative calibration scale factor \(\alpha_{12}\) of the first and second EM calorimeter layers as a function of \(|\eta|\). Open blue squares correspond to the results obtained from the study of muon energy deposits in \(Z\to\mu\mu\) events, combining the _truncated mean_ and _most probable value_ methods. Open red circles show the extracted combined values obtained from the study of the dependence of the dielectron invariant mass \(m_{ee}\) and the \(E/p\) ratio as a function of \(E_{1}/E_{2}\) in \(Z\to ee\) events. The final scale factors, combining electron and muon results, are shown as the black solid lines. (b) Corresponding total uncertainties, taking into account rescaling in the case of large \(\chi^{2}\).
## 7 Determination of the energy scale and resolution with \(Z\to ee\) events
Electron energy scale and resolution corrections are determined using electron pairs from \(Z\)-boson decays. For electrons reconstructed in \(\eta\) regions labelled \(i\) and \(j\), the difference between the positions of the resonance in data and simulation is used to determine invariant mass scale corrections \(\alpha_{ij}\), defined by \(m_{ij}^{\rm corr}=m_{ij}/(1+\alpha_{ij})\). Similarly, a correction to the mass resolution is parameterized as \((\sigma_{m}/m)_{ij}^{\rm corr}=(\sigma_{m}/m)_{ij}\oplus c_{ij}\). The values of \(\alpha_{ij}\) and \(c_{ij}\) are those which give the best agreement between the invariant mass distributions in data and simulation, separately for each \((i,j)\) category.
The analysis in Section 6.1 measures an absolute energy scale for the presampler, but the calibration procedure for the first and second accordion layers presented in Section 6.2 only determines their relative response. Since the contribution of the third accordion layer to the energy measurement is negligible for electrons from \(Z\)-boson decays, the only remaining degree of freedom is the overall energy scale of the accordion calorimeter. Accordion energy\({}^{4}\) scale corrections for single electrons, defined by \(E_{i}^{\rm acc,corr}=E_{i}^{\rm acc}/(1+\alpha_{i}^{\rm acc})\), are at first order related to the invariant mass scale factors \(\alpha_{ij}\) following

\[\alpha_{ij}=\frac{f_{i}^{\rm acc}\alpha_{i}^{\rm acc}+f_{j}^{\rm acc}\alpha_{j}^{\rm acc}}{2},\]

Footnote 4: The accordion energy is the sum of the energies in the three layers of the accordion calorimeter: \(E^{\rm acc}=\sum_{\rm layer=1}^{3}E_{\rm layer}\).
where \(f_{i}^{\rm acc}\), shown in Figure 14, is determined from simulation. It represents the sensitivity of the total calibrated electron energy to the energy measured in the accordion calorimeter, for electrons in \(\eta\) bin \(i\). It is expected to be smaller than one since part of the electron energy is deposited in the presampler (for \(|\eta|<1.8\)) and in the scintillators in the transition regions between the barrel and endcap calorimeters (\(1.4<|\eta|<1.6\)). The accordion energy scale corrections are applied only to the data.
The invariant mass resolution correction \(c_{ij}\) can be expressed in terms of single-electron energy resolution corrections, \(c_{i}\), as \(c_{ij}=(c_{i}\oplus c_{j})/2\). The resolution correction is applied to the reconstructed energy in simulation.
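The action of the two corrections can be sketched as follows; the interfaces are illustrative, and in the analysis the corrections are applied per \(\eta\) bin.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_scale_to_data(e_data, alpha_i):
    """Energy scale correction applied to the data (sketch, per eta bin i)."""
    return np.asarray(e_data, dtype=float) / (1.0 + alpha_i)

def apply_resolution_to_mc(e_mc, c_i):
    """Resolution correction applied to the simulation (sketch): smear each
    simulated energy with a Gaussian constant term c_i, which adds in
    quadrature to the intrinsic MC resolution."""
    e_mc = np.asarray(e_mc, dtype=float)
    return e_mc * rng.normal(1.0, c_i, size=e_mc.shape)
```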
The invariant mass window considered for the determination of \(\alpha_{ij}\) and \(c_{ij}\) is \(80<m_{ee}<100\) GeV, and the \(\alpha_{i}^{\rm acc}\) and \(c_{i}\) parameters are extracted from simultaneous fits of all categories. Figure 15 shows the results for (a) \(\alpha_{i}^{\rm acc}\) and (b) \(c_{i}\) derived in 68 and 24 \(\eta\) intervals, respectively, separately for the 2015, 2016, 2017 and 2018 data samples. Changes in the \(\alpha_{i}^{\rm acc}\) values between successive years are mainly due to variations of the LAr temperature and the instantaneous luminosity. The temperature variations induce changes in the charge/energy collection, affecting the energy response by about \(-2\%\)/K [37]. Increases in luminosity during Run 2 imply that more energy is deposited in the liquid-argon gap, which creates a higher current in the high-voltage lines, effectively reducing the high voltage applied to the gap and changing the response by up to \(0.1\%\) in the endcap regions. Given the small size of the observed dependence, dedicated energy scale corrections for each data-taking year provide adequate stability for the energy measurement.

Figure 14: Sensitivity of the calibrated electron energy to the energy measured in the accordion calorimeter, \(f^{\rm acc}\), as a function of \(|\eta|\) for electrons from \(Z\)-boson decays.
The energy scale correction is applied as an overall correction to the energy measured in the three accordion layers, and the \(Z\)-based calibration fit is repeated to verify that the procedure converges. The residual energy scale factors \(\alpha_{i}^{\rm closure}\) obtained from this iteration are also shown in Figure 15(a). Their values are below \(10^{-4}\) everywhere except in the transition regions, \(1.37<|\eta|<1.52\), where scintillators installed between the barrel and endcap cryostats contribute to the energy measurement but are not the subject of a specific calibration. The residual non-closure is applied as a final correction to the reconstructed energy.
For the constant term resolution corrections \(c_{i}\), a dependence on the pile-up level is observed through the different values obtained from the 2015-2018 data. The dependence of the \(c_{i}\) values on the amount of pile-up is explained by the larger pile-up noise predicted by the simulation, compared with that observed in the data, for a given value of \(\mu\)[3]. A weighted average of the \(c_{i}\) values for the different years is applied in the analyses of the complete dataset. The additional constant term in the energy resolution is typically less than \(1\%\) in most of the barrel and between \(1\%\) and \(2\%\) in the endcaps.
Systematic uncertainties in the determination of \(\alpha_{i}^{\rm closure}\) and \(c_{i}\) are assessed using variations of the event selections and fitting range. The event selection variations are a change of the electron identification criterion from Medium to Tight, removal of the isolation criterion, and addition of a cut on the momentum fraction lost by bremsstrahlung inside the ID. The mass window used for the fit is varied from \(80<m_{ee}<100\) GeV to \(87<m_{ee}<94.5\) GeV. The combined effect of these variations is a systematic uncertainty of about \(0.05\%\) in \(\alpha_{i}^{\rm closure}\) over most of the calorimeter acceptance, but reaching \(0.5\%\) in the transition regions between the barrel and endcap calorimeters and at the edge of the calorimeter (\(|\eta|>2.3\)). The systematic uncertainty in the \(c_{i}\) corrections ranges between \(0.1\%\) and \(0.2\%\) across the detector acceptance. The dominant contribution to the systematic uncertainty comes from the mass window variation.
Figure 16 shows the invariant mass distribution of \(Z\to ee\) candidates in data and in simulation after applying the energy scale correction to the data and the resolution correction to the simulation. Background contamination is not taken into account in this comparison, but it is expected to be no more than \(1\%\) in the shown mass range. The uncertainty band corresponds to the propagation of the uncertainties in the \(\alpha_{i}^{\rm closure}\) and \(c_{i}\) factors. Within these uncertainties, the data and simulation are in fair agreement.
## 8 Photon-specific calibration
### Modelling of the photon conversion classification
The simulation-based energy reconstruction procedure, discussed in Section 4, is optimized and applied separately for unconverted photons and converted photons. Therefore, a difference between data and simulation in the rates of classifying photons as converted or unconverted generates a bias in the photon energy scale. Misclassifications arise from inefficiencies in the conversion-finding algorithm and from the incorrect classification of genuine unconverted photons as converted photons by matching the cluster to pile-up-induced tracks. The rates of correct and incorrect classification are measured using a sample of photons selected from radiative \(Z\) events. As described in Ref. [12], these rates are evaluated, in both data and simulation, using the ratio of the energies deposited in the first and the second layers of the calorimeter to discriminate between genuine converted and unconverted photons. The uncertainty in the energy scale is evaluated, as a function of \(|\eta|\) and \(E_{\mathrm{T}}\), by reweighting the conversion fractions in a sample of simulated single photons according to the values obtained from the radiative \(Z\) sample in simulation and data, respectively, and it is taken to be the relative difference of the energy responses.

Figure 15: (a) Energy scale calibration factors \(\alpha_{i}^{\rm acc}\) and \(\alpha_{i}^{\rm closure}\), and (b) the additional constant term \(c_{i}\), as a function of \(\eta\). The shaded areas correspond to the statistical uncertainties. The bottom panels show the differences between (a) \(\alpha_{i}^{\rm acc}\) and (b) \(c_{i}\) measured in a given data-taking period and the measurements using 2018 data.
For photons with \(E_{\mathrm{T}}=60\) GeV, the uncertainty in the energy scale for unconverted photon candidates is about 0.02% in the barrel and 0.02%-0.13% in the endcaps. For converted photon candidates, it is about 0.12% in the barrel and smaller than 0.01% in the endcaps. For photons with lower energy the uncertainty increases significantly: for \(E_{\mathrm{T}}=15\) GeV, it amounts to 0.18% in the barrel and 0.08%-0.67% in the endcaps for unconverted photons. For converted photons, it becomes 0.69%-1.31% in the barrel and 0.01%-0.1% in the endcaps. This systematic uncertainty is considered as a single source, correlated between converted and unconverted photons.
### Out-of-cluster energy leakage mis-modelling
Electrons and photons deposit about 1% to 6% of their energy outside of the cluster used in the reconstruction, depending on \(E_{\mathrm{T}}\), \(\eta\) and the particle type. This effect is corrected for by the MC-based energy response calibration. However, a bias in the reconstructed energy could appear in data if this lateral leakage is mis-modelled by the simulation. For electrons, the global energy scale correction (Section 7) absorbs any potential discrepancy at \(\langle E_{\mathrm{T}}\rangle\approx 40\) GeV.\({}^{5}\) To take into account possible differences between electron and photon showers related to the different probabilities for interaction with the material in front of the calorimeter, the lateral energy leakage in the calorimeter outside the area of the cluster is studied directly in data and simulation.

Figure 16: Comparison of the invariant mass distributions of the electron pair in the selected \(Z\to ee\) candidates in data and simulation, after the calibration and resolution corrections are applied. The total number of events in the simulation is normalized to that in data.
The \(Z\to\ell\ell\gamma\) and \(Z\to ee\) event samples are used to estimate this difference. The lateral energy leakage, \(l\), is defined by comparing the energy collected in the second-layer cells belonging to the supercluster, \(E_{\rm nom}^{\rm L2}\), with the energy deposited in second-layer cells in a larger rectangular window of size \(7\times 11\) in \(\eta\times\phi\) around it, \(E_{7\times 11}^{\rm L2}\):
\[l=\frac{E_{7\times 11}^{\rm L2}}{E_{\rm nom}^{\rm L2}}-1.\]
The difference between data and simulation is presented in Figure 17 as a function of \(E_{\rm T}\) and in two pseudorapidity regions for electrons, unconverted photons and converted photons.
The double difference between electrons and photons in data and simulation, \(\alpha_{l}\), defined as:
\[\alpha_{l}=\left(l_{e}-l_{\gamma}\right)^{\rm data}-\left(l_{e}-l_{\gamma} \right)^{\rm MC},\]
is estimated separately for converted and unconverted photons and is used to correct the photon energy scale. The reconstruction of photon conversions and the classification into converted photon and unconverted photon categories have an impact on \(\alpha_{l}\) that is estimated by reweighting the simulation to match the data (as detailed in Section 8.1), and the difference between the nominal \(\alpha_{l}\) values and the values obtained without reweighting is taken as a systematic uncertainty. The \(\alpha_{l}\) values vary from \(-\)(0.3-0.2)% at low \(E_{\rm T}\) to \(-\)(0.1-0.05)% at high \(E_{\rm T}\), with an absolute uncertainty ranging from 0.01% to 0.07% depending on the kinematic bins and the particle type.
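The two definitions translate directly into code; a minimal sketch with assumed per-candidate energy arrays follows.

```python
import numpy as np

def leakage(e_7x11_L2, e_nom_L2):
    """Lateral leakage l = E^L2_{7x11} / E^L2_{nom} - 1 (sketch, per candidate)."""
    return np.asarray(e_7x11_L2, dtype=float) / np.asarray(e_nom_L2, dtype=float) - 1.0

def alpha_leakage(l_e_data, l_g_data, l_e_mc, l_g_mc):
    """Electron-photon double difference between data and simulation, used to
    correct the photon energy scale (computed per conversion category)."""
    return (l_e_data - l_g_data) - (l_e_mc - l_g_mc)
```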
Discrepancies between leakage modelling and data were observed in previous publications [1, 2], and the full size of the discrepancies was considered as a systematic uncertainty in the photon energy calibration. The statistical power of the Run 2 data allows an improved mapping of the \(|\eta|\) and \(E_{\rm T}\) dependence of this effect, and a correction is derived. The systematic variations, defined as above, reduce the residual uncertainty and allow a decrease in the corresponding calibration uncertainty by a factor of about two.
Figure 17: The difference between the leakage fractions in data and simulation, in (a) the barrel and (b) the endcaps as a function of \(E_{\rm T}\) and the particle type. A variable bin size was chosen to make optimal use of the available samples. The last bin covers the range \(E_{\rm T}>40\) GeV, and the corresponding markers are set near the \(E_{\rm T}\)-average in that bin.
## 9 Electron and photon energy scale uncertainties
The complete systematic uncertainty model contains 64 and 67 independent uncertainty variations for the electron and photon energy scales, respectively.
Uncertainties in the material upstream of the calorimeter are derived in Refs. [1, 2, 3] from a combination of detector construction information and _in situ_ measurements, and are evaluated for up to nine \(|\eta|\) regions.
The uncertainty model for the cell readout non-linearity has been revised with respect to Ref. [2]. The present analysis considers separate sources of uncertainty for the transitions between High and Medium gain, and between Medium and Low gain in Layer 2. In the previous analyses, a single uncertainty was used to cover both gain transitions. In addition, because of the improved strategy for the \(\alpha_{12}\) measurement (now also using an electron sample), the size of the uncertainty associated with the transition between High and Medium gain in Layer 1 was re-estimated for the current analysis. The corresponding sources of uncertainty are considered fully correlated in pseudorapidity.
The presampler calibration and the intercalibration of the first and second accordion layers are estimated in nine and seven \(|\eta|\) regions, respectively, with corresponding sources of uncertainty. Uncertainties corresponding to the \(Z\) scale calibration, photon reconstruction and classification, pile-up modelling, and lateral shower shape development are all fully correlated in pseudorapidity. For the last category, an exception is the energy scale dependence on the shower width in \(\eta\), where the region \(1.52<|\eta|<1.82\) is considered to be independent of the rest of the pseudorapidity interval.
The full list of systematic uncertainty sources is summarized in Table 1. The impact of the most significant ones is illustrated as a function of the electron or photon \(E_{\mathrm{T}}\) in Figure 18, for two pseudorapidity values.
The uncertainty in the energy resolution is shown in Figure 19 as a function of \(E_{\mathrm{T}}\), for electrons and unconverted photons, at \(|\eta|=0.3\). Only the uncertainty related to the resolution correction (Section 7) was updated in this analysis with respect to the descriptions provided in Refs. [1, 2, 3]. At high \(E_{\mathrm{T}}\), the resolution uncertainties are larger than those presented in Ref. [2], where fixed-size clusters were used for electron and photon reconstruction. The present clustering algorithm is more sensitive to pile-up fluctuations. This effect is larger for electrons than for photons.
## 10 Energy linearity and constraints on the calibration uncertainties
### Energy linearity measurement
In order to test the \(E_{\mathrm{T}}\) dependence of the energy scale corrections \(\alpha_{i}\), the procedure in Section 7 is repeated in bins of \(|\eta|\) and \(E_{\mathrm{T}}\). Specifically, an extended definition of the calibrated energy is introduced as
\[E^{\mathrm{data,corr}}=E^{\mathrm{data}}/[(1+\alpha_{i})(1+\alpha_{j}^{\prime })],\]
where the energy scale factors \(\alpha_{i}\) are left at their values determined above, and \(\alpha_{j}^{\prime}\) quantifies the energy dependence of the energy scale (in the absence of energy dependence, \(\alpha^{\prime}=0\)). The index \(j\) labels a two-dimensional mapping of the electron phase space, with the following bin boundaries:
* 0, 0.6, 1.0, 1.37, 1.55, 1.82, 2.47 in \(|\eta|\);
* 27, 33, 38, 44, 50, 62, 100, \(\sqrt{s}/2\) in \(E_{\mathrm{T}}\) [GeV].
\begin{table}
\begin{tabular}{l l l c}
\hline\hline
Source of uncertainty & Methodology & Description & Number of \(|\eta|\) regions \\
\hline
ID material & Run 1 detector construction & [1, 3] & 4 \\
 & Pixel services description & [2, 3] & 2 \\
Material presampler to calorimeter & Run 1 measurement with unconv. photon & [1, 3] & 10 \\
\quad(\(|\eta|<1.8\)) & Simulation of long. shower shape unconv. photon & [1] & 2 \\
Material ID to presampler & Run 1 measurement with electrons & [1, 3] & 9 \\
\quad(\(|\eta|<1.8\)) & Simulation of long. shower shape electrons & [1] & 2 \\
Material ID to calorimeter & Run 1 measurement with electrons & [1, 3] & 3 \\
\quad(\(|\eta|>1.8\)) & Simulation of long. shower shape electrons & [1] & 1 \\
All material ID to calorimeter & Variations of Geant4 physics list & [1] & 1 \\
\hline
Cell readout non-linearity & ADC non-linearity & Section 5.3 & 1 \\
 & Medium gain/High gain Layer 2 & Section 5.4 & 1 \\
 & Low gain/Medium gain Layer 2 & Section 5.4 & 1 \\
 & Medium gain/High gain Layer 1 & [2] & 1 \\
 & Pile-up shift & [2] & 1 \\
\hline
Presampler calibration & \(\alpha_{\text{PS}}\) measurement & Section 6.1 & 9 \\
Layer 1/Layer 2 calibration & \(\alpha_{12}\) measurement & Section 6.2 & 7 \\
Barrel–endcap gap scintillator & Scintillator calibration & [2] & 3 \\
\quad(\(1.4<|\eta|<1.6\)) & & & \\
\hline
\(Z\to ee\) calibration & Statistical uncertainty & Section 7 & 1 \\
 & Systematic uncertainty & Section 7 & 1 \\
\hline
Conversion reconstruction & Classification (efficiency and fake rate) & Section 8.1 & 1 \\
 & Radius dependence of conversion reconstruction & [1] & 1 \\
\hline
Lateral shower shape modelling & Dependence on shower \(\eta\) width & [2] & 2 \\
 & Lateral leakage for electrons & Section 8.2 & 1 \\
 & Lateral leakage for unconv. photons & Section 8.2 & 1 \\
 & Lateral leakage for conv. photons & Section 8.2 & 1 \\
\hline
Pile-up modelling & Mis-modelling of pile-up noise vs \(\langle\mu\rangle\) & [3] & 1 \\
\hline\hline
\end{tabular}
\end{table}

Table 1: List of the independent systematic uncertainties affecting the energy calibration and their division into a number of \(|\eta|\) regions, between which the uncertainties are not correlated.
Figure 18: Relative energy scale calibration uncertainty for (a, b) electrons, (c, d) unconverted photons and (e, f) converted photons, as a function of \(E_{\rm T}\) for (a, c, e) \(|\eta|=0.3\) and (b, d, f) \(|\eta|=2.1\). The total uncertainty is shown along with the main contributions, which are represented by the signed impact of a one-sided variation of the corresponding uncertainty. Only a one-sided variation for each uncertainty source is shown for clarity, except for the uncertainty related to the _in situ_ global energy scale determination with \(Z\to ee\) candidate events.
Events are assigned to a bin \(j\) if either one of the two decay electrons falls in this bin, and \(\alpha^{\prime}_{j}\) is varied to obtain the best agreement between the invariant mass distributions in data and simulation, separately for each bin \(j\). Since the second electron is distributed randomly, iterations are required and the procedure is repeated until convergence is reached.
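The iteration can be sketched as follows; `fit_bin` stands in for the per-bin comparison of the data and simulated \(m_{ee}\) distributions and is an assumption of this sketch.

```python
def fit_alpha_primes(bins, fit_bin, n_iter=5):
    """Sketch of the iterative alpha' extraction. Each event enters the bin j
    of either decay electron, so the best-fit alpha'_j depends on the current
    values in all other bins; iterating to a fixed point resolves this.
    `fit_bin(j, alphas)` is an assumed helper returning the alpha'_j that best
    matches the data and MC m_ee distributions given the current alphas."""
    alphas = {j: 0.0 for j in bins}
    for _ in range(n_iter):
        # Jacobi-style update: all bins refit using the previous iteration's values;
        # in practice one iterates until the changes become negligible
        alphas = {j: fit_bin(j, alphas) for j in bins}
    return alphas
```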
The \(\alpha^{\prime}\) coefficients are shown in Figure 20 as a function of \(|\eta|\) and \(E_{\mathrm{T}}\). The observed energy dependence is significant, and is measured with a typical precision of \(0.03\%\), including the effect of systematic variations as described in Section 7. In the regions \(1.37<|\eta|<1.55\) and \(1.55<|\eta|<1.82\), the precision is about \(0.3\%\). The measurement precision is better than the calibration uncertainty derived in the previous sections, as can be seen from the outer uncertainty band in Figure 20. The energy linearity measurement can thus be used to further constrain the uncertainty model, as described in the following.
Figure 19: Relative uncertainty in the energy resolution, \(\delta_{\sigma}/\sigma\) as a function of \(E_{\mathrm{T}}\) for (a) electrons and (b) unconverted photons at \(|\eta|=0.3\). The total uncertainty is shown along with the breakdown into the different contributions.
Figure 20: Comparison of the measured values of \(\alpha^{\prime}\) with the pre-fit and post-fit linearity models. The full lines represent the nominal pre- and post-fit models, and the bands represent the corresponding uncertainties. The measurements in the \(|\eta|\) range corresponding to the barrel–endcap transition regions, \(1.37\leq|\eta|<1.55\), and from the \(J/\psi\to ee\) analysis are not included in the energy linearity fit. The analysis described in Section 11.1 was repeated with the present \(\eta\) binning to obtain the open dots in these figures.
### Constraints on the calibration systematic uncertainties
Accounting for the energy scale calibration using \(Z\to ee\) events, varying a given source of uncertainty by one standard deviation affects the reconstructed energy of an electron as follows:
\[\delta_{\rm rel}E_{k}(E_{\rm T},\eta)=\Delta_{\rm rel}E_{k}(E_{\rm T},\eta)- \Delta_{\rm rel}E_{k}(\langle E_{\rm T}\rangle,\eta),\]
where \(k\) labels the source of uncertainty (see Table 1), \(\Delta_{\rm rel}E_{k}(E_{\rm T},\eta)\) represents its fractional impact on the reconstructed energy, and \(\langle E_{\rm T}\rangle\approx 40\) GeV is the average transverse energy for electrons produced in \(Z\)-boson decays. The quantities \(\Delta_{\rm rel}E_{k}(E_{\rm T},\eta)\) are estimated by varying the corresponding sources of uncertainty in the reconstruction or detector simulation. The \(Z\)-based calibration absorbs the systematic uncertainty effect for electrons with \(E_{\rm T}=\langle E_{\rm T}\rangle\) and leaves the residual effect \(\delta_{\rm rel}E_{k}(E_{\rm T},\eta)\). In this model, the total effect of all systematic variations on the linearity can be parameterized as
\[\alpha^{\prime}_{\rm mod,\,\,\,j}=\sum_{k}\delta_{\rm rel}E_{jk}\theta_{k},\]
where \(\delta_{\rm rel}E_{jk}\) is the average of \(\delta_{\rm rel}E_{k}(E_{\rm T},\eta)\) over \(E_{\rm T}\) and \(\eta\) in bin \(j\), and \(\theta_{k}\) is the normally distributed nuisance parameter (NP) associated with the source \(k\). The calibration model is fitted to the data by adjusting the nuisance parameters \(\theta\), minimizing the following \(\chi^{2}\):
\[\chi^{2}=\sum_{j_{1},j_{2}}\left[\alpha^{\prime}_{j_{1}}-\alpha^{\prime}_{\rm mod,\,\,j_{1}}(\theta)\right]\,C^{-1}_{j_{1},j_{2}}\left[\alpha^{\prime}_{j_{2}}- \alpha^{\prime}_{\rm mod,\,\,j_{2}}(\theta)\right]+\sum_{k}\theta_{k}^{2}, \tag{2}\]
where \(C\) is the covariance of the \(\alpha^{\prime}\) measurements, calculated from the statistical and systematic uncertainties in the measurement procedure. The model contains 46 nuisance parameters\({}^{6}\) with the corresponding constraint terms, and 7\(\times\)5 measured values of \(\alpha^{\prime}\). The transition regions between the barrel and endcap calorimeters are not included in the fit. The nuisance parameters represent uncertainties in the passive material model, electromagnetic shower shape development, readout electronics calibration and layer intercalibration; each of these uncertainty classes may be subdivided into up to 12 pseudorapidity regions.
Footnote 6: The full model contains 64 nuisance parameters relevant for the electron energy scale (Section 9). Two of them related to the final energy scale determination (Section 7) are not part of the \(\theta_{k}\) and 16 are removed from the fit by a pruning procedure.
The \(\chi^{2}\) minimization is performed analytically. Denoting the fitted values of the nuisance parameters by \(\hat{\theta}_{k}\) and their post-fit covariance by \(V\), the post-fit linearity and its uncertainty are defined as:
\[\hat{\alpha}^{\prime}_{\rm mod,\,\,\,j}=\sum_{k}\delta_{\rm rel}E_{jk}\,\hat{ \theta}_{k}\ \ \,\ \ \delta\hat{\alpha}^{\prime}_{\rm mod,\,\,\,j}=\left[\sum_{k_{1},k_{2}} \delta_{\rm rel}E_{jk_{1}}\,\delta_{\rm rel}E_{jk_{2}}\,V_{k_{1}k_{2}}\right]^ {1/2}.\]
For comparison, the pre-fit expressions are \(\alpha^{\prime}_{\rm mod,\,\,\,j}=0\) and \(\delta\alpha^{\prime}_{\rm mod,\,\,\,j}=\left[\sum_{k}\delta_{\rm rel}E_{jk}^ {2}\right]^{1/2}\). The minimization result is sensitive to the details of the covariance matrix \(C\) assigned to the linearity measurements. A global goodness-of-fit of \(\chi^{2}/N=90/35\) is obtained, assuming full correlation of the \(\alpha^{\prime}_{j}\) systematic uncertainties across \(E_{\rm T}\) bins within each \(|\eta|\) bin, but ignoring correlations across \(|\eta|\) bins. The MC statistical uncertainty is accounted for in the evaluation of the systematic uncertainties. For \(|\eta|\) bins with a partial \(\chi^{2}\) per degree of freedom \(\chi^{2}_{\rm bin}/N_{\rm bin}\) greater than one, the \(\alpha^{\prime}\) measurement uncertainties are rescaled by a factor \(\sqrt{\chi^{2}_{\rm bin}/N_{\rm bin}}\). This scaling typically increases the fit uncertainties by 5%. The final goodness-of-fit is \(\chi^{2}/N=41/35\), corresponding to a \(p\)-value of 0.22.
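Since the model is linear in the nuisance parameters, the minimization of Eq. (2) reduces to a generalized least-squares problem; the following is a minimal sketch with illustrative inputs.

```python
import numpy as np

def linearity_fit(alpha_meas, C, dE):
    """Analytic minimization of the chi^2 in Eq. (2). With alpha'_mod = dE @ theta
    and unit Gaussian constraints on theta, setting the gradient to zero gives
    (dE^T C^-1 dE + I) theta = dE^T C^-1 alpha. dE has shape (n_bins, n_sources)."""
    cinv = np.linalg.inv(C)
    V = np.linalg.inv(dE.T @ cinv @ dE + np.eye(dE.shape[1]))  # post-fit covariance
    theta_hat = V @ dE.T @ cinv @ alpha_meas
    alpha_post = dE @ theta_hat
    # post-fit linearity uncertainty per bin j: sqrt(sum_{k1,k2} dE_jk1 dE_jk2 V_k1k2)
    alpha_post_err = np.sqrt(np.einsum("jk,kl,jl->j", dE, V, dE))
    return theta_hat, V, alpha_post, alpha_post_err
```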
The results of the analysis are illustrated in Figures 20 and 21. With few exceptions, the measured values of \(\alpha^{\prime}_{j}\) are well within the initial calibration uncertainties. The fit \(\chi^{2}\) in Eq. (2) captures the measured non-linearities, and reduces the uncertainties by up to a factor of two for \(E_{\mathrm{T}}<50\) GeV, and up to three for \(E_{\mathrm{T}}\sim 150\) GeV. As can be seen in Figure 21, the reduction in uncertainty is mostly driven by the nuisance parameters associated with the cell-level non-linearity (ADC corrections, and HG/MG transition) and the shower development (lateral leakage and shower width). Most other nuisance parameters are typically constrained by 5%-10%. All nuisance parameter shifts are within the initially assigned uncertainties, with the exception of a \(1.5\sigma\) effect observed for the presampler calibration in one specific \(|\eta|\) region.
The impact of the linearity fit on the electron energy scale uncertainty is illustrated in Figure 22(a) as a function of \(E_{\mathrm{T}}\) and \(|\eta|\). The precision for electrons with \(E_{\mathrm{T}}\sim 40\) GeV is mostly unchanged, since these particles are typical of on-shell \(Z\)-boson decays and their calibration is essentially determined by the energy scale analysis in Section 7. Energy scale uncertainties for electrons with \(E_{\mathrm{T}}=10\) GeV or \(E_{\mathrm{T}}=1\) TeV are typically reduced by 30%-50%, and vary from 0.2%-0.3% for \(|\eta|<1\) and \(|\eta|>1.8\) to between 0.5% and 1% for \(1<|\eta|<1.8\).
The impact of the present analysis on photon calibration uncertainties is shown in Figure 22(b) for converted and unconverted photons, and for \(E_{\mathrm{T}}=60\) GeV, which is typical for photons from Higgs boson decays. Uncertainties for converted photons, which are experimentally close to electrons, are only moderately reduced for this energy. For unconverted photons, the energy calibration uncertainty is typically reduced by 30% in the barrel, and by up to a factor of two in the endcaps.
Figure 21: Shifts and constraints on the nuisance parameters of the systematic uncertainty model from the energy linearity fit. A digit after the NP name represents a given \(\eta\) range (_a priori_ different for different NP sources).
## 11 Calibration cross-checks
### Checks using \(J/\psi\to ee\) events
The known mass of the \(J/\psi\) resonance provides a completely independent check of the energy calibration for electrons with transverse energy in the range from 5 to 30 GeV. For this, the full calibration procedure discussed in the previous sections, including the energy scale derived from \(Z\to ee\) events, is applied. The difference between data and simulation for \(J/\psi\to ee\) events is then quantified using residual energy scale factors extracted from the peak positions of the reconstructed invariant mass. If the energy calibration is correct, the residual energy scale factors should be consistent with zero within the combined uncertainties of the \(J/\psi\to ee\) measurement and the systematic uncertainty of the energy calibration.
The event selection requires two tightly identified, loosely isolated, opposite-sign electron candidates with \(E_{\mathrm{T}}>5\) GeV and \(|\eta|<2.4\). The primary vertex must be located within \(|z|<150\) mm of the IP, and the dielectron invariant mass must be in the range [1, 5] GeV.
The residual energy scale factors are denoted by \(\alpha_{i}\), where \(i\) labels the kinematic bin, and are determined as follows:
* \(J/\psi\) particles can be produced promptly or in \(b\)-hadron decays. The hadronic activity surrounding the decay electrons differs in both cases, biasing the energy-scale determination if the relative event fractions are not modelled accurately. The prompt fraction is extracted from a fit to the proper decay-time distribution of the data and is found to be between 76% and 82% depending on the leading electron's \(E_{\mathrm{T}}\), with uncertainties of up to 4%. The simulated prompt and non-prompt samples are then combined using the measured fractions.
* The data are divided into categories depending on the \(\eta\) values of the two selected electrons. The \(\eta\) bin boundaries are \(-2.4\), \(-1.52\), \(-1.37\), \(-1.0\), \(-0.8\), \(-0.4\), \(0\), \(0.4\), \(0.8\), \(1.0\), \(1.37\), \(1.52\), and \(2.4\). The region \(1.37<|\eta|<1.52\) is not considered for the nominal results. The measurement can also be performed as a function of \(E_{\mathrm{T}}\) and integrated in \(\eta\). In this case, the \(E_{\mathrm{T}}\) bin boundaries are \(5\), \(10\), \(15\), \(20\), and \(30\) GeV.
* Comparison of the dielectron invariant mass distributions in data and simulation requires the background contributions to be subtracted from the data sample. The subtraction is performed by fitting a signal+background distribution to the data, separately in each category. The total distribution is expressed as the sum of two double-sided Crystal Ball functions to describe the \(J/\psi\) and \(\psi\left(2S\right)\) resonances, and a second-order Chebyshev polynomial to represent the continuum background. The parameters describing the resonance distributions are fixed to values determined from simulation, with the exception of the peak positions, which are parameterized using energy scales \(\alpha_{i}\) and \(\alpha_{j}\). An example of an invariant mass fit is shown in Figure 23.
* A simultaneous fit to all categories is performed to extract the residual energy scale factors. The considered systematic uncertainties are related to the modelling of signal and background, the fitted mass range, the uncertainties in the prompt fraction, and the modelling of the pseudorapidity distribution of electrons in simulation. The fit is repeated, varying each of the above sources of uncertainty in turn, and the deviations from the nominal \(\alpha\) value are added in quadrature to obtain the final uncertainty in the measurement.

Figure 22: Total relative systematic uncertainty in the energy scale as a function of \(|\eta|\) for (a) electrons with \(E_{\mathrm{T}}=10\) GeV, \(40\) GeV or \(1\) TeV and (b) photons with \(E_{\mathrm{T}}=60\) GeV, after the constraints from the linearity fit. The bottom panels show the ratio of the post-fit to pre-fit uncertainties.
The results are given in Figure 24, where the evolution of \(\alpha\) is shown as a function of (a) \(\eta\) and (b) \(E_{\mathrm{T}}\), before and after including the constraints from the linearity fit. The residual post-fit scale factors are below \(0.5\%\) and are compatible with zero within the total calibration uncertainty.

Figure 23: An example dielectron invariant mass distribution with the fitted signal, background and \(\psi\left(2S\right)\) contributions. Both electrons are required to have \(|\eta|<0.4\).
### Checks using \(Z\to\ell\ell\gamma\) events
Radiative \(Z\)-boson decay events provide a way to investigate the validity of the final photon energy scale. The selection requires a \(Z\)-boson candidate decaying into two opposite-sign electrons or muons and a photon from final-state radiation. Electrons (muons) must meet medium identification criteria with \(E_{\rm T}>18\) (15) \(\rm GeV\). Tightly identified photons with \(E_{\rm T}>10\) \(\rm GeV\) are selected. Electrons and photons in the barrel-endcap transition regions are not considered. Loose isolation requirements are applied to all objects. The invariant mass of the dilepton+photon system is required to be in the range \(80<m_{\ell\ell\gamma}<100\) \(\rm GeV\) and the dilepton invariant mass must be in the range \(40<m_{\ell\ell}<80\) \(\rm GeV\). Figure 25 shows the \(m_{\ell\ell\gamma}\) distributions for the (a) dielectron and (b) dimuon channels. The inclusive residual photon energy scale factors are measured to be \((3.3\pm 2.0)\times 10^{-3}\) and \((1.4\pm 1.1)\times 10^{-3}\) in the \(Z\to ee\gamma\) and \(Z\to\mu\mu\gamma\) samples, respectively. This is consistent with the larger offset observed in Figure 25(a) than in Figure 25(b). The residual scales measured in the two channels agree within one standard deviation. The uncertainties combine the statistical uncertainty and the systematic uncertainty originating from the uncertainty in the lepton-energy calibration (see Ref. [38] for a description of the muon momentum scale determination).
To extract the residual photon energy scale, MC templates are compared with the data distribution through a \(\chi^{2}\) test, for the electron and muon channels independently. These templates are built by modifying the photon \(E_{\rm T}\) by a factor \((1+\alpha)\), where \(\alpha\) is varied from \(-0.0200\) to \(0.0200\) in steps of \(0.0004\), and the overall energy scale residual is given by the template providing the lowest \(\chi^{2}\). The two channels are compatible and are statistically combined.
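This scan is straightforward to sketch. The following illustration is our own (as a simplification, the scaled photon \(E_{\rm T}\) itself is histogrammed in place of the full \(m_{\ell\ell\gamma}\) reconstruction, and the MC statistical uncertainty is ignored); the residual scale is the \(\alpha\) whose template minimises the \(\chi^{2}\):

```python
import numpy as np

def chi2(data_hist, mc_hist, data_err):
    """Bin-by-bin chi2 between a data histogram and an MC template."""
    return np.sum(((data_hist - mc_hist) / data_err) ** 2)

def scan_alpha(data_hist, data_err, mc_et, mc_weights, bins):
    """Scan alpha in [-0.02, 0.02] with 0.0004 steps, as in the text."""
    alphas = np.arange(-0.0200, 0.0200 + 1e-9, 0.0004)
    chi2s = [
        chi2(data_hist,
             np.histogram(mc_et * (1.0 + a), bins=bins, weights=mc_weights)[0],
             data_err)
        for a in alphas
    ]
    return alphas[int(np.argmin(chi2s))]
```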
The dependence of \(\alpha\) on the energy and pseudorapidity of the photon is illustrated in Figure 26, separately for converted photons and unconverted photons. The residual photon energy scales are compared with the total energy calibration uncertainty for photons from \(Z\to\ell\ell\gamma\) decays, before and after including the constraints from the linearity fit. The error bars assigned to the data are typically \(0.15\%\), and are dominated by the uncertainty in the lepton-energy calibration. As in the \(J/\psi\) analysis, the linearity-constrained calibration tends to reduce the values of the residual calibration factors, which are found to be within the corresponding calibration uncertainties. Mild tension is observed for \(|\eta|>1.8\) for unconverted photons; this effect is driven by low-energy photons, and disappears at high \(E_{\rm T}\), as indicated by Figure 26(b).
Figure 24: Variation of the residual energy scale as a function of (a) \(\eta\) and (b) \(E_{\rm T}\), as measured with \(J/\psi\to ee\) events. The data points and uncertainty bands are shown for both the pre- and post-linearity-fit energy scale models. The uncertainty bands correspond to the energy calibration uncertainty for the energy range of the \(J/\psi\to ee\) decays.
Figure 26: Variation of the residual energy scale for (a, b) unconverted and (c, d) converted photons as a function of (a, c) \(|\eta|\) and (b, d) \(E_{\rm T}\), as measured with \(Z\to\ell\ell\gamma\) events. The data points and uncertainty bands are shown for both the pre- and post-linearity-fit energy scale models. The uncertainty bands correspond to the energy calibration uncertainty for photons from \(Z\to\ell\ell\gamma\) decays.
Figure 25: Comparison of the (a) \(ee\gamma\) and (b) \(\mu\mu\gamma\) invariant mass distributions in data and simulation.
## 12 Conclusion
This paper presents the energy calibration for electrons and photons reconstructed in 140 fb\({}^{-1}\) of 13 TeV proton-proton collision data recorded by ATLAS during Run 2 of the LHC. All of the major sources of uncertainty have been re-evaluated since the previous publications, and new methods are introduced to further reduce their impact. Improved methods to calibrate energies in the calorimeter cells and layers, and an improved measurement of lateral energy leakage from reconstructed electron and photon energy clusters, reduce the _a priori_ calibration uncertainty by about 30%. In addition, a precise measurement of the energy linearity, using electrons from \(Z\)-boson decays, provides a further reduction by about a factor of two. The overall calibration uncertainty is reduced by a factor of 2-3, depending on the particle type, pseudorapidity and energy. The achieved calibration uncertainties are typically 0.05% for electrons from \(Z\)-boson decays, 0.4% at \(E_{\mathrm{T}}\sim 10\) GeV, and 0.3% at \(E_{\mathrm{T}}\sim 1\) TeV; for photons at \(E_{\mathrm{T}}\sim 60\) GeV, they are 0.2% on average. These improvements are validated using independent samples of \(J/\psi\to ee\) decays and radiative \(Z\)-boson decays. The achieved precision is adequate for high-precision measurements of fundamental parameters such as the masses and properties of the Higgs, \(W\) and \(Z\) bosons, and is expected to improve the sensitivity of searches and measurements at the weak scale.
## Acknowledgements
We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; ANID, Chile; CAS, MOST and NSFC, China; Minciencias, Colombia; MEYS CR, Czech Republic; DNRF and DNSRC, Denmark; IN2P3-CNRS and CEA-DRF/IRFU, France; SRNSFG, Georgia; BMBF, HGF and MPG, Germany; GSRI, Greece; RGC and Hong Kong SAR, China; ISF and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; NWO, Netherlands; RCN, Norway; MEiN, Poland; FCT, Portugal; MNE/IFA, Romania; MESTD, Serbia; MSSR, Slovakia; ARRS and MIZŠ, Slovenia; DSI/NRF, South Africa; MICINN, Spain; SRC and Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TENMAK, Türkiye; STFC, United Kingdom; DOE and NSF, United States of America. In addition, individual groups and members have received support from BCKDF, CANARIE, Compute Canada and CRC, Canada; PRIMUS 21/SCI/017 and UNCE SCI/013, Czech Republic; COST, ERC, ERDF, Horizon 2020 and Marie Skłodowska-Curie Actions, European Union; Investissements d'Avenir Labex, Investissements d'Avenir Idex and ANR, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF, Greece; BSF-NSF and MINERVA, Israel; Norwegian Financial Mechanism 2014-2021, Norway; NCN and NAWA, Poland; La Caixa Banking Foundation, CERCA Programme Generalitat de Catalunya and PROMETEO and GenT Programmes Generalitat Valenciana, Spain; Göran Gustafssons Stiftelse, Sweden; The Royal Society and Leverhulme Trust, United Kingdom.
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref. [39]. |
2309.14568 | Introducing DictaLM -- A Large Generative Language Model for Modern
Hebrew | We present DictaLM, a large-scale language model tailored for Modern Hebrew.
Boasting 7B parameters, this model is predominantly trained on Hebrew-centric
data. As a commitment to promoting research and development in the Hebrew
language, we release both the foundation model and the instruct-tuned model
under a Creative Commons license. Concurrently, we introduce DictaLM-Rab,
another foundation model geared towards Rabbinic/Historical Hebrew. These
foundation models serve as ideal starting points for fine-tuning various
Hebrew-specific tasks, such as instruction, Q&A, sentiment analysis, and more.
This release represents a preliminary step, offering an initial Hebrew LLM
model for the Hebrew NLP community to experiment with. | Shaltiel Shmidman, Avi Shmidman, Amir David Nissan Cohen, Moshe Koppel | 2023-09-25T22:42:09Z | http://arxiv.org/abs/2309.14568v1 | # Introducing DictaLM - A Large Generative Language Model for Modern Hebrew
###### Abstract
We present DictaLM, a large-scale language model tailored for Modern Hebrew. Boasting 7B parameters, this model is predominantly trained on Hebrew-centric data. As a commitment to promoting research and development in the Hebrew language, we release both the foundation model and the instruct-tuned model under a Creative Commons license1. Concurrently, we introduce DictaLM-Rab, another foundation model geared towards Rabbinic/Historical Hebrew. These foundation models serve as ideal starting points for fine-tuning various Hebrew-specific tasks, such as instruction, Q&A (Cohen et al., 2023), sentiment analysis (Amram et al., 2018), and more (Bareket and Tsarfaty, 2021). This release represents a preliminary step, offering an initial Hebrew LLM model for the Hebrew NLP community to experiment with.
Footnote 1: For specifics on the license, visit [https://creativecommons.org/licenses/by-sa/4.0/](https://creativecommons.org/licenses/by-sa/4.0/)
## 1 Introduction
Language models have revolutionized the realm of natural language processing, facilitating significant advancements in tasks ranging from sentiment analysis to machine translation. As the breadth and depth of these models expand, so does the aspiration for linguistic diversity. Yet, while the majority of state-of-the-art models cater predominantly to widely spoken languages, there exists a vast landscape of languages and dialects that are underrepresented in currently existing large-scale language models. Hebrew is one such language.
In this paper, we make strides to bridge this gap by introducing DictaLM - the first large-scale language model crafted for Modern Hebrew. By leveraging a dataset dominated by Hebrew-centric content, our endeavor was not only to construct a model adept at understanding and generating Modern Hebrew but also to lay down a foundation that facilitates further advancements in the field. As part of this initiative, we also present DictaLM-Rab, a parallel model pretrained for Rabbinic/Historical Hebrew, thereby encompassing the vast chronological spectrum of the Hebrew language. This release serves as a preliminary step, providing an initial tentative version to the Hebrew NLP community as a foundation for further refinements, adaptations, and collaborative enhancements. Figure 1 demonstrates example output from the instruct-tuned model.

Figure 1: We present two instances of DictaLM utilization: in the first instance, the model exhibits common sense reasoning, while in the second, it displays worldly knowledge.
## 2 Datasets
In this section, we elucidate the datasets employed for training and fine-tuning DictaLM. The assemblage of data, amassing a total of 7.5 billion tokens, originates from a mixture of authentic sources; no synthetic data was added. The pre-training phase is followed by a fine-tuning stage through instruct datasets derived from Hebrew Question-Answering datasets and a translated version of the MPT Instruct Dataset.
### Pre-training Data
The dataset is built up of several different components:
**C4 [80%]**. We start with the HeDC4 corpus released by Shalumov and Haskey (2023), and continue further cleaning it. We removed approximately 15% of the corpus using various techniques, including histograms, gibberish detectors, and removing sentences that had a very high perplexity when run through a Modern Hebrew BERT model. In addition, we limited our training corpus to contain only words in English and Hebrew; all other languages were reduced to a designated _<foreign>_ token to avoid cluttering the tokenizer with non-Hebrew tokens. The resulting corpus contains approximately 6B byte-pair tokens.
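A minimal sketch of the script-filtering step described above (our own illustration, not the authors' code): tokens containing characters outside the Hebrew block and basic Latin are collapsed to the designated _<foreign>_ token:

```python
import re

# Tokens made only of Hebrew (U+0590-U+05FF), Latin letters, digits and
# common punctuation are kept; everything else becomes <foreign>.
ALLOWED = re.compile(r"^[\u0590-\u05FFA-Za-z0-9.,:;!?'\"()\-%]+$")

def reduce_foreign(text: str) -> str:
    return " ".join(
        tok if ALLOWED.match(tok) else "<foreign>" for tok in text.split()
    )

print(reduce_foreign("שלום hello こんにちは"))  # -> שלום hello <foreign>
```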
**Other sources [20%].** We collected data from various other sources including news sites, blogs, TV and movie subtitles, novels, and more. This data was also run through a cleaning process similar to that applied to the C4 corpus, as described above, and resulted in an additional 1.5B byte-pair tokens.
#### 2.1.1 Instruct Data
Our instruct-tuning data contains a mixture of two different datasets, each processed and modified to teach the model to follow as many different instructions as possible.
**QA Datasets**. We take the HeQ (Cohen et al., 2023) and ParaShoot (Keren and Levy, 2021) training datasets and format them as instructions. The prompt contains the context paragraph followed by the question, with a system instruction. The system instruction starts with a general instruction (in Hebrew) stating "Please read the following paragraph and answer the question that comes after", and 60% of the time also instructs the system to format a specific type of response (e.g., "Short and to the point", "Please cite the sentence to support your answer", and more). We list a few examples in Appendix A.
**Translated MPT Instruct**. We took the MPT Instruct Dataset from huggingface2 and ran it through a translation API. We then reformatted the prompt to remove the constant structure, keeping only the question. We then added each question three times: once with no system prompt, and twice with two different prompts chosen based on the length of the response, asking the model to be concise, expand, answer in X sentences, etc. We list a few examples in Appendix B.
Footnote 2: [https://huggingface.co./datasets/mosaicml/dolly_hhrlhf](https://huggingface.co./datasets/mosaicml/dolly_hhrlhf)
## 3 Model architecture
### Tokenizer
A major problem we encountered when attempting to use other multilingual LLMs for Hebrew was the tokenization. When the corpus contains a very small percentage of a language, the number of tokens representing that language in the vocabulary is significantly reduced. In addition, due to the nature of UTF-8 encoding, byte-pair tokenization methods result in even scarcer representation of Hebrew in the vocabulary. As can be seen in OpenAI's GPT-3 tokenizer3, if one inserts a few paragraphs of Hebrew text, the tokenizer will average 1.1 tokens per **character**.
Footnote 3: [https://platform.openai.com/tokenizer](https://platform.openai.com/tokenizer)
We train our tokenizer using the byte-pair encoding (BPE) algorithm (Sennrich et al., 2015) on our cleaned corpus with a vocabulary size of 56000. The resulting tokenizer had a ratio of approximately 1.3 tokens per **word**.
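As a rough sketch, a byte-level BPE tokenizer with this vocabulary size can be trained with the Hugging Face `tokenizers` library; the corpus path and special-token list are placeholders, and the authors' exact tooling is not specified:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

trainer = trainers.BpeTrainer(vocab_size=56000, special_tokens=["<foreign>"])
tokenizer.train(["cleaned_corpus.txt"], trainer)  # placeholder corpus file
tokenizer.save("dictalm-tokenizer.json")
```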
### Architecture
In this section, we detail the architectural framework of DictaLM. Following recent work on large language models, our network is based on the transformer architecture (Vaswani et al., 2017). Our architecture encompasses several enhancements aimed at boosting training stability and overall performance:
**Normalization**. To improve training stability and balance the input, we normalize the input of each transformer layer before and after the attention calculation. We use the LayerNorm1P normalization with \(\epsilon=10^{-5}\), which is a slightly modified version of the _FastLayerNorm_ normalization offered by NVIDIA's APEX library4.
Footnote 4: [https://github.com/NVIDIA/apex](https://github.com/NVIDIA/apex)
**GeLU Activation**. As reported by (Hendrycks and Gimpel, 2023), we use the GeLU activation function.5
Footnote 5: We considered using other activations (such as SwiGLU (Shazeer, 2020)), but in the end we went with GeLU.
**Rotary Embeddings**. Shown to be effective for extending the sequence length without a performance trade-off, we use rotary positional embedding (RoPE) with a \(0.5\%\) dimension percentage, introduced by (Su et al., 2022), at each layer of the network.
**Separate embedding and output weights**. As shown by (Welch et al., 2020), separating the embeddings and the output weights leads to better
performance.
### Training Details and Hyperparameters
We trained our model using the NeMo framework6, which is highly optimized for training compute-heavy machine learning models on NVIDIA hardware. We pre-trained the model on 8 H100 GPUs with a tensor parallel size of 2 for a total of 150 hours, completing 2.5 epochs (\(\sim\)18.5B tokens), and then fine-tuned on instructions for 8 hours. Training was done in a combination of bf16 and fp8 precision using NVIDIA's transformer engine7, with a global batch size of 128. We used the FusedAdam optimizer with an initial learning rate of \(0.00016\), betas of \((0.9,0.95)\), and a cosine-annealing schedule with a warmup of 750 steps and a minimum learning rate of \(10^{-5}\). The details of the model size are listed in Table 1.
Footnote 6: [https://github.com/NVIDIA/NeMo](https://github.com/NVIDIA/NeMo)
Footnote 7: [https://github.com/NVIDIA/TransformerEngine](https://github.com/NVIDIA/TransformerEngine)
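The learning-rate schedule described above is easy to sketch; `total_steps` is an assumed argument, and only the peak rate, floor and warmup length come from the text:

```python
import math

def lr_at(step, total_steps, peak=1.6e-4, floor=1e-5, warmup=750):
    """Linear warmup to the peak rate, then cosine annealing to the floor."""
    if step < warmup:
        return peak * step / warmup
    progress = (step - warmup) / max(1, total_steps - warmup)
    return floor + 0.5 * (peak - floor) * (1.0 + math.cos(math.pi * progress))
```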
### DictaLM-Rab Model
In addition to the model we described above, we also trained a model, DictaLM-Rab, for use on Rabbinic Hebrew tasks. We used the same approach as above, adjusting the input corpus to contain a large sampling of Rabbinic Hebrew data.
Specifically, we added a corpus of 1.2B tokens of Rabbinic Hebrew texts taken from various sources (e.g. Sefaria8, Dicta9). We combined this corpus together with the modern Hebrew corpus that we described above, sampling the data such that fifty percent of the training sequences would be from the Rabbinic Hebrew corpus (with oversampling).
Footnote 8: [https://www.sefaria.org.il/](https://www.sefaria.org.il/)

Footnote 9: [https://library.dicta.org.il/](https://library.dicta.org.il/)
The model uses the same tokenizer as DictaLM, and was trained for a total of 1.5 epochs (\(\sim\)12.5B tokens).
We are pleased to also release this foundation model, tailored to benefit researchers working on Rabbinic Hebrew. This model can be used as a base model for fine-tuning on specific tasks relevant to the Rabbinic Hebrew domain. Our internal experiments reveal encouraging results with Rabbinic texts, details of which will be shared in forthcoming publications.
## 4 Drawbacks
Our model was trained on the full dataset without any censorship for offensive or biased material, and therefore it may generate sentences that are offensive to some users.
Also, we would like to highlight that this project is in its alpha phase. While we are releasing DictaLM to facilitate research endeavors, and while we believe that it can serve as a useful foundation for specific fine-tuned tasks in the realm of Hebrew NLP, we acknowledge that the quality of the model does not yet match industry standards.
## 5 Conclusion
We are pleased to present the three models described within this paper: the two foundational models (suitable as base models for further fine-tuning for tasks concerning both Modern and Rabbinic Hebrew), and the instruct model, fine-tuned to address instruction prompts in Modern Hebrew. The public release of these models aims to contribute to the advancement of research and development within the Hebrew NLP domain. The models can be accessed via the following links:
* Foundation model DictaLM: [https://huggingface.co./dicta-il/dictalm-7b](https://huggingface.co./dicta-il/dictalm-7b)
* Instruct model DictaLM-Instruct: [https://huggingface.co./dicta-il/dictalm-7b-instruct](https://huggingface.co./dicta-il/dictalm-7b-instruct)
* Foundation model for Rabbinic Hebrew DictaLM-Rab: [https://huggingface.co./dicta-il/dictalm-rab-7b](https://huggingface.co./dicta-il/dictalm-rab-7b)
|
2309.15676 | Joint Sampling and Optimisation for Inverse Rendering | When dealing with difficult inverse problems such as inverse rendering, using
Monte Carlo estimated gradients to optimise parameters can slow down
convergence due to variance. Averaging many gradient samples in each iteration
reduces this variance trivially. However, for problems that require thousands
of optimisation iterations, the computational cost of this approach rises
quickly.
We derive a theoretical framework for interleaving sampling and optimisation.
We update and reuse past samples with low-variance finite-difference estimators
that describe the change in the estimated gradients between each iteration. By
combining proportional and finite-difference samples, we continuously reduce
the variance of our novel gradient meta-estimators throughout the optimisation
process. We investigate how our estimator interlinks with Adam and derive a
stable combination.
We implement our method for inverse path tracing and demonstrate how our
estimator speeds up convergence on difficult optimisation tasks. | Martin Balint, Karol Myszkowski, Hans-Peter Seidel, Gurprit Singh | 2023-09-27T14:21:13Z | http://arxiv.org/abs/2309.15676v1 | # Joint Sampling and Optimisation for Inverse Rendering
###### Abstract.
When dealing with difficult inverse problems such as inverse rendering, using Monte Carlo estimated gradients to optimise parameters can slow down convergence due to variance. Averaging many gradient samples in each iteration reduces this variance trivially. However, for problems that require thousands of optimisation iterations, the computational cost of this approach rises quickly.
We derive a theoretical framework for interleaving sampling and optimisation. We update and reuse past samples with low-variance finite-difference estimators that describe the change in the estimated gradients between each iteration. By combining proportional and finite-difference samples, we continuously reduce the variance of our novel gradient meta-estimators throughout the optimisation process. We investigate how our estimator interlinks with Adam and derive a stable combination.
differentiable rendering, inverse rendering, gradient estimation, gradient descent
Such inversion tasks are typically solved by gradient descent. Physically-based differentiable renderers (Jakob et al., 2022; Li et al., 2018; Zhang et al., 2020) facilitate these gradient-based optimisation methods. The process involves backpropagating from an underlying loss function, quantifying the disparity between an image generated with the current parameters and the target image, resulting in gradients w.r.t. the scene parameters. These gradient values are approximated from a given set of samples, and subsequently, the scene parameters are adjusted using these gradients to minimise the loss. The ultimate goal is to converge to a parameter set that produces the target image. Due to the nature of Monte Carlo integration, the estimated gradients can be extremely noisy, hampering the performance of gradient-based optimisers. In inverse rendering, gradients are estimated with tens to hundreds of rays traced per pixel (Nimier-David et al., 2020; Zhang et al., 2019) to minimise noise. Usually, inverse rendering requires hundreds to thousands of iterations to converge; recomputing these gradient estimates in every iteration comes at a large cost.
In this paper, we jointly consider sampling and optimisation by deriving a theoretical framework for interleaving them. We reuse past samples without introducing bias thanks to finite-difference estimators that describe the change in the estimated gradients between each iteration. By combining proportional and finite-difference samples, we continuously reduce the variance of our novel gradient meta-estimators throughout the optimisation process.
First, we introduce our meta-estimation theory and then discuss our variance estimation strategies used to derive coefficients for our meta-estimator. We investigate how our estimator interlinks with Adam and derive a stable combination. We run experiments to evaluate our method in the context of inverse rendering. Finally, we discuss our method concerning future and concurrent work and give our conclusions.
Our contributions include:
* Meta-estimation theory on combining proportional and finite-difference estimators.
* Practical variance approximation techniques to effectively implement meta-estimation.
* Implementation and evaluation of meta-estimation for inverse rendering. (We will release our code upon acceptance.)
## 2. Related Work
_Differentiable Path Tracing._ Path tracing accounts for global illumination through physically accurate light transport by Monte Carlo integration of the rendering equation (Kajiya, 1986). Recent works proposed various approaches to differentiate such Monte Carlo integrals and estimate derivatives w.r.t. scene parameters (Jakob et al., 2022; Li et al., 2018; Nimier-David et al., 2020; Zeltner et al., 2021; Zhang et al., 2020). While our work applies to any method using gradient descent on Monte Carlo estimated gradients, we mainly experiment with Path Replay Backpropagation (Vicini et al., 2021); a well-established state-of-the-art method for inverse path tracing, implemented in Mitsuba 3 (Jakob et al., 2022).
Previous works have focused on sampling strategies (Bangaru et al., 2020; Yan et al., 2022; Zhang et al., 2021) and improving the optimisation itself (Nimier-David et al., 2020; Vicini et al., 2021) to reduce noise in the gradients. Particularly relevant, concurrent work by Chang et al. (2023) applies ReSTIR (Bitterli et al., 2020) in parameter-space with the same goal of reducing the variance of the estimated gradients.
_Ray Differentials._ Igehy (1999) first proposed ray differentials to approximate derivatives for texture interpolation and anisotropic filtering. Kettunen et al. (2015) combine ray differentials with gradient-domain MLT (Lehtinen et al., 2013) to build unbiased image-space gradient estimators for gradient-domain path tracing. Manzi et al. (2016) extend their work to the spatiotemporal domain.
We apply the general idea of finite-difference estimation to temporal gradient averaging on a set of parameters. As we do not assume any structure between individual parameters, we forgo Poisson reconstruction and instead statistically average proportional and integrate finite-difference samples.
_Gradient averaging._ Iterating with the arithmetic mean of gradient samples is well-understood to improve the convergence of optimisers. Several recursive schemes (Nesterov, 1983; Polyak and Juditsky, 1992) achieve fast convergence on convex problems (Moulines and Bach, 2011), with some proving particularly useful in deep learning (Sutskever et al., 2013). Kingma and Ba (2014) propose start-up bias-corrected exponential moving averaging on gradients; Adam remains the de-facto optimisation algorithm for deep learning and inverse rendering applications.
In recent work, Gower et al. (2020) analyse gradient averaging methods based on finite sums; they show improvements in convergence analogous to our work, although limited to convex problems. Unfortunately, the finite-sum setting of algorithms like SAGA (Defazio et al., 2014) and SVRG (Johnson and Zhang, 2013) does not generalise to Monte Carlo integration (Nicolet et al., 2023).
Reducing the gradient variance is well understood to improve convergence speed and stability. Previous works on optimising neural networks increase the batch size to reduce this variance, which is often preferable over slower learning rates (Smith et al., 2018).
Control VariatesFieller and Hartley (1954) first propose control variates as a weighted combination of correlated estimators, one of which must be of a closed-form integral. Owen (2013) shows that the optimal control weight is proportional to the covariance of the estimators. Rousselle et al. (2016) generalise control variates to any pair of correlated estimators through two-level Monte Carlo integration; they apply their work to spatiotemporal gradient-domain rendering. Concurrent with our work, Nicolet et al. (2023) further generalise control variates to recursive estimation, applying it to primal renderings in the context of inverse path tracing.
Our work is distinctively different from control variates in that we build on an independent finite-difference estimator rather than a pair of correlated estimators. In particular, this formulation lets us avoid covariance terms in our weighting scheme.
## 3. Differential Meta-Estimators
Various Monte Carlo methods estimate a sequence of integrals. Often, each integral is a function of the previous one, with the sequence converging to a solution. Optimisation via inverse Monte
Carlo is a prime example; we estimate gradients in each iteration, adjust parameters accordingly, and repeat the process.
Our work is focused on improving the convergence speed and stability of the optimisation process by reducing the variance of the estimated gradients. We draw inspiration from control theory, specifically noise reduction through the combination of proportional and differential signals. These methods assume that samples are drawn from known probability distributions, usually normal distributions with known variances (Kalman, 1960). Unfortunately, we cannot make such assumptions when dealing with Monte Carlo noise.
We combine two estimators: a _proportional estimator_ \(\langle F_{i}\rangle\) -- any Monte Carlo gradient estimator sampled independently between iterations -- and a _finite-difference estimator_ \(\langle\Delta F_{i}\rangle\) that estimates the change of a gradient between two consecutive iterations.
_Notation._ Let \(F_{i}\) denote the integral of function \(f\) over the domain \(\mathcal{X}\), given parameters \(\pi_{i}\) for the current iteration \(i\in[0,\infty)\):
\[F_{i}=\int_{\mathcal{X}}f(\mathbf{x},\pi_{i})\mathrm{d}\mathbf{x}. \tag{1}\]
Let \(\langle F_{i}\rangle\) denote the (proportional) Monte Carlo estimator of \(F_{i}\), meaning \(\mathbb{E}[\langle F_{i}\rangle]=F_{i}\). For example, an estimator may sample \(f\) given a density \(p\) over \(\mathcal{X}\):
\[\langle F_{i}\rangle=\frac{f(\mathbf{x},\pi_{i})}{p(\mathbf{x},\pi_{i})}. \tag{2}\]
_Finite-difference estimation._ We write the change of \(F_{i}\) between consecutive steps as:
\[\Delta F_{i}=F_{i}-F_{i-1}. \tag{3}\]
A finite-difference estimator (\(\langle\Delta F_{i}\rangle\)) estimates this change, ideally with a low variance. For example, we can substitute Equation (1) into Equation (3):
\[\Delta F_{i}=\int_{\mathcal{X}}f(\mathbf{x},\pi_{i})-f(\mathbf{x},\pi_{i-1}) \mathrm{d}\mathbf{x}\, \tag{4}\]
and sample with a density \(p\) like in Equation (2):
\[\langle\Delta F_{i}\rangle=\frac{f(\mathbf{x},\pi_{i})-f(\mathbf{x},\pi_{i-1} )}{p(\mathbf{x},\pi_{i})}. \tag{5}\]
Here we assume that \(f\) is continuous w.r.t. \(\pi_{i}\). Although our theory may apply to any Monte Carlo integral \(F_{i}\), we analyse the case where \(f(\mathbf{x},\pi_{i})=\partial\mathcal{L}/\partial\pi_{i}\) is the gradient at the \(i\)-th iteration, for some objective \(\mathcal{L}\).
### Meta-estimation
Our _meta-estimator_ aims to optimally combine each proportional \(\langle F_{i}\rangle\) and finite-difference estimator \(\langle\Delta F_{i}\rangle\) available until the current step \(i\). In this subsection, we establish the theoretical conditions required for a variance-optimal, unbiased combination of both estimators.
A finite-difference estimator \(\langle\Delta F_{i}\rangle\), by its definition in Equation (3), lets us update any proportional estimator from the previous step (\(\langle F_{i-1}\rangle\)) to the current step \(i\). This update can be done simply by addition without introducing any bias. We can easily show this by expanding the expected value of the sum:
\[\mathbb{E}[\langle F_{i-1}\rangle+\langle\Delta F_{i}\rangle]=\\ \mathbb{E}[\langle F_{i-1}\rangle]+\mathbb{E}[\langle\Delta F_{i }\rangle]=F_{i-1}+F_{i}-F_{i-1}=F_{i}. \tag{6}\]
We define our meta-estimator as a weighted sum of the combination of all previous estimators up until step \(i-1\), i.e., \(\langle F_{i-1}\rangle_{M}\), and the current proportional and finite-difference estimators:
\[\boxed{\langle F_{i}\rangle_{M}=\alpha_{i}\left(\langle F_{i-1}\rangle_{M}+\langle\Delta F_{i}\rangle\right)+(1-\alpha_{i})\langle F_{i}\rangle}\,. \tag{7}\]

We initialise \(\langle F_{0}\rangle_{M}=\langle F_{0}\rangle\). As we sample all \(\langle F_{i}\rangle\) and \(\langle\Delta F_{i}\rangle\) independently, the optimal \(\alpha_{i}\) coefficients are given by inverse-variance weighting (Sinha et al., 2011):

\[\alpha_{i}=\frac{\mathrm{Var}[\langle F_{i}\rangle]}{\mathrm{Var}[\langle F_{i}\rangle]+\mathrm{Var}[\langle F_{i-1}\rangle_{M}]+\mathrm{Var}[\langle\Delta F_{i}\rangle]}. \tag{8}\]
This simple recurrent relation captures the variance optimal combination of all previously sampled proportional and finite-difference estimators.
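A minimal sketch of this recurrence (Equations (7) and (8), with the variance propagated as in Equation (15) of Section 4.3), assuming for illustration that the three variances are known exactly:

```python
def meta_update(F_meta, var_meta, F_prop, var_prop, dF, var_dF):
    """One meta-estimation step; all arguments are per-parameter values."""
    alpha = var_prop / (var_prop + var_meta + var_dF)         # Eq. (8)
    F_new = alpha * (F_meta + dF) + (1.0 - alpha) * F_prop    # Eq. (7)
    var_new = alpha**2 * (var_meta + var_dF) \
        + (1.0 - alpha) ** 2 * var_prop                       # Eq. (15)
    return F_new, var_new
```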
To summarise, we introduce the optimal and unbiased meta-estimator in Equation (7). However, in practice, we use a more efficient implementation that suffers from some start-up bias. We describe this version in the following section. To visualise the estimators mentioned above and to motivate the design of our optimiser, we show a simple example in Figure 2.

Figure 2. We optimise the rate parameter of an exponential distribution such that the mean of the distribution matches our target value of 2.0. We take 32 samples of the distribution in each iteration and compute an L2 loss between their mean and the target value. The bottom row shows insets of the graphs in the top row, indicated by grey regions. On the left, we show how Adam and our method reach the ground truth rate parameter 0.5. Error bars show the run-to-run variation of the optimised parameter. Our method converges significantly faster and is more stable than Adam. On the right, we show the estimators we use for our method; the proportional estimator has a large variance, while the finite-difference estimator is much less noisy. Our meta-estimator combines both, with its variance reducing over time.
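The toy problem is simple to reconstruct. The sketch below is our own reading of the figure; the learning rate, seed and iteration count are assumptions, and plain stochastic gradient descent stands in for the optimisers compared in the figure:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, target, lr = 2.0, 2.0, 0.02             # initial rate, target mean, step size
for _ in range(300):
    u = 1.0 - rng.random(32)                 # uniform samples in (0, 1]
    x = -np.log(u) / lam                     # 32 exponential samples with rate lam
    m = x.mean()                             # Monte Carlo estimate of the mean 1/lam
    grad = 2.0 * (m - target) * (-m / lam)   # L2-loss gradient via dx/dlam = -x/lam
    lam -= lr * grad                         # plain SGD stand-in
print(lam)  # approaches the ground-truth rate 0.5, up to Monte Carlo noise
```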
## 4. Variance estimation
Implementing Equation (7) in practice presents a challenge due to the unknown variances of our estimators. In this section, we describe how we approximate each variance term. We must balance three main objectives: the efficiency of our variance approximation methods, the optimality of our approximated \(\alpha_{i}\) coefficients, and any bias potentially introduced to \(\langle F_{i}\rangle_{M}\).
### Proportional estimator variance
We approximate \(\mathrm{Var}[\langle F_{i}\rangle]\) as a zero-centred raw moment (Papoulis and Pillai, 1984), computed using an _exponential moving average_ (EMA) with coefficient \(\beta_{F}\):
\[\mathrm{Var}[\langle F_{i}\rangle]=\beta_{F}\mathrm{Var}[\langle F_{i-1} \rangle]+(1-\beta_{F})\langle F_{i}\rangle^{2}. \tag{9}\]
This formulation is similar to Adam's second moment estimate (Kingma and Ba, 2014). Here, \(\mathrm{Var}[\langle F_{i}\rangle]\) is a large, stable value that only varies in the initial stage of optimisation, when parameter changes can notably affect the problem's overall noise characteristics. A large \(\beta_{F}\) coefficient minimises the correlation of the approximate variance to any singular \(\langle F_{i}\rangle\), resulting in an overall stable variance approximation.
### Finite-difference estimator variance
As the proportional estimator reaches steady state, we can safely assume that the \(\langle F_{i}\rangle\) are identically distributed over consecutive iterations. Unfortunately, the same observation does not apply to finite-difference estimation, as the finite difference depends on the optimisation step taken in the previous iteration. For example, a larger step will cause a larger shift in the per-parameter gradients.
To resolve this issue, we propose to decouple the optimisation step size (\(||\Delta\pi_{i}||_{2}\)) from the approximated finite-difference estimator variance (\(\mathrm{Var}[\langle\Delta F_{i}\rangle]\)). We begin the derivation of this decoupling
by expanding the fraction in Equation (5) by the Euclidean step size \(||\Delta\pi_{i}||_{2}\) of the previous iteration:
\[\langle\Delta F_{i}\rangle=\frac{f(\mathbf{x},\pi_{i})-f(\mathbf{x},\pi_{i-1})}{ p(\mathbf{x},\pi_{i})}=\frac{f(\mathbf{x},\pi_{i})-f(\mathbf{x},\pi_{i-1})}{|| \Delta\pi_{i}||_{2}p(\mathbf{x},\pi_{i})}\,||\Delta\pi_{i}||_{2}\,. \tag{10}\]
For sufficiently small step sizes, we can rearrange the terms in Equation (10) to approximate the finite difference of gradients \(f\) as the second-order gradient \((\partial f/\partial\pi)\), times a unit-directional vector, times the leftover terms:
\[\langle\Delta F_{i}\rangle\approx\left(\frac{\partial f}{\partial\pi}( \mathbf{x},\pi_{i})\cdot\frac{\Delta\pi_{i}}{||\Delta\pi_{i}||_{2}}\right) \frac{||\Delta\pi_{i}||_{2}}{p(\mathbf{x},\pi_{i})}\,. \tag{11}\]
Applying the variance operator to Equation (11) gives us the decoupled finite-difference variance (\(\mathrm{Var}[\langle\Delta F_{i}\rangle]_{D}\)):
\[\mathrm{Var}\big{[}\langle\Delta F_{i}\rangle\big{]}\approx \mathrm{Var}\left[\left(\frac{\partial f}{\partial\pi}(\mathbf{x},\pi_{i}) \cdot\frac{\Delta\pi_{i}}{||\Delta\pi_{i}||_{2}}\right)\frac{1}{p(\mathbf{x},\pi_{i})}\right]||\Delta\pi||_{2}^{2}\] \[=\mathrm{Var}\big{[}\langle\Delta F_{i}\rangle\big{]}_{D}|| \Delta\pi||_{2}^{2}\,. \tag{12}\]
We use a zero-centred EMA, with a coefficient \(\beta_{\Delta}\), to approximate this decoupled variance as:
\[\mathrm{Var}\big{[}\langle\Delta F_{i}\rangle\big{]}_{D}=\beta_{\Delta}\, \mathrm{Var}\big{[}\langle\Delta F_{i-1}\rangle\big{]}_{D}+(1-\beta_{\Delta}) \frac{\langle\Delta F_{i}\rangle^{2}}{||\Delta\pi||_{2}^{2}}\,. \tag{13}\]
We generally use a small \(\beta_{\Delta}\) coefficient as \(\mathrm{Var}\big{[}\langle\Delta F_{i}\rangle\big{]}_{D}\) can change quickly and is typically less noisy than \(\mathrm{Var}\big{[}\langle F_{i}\rangle\big{]}\). Finally, we can rescale \(\mathrm{Var}\big{[}\langle\Delta F_{i}\rangle\big{]}_{D}\) to estimate \(\mathrm{Var}\big{[}\langle\Delta F_{i}\rangle\big{]}\):
\[\mathrm{Var}\big{[}\langle\Delta F_{i}\rangle\big{]}=\mathrm{Var}\big{[} \langle\Delta F_{i}\rangle\big{]}_{D}\,||\Delta\pi||_{2}^{2}\,. \tag{14}\]
We found this decoupled variance to be more consistently distributed between iterations, making it better suited for approximation via moving averages.
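A compact sketch of these moving-average updates (Equations (9), (13) and (14)); the default \(\beta\) values are illustrative, not values prescribed in the text:

```python
def update_var_prop(var_prop, F_prop, beta_F=0.99):
    """Eq. (9): zero-centred EMA of the proportional estimator variance."""
    return beta_F * var_prop + (1.0 - beta_F) * F_prop**2

def update_var_dF(var_dF_D, dF, step_norm_sq, beta_delta=0.9):
    """Eqs. (13)-(14): decoupled EMA, then rescaling by the squared step."""
    var_dF_D = beta_delta * var_dF_D + (1.0 - beta_delta) * dF**2 / step_norm_sq
    return var_dF_D, var_dF_D * step_norm_sq
```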
### Meta-estimator variance
We approximate the variance of our meta-estimator in Equation (7) by recurrently applying the variance operator:
\[\mathrm{Var}\big{[}\langle F_{i}\rangle_{M}\big{]}=\alpha_{i}^{2} \big{(}\mathrm{Var}\big{[}\langle F_{i-1}\rangle_{M}\big{]}+\mathrm{Var} \big{[}\langle\Delta F_{i}\rangle\big{]}\big{)}\\ +(1-\alpha_{i})^{2}\mathrm{Var}\big{[}\langle F_{i}\rangle\big{]}\,. \tag{15}\]
Here we assume \(\alpha_{i}\) to be a non-random value to simplify our mathematical derivation. Later in Section 6, we show the choice of \(\alpha_{i}\) is less significant as long as its correlation with the gradient samples diminishes.
### Alpha clipping
Meta-estimation is most vulnerable to underestimated \(\mathrm{Var}\big{[}\langle F_{i}\rangle_{M}\big{]}\); unless a significant \(\mathrm{Var}\big{[}\langle\Delta F_{i}\rangle\big{]}\) indicates a shift, \(\langle F_{i}\rangle_{M}\) will only slowly correct its overconfidently estimated value by averaging \(\langle F_{i}\rangle\) over many iterations. The risk of underestimation is the greatest while our exponential moving averages accumulate their initial samples. Clipping alpha based on the iteration resolves this risk:
\[\alpha_{i}=\min(\alpha_{i},1-1/(i+1))\,. \tag{16}\]
Intuitively, Equation (16) constrains alpha by the perfect average of all previous estimates; any value above this must be overestimated. We generalise this observation to the entire optimisation process, assuming that \(\mathrm{Var}\big{[}\langle F_{i}\rangle\big{]}\) is similar in subsequent steps:
\[\alpha_{i}=\min(\alpha_{i},1/(2-\alpha_{i-1}))\,. \tag{17}\]
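As a one-line sketch (our illustration), the generalised clip of Equation (17) reads:

```python
def clip_alpha(alpha, alpha_prev):
    # Eq. (17): assuming similar Var[<F_i>] between steps, alpha may never
    # exceed the weight of a perfect running average; Eq. (16) is the
    # special case used while the moving averages warm up.
    return min(alpha, 1.0 / (2.0 - alpha_prev))
```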
## 5. Optimisation
Combining meta-estimation and optimisation creates a complex feedback loop; the meta-estimated gradients \(\langle F_{i}\rangle_{M}\) depend on their finite-difference estimates \(\langle\Delta F_{i}\rangle\), which depend on the optimiser's steps \(\Delta\pi_{i}\), which, in turn, depend on the gradients estimated in the previous step \(\langle F_{i-1}\rangle_{M}\). It becomes crucial that the meta-estimator provides the optimiser with reliable gradients and that the optimiser makes steps that let the meta-estimator converge. We aim to combine Adam with meta-estimated gradients. We explain Adam's variance approximation and update rule to show where we can integrate meta-estimation.
Adam (Kingma and Ba, 2014) is well known for its robustness to outlier gradient samples; upon encountering an outlier, the estimated second moments adjust in the same step, swiftly pulling down the step size. This mechanism works because Adam first updates its second-moment estimate:
\[v_{i}=\beta_{2}v_{i-1}+(1-\beta_{2})\langle F_{i}\rangle^{2}\, \tag{18}\]
corrects the EMA startup bias:
\[\hat{v_{i}}=v_{i}/(1-\beta_{2}^{i})\, \tag{19}\]
and then divides the step size by its square root:
\[\Delta\pi_{i+1}=-\eta\frac{\hat{m_{i}}}{\sqrt{\hat{v_{i}}+\epsilon}}\, \tag{20}\]
where \(m_{i}\) and \(v_{i}\) refer to Adam's first- and second-moment estimates (\(\hat{m}_{i}\) denoting the bias-corrected first moment, obtained analogously to Equation (19)), \(\eta\) to the learning rate, \(\beta_{2}\) to Adam's second-moment coefficient, and \(\epsilon\) to a small value that ensures numerical stability. \(\alpha_{i}\) and \(\operatorname{Var}\{\langle F_{i}\rangle\}\) behave similarly in our case; first, we update \(\operatorname{Var}\{\langle F_{i}\rangle\}\) for the current step (Equation (9)), compute \(\alpha_{i}\) (Equation (8)), and add the outlier gradient \(\langle F_{i}\rangle\) to our meta-estimator \(\langle F_{i}\rangle_{M}\) weighted by \((1-\alpha_{i})\) (Equation (7)). Therefore, just like \(\beta_{2}\) for Adam, \(\beta_{F}\) offers a tradeoff between outlier robustness and estimation bias.
When using optimisers like RMSProp (Graves, 2014) and Adam (Kingma and Ba, 2014), lower variance gradients naturally accelerate convergence since these optimisers divide their step size by the standard deviation of the gradients (Equation (20)). Additionally, Momentum (Sutskever et al., 2013) helps these optimisers handle tricky non-linear, multivariate curvatures such as ravines. The optimisers' effectiveness is greatly reduced if the noise in the gradients overpowers the variance arising from non-linearities in the estimated moments required for these mechanisms.
Naively feeding the meta-estimated gradients to Adam is problematic; Adam computes its moment estimates, assuming the input gradients in each iteration to be independent. Meanwhile, our meta-estimator outputs an already averaged gradient (Equation (7)) with a strong positive correlation to previous averages. Adam's moment estimates are also redundant since we already estimate the variance of our meta-estimator \(\operatorname{Var}\{\langle F_{i}\rangle_{M}\}\). Therefore, we formulate the update step in terms of our estimates:
\[\Delta\pi_{i+1}=-\eta\frac{\langle F_{i}\rangle_{M}}{\sqrt{\operatorname{ Var}\{\langle F_{i}\rangle_{M}\}}+\epsilon}. \tag{21}\]
Dividing by \(\sqrt{\operatorname{Var}\{\langle F_{i}\rangle_{M}\}}\) sets the step size based on our meta-estimator. As \(\operatorname{Var}\{\langle F_{i}\rangle_{M}\}\) responds to changes in the estimated gradients much more quickly than Adam's second-moment estimate with the suggested \(\beta_{2}=0.999\) parameter, the stability of our method may seem uncertain. We observe that the responsivity of our method actually improves convergence, especially when combined with the decoupled estimation of \(\operatorname{Var}\{\langle\Delta F_{i}\rangle\}\). Optimisation speeds up quickly when low-noise gradients are available and slows down naturally when approaching a minimum.
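A minimal sketch of this update rule (Equation (21)); the learning rate and \(\epsilon\) are placeholder values:

```python
import numpy as np

def step(params, F_meta, var_meta, lr=1e-2, eps=1e-8):
    """Step with the meta-estimated gradient, normalised by its own
    estimated standard deviation; the step is reused in Eq. (14)."""
    delta = -lr * F_meta / (np.sqrt(var_meta) + eps)
    return params + delta, delta
```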
## 6. Experiments
We run several experiments to confirm our method's behaviour and verify its theory. We also compare our method against Adam, as it is used in state-of-the-art inverse rendering pipelines. We implement our method in Mitsuba 3 (Jakob et al., 2022) and use Path Replay Backpropagation (Vicini et al., 2021) to sample gradients computed with the unbiased Mean Relative Squared Error loss (Deng et al., 2022; Pidhorskyi et al., 2022). For texture optimisation tasks, we use gradient preconditioning as proposed by Nicolet et al. (2021). We compute \(\langle\Delta F_{i}\rangle\) with a simplified form of the shift mapping proposed by Kettunen et al. (2015), only accounting for the BRDF sampling. While this implementation is sufficient for our proof-of-concept demonstrations, a full implementation of shift mapping can also account for changes in geometry at an insignificant cost compared to proportional samples. Unless mentioned otherwise, we tune the learning rates of each method in each experiment.
_Variance reduction without lag._ We investigate the variance reduction our method can achieve while the scene parameters are changing. We run a fixed linear interpolation of the parameters without an optimiser to prevent any effects from the feedback of the gradients.
Forward gradients of several pixels in Figure 3 show that our method avoids the lag in gradients typical of EMAs. Both the actual and the estimated variances of our meta-estimator are much tighter than the estimates computed by Adam. Furthermore, our method remains more stable upon encountering outliers.
_Approximation accuracy._ We repeat the previous setup in Figure 4, only now we test an exponentially decaying change in the gradients. Again, our meta-estimator stays within 0.5 to 2 times its predicted standard deviation. As the gradients settle, our method provides a consistent variance reduction (Row 1), averaging a large number of samples wherever possible. Meanwhile, Adam struggles with high-variance gradients (Row 2) and is thrown off by outliers.
We also show the approximated variances compared to ground-truth variances computed over 1000 independent runs. Our approximation methods perform reasonably well, only overestimating \(\operatorname{Var}\{\langle F_{i}\rangle\}\) (Row 3). This overestimation results in generally conservative \(\alpha_{i}\) values, erring on the side of robustness rather than maximising variance reduction (Row 5). On the other hand, we approximate \(\operatorname{Var}\{\langle\Delta F_{i}\rangle\}\) (Row 4) with little bias, although often with a large run-to-run variance.
_Multivariate optimisation._ We simultaneously optimise an object's colour, metalness, and roughness, as shown in Figure 1. Thanks to our meta-estimator, our method can traverse the loss surface without losing past samples. Furthermore, our finite-difference estimates let our meta-estimator adjust rapidly, avoiding the overshoots typical of Momentum-based methods. Even when Adam's hyperparameters are tuned for this specific problem, it can only match our method at over a 20-fold increase in computational cost, not counting the time spent on hyperparameter tuning.
_Texture optimisation._ We show a difficult texture optimisation case in Figure 6. Texture optimisation requires disentangling global illumination with very few gradient samples per texel. At a high sample count, Adam can only take a few steps within a fixed budget, requiring a high learning rate that skips over the intricate loss surface that must be navigated to disentangle the various effects. At a lower sample count, however, Adam struggles to progress: its steps devolve into a random walk as the scale of the gradients shrinks close to minima.
_High-dimensional optimisation._ In Figure 7, we optimise an emissive-absorptive volume of size 256x256x256 voxels, totalling 70 million parameters. Perfectly fitting such a non-physical volume to rendered images is impossible; the optimiser therefore needs to balance per-pixel losses for a good approximation, while also disentangling the small subset of parameters visible through any given pixel. Previous works avoid convergence to local minima by upsampling the optimised volume in several stages; our method does not need this workaround. On the other hand, Figure 8 shows that our method provides less benefit when our finite-difference estimator's sampling is too sparse across the volume.
_Zero-centred EMAs._ We chose to use zero-centred moving averages so that we do not need to approximate the mean of our estimators directly. This approach is generally more robust and memory efficient, though it overestimates variance at large signal-to-noise ratios. However, gradients generally have a low signal-to-noise ratio, so this tradeoff works in our favour. Figure 5 (top-left) demonstrates how the non-zero-centred variance approximation is unstable, producing unreliable alpha values and causing optimisation to diverge.
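To make the tradeoff concrete, the toy sketch below contrasts the two variance approximations on a scalar stream of samples; the decay rate and the bias correction are generic choices, not the constants used in our implementation.

```python
import numpy as np

def zero_centred_ema_var(samples, beta=0.9):
    """Zero-centred approximation: an EMA of squared samples tracks the
    second moment E[x^2] >= Var[x]. It overestimates the variance by the
    squared mean, which is negligible when the signal-to-noise ratio is
    low, as is typical for gradient samples."""
    m2, t = 0.0, 0
    for x in samples:
        t += 1
        m2 = beta * m2 + (1 - beta) * x ** 2
    return m2 / (1 - beta ** t)          # standard EMA bias correction

def non_centred_ema_var(samples, beta=0.9):
    """Non-centred approximation: Var ~ E[x^2] - E[x]^2, with both moments
    tracked by EMAs. Under heavy noise the difference can collapse towards
    zero, yielding the kind of unreliable alphas seen in Figure 5."""
    m1, m2, t = 0.0, 0.0, 0
    for x in samples:
        t += 1
        m1 = beta * m1 + (1 - beta) * x
        m2 = beta * m2 + (1 - beta) * x ** 2
    c = 1 - beta ** t
    return max(m2 / c - (m1 / c) ** 2, 0.0)
```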
Figure 3. We estimate forward gradients of the blue channel of the left wall's colour while linearly changing the scene from the initial state (top left) to the target state over 100 iterations. The dashed line represents the actual gradient, dots the gradient samples, the solid line the estimated gradient, and the shaded area the estimated standard deviation. Error bars every 10 iterations show the run-to-run variation of the estimated gradient. Meta-estimation eliminates lag, improves robustness to outliers, and offers lower variance while more accurately estimating this variance. We select the three pixels w.r.t. the actual gradient variance; blue is the noisiest, orange is at the 75th percentile, and green is the median.
Figure 4. Following the same setup as Figure 3, we show the estimated gradients and standard deviations of our meta-estimator and Adam. Our meta-estimators achieve significantly lower actual variance in each case, while also providing much more accurate approximations. Dashed lines represent actual or optimal values, solid lines a randomly sampled run, while error bars show run-to-run variation. In addition, we show violin plots for alpha, demonstrating how our method is more likely to be conservative and not to be swayed by outliers.
_Alpha clipping._ Alpha clipping helps resolve cases when our approximated variances are inaccurate, improving robustness. It may hinder the variance reduction of our method in the special case when \(\operatorname{Var}[\langle F_{i}\rangle]\) is sharply decreasing over iterations. However, we have not encountered this behaviour with our tested proportional estimators. Figure 5 (bottom-left) shows an ablation without alpha clipping for a scene from Figure 6, demonstrating rapid divergence as the initial variance approximations are unreliable.
_Sample reuse._ We use the same samples for rendering and variance approximation. This correlation introduces some bias at the start of the optimisation process, which diminishes over time. Figure 5 (top-right) shows an unbiased ablation using uncorrelated samples. Although this independently approximated variance eliminates bias, it misses outliers in the samples used for gradient estimation, causing the parameters receiving these outliers to diverge.
## 7. Limitations
Estimators \(\langle\Delta F_{i}\rangle\) are not generally available for many problems. Kettunen et al. (2015) propose shift mapping for path tracing, which we use in our work. Our meta-estimators rely heavily on \(\langle\Delta F_{i}\rangle\); as we recurrently sum \(\operatorname{Var}[\langle\Delta F_{i}\rangle]\) in Equation (15), it inherently bounds the variance of our meta-estimator. Doing so is fine as long as \(\operatorname{Var}[\langle\Delta F_{i}\rangle]\) is quadratic w.r.t. the step size (Equation (14)). Thus, we need to ensure this property when building finite-difference estimators while also aiming for the lowest variance to achieve the best stability and convergence with meta-estimation.
Zeltner et al. (2021) show that gradient estimators benefit from specialised differential sampling strategies. The same is true of finite-difference estimators; our naive toy formulation in Equation (5) glosses over this problem: \(p(\mathbf{x},\pi_{i})\) is usually optimised only to importance sample \(f(\mathbf{x},\pi_{i})\), not the difference between \(f(\mathbf{x},\pi_{i})\) and \(f(\mathbf{x},\pi_{i-1})\).
Suboptimal sampling strategies for \(\langle F_{i}\rangle\) compound the issue. As our work focuses on gradient estimation, meaning \(F_{i}\) are gradients, sampling of \(\langle F_{i}\rangle\) is not yet well established. For example, Zeltner et al. (2021) show the poor performance of roughness gradient estimators. We experience these issues first-hand, as we show in Figure 9.
## 8. Conclusion
Our proposed meta-estimation technique and corresponding adaptation of the Adam update rule can substantially improve convergence when descending on noisy gradients, reducing computation costs by several orders of magnitude. We solve cases where low-sample-count gradients are too noisy for fast convergence while high-sample-count gradients are prohibitively expensive to compute for the required number of iterations on difficult non-linear, multivariate problems.
_Future work._ We look forward to applications of meta-estimation to various inverse Monte Carlo problems, especially as MC gradient estimators become prominent in machine learning (Mohamed et al., 2020). Building good gradient and finite-difference estimators may seem challenging, and it remains the main limitation of our method, but it is undoubtedly a fruitful direction for future work. We did not investigate training deep neural networks in this work but see it as the next step once low-variance finite-difference estimators become available.
###### Acknowledgements.
This work is supported by an academic gift from Meta. We thank the anonymous reviewers for their valuable feedback.
|
2309.12119 | Pseudo-Bayesian unit level modeling for small area estimation under
informative sampling | When mapping subnational health and demographic indicators, direct weighted
estimators of small area means based on household survey data can be unreliable
when data are limited. If survey microdata are available, unit level models can
relate individual survey responses to unit level auxiliary covariates and
explicitly account for spatial dependence and between area variation using
random effects. These models can produce estimators with improved precision,
but often neglect to account for the design of the surveys used to collect
data. Pseudo-Bayesian approaches incorporate sampling weights to address
informative sampling when using such models to conduct population inference but
credible sets based on the resulting pseudo-posterior distributions can be
poorly calibrated without adjustment. We outline a pseudo-Bayesian strategy for
small area estimation that addresses informative sampling and incorporates a
post-processing rescaling step that produces credible sets with close to
nominal empirical frequentist coverage rates. We compare our approach with
existing design-based and model-based estimators using real and simulated data. | Peter A. Gao, Jon Wakefield | 2023-09-21T14:39:20Z | http://arxiv.org/abs/2309.12119v1 | # Pseudo-Bayesian unit level modeling for small area estimation under informative sampling
###### Abstract
When mapping subnational health and demographic indicators, direct weighted estimators of small area means based on household survey data can be unreliable when data are limited. If survey microdata are available, unit level models can relate individual survey responses to unit level auxiliary covariates and explicitly account for spatial dependence and between area variation using random effects. These models can produce estimators with improved precision, but often neglect to account for the design of the surveys used to collect data. Pseudo-Bayesian approaches incorporate sampling weights to address informative sampling when using such models to conduct population inference but credible sets based on the resulting pseudo-posterior distributions can be poorly calibrated without adjustment. We outline a pseudo-Bayesian strategy for small area estimation that addresses informative sampling and incorporates a post-processing rescaling step that produces credible sets with close to nominal empirical frequentist coverage rates. We compare our approach with existing design-based and model-based estimators using real and simulated data.
Introduction
Producing estimates of health and demographic indicators such as child mortality rates at subnational resolutions is valuable for assessing inequality between regions. The problem of reliably estimating subpopulation quantities based on survey data is commonly called small area estimation. Pfeffermann [1], Rao and Molina [2], and Ghosh [3] provide recent reviews of research in small area estimation. Small area estimation methods have been used for subnational mapping of a variety of outcomes, including indicators of poverty [4, 5, 6], health outcomes [7, 8], and crop yield [9].
When survey data are limited, direct weighted estimators of small area means such as the Horvitz-Thompson [10] or Hajek [11] estimators can be imprecise or unreliable. Statistical models can improve estimates by incorporating auxiliary covariate information, explicitly accounting for between area variability using random effects, and leveraging spatial dependence to smooth across nearby areas. When survey microdata are available, individual survey responses can be modeled via unit level models. These models are used to motivate estimators of either finite population means and totals or of superpopulation quantities of interest.
Unit level models, especially those incorporating spatial random effects, are commonly used for mapping subnational health and demographic indicators in low- and middle-income countries (LMIC). These models often account for spatial variation using spatially continuous Gaussian processes, allowing estimates to be generated at arbitrary resolutions. Such models have been used to map a variety of outcomes including child mortality [12], vaccination rates [7] and disease prevalence [13].
Ideally, estimates based on statistical models will be robust to model misspecification. When using unit level models with complex survey data, we consider two types of model misspecification. First, the analyst-chosen model for population responses may be misspecified. Second, model-based approaches to small area estimation often assume that the sampling design is ignorable, meaning that the distribution of sampled responses will be identical to that of non-sampled responses, which may not be the case if the survey design features unequal sampling probabilities or clustering. When the sampling design is not ignorable, it is crucial to account for potential differences between sampled and non-sampled units. One approach is to include all variables used to specify the sampling design as predictors in a model, so that a particular unit's response will
be independent of whether it is sampled after conditioning on design variables. If we can specify such a model, we can say that sampling is uninformative with respect to the model. For example, if a particular survey design involves sampling clusters with probability proportional to size, cluster size may be included in the model.
However, we may only observe a subset of relevant design variables or the functional form of the relationship between the design variables and responses may be unknown. In this paper, we describe sampling as being informative with respect to a model if the model does not apply to both the sampled and non-sampled units. We aim to address informative sampling by leveraging design information such as sampling weights. The sampling weight for an individual unit is defined as the inverse of that unit's probability of inclusion in the sample. Sampling weights are commonly used to compute direct weighted estimators such as the Horvitz-Thompson [10] or Hajek [11] estimators. Area level models commonly used in small area estimation like the Fay-Herriot model [14] account for the survey design by approximating the sampling distributions of these direct weighted estimators. When estimating unit level models, addressing informative sampling with sampling weights is less straightforward.
Rao and Molina [2] and Parker et al. [15] review proposed approaches for incorporating sampling weights when fitting unit level models. These approaches account for some design features such as unequal sampling probabilities, but may not explicitly address informative sampling and must be extended for use with non-Gaussian response variables.
More generically, for inference using parametric models with complex survey data, pseudo-likelihood methods [16] incorporate sampling weights and can achieve design-consistent estimation of model parameters under certain asymptotic assumptions. Analogously, pseudo-likelihoods can be used to conduct approximate Bayesian inference using pseudo-posterior distributions [17]. These pseudo-likelihood methods have been extended to mixed effects models [18, 19, 20], but frequentist maximum pseudo-likelihood estimators can be sensitive to weight scaling [21, 22]. Moreover, credible sets based on pseudo-posterior distributions do not generally achieve valid frequentist coverage rates, even asymptotically [23, 24, 25]. This body of research has generally focused on estimation of fixed effects, treating random effects as nuisance parameters. In the context of small area estimation, prediction of random effects at the small area level is principally important. Although previous research has applied pseudo-Bayesian approaches for small area estimation [26], the issues of weight
scaling and miscalibrated interval estimates have not been explored extensively in the context of small area estimation.
In this article, we outline a strategy for conducting pseudo-Bayesian inference using unit level models. As pseudo-Bayesian credible sets for model parameters may not converge on valid frequentist confidence sets due to dependence between units and informative sampling, we adapt a post-processing method proposed by Williams and Savitsky [24] to rescale our credible sets for small area means. In simulations that we report, the rescaled interval estimates achieve close to nominal empirical coverage rates. We apply our strategy for estimating small area means of both continuous and binary response variables.
The rest of this article is organized as follows. In Section 2, we outline our notation and describe the combined model- and design-based inferential framework we use to assess our estimators. Section 3 reviews standard estimation approaches for unit level models using sampling weights and Section 4 details our pseudo-Bayesian approach for generating point and interval estimates of small area means. In Section 5, we evaluate the performance of our approach in simulation, and in Section 6, we apply our method to estimate vaccination rates using data from the Demographic and Health Surveys. Finally, in Section 7, we discuss our method and outline directions for future research.
## 2 Background and inferential framework
### Notation
Let \(U=\{1,\ldots,N\}\) index a finite population of size \(N\). For all \(j\in U\), we let \(y_{j}\) denote the response value of interest for unit \(j\) and \(\mathbf{z}_{j}\) denote a vector of auxiliary variables. We assume \(U\) can be partitioned into \(m\) disjoint administrative areas, \(U=U(1)\cup\cdots\cup U(m)\), where \(U(i)\) denotes the \(N(i)\) indices corresponding to units in area \(i\). Let \(S=\{j_{1},\ldots,j_{n}\}\subset U\) denote a random set of \(n\) sampled indices, where \(S=S_{1}\cup\cdots\cup S_{m}\) is the corresponding partition by administrative area.
We assume a probability sampling scheme where for all \(j\in U\), \(\pi_{j}\) denotes the probability that \(j\in S\), also called the inclusion probability of unit \(j\), which may depend on \(\mathbf{z}_{j}\). For all \(j\in U\), we define \(\delta_{j}\) to be the inclusion indicator for unit \(j\). In other words, \(\delta_{j}=1\) if \(j\in S\) and \(\delta_{j}=0\) otherwise. We let \(w_{j}=1/\pi_{j}\) denote the sampling weight for unit \(j\).
Following Rao and Molina [2], we let \(y_{ij}=y_{j}\) if \(j\in U(i)\) and \(y_{ij}=0\)
otherwise. We define \(\delta_{ij}\), \(w_{ij}\), and \(\mathbf{z}_{ij}\) analogously. We define \(\mathbf{Z}\) to be the matrix of auxiliary variables. The finite population small area means \(\overline{\mathbf{Y}}=\{\overline{Y}_{1},\ldots,\overline{Y}_{m}\}\) can be defined such that for each \(i\),
\[\overline{Y}_{i}=\frac{1}{N(i)}\sum_{j\in U(i)}y_{ij}.\]
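With this notation in place, the direct weighted (Hajek) estimator of each small area mean, used as a baseline throughout, takes only a few lines; the sketch below assumes flat numpy arrays holding the sampled responses, their weights, and their area labels.

```python
import numpy as np

def hajek_small_area_means(y, w, area, m):
    """Hajek estimator: the weighted mean of sampled responses in each area.

    y, w, area are aligned arrays over the sampled units; area holds labels
    in {0, ..., m-1} and w[j] = 1 / pi_j. Areas with no sampled units get
    NaN."""
    est = np.full(m, np.nan)
    for i in range(m):
        mask = area == i
        if mask.any():
            est[i] = np.sum(w[mask] * y[mask]) / np.sum(w[mask])
    return est
```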
### Unit level modeling for small area estimation
Unit level modeling approaches to small area estimation relate individual survey responses \(y_{ij}\) to unit-specific auxiliary information and borrow strength from similar or nearby areas when estimating a small area quantity. For continuous responses, Battese, Harter, and Fuller [27] introduced the nested error regression model (also called the basic unit level model by Rao and Molina [2]):
\[y_{ij}=\beta_{0}+\mathbf{x}_{ij}^{T}\boldsymbol{\beta}_{1}+u_{i}+\varepsilon_{ij} \tag{1}\]
where \(\beta_{0}\) denotes an intercept term, \(\mathbf{x}_{ij}\) denotes observed covariate values, and \(\boldsymbol{\beta}_{1}=(\beta_{1},\ldots,\beta_{p})\) denotes the corresponding coefficients. We assume that \(\mathbf{x}_{ij}\) corresponds to a subset of the variables included in \(\mathbf{z}_{ij}\), allowing for the possibility that not all relevant variables used to design the survey are observed. The terms \(u_{i}\stackrel{iid}{\sim}N(0,\sigma_{u}^{2})\) and \(\varepsilon_{ij}\stackrel{iid}{\sim}N(0,\sigma_{\varepsilon}^{2})\) represent random area level effects and independent unit level errors, respectively. For binary or count responses, analogous models with appropriate link functions can be used for \(y_{ij}\).
Under this model, \(\overline{Y}_{i}=\beta_{0}+\overline{\mathbf{x}}_{i}^{T}\boldsymbol{\beta}_{1 }+u_{i}+\overline{\varepsilon}_{i}\) where \(\overline{\mathbf{x}}_{i}\) and \(\overline{\varepsilon}_{i}\) denote the area means of \(\mathbf{x}_{ij}\) and \(\varepsilon_{ij}\), respectively. From a model-based perspective, if we view \(\varepsilon_{ij}\) as representing noise or measurement error added to the true quantity of interest for individual \(j\), then a more appropriate target estimand is
\[\mu_{i}=E(\overline{Y}_{i}\mid\overline{\mathbf{x}}_{i},u_{i})=\beta_{0}+ \overline{\mathbf{x}}_{i}^{T}\boldsymbol{\beta}_{1}+u_{i},\]
assuming we know \(\overline{\mathbf{x}}_{i}\) for each area. Another justification for using \(\mu_{i}\) instead of \(\overline{Y}_{i}\) as the target estimand is that even if we view \(y_{ij}\) as being measured without error, by the law of large numbers, \(\overline{\varepsilon}_{i}\) converges in probability to \(E(\varepsilon_{ij})=0\) as \(N(i)\rightarrow\infty\). As such, it is standard in the small area estimation literature to focus on estimation of \(\mu_{i}\) instead of \(\overline{Y}_{i}\) for the basic unit level model.
### Interpretation of the basic unit level model
Practically, treating the area specific intercepts \(u_{i}\) as Gaussian random effects explicitly models variability between areas and shrinks small area mean estimates towards \(\beta_{0}+\overline{\mathbf{x}}_{i}^{T}\boldsymbol{\beta}_{1}\). Traditionally, the observed values of a random effect such as \(u_{i}\) are viewed as draws from some population, but only population characteristics (averaged over \(u_{i}\)), and not the draws themselves, are of interest. Hodges [28] calls random effects interpreted in this way "old-style" to distinguish them from "new-style" random effects, which represent the entire population of interest or represent draws from some distribution from which additional draws cannot be obtained. Under the model (1), \(u_{i}\) represent area-specific deviations from the global mean. For small area estimation, we are interested in estimates for a fixed set of \(m\) areas, so we interpret the random effects as "new-style" effects. From this perspective, incorporating area-specific random effects produces a flexible model constrained by the Gaussian assumption on \(u_{i}\), preventing overfitting when data are limited.
When specifying a population level model, however, it may be undesirable to use a model of the form (1) including random effects. We can rewrite the nested error regression model as follows:
\[y_{ij}=\beta_{0i}+\mathbf{x}_{ij}^{T}\boldsymbol{\beta}_{1}+\varepsilon_{ij} \tag{2}\]
where area specific intercepts \(\beta_{0i}=\beta_{0}+u_{i}\) are independent \(N(\beta_{0},\sigma_{u}^{2})\) variables and now \(\mu_{i}=\beta_{0i}+\overline{\mathbf{x}}_{i}^{T}\boldsymbol{\beta}_{1}\). Instead of treating the \(\beta_{0i}\) as draws from a Gaussian distribution, we could view them as fixed area specific intercepts from a frequentist perspective, which would make \(\mu_{i}\) fixed across populations after conditioning on auxiliary variables \(\mathbf{X}\). Given sufficient data in each area, it could be sensible to use a model with only fixed effects to avoid shrinkage. From this perspective, \(\beta_{0i}\) account for stable population level differences between areas that are not explained by differences in the available predictors \(\mathbf{x}_{ij}\). From a Bayesian hierarchical modeling perspective, the difference between using "fixed" effects versus random effects is less salient: the \(\beta_{0i}\) parameters are always treated as random and the shift only involves a change in the prior on \(\beta_{0i}\). Under the random effects model, \(\sigma_{u}^{2}\) is also a random variable while under the fixed effects model, \(\sigma_{u}^{2}\) would be fixed and typically large.
### Joint model-and-design based inference
When conducting inference for model parameters such as \(\mu_{i}\), it is common to assume that sampling is uninformative with respect to the model. In other words, one model is assumed to hold for both the sample and population. In practice, this assumption is difficult to verify, as the model must accurately describe the functional form of the relationship between \(\mathbf{x}_{ij}\) and \(y_{ij}\).
If the model is misspecified for the sample data, then model-based estimators need to be adjusted. Pfeffermann and Sverchkov directly address this by modeling the sample inclusion mechanism [29]. Another approach is to use the population level model to define "census" model-based estimators for model parameters that could be computed given complete population data. Traditional design-based sample estimators that utilize sampling weights can subsequently approximate these census estimators. This inferential approach accounts for model-based variability in the census estimators and design-based variability in the sample estimators.
We study point and interval estimators of \(\mu_{i}\) under this combined model- and design-based framework, as developed by Rubin-Bleuer and Kratina [30] and also examined by Savitsky and Toth [17] and Han and Wellner [23]. We extend our notation to consider a sequence of sampling designs and populations indexed by \(\nu\). Let \(U_{\nu}=\{1,\ldots,N_{\nu}\}\) index a finite population of size \(N_{\nu}\), where \(N_{\nu}\) increases in \(\nu\). Let \(\mathcal{S}_{\nu}\) be the collection of all possible subsets of \(U_{\nu}\).
Let \((\mathcal{Y},\mathcal{B}_{\mathcal{Y}})\) and \((\mathcal{Z},\mathcal{B}_{\mathcal{Z}})\) be measurable spaces for the response and auxiliary variables. Assume \(\{(Y_{j},\mathbf{Z}_{j})\in\mathcal{Y}\times\mathcal{Z}\}_{j=1}^{N_{\nu}}\) are independent and identically distributed random vectors drawn from a superpopulation model on the probability space \((\Omega,\mathcal{F},P_{(Y,Z)})\equiv(\mathcal{Y}\times\mathcal{Z},\mathcal{B }_{\mathcal{Y}}\times\mathcal{B}_{\mathcal{Z}},P_{(Y,Z)})\) where \(P_{(Y,Z)}\) denotes the superpopulation measure. We use \(P_{0}\) to denote the marginal superpopulation distribution of \(Y\).
For cluster and multistage designs, we may wish to consider more complicated dependence structures for the superpopulation model. As an example, Rubin-Bleuer and Kratina outline a two-stage super-population model under which the population is partitioned into primary sampling units (PSU), within which responses and auxiliary variables may be dependent. The above notation may be adapted to reflect this dependence structure where \(\{(Y_{j},\mathbf{Z}_{j})\}_{j=1}^{N_{\nu}}\) are organized into groups of final-stage sampling units that are independent from one another. In order to simplify the exposition, we continue the discussion treating \(\{(Y_{j},\mathbf{Z}_{j})\}_{j=1}^{N_{\nu}}\) as independent.
Conditionally on \(\mathbf{Z}^{(\nu)}=(\mathbf{Z}_{1},\ldots,\mathbf{Z}_{N_{\nu}})\), we can define a sampling design \(P_{\nu}\), which we view as a probability distribution over the space of possible samples \(S\in\mathcal{S}_{\nu}\). We let \(D^{(\nu)}=\{\mathbf{Y}^{(\nu)},\mathbf{Z}^{(\nu)},\delta^{(\nu)},\pi^{(\nu)}\}\) denote the data for the \(\nu\)th finite population where \(\delta^{(\nu)}\) denotes the vector of sample inclusion indicators and \(\pi^{(\nu)}\) denotes the vector of sample inclusion probabilities. As outlined by Rubin-Bleuer and Kratina, we can construct a product measurable space \((\mathcal{S}_{\nu}\times\Omega,\sigma(\mathcal{S}_{\nu})\times\mathcal{F},\mathbb{P})\) where \(\sigma(\mathcal{S}_{\nu})\) is the \(\sigma\)-algebra generated by \(\mathcal{S}_{\nu}\) and \(\mathbb{P}\) is a combined model- and design-based probability measure for \(D^{(\nu)}\). We use \(P_{0,\nu}\) to denote the marginal distribution of the observed \(Y\), accounting for both model- and design-based randomness.
We seek to understand the asymptotic behavior of our estimators as \(\nu\to\infty\) under the combined probability measure \(\mathbb{P}\). Similarly, when evaluating estimators, we will generally consider average error metrics taking expectations with respect to \(\mathbb{P}\). Under informative sampling, we consider a combined model- and design-based mean squared error for evaluating point estimators:
\[\text{MSE}(\widehat{\mu}_{i})=\mathbb{E}_{\mathbb{P}}[(\widehat{\mu}_{i}-\mu_ {i})^{2}]\]
We are also interested in identifying interval estimates \((\widehat{\mu}_{i}^{-},\widehat{\mu}_{i}^{+})\) such that
\[\mathbb{P}\left(\mu_{i}\in(\widehat{\mu}_{i}^{-},\widehat{\mu}_{i}^{+})\right) =1-\alpha\]
for some pre-specified level \(\alpha\), where \(\mathbb{P}\) indicates the joint probability measure.
Note that \(P_{0}\), the superpopulation law for \(Y\), does not need to belong to the model specified by the data analyst. We primarily consider estimators based on nested error regression models of the form specified in Equation (1). In the Appendix, we discuss the impact of model misspecification, finding that if the estimation model is chosen carefully, model-based estimators can provide reasonable estimates of small area means.
## 3 Standard estimation approaches
In this section, we review parameter estimation approaches for the nested error regression model, beginning with approaches which assume ignorability of the sampling design. We proceed to review pseudo-likelihood and pseudo-Bayesian approaches that incorporate sampling weights.
### Parameter estimation assuming ignorability
First, we consider the model (2) treating \(\beta_{0i}\) as random, under the assumption that sampling is uninformative with respect to the model. As detailed by Rao and Molina [2], the frequentist approach proceeds by estimating variance components \(\{\widehat{\sigma}_{u}^{2},\widehat{\sigma}_{\varepsilon}^{2}\}\) via restricted maximum likelihood or a method of moments. Based on these estimates, the empirical best linear unbiased predictor (EBLUP) \(\widehat{\mu}_{i}^{EBLUP}\) can be computed for all \(i\). Either linearization-based approximation or resampling methods can be used to estimate the MSE of \(\widehat{\mu}_{i}^{EBLUP}\). Prediction intervals can be constructed around the EBLUP based on the asymptotic distribution of \(\widehat{\mu}_{i}^{EBLUP}-\mu_{i}\).
The model (1) can be reframed as a Bayesian hierarchical model by placing a known prior distribution \(\Pi\) (which may depend on hyperparameters \(\tau\)) on the parameters \(\theta\):
\[y_{ij}\mid\beta_{0},\boldsymbol{\beta}_{1},u_{i},\sigma_{ \varepsilon}^{2} \sim N(\beta_{0}+\mathbf{x}_{ij}^{T}\boldsymbol{\beta}_{1}+u_{i},\sigma_{\varepsilon}^{2}) \tag{3}\] \[u_{i}\mid\sigma_{u}^{2} \sim N(\mathbf{0},\sigma_{u}^{2})\] \[\theta=(\beta_{0},\boldsymbol{\beta}_{1},\sigma_{u}^{2},\sigma_{ \varepsilon}^{2}) \sim\Pi(\theta).\]
Under this model, our targets are the posterior distributions \(p(\mu_{i}\mid\mathbf{Y}^{(\nu)},\mathbf{X}^{(\nu)},\boldsymbol{\delta}^{(\nu)})\). These posterior distributions are not available in closed form, so sample-based approaches are popular. Using samples from this posterior, where we denote the \(k\)th sample using \(\widehat{\mu}_{i}^{B(k)}\), we can compute posterior summary statistics and credible intervals for \(\mu_{i}\).
### Parameter estimation under informative sampling
Unless sampling probabilities are constant within areas, the EBLUP for the model (2) is not design-consistent [2]. To address this, You and Rao [31] propose a pseudo-EBLUP method for unequal probability sampling designs that incorporates sampling weights when estimating regression coefficients \(\boldsymbol{\beta}_{1}\). This approach is not intended to address general informative sampling, though it can do so in many cases. As for the standard EBLUP, the variance components \(\sigma_{u}^{2}\) and \(\sigma_{\varepsilon}^{2}\) can be estimated via restricted maximum likelihood. Subsequently the fixed effects parameters are obtained by solving weighted estimating equations.
The resulting parameter estimates are used to predict \(u_{i}\) and compute a pseudo-EBLUP \(\widehat{\mu}_{i}^{psEBLUP}\). The MSE of \(\widehat{\mu}_{i}^{psEBLUP}\) can be estimated via linearization-based approximations [31] or resampling [32].
More generically, given a parametric model, pseudo-likelihood methods incorporate survey weights to construct a sample weighted log-likelihood that approximates the full population log-likelihood [16, 33]. Instead of attempting to incorporate all relevant design features when specifying a model likelihood \(p_{\theta}\), pseudo-likelihood methods propose a particular superpopulation model of interest that could be fit given full population data. The pseudo-likelihood is subsequently used to approximate complete population inference for superpopulation parameters using sampling weights. If the weights contain information about informative sampling that cannot otherwise be easily incorporated into a regression model or prediction algorithm, then these approaches may yield estimates with reduced bias.
If the population were fully observed, the census log-likelihood would take the form:
\[\ell_{\theta}(\mathbf{Y})=\sum_{j=1}^{N_{\nu}}\log p_{\theta}(y_{j}) \tag{4}\]
where \(p_{\theta}\) denotes the likelihood of \(y_{j}\) given parameters \(\theta\). The census log-likelihood may be approximated via a sample weighted pseudo-log-likelihood [16]:
\[\ell_{\theta}^{\pi}(\mathbf{Y})=\sum_{j=1}^{N_{\nu}}\frac{\delta_{\nu j}}{\pi _{\nu j}}\log p_{\theta}(y_{j}) \tag{5}\]
where \(\delta_{\nu j}\) and \(\pi_{\nu j}\) denote the inclusion indicator and inclusion probability for unit \(j\) for the \(\nu\)-th population. The census log-likelihood and pseudo-log-likelihood may be used to derive census estimating equations and analogously, sample weighted estimating equations. The weighted estimating equations can be used to derive maximum pseudo-likelihood estimates of \(\theta\). More generally, similar estimating equations can be used to estimate any finite population parameter of interest that can be specified as a solution to a system of census estimating equations, even without some motivating superpopulation model.
The pseudo-log-likelihood implies a pseudo-likelihood of the form
\[\prod_{j=1}^{N_{\nu}}p_{\theta}(y_{j})^{w_{\nu j}}=\prod_{j=1}^{N_{\nu}}p_{ \theta}(y_{j})^{\delta_{\nu j}/\pi_{\nu j}}. \tag{6}\]
The pseudo-likelihood is not a true likelihood due to the introduction of the weights, but by treating it as such, pseudo-Bayesian inference can be conducted for \(\theta\), as introduced by Savitsky and Toth [17].
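As an illustration, the sketch below evaluates the pseudo-log-likelihood (5) for a simple Gaussian working model and maximizes it numerically to obtain the pseudo-MLE; the working model and the unconstrained log-scale parameterization are choices made here purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def pseudo_log_likelihood(theta, y, x, w):
    """Sample-weighted pseudo-log-likelihood, Equation (5), for the working
    model y_j ~ N(beta0 + beta1 * x_j, sigma^2). The arrays hold sampled
    units only, so w_j = 1 / pi_j plays the role of delta_j / pi_j."""
    beta0, beta1, log_sigma = theta
    return np.sum(w * norm.logpdf(y, beta0 + beta1 * x, np.exp(log_sigma)))

def pseudo_mle(y, x, w):
    """Maximize the pseudo-log-likelihood to obtain the pseudo-MLE."""
    fit = minimize(lambda th: -pseudo_log_likelihood(th, y, x, w),
                   x0=np.zeros(3), method='BFGS')
    return fit.x
```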
If the entire population were observed, the population posterior for \(\theta\) could be defined as follows, for all measurable subsets \(B\subset\Theta\):
\[\Pi_{\nu}(B\mid\mathbf{Y}^{(\nu)})=\frac{\int_{B}\prod\limits_{j=1}^{N_{\nu}}p_{ \theta}(y_{j})\Pi(\theta)d\theta}{\int\prod\limits_{j=1}^{N_{\nu}}p_{\theta}(y _{j})\Pi(\theta)d\theta}=\frac{\int_{B}\exp(N_{\nu}\mathbb{P}_{\nu}\log p_{ \theta})\Pi(\theta)d\theta}{\int\exp(N_{\nu}\mathbb{P}_{\nu}\log p_{\theta}) \Pi(\theta)d\theta} \tag{7}\]
where \(\Pi(\theta)\) denotes a prior on the hyperparameters \(\theta\) and \(\mathbb{P}_{\nu}\) denotes the empirical measure based on the \(\nu\)-th population:
\[\mathbb{P}_{\nu}(t)=\frac{1}{N_{\nu}}\sum\limits_{j=1}^{N_{\nu}}t(Y_{j}) \tag{8}\]
where \(t\) denotes a measurable real-valued function. When only a sample of size \(n_{\nu}\) is observed, the population posterior distribution can be approximated by a pseudo-posterior distribution replacing the population likelihood with the pseudo-likelihood:
\[\Pi_{\nu}^{\pi}(B\mid D^{(\nu)}) =\frac{\int_{B}\prod\limits_{j=1}^{N_{\nu}}p_{\theta}(y_{j})^{ \delta_{\nu j}/\pi_{\nu j}}\Pi(\theta)d\theta}{\int\prod\limits_{j=1}^{N_{\nu} }p_{\theta}(y_{j})^{\delta_{\nu j}/\pi_{\nu j}}\Pi(\theta)d\theta} \tag{9}\] \[=\frac{\int_{B}\exp(N_{\nu}\mathbb{P}_{\nu}^{\pi}\log p_{\theta}) \Pi(\theta)d\theta}{\int\exp(N_{\nu}\mathbb{P}_{\nu}^{\pi}\log p_{\theta})\Pi (\theta)d\theta} \tag{10}\]
where \(\mathbb{P}_{\nu}^{\pi}\) is the sample weighted empirical measure for measurable \(t\):
\[\mathbb{P}_{\nu}^{\pi}(t)=\frac{1}{N_{\nu}}\sum\limits_{j=1}^{N_{\nu}}\frac{ \delta_{\nu j}}{\pi_{\nu j}}t(Y_{j}). \tag{11}\]
Note that \(\Pi_{\nu}^{\pi}\) is not a standard posterior distribution due to the introduction of sampling weights, but is scaled to integrate to one. In this sense, inference based on the pseudo-posterior distribution can be viewed as approximating inference based on the population posterior. As with the unweighted posterior, we can draw samples from the pseudo-posterior for \(\theta\) and accordingly obtain estimates of \(\mu_{i}\).
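Our computations in Section 5 use INLA, but any generic sampler can target the pseudo-posterior by combining the weighted log-likelihood with the log-prior; a minimal random-walk Metropolis sketch, with an arbitrary fixed step size:

```python
import numpy as np

def pseudo_posterior_samples(log_pseudo_lik, log_prior, theta0,
                             n_iter=20000, step=0.05, rng=None):
    """Random-walk Metropolis targeting the pseudo-posterior of Equation (9):
    the (unnormalized) density exp(weighted log-likelihood) * prior."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    lp = log_pseudo_lik(theta) + log_prior(theta)
    out = np.empty((n_iter, theta.size))
    for k in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_pseudo_lik(prop) + log_prior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept step
            theta, lp = prop, lp_prop
        out[k] = theta
    return out
```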
Intuitively, pseudo-posterior credible sets for a superpopulation parameter can be viewed as approximations of the corresponding population posterior credible sets, which would be based on the full population of size \(N_{\nu}\). In general, credible sets based on pseudo-posterior samples will be poorly calibrated without adjustment. Leon-Novelo and Savitsky [25] observe undercoverage of the credible sets based on pseudo-posterior samples. Parker et al. [15] provide an example of pseudo-Bayesian inference applied for small area estimation, but do not explicitly discuss the issue of undercoverage. Various solutions have been proposed for this problem, including rescaling of weights and post-processing of pseudo-posterior samples.
For pseudo-likelihood based approaches, sampling weights are often rescaled so that the weights \(w_{ij}\) sum to the sample size or to the "effective" sample size, defined as the sample size for a simple random sample achieving the same variance for an estimator as with the existing design [34]. Rescaling methods have been discussed for a frequentist multilevel model [18] and in the pseudo-Bayesian setting [17].
## 4 Proposed approach
In this section, we describe a general pseudo-Bayesian approach for small area estimation using unit level models. We apply the post-processing rescaling method described by Williams and Savitsky [24] to correct the coverage of credible sets for \(\boldsymbol{\mu}\) based on pseudo-posterior samples. Although pseudo-Bayesian approaches have previously been adopted for small area estimation, they have generally been applied on an ad hoc basis and do not explicitly address miscalibration of the pseudo-posterior credible sets. We assume a sequence of sampling designs such that as \(\nu\to\infty\), \(n(i)\to\infty\) for all areas \(i\). Under this asymptotic framework, as \(\nu\to\infty\), direct estimators of small area means become more reliable.
Han and Wellner [23] and Williams and Savitsky [24] establish results on the asymptotic behavior of the pseudo-posterior distribution and the pseudo-maximum likelihood estimator (pseudo-MLE) under certain regularity conditions. In particular, they establish Bernstein-von Mises type results for the pseudo-posterior distribution and derive the asymptotic sampling distribution of the pseudo-MLE. Both of these distributions are asymptotically normal and concentrate on \(\theta^{*}\), the parameter vector minimizing the Kullback-Leibler divergence \(\theta\mapsto P_{0}\log(p_{0}/p_{\theta})\). However, their asymptotic covariances do not agree, so credible intervals based on pseudo-posterior distributions will not generally converge on valid frequentist confidence intervals. Note that neither Han and
Wellner nor Williams and Savitsky explicitly addresses misspecification of the superpopulation model but both rely upon results of Kleijn and Van der Vaart [35], which establishes a Bernstein-von Mises result for misspecified Bayesian models that illustrates the posterior's concentration on \(\theta^{*}\). In the Appendix, we outline how the results of Han and Wellner, Williams and Savitsky, and Kleijn and Van der Vaart can be adapted for this small area estimation context.
### Computing a pseudo-posterior
We describe our approach to pseudo-Bayesian inference for the hierarchical model (3) before applying our strategy to other models. Under the hierarchical model, our parameters of interest are \(\theta=(\beta_{0},\mathbf{\beta}_{1},\sigma_{u}^{2},\sigma_{\varepsilon}^{2})\) and \(\mathbf{u}=(u_{1},\ldots,u_{m})\), so our goal is to approximate the joint population posterior density:
\[p_{\theta}(\mathbf{u}\mid D^{(\nu)})\propto\left[\prod_{i=1}^{m}\left(\prod_{j\in U(i)}p_{\theta}(y_{ij}\mid u_{i})\right)p_{\theta}(u_{i})\right]\Pi(\theta) \tag{12}\]
where \(p_{\theta}(y_{ij}\mid u_{i})\) denotes the density for response \(y_{ij}\) given area effect \(u_{i}\) and parameter vector \(\theta\), \(p_{\theta}(u_{i})\) denotes the density for the area effect, and \(\Pi(\theta)\) denotes the prior. To approximate this population posterior, we use the following sampling-weighted pseudo-posterior density:
\[p_{\theta}^{\pi}(\mathbf{u}\mid D^{(\nu)})\propto\left[\prod_{i=1}^{m}\left(\prod_{j\in U(i)}p_{\theta}(y_{ij}\mid u_{i})^{\delta_{\nu j}/\pi_{\nu j}}\right)p_{\theta}(u_{i})\right]\Pi(\theta) \tag{13}\]
We can approximate this density via sampling algorithms or numerical approximation and use this pseudo-posterior to conduct inference for \(\mathbf{u}\), \(\theta\), and subsequently \(\mu_{i}\).
### Post-processing adjustment
Generalized posteriors produced by replacing a standard likelihood in a Bayesian analysis with a pseudo-likelihood are not expected to quantify parameter uncertainty accurately as pseudo-likelihoods are not generally true likelihoods [36, 37]. In a complex survey sampling context, both Williams and Savitsky [24] and Han and Wellner [23] consider pseudo-Bayesian inference, noting that "vanilla" credible sets for model parameters based on a pseudo-posterior distribution do not generally converge on valid frequentist confidence intervals. The need to rescale
pseudo-posterior distributions for pairwise likelihood analysis with survey data is also discussed by Thompson et al. [38].
Given a correctly specified parametric superpopulation model, Williams and Savitsky [24] observe that under certain assumptions on the sampling design and the model likelihood, the pseudo-MLE, denoted \(\hat{\theta}_{\nu}^{\pi}\) and defined as the estimator obtained by maximizing the frequentist pseudo-likelihood, is asymptotically Gaussian. More generally, the pseudo-MLE converges to the population MLE, so in the case of model misspecification, the pseudo-MLE converges asymptotically to the Kullback-Leibler divergence minimizing parameter vector \(\theta^{*}\), assuming such a vector \(\theta^{*}\) exists in the interior of the parameter space. In particular, \(\sqrt{N_{\nu}}(\hat{\theta}_{\nu}^{\pi}-\theta^{*})\) converges asymptotically to a Gaussian random variable with mean 0 and variance \(H_{\theta^{*}}^{-1}J_{\theta^{*}}^{\pi}H_{\theta^{*}}^{-1}\) where \(H_{\theta^{*}}\) is the Fisher information:
\[H_{\theta^{*}}=-\frac{1}{N_{\nu}}\sum_{j\in U_{\nu}}\mathbb{E}_{P_{\theta^{*} }}\ddot{\ell}_{\theta^{*}}(y_{j}) \tag{14}\]
where \(\ell_{\theta^{*}}=\log p_{\theta^{*}}\) denotes the log-likelihood, and \(J_{\theta^{*}}^{\pi}\) is the variance matrix of the score functions under the combined measure \(\mathbb{P}\):
\[J_{\theta^{*}}^{\pi}=\mathbb{E}_{\mathbb{P}}\left[\mathbb{P}_{\nu}^{\pi}\dot{ \ell}_{\theta^{*}}\dot{\ell}_{\theta^{*}}^{T}\right] \tag{15}\]
Moreover, under their set of regularity conditions, Williams and Savitsky derive the asymptotic distribution of the pseudo-posterior:
\[\sup_{B}\left|\Pi_{\nu}^{\pi}(B\mid D^{(\nu)})-\mathcal{N}_{\hat{\theta}_{ \nu}^{\pi},N_{\nu}^{-1}H_{\theta^{*}}^{-1}}(B)\right|\to 0 \tag{16}\]
where \(\hat{\theta}_{\nu}^{\pi}\) is the pseudo-MLE and \(H_{\theta^{*}}\) is the Fisher information defined in Equation (14).
Based on the differing forms of the covariance matrices for the pseudo-MLE and pseudo-posterior, Williams and Savitsky propose adjusting samples as follows:
\[\widehat{\theta}^{WS(k)}=\left(\widehat{\theta}^{(k)}-\overline{\theta} \right)R_{2}^{-1}R_{1}+\overline{\theta} \tag{17}\]
where \(R_{1}^{T}R_{1}=H_{\theta^{*}}^{-1}J_{\theta^{*}}^{\pi}H_{\theta^{*}}^{-1}\) and \(R_{2}^{T}R_{2}=H_{\theta^{*}}^{-1}\). Here, \(\widehat{\theta}^{(k)}\) is the \(k\)th sample from the pseudo-posterior and \(\widehat{\theta}^{WS(k)}\) the \(k\)th adjusted sample. Finally, \(\overline{\theta}\) is the mean of the pseudo-posterior draws, but could be replaced with the pseudo-MLE. The authors call \(R_{2}^{-1}R_{1}\) a multivariate design effect adjustment, which reduces to the identity under simple random sampling.
In practice, \(H_{\theta^{*}}\) is estimated as the observed information, i.e. the negative
Hessian of the weighted log-likelihood at the pseudo-MLE. Following Williams and Savitsky, we estimate \(J_{\theta^{*}}^{\pi}\) via a resampling approach [39] that seeks to estimate the variance of the score functions by sampling PSUs with replacement from the sample. We use numerical differentiation when estimating both \(H_{\theta^{*}}\) and \(J_{\theta^{*}}^{\pi}\). Further details are provided in the Appendix.
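A sketch of the adjustment (17) applied to a matrix of pseudo-posterior draws is given below; it assumes \(H_{\theta^{*}}\) and \(J_{\theta^{*}}^{\pi}\) have already been estimated as just described, and obtains \(R_{1}\) and \(R_{2}\) from Cholesky factors.

```python
import numpy as np

def rescale_pseudo_posterior(samples, H_hat, J_hat):
    """Williams and Savitsky post-processing adjustment, Equation (17).

    samples : (K, p) matrix of pseudo-posterior draws.
    H_hat   : estimated Fisher information.
    J_hat   : estimated score variance under the combined measure.
    Returns draws whose spread matches the sandwich covariance
    H^{-1} J H^{-1} rather than H^{-1}."""
    H_inv = np.linalg.inv(H_hat)
    sandwich = H_inv @ J_hat @ H_inv
    # Cholesky gives lower-triangular L with L @ L.T = A, so R = L.T
    # satisfies R.T @ R = A, as required for R1 and R2.
    R1 = np.linalg.cholesky(sandwich).T
    R2 = np.linalg.cholesky(H_inv).T
    centre = samples.mean(axis=0)        # could also use the pseudo-MLE
    return (samples - centre) @ np.linalg.inv(R2) @ R1 + centre
```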
### Rescaling small area estimates
Under the hierarchical model (3), this rescaling approach enables us to produce credible sets for the parameter vector \(\theta=(\beta_{0},\boldsymbol{\beta}_{1},\sigma_{u}^{2},\sigma_{\varepsilon}^{2})\) that converge on asymptotically correct frequentist confidence sets for the true model parameters. However, the interpretation is less clear for rescaled credible sets of area level random quantities such as \(u_{i}\). For the purpose of small area estimation, we are interested in between area variations that are stable as \(\nu\to\infty\). As such, we propose a strategy that rescales the pseudo-posterior distributions for \(\beta_{0i}=\beta_{0}+u_{i}\) based on the asymptotic distributions of the pseudo-MLEs resulting from likelihood analysis of the model treating the \(\beta_{0i}\) parameters as fixed effects.
In practice, for \(k=1,\ldots,K\), we draw samples \(\beta_{0}^{(k)},\boldsymbol{\beta}_{1}^{(k)},\sigma_{u}^{2(k)},\sigma_{ \varepsilon}^{2(k)}\), and \(\mathbf{u}^{(k)}\) from the pseudo-posterior distribution based on the hierarchical model (3). We can then transform these samples to express them in terms of the parameters of the model (2) with fixed area-specific intercepts, yielding a sample vector \(\widehat{\theta}^{(k)}=(\beta_{0i}^{(k)},\boldsymbol{\beta}_{1}^{(k)}, \sigma_{\varepsilon}^{2(k)})\). We then estimate the rescaling matrices \(H_{\theta^{*}}\) and \(J_{\theta^{*}}^{\pi}\) using the likelihood arising from (2), treating \(\beta_{0i}\) as fixed parameters. In other words, the model we use for rescaling is a fixed effects model and the asymptotic distribution of the pseudo-MLE is based on a model that treats \(\beta_{0i}\) as stable across populations. From a Bayesian standpoint, this perspective shift is natural as the distinction between fixed and random effects is less salient: we are simply defining a hierarchical prior on the parameters of interest \(u_{i}\).
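Concretely, each hierarchical draw is mapped to the fixed-intercept parameterization before the rescaling step above is applied; a small sketch, with the array shapes as the only assumptions:

```python
import numpy as np

def to_fixed_intercepts(beta0, u, beta1, log_sigma_eps):
    """Map hierarchical pseudo-posterior draws to the fixed-intercept
    parameterization of model (2): beta_{0i}^(k) = beta_0^(k) + u_i^(k).

    beta0         : (K,) draws of the global intercept.
    u             : (K, m) draws of the area effects.
    beta1         : (K, p) draws of the regression coefficients.
    log_sigma_eps : (K,) draws of the log error standard deviation.
    Returns a (K, m + p + 1) matrix of draws ready for rescaling."""
    beta0i = beta0[:, None] + u
    return np.hstack([beta0i, beta1, log_sigma_eps[:, None]])
```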
## 5 Simulations
To assess the performance of our pseudo-Bayesian approach, we carry out a simulation study using a range of population models and sampling designs. For each choice of population model, we generate a single finite population of responses. Using each design, we repeatedly sample a subset of responses and then compute estimators of \(\mu_{i}\), which we compare with the finite population means.
### Population generating models
We carry out simulations for populations of continuous response data and binary response data generated using the models described below. For both response models, we first generate auxiliary variables for a clustered population, letting \(\mathbf{z}_{icj}\) denote the auxiliary variables for individual \(j\) in cluster \(c\) in area \(i\). We assume that each unit belongs to one cluster and clusters are nested within areas. Finally, we assume area \(i\) contains \(N_{C}(i)\) clusters indexed by the set \(C(i)=\{c_{i_{1}},\ldots,c_{i_{N_{C}(i)}}\}\).
1. We generate \(z_{1icj}\overset{ind}{\sim}N\left(\frac{i}{m},1\right)\), where \(1\leq i\leq m\) indexes the area, so the mean of \(z_{1icj}\) varies across areas.
2. We generate \(z_{2icj}=\frac{i}{m}+z^{\prime}_{2icj}\), where \(z^{\prime}_{2icj}\overset{iid}{\sim}\text{Exp}(1/2)\). The variable \(z_{2icj}\) represents a measure of unit size, which we will use to specify a sampling design. We define \(z_{2ic.}=\sum_{j}z_{2icj}\) to be the cluster size obtained by summing the sizes for all units in the relevant cluster. We define the scaled unit size \(x_{2ij}\) to be equal to \(z_{2icj}\) scaled to have mean zero and variance 1. Similarly, we define the scaled cluster size \(\widetilde{x}_{2ij}\) to be equal to \(z_{2ic.}\) scaled to have mean zero and variance 1, where \(c\) is the cluster containing unit \(j\).
#### 5.1.1 Continuous responses
To generate continuous response data, we simulate data from population models of the form
\[y_{ij}=\beta_{0}+\beta_{1}x_{1ij}+\beta_{2}\widetilde{x}_{2ij}+\varepsilon_{ij} \tag{18}\]
where \(\varepsilon_{ij}\overset{iid}{\sim}N(0,\sigma_{\varepsilon}^{2})\). As described above, \(\widetilde{x}_{2ij}\) denotes the cluster size, representing a relevant design variable that is unavailable to the analyst. To estimate the area level means, we fit the following nested error regression model:
\[y_{ij}=\beta^{\prime}_{0}+\beta^{\prime}_{1}x_{1ij}+u_{i}+\varepsilon^{\prime }_{ij}=\beta^{\prime}_{0i}+\beta^{\prime}_{1}x_{1ij}+\varepsilon^{\prime}_{ij} \tag{19}\]
For the purpose of estimating model parameters, we assume that \(\varepsilon^{\prime}_{ij}\overset{iid}{\sim}N(0,\sigma_{\varepsilon}^{2})\). Note that for the estimation model, we use \(\beta^{\prime}_{0},\beta^{\prime}_{1}\), and \(\varepsilon^{\prime}_{ij}\) to denote model parameters to emphasize that this model is misspecified and we cannot expect to obtain consistent estimators for the true population parameters in general. Based on this estimation model, \(u_{i}\) is used to capture stable between area differences induced by area level differences in the mean value of \(\widetilde{x}_{2ij}\).
#### 5.1.2 Binary responses
We also implement simulations for binary response data using the population generating models of the form:
\[\begin{split} y_{ij}\mid q_{ij}&\sim\text{Bernoulli}(q_ {ij})\\ q_{ij}&=\text{expit}(\beta_{0}+\beta_{1}x_{1ij}+ \beta_{2}\widetilde{x}_{2ij})\end{split} \tag{20}\]
We again use \(\widetilde{x}_{2ij}\) to denote the unobserved cluster size for individual \(j\). The pseudo-Bayesian approach described above can be adapted to non-Gaussian response data by using alternative likelihoods. To estimate the area level means, we fit the following logistic regression model:
\[\begin{split} y_{ij}\mid q_{ij}&\sim\text{ Bernoulli}(q_{ij})\\ q_{ij}\mid u_{i}&=\text{expit}(\beta_{0}^{\prime }+\beta_{1}^{\prime}x_{1ij}+u_{i})=\text{expit}(\beta_{0i}^{\prime}+\beta_{1}^ {\prime}x_{1ij})\\ u_{i}&\overset{iid}{\sim}N(0,\sigma_{u}^{2})\end{split} \tag{21}\]
The areal effects \(u_{i}\) capture between area differences in the log-transformed odds induced by area level differences in the mean value of \(\widetilde{x}_{2ij}\). Again, we can either view \(\beta_{0i}^{\prime}=\beta_{0}^{\prime}+u_{i}\) as a fixed area specific intercept term or as a random intercept by placing a Gaussian prior on \(u_{i}\). Based on this estimation model, the target of estimation is defined as
\[\mu_{i}=\mathbb{E}_{\mathbb{P}}\left[\text{expit}(\beta_{0i}^{\prime}+\beta_{1} ^{\prime}x_{1ij})\right] \tag{22}\]
where the expectation is taken with respect to both the model and design.
### Estimation procedure
Based on our estimation models, we consider three approaches for estimating area level means. First, we consider a Bayesian approach ignoring the weights (**Unwt**) and treating sampling as ignorable. Next, we implement our pseudo-Bayesian approach using the sampling weights, both with (**WtRscl**) and without (**Wt**) the rescaling step described in the previous section. The sampling weights used in this analysis are normalized so that their sum is equal to the observed sample size \(n\). For all of these Bayesian estimators, we compute posterior medians and 90% credible sets. We compare these three Bayesian estimators with two design-based estimators: the **Hajek** estimator and a generalized regression estimator (**GREG**) based on a working model with fixed area specific intercepts. For continuous data, this working model takes the form \(y_{ij}=\beta^{\prime}_{0i}+\beta^{\prime}_{1}x_{1ij}+\varepsilon^{\prime}_{ij}\). We compute 90% prediction intervals based on the estimated mean squared predictive error of these estimators.
We approximate the unscaled pseudo-posterior distributions using integrated nested Laplace approximation, as implemented in the INLA package [40], which facilitates fast approximate Bayesian inference for latent Gaussian models as an alternative to other methods such as Markov chain Monte Carlo. Using INLA, we can obtain samples from the pseudo-posterior distributions for the parameters of interest. Further detail on the estimation procedures, including descriptions of priors for model hyperparameters can be found in the Appendix. Code used to produce the results throughout this manuscript can be found on GitHub.
Since \(\widetilde{x}_{2ij}\) is unobserved, our estimation models are misspecified. As noted by Williams and Savitsky [24], their asymptotic results rely on correct parameterization of the dependence structure for the population. However, the unobserved \(\widetilde{x}_{2ij}\) is constant within clusters and induces cluster dependence in our observations. Simulations by Williams and Savitsky indicate that their proposed rescaling method may be robust to this misspecification. We consider an asymptotic framework in which the number of clusters sampled in each area, but not the size of the clusters, is increasing. However, we are primarily interested in the performance of these estimators in a small sample setting, which is of practical relevance to small area estimation.
### Sampling designs
We consider different sampling designs which induce dependence between observed response values, some of which are informative with respect to the analyst-specified model. For all designs, we stratify sampling by the \(m\) small areas.
1. **Stratified random sampling without replacement (SRS)** Within each area \(i\), we sample \(n(i)\) individuals at random without replacement. Under this design, assuming the sampling fraction is small, the design effect is expected to be small, making the effect of incorporating sampling weights during estimation negligible.
2. **Single stage informative sampling (PPS1)** For this design, within
each area, we sample \(n(i)\) units without replacement, with probability proportional to size \(s_{ij}=x_{2ij}-\min(x_{2ij})+1\), using Midzuno's method as implemented in the R package sampling [41]. This yields a single stage design with unequal sampling probabilities (PPS1) that is informative with respect to the analyst-specified model since \(s_{ij}\) is correlated with the unobserved cluster size \(\widetilde{x}_{2ij}\).
3. **Two stage informative sampling (PPS2)** Within each area \(i\), we first sample \(n_{C}(i)\) clusters without replacement with probability proportional to size \(\widetilde{x}_{2ij}-\min(\widetilde{x}_{2ij})+1\). Within each sampled cluster, we sample \(n(i,c)\) units with probability proportional to size \(s_{ij}=x_{2ij}-\min(x_{2ij})+1\). This yields a two stage design with unequal sampling probabilities (PPS2) that is informative with respect to the model since \(\widetilde{x}_{2ij}\) is unobserved. A simplified sketch of these probability-proportional-to-size designs follows this list.
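The sketch below mimics the PPS1 and PPS2 designs using successive probability-proportional-to-size draws without replacement; this is a simple stand-in for Midzuno's method, so the implied inclusion probabilities are only approximate.

```python
import numpy as np

def pps_without_replacement(s, n, rng):
    """Successive-draws PPS sampling without replacement from sizes s."""
    p = s / s.sum()
    return rng.choice(len(s), size=n, replace=False, p=p)

def two_stage_pps(unit_size, cluster_of, n_clusters, n_per_cluster, rng):
    """PPS2 stand-in: sample clusters PPS by total cluster size, then
    units PPS within each sampled cluster."""
    cluster_size = np.bincount(cluster_of, weights=unit_size)
    cs = cluster_size - cluster_size.min() + 1
    clusters = pps_without_replacement(cs, n_clusters, rng)
    sample = []
    for c in clusters:
        units = np.flatnonzero(cluster_of == c)
        s = unit_size[units] - unit_size[units].min() + 1
        sample.extend(units[pps_without_replacement(s, n_per_cluster, rng)])
    return np.array(sample)
```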
### Results
For each data generating model, we generate auxiliary variables for a finite population consisting of \(N=90,000\) individuals divided evenly between \(m=20\) areas. Each area is divided into \(N_{C}=150\) clusters of thirty individuals. Based on this fixed auxiliary data, we repeatedly simulate response data and sample from the resulting population for each sampling design for a total of 1,000 simulations. For the continuous response case, we simulate data from the following model:
\[y_{ij}=x_{1ij}+2\widetilde{x}_{2ij}+\varepsilon_{ij} \tag{23}\]
where \(\varepsilon_{ij}\stackrel{{ iid}}{{\sim}}N(0,1)\). For the binary response case, we simulate population data from the following model:
\[\begin{split} y_{ij}\mid q_{ij}&\sim\text{ Bernoulli}(q_{ij})\\ q_{ij}&=\text{expit}(x_{1ij}+2\widetilde{x}_{2ij} )\end{split} \tag{24}\]
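A compact sketch of these two generating processes, following the population layout described in Section 5.1 (taking \(x_{1ij}=z_{1icj}\), since no further transformation of \(z_{1icj}\) is specified):

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(1)
m, n_clust, clust_size = 20, 150, 30   # areas, clusters per area, units per cluster
N = m * n_clust * clust_size

area = np.repeat(np.arange(m), n_clust * clust_size)
cluster = np.repeat(np.arange(m * n_clust), clust_size)

def standardise(v):
    return (v - v.mean()) / v.std()

x1 = rng.normal(loc=(area + 1) / m, scale=1.0)        # z_{1icj}, used as x_{1ij}
z2 = (area + 1) / m + rng.exponential(2.0, size=N)    # unit size with Exp(1/2) noise
z2_cluster = np.bincount(cluster, weights=z2)[cluster]
x2_tilde = standardise(z2_cluster)                    # scaled cluster size

y_continuous = x1 + 2 * x2_tilde + rng.normal(size=N)   # model (23)
y_binary = rng.binomial(1, expit(x1 + 2 * x2_tilde))    # model (24)
```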
We compute the finite population area level mean \(\overline{Y}_{i}\) for each area \(i\). In each simulation, we compute point estimates \(\widehat{\mu}_{i}\) as well as 90% interval estimates \((\widehat{\mu}_{i}^{-},\widehat{\mu}_{i}^{+})\) for every estimator. For each method, we compute root mean squared error (RMSE) and mean absolute error (MAE). We also compute the empirical coverage of the 90% interval estimates and the mean interval lengths (MIL)
across all areas, averaged across all simulations.
\[\text{RMSE}(\widehat{\mathbf{\mu}}) =\sqrt{\frac{1}{m}\sum_{i}(\overline{Y}_{i}-\widehat{\mu}_{i})^{2}} \tag{25}\] \[\text{MAE}(\widehat{\mathbf{\mu}}) =\frac{1}{m}\sum_{i}|\overline{Y}_{i}-\widehat{\mu}_{i}|\] (26) \[\text{Cov}_{90}(\widehat{\mathbf{\mu}}) =\frac{1}{m}\sum_{i}\mathbf{1}\{\overline{Y}_{i}\in(\widehat{\mu}_{i} ^{-},\widehat{\mu}_{i}^{+})\}\] (27) \[\text{MIL}_{90}(\widehat{\mathbf{\mu}}) =\frac{1}{m}\sum_{i}(\widehat{\mu}_{i}^{+}-\widehat{\mu}_{i}^{-}) \tag{28}\]
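These metrics are straightforward to compute for each simulation replicate; a small sketch over the \(m\) areas:

```python
import numpy as np

def evaluate(Ybar, est, lower, upper):
    """RMSE, MAE, 90% interval coverage, and mean interval length (MIL)
    across areas, as in Equations (25)-(28); all inputs are length-m arrays."""
    err = est - Ybar
    return {
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAE": np.mean(np.abs(err)),
        "Cov90": np.mean((Ybar >= lower) & (Ybar <= upper)),
        "MIL90": np.mean(upper - lower),
    }
```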
Table 1 summarizes the continuous response simulation results for a small sample setting where \(n(i)=30\) and Table 2 provides an analogous summary for a larger sample setting where \(n(i)=100\). For the PPS2 design, we assume that \(n(i)/5\) clusters are sampled and five individuals are sampled within each cluster.
Under stratified random sampling, the model-based (Unwt, Wt, and WtRscl) approaches perform similarly and produce point estimates with the lowest RMSE
\begin{table}
\begin{tabular}{l l r r r r} \hline \hline Design & Method & RMSE (x 100) & MAE (x 100) & MIL (x 100) & 90\% Int. Cov. \\ \hline SRS & Hajek & 38.2 & 30.4 & 125.1 & 89 \\ \cline{2-6} & GREG & 33.5 & 26.7 & 109.8 & 89 \\ \cline{2-6} & Unwt & 32.6 & 26.0 & 107.7 & 90 \\ \cline{2-6} & Wt & 32.6 & 26.0 & 107.7 & 90 \\ \cline{2-6} & WtRsl & 32.6 & 26.0 & 104.7 & 88 \\ \hline PPS1 & Hajek & 41.4 & 33.0 & 133.9 & 88 \\ \cline{2-6} & GREG & 36.7 & 29.2 & 117.4 & 88 \\ \cline{2-6} & Unwt & 35.9 & 28.7 & 109.1 & 87 \\ \cline{2-6} & Wt & 35.6 & 28.4 & 107.5 & 87 \\ \cline{2-6} & WtRsl & 35.6 & 28.4 & 112.0 & 87 \\ \hline PPS2 & Hajek & 70.6 & 56.1 & 217.4 & 84 \\ \cline{2-6} & GREG & 67.6 & 53.7 & 208.1 & 84 \\ \cline{2-6} & Unwt & 72.1 & 57.3 & 105.1 & 54 \\ \cline{2-6} & Wt & 65.0 & 51.6 & 103.0 & 58 \\ \cline{2-6} & WtRsl & 65.0 & 51.6 & 200.4 & 84 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Averaged evaluation metrics of estimators of area level means across 1,000 continuous response simulations for SRS, PPS1, and PPS2 designs for a sample size of thirty units per area.
and MAE, and achieve close to nominal interval coverage rates. For the large sample simulations, the model-based estimates perform similarly to the GREG estimator, indicating the reduced shrinkage compared with the small sample case.
Under the PPS1 sampling, in the small sample simulations, the weighted model-based methods perform best in terms of point estimates, with the weighted and rescaled estimates (WtRsl) yielding slightly wider interval estimates on average. For the large sample simulations, the unweighted and weighted but not rescaled interval estimates exhibit undercoverage while the weighted and rescaled intervals are better calibrated.
Finally, for the PPS2 sampling design, the weighted and rescaled method achieves the best performance in terms of RMSE and MAE as well as calibrated interval coverage. Under this design, the other model-based methods exhibit large undercoverage.
Table 3 summarizes simulation results for binary response data with sample size \(n(i)=30\) and Table 4 provides an analogous summary with sample size \(n(i)=100\). The results from these simulations are similar to those from
\begin{table}
\begin{tabular}{l l r r r r} \hline \hline Design & Method & RMSE (x 100) & MAE (x 100) & MIL (x 100) & 90\% Int. Cov. \\ \hline SRS & Hájek & 20.8 & 16.7 & 69.0 & 90 \\ \cline{2-6} & GREG & 18.3 & 14.6 & 60.6 & 90 \\ \cline{2-6} & Unwt & 18.1 & 14.5 & 60.1 & 90 \\ \cline{2-6} & Wt & 18.1 & 14.5 & 60.1 & 90 \\ \cline{2-6} & WtRsl & 18.1 & 14.5 & 59.5 & 90 \\ \hline PPS1 & Hájek & 22.7 & 18.1 & 74.6 & 90 \\ \cline{2-6} & GREG & 20.0 & 15.9 & 65.4 & 90 \\ \cline{2-6} & Unwt & 23.2 & 18.8 & 60.9 & 81 \\ \cline{2-6} & Wt & 19.8 & 15.8 & 60.1 & 87 \\ \cline{2-6} & WtRsl & 19.8 & 15.8 & 64.3 & 89 \\ \hline PPS2 & Hájek & 37.4 & 29.4 & 126.1 & 90 \\ \cline{2-6} & GREG & 35.4 & 28.1 & 120.7 & 90 \\ \cline{2-6} & Unwt & 47.5 & 39.0 & 60.5 & 44 \\ \cline{2-6} & Wt & 34.9 & 27.7 & 59.4 & 61 \\ \cline{2-6} & WtRsl & 34.9 & 27.7 & 118.7 & 89 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Averaged evaluation metrics of estimators of area level means across 1,000 continuous response simulations for SRS, PPS1, and PPS2 designs for a sample size of one hundred units per area.
the continuous case under the SRS and PPS2 designs, with the weighted and rescaled estimates generally producing the best point estimates and interval estimates with close to nominal coverage. Under the PPS1 design, the benefits of rescaling are less clear, but the weighted and rescaled method does not perform significantly worse than the other model-based approaches.
## 6 Application
We apply the pseudo-Bayesian approach to estimate measles vaccination coverage for prefectures in Guinea in 2018 based on data from the Demographic and Health Surveys (DHS) Program. The DHS Program conducts surveys in many LMICs, typically using a stratified two-stage cluster sampling design. Each country is first divided by its principal administrative regions, usually called Admin-1 regions. Each region is partitioned into urban and rural components. Sampling is stratified by crossing the Admin-1 regions with urban/rural labels. For the first stage of sampling, each stratum is divided into clusters, or enumeration areas (EAs). Within each stratum, a pre-specified number of clusters is sampled
\begin{table}
\begin{tabular}{l l r r r r} \hline \hline Design & Method & RMSE (x 100) & MAE (x 100) & MIL (x 100) & 90\% Int. Cov. \\ \hline SRS & Hájek & 8.1 & 6.4 & 26.4 & 89 \\ \cline{2-6} & GREG & 7.7 & 6.1 & 25.1 & 88 \\ \cline{2-6} & Unwt & 7.2 & 5.8 & 23.1 & 89 \\ \cline{2-6} & Wt & 7.2 & 5.8 & 23.1 & 89 \\ \cline{2-6} & WtRsl & 7.2 & 5.8 & 23.2 & 89 \\ \hline PPS1 & Hájek & 8.9 & 7.0 & 28.5 & 87 \\ \cline{2-6} & GREG & 8.5 & 6.7 & 27.0 & 87 \\ \cline{2-6} & Unwt & 7.6 & 6.0 & 23.0 & 88 \\ \cline{2-6} & Wt & 7.8 & 6.2 & 23.1 & 87 \\ \cline{2-6} & WtRsl & 7.8 & 6.2 & 25.1 & 89 \\ \hline PPS2 & Hájek & 11.8 & 9.4 & 35.5 & 80 \\ \cline{2-6} & GREG & 11.5 & 9.2 & 34.3 & 80 \\ \cline{2-6} & Unwt & 10.8 & 8.4 & 22.6 & 74 \\ \cline{2-6} & Wt & 10.3 & 8.1 & 22.9 & 75 \\ \cline{2-6} & WtRsl & 10.3 & 8.1 & 31.6 & 85 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Averaged evaluation metrics of estimators of area level means across 1,000 binary response simulations for SRS, PPS1, and PPS2 designs for a sample size of thirty units per area.
with probability proportional to size. In the second stage, a fixed number of households is sampled from each selected cluster. Under this sampling design, cluster size is a relevant design variable that is typically not made public but which could be associated with the response.
We estimate subnational vaccination rates for the first dose of measles-containing vaccine (MCV1) among children aged 12-23 months in Guinea using data from the 2018 Guinea DHS [42]. This survey interviewed mothers in each selected household and collected vaccination data for their children based on vaccination cards or caregiver recall. We produce estimates for each prefecture, where prefectures are subdivisions of Guinea's eight Admin-1 regions. We refer to these prefectures as Admin-2 regions. We rely on the boundaries published by the Database of Global Administrative Areas (GADM) [43].
The design for the 2018 DHS was based on a sampling frame created using data from a 2017 census which identified 9679 enumeration areas divided into 15 strata (from splitting eight Admin-1 areas into urban/rural components minus the entirely urban zone of Conakry). Data were collected from 401 clusters. The DHS Program publishes coordinates for all selected clusters after displacing their
\begin{table}
\begin{tabular}{l l r r r r} \hline \hline Design & Method & RMSE (x 100) & MAE (x 100) & MIL (x 100) & 90\% Int. Cov. \\ \hline SRS & Hájek & 4.4 & 3.5 & 14.5 & 90 \\ \cline{2-6} & GREG & 4.2 & 3.3 & 13.8 & 90 \\ \cline{2-6} & Unwt & 4.1 & 3.3 & 13.4 & 90 \\ \cline{2-6} & Wt & 4.1 & 3.3 & 13.4 & 90 \\ \cline{2-6} & WtRsl & 4.1 & 3.3 & 13.4 & 90 \\ \hline PPS1 & Hájek & 4.9 & 3.9 & 15.8 & 89 \\ \cline{2-6} & GREG & 4.7 & 3.7 & 15.0 & 89 \\ \cline{2-6} & Unwt & 4.6 & 3.6 & 13.4 & 87 \\ \cline{2-6} & Wt & 4.5 & 3.6 & 13.4 & 87 \\ \cline{2-6} & WtRsl & 4.5 & 3.6 & 14.6 & 89 \\ \hline PPS2 & Hájek & 6.3 & 5.0 & 20.8 & 88 \\ \cline{2-6} & GREG & 6.2 & 4.9 & 20.2 & 88 \\ \cline{2-6} & Unwt & 7.1 & 5.6 & 13.3 & 66 \\ \cline{2-6} & Wt & 5.9 & 4.7 & 13.4 & 75 \\ \cline{2-6} & WtRsl & 5.9 & 4.7 & 19.5 & 89 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Averaged evaluation metrics of estimators of area level means across 1,000 binary response simulations for SRS, PPS1, and PPS2 designs for a sample size of one hundred units per area.
locations by small distances to protect privacy. Figure 1 provides the Admin-1 and Admin-2 boundaries and displaced EA locations in Guinea for which data were collected.
We generate estimates using design-based methods and unit level logistic regression models. For the unit level models, we use two covariates based on estimated travel times to cities in 2015 [44] and the intensity of night time lights as observed via satellite imagery in 2016 [45]. Note that these covariates are themselves estimated using statistical modeling. We also use estimated population counts produced by WorldPop [46] to create a binary covariate classifying pixels as either urban or rural: within each area, the highest-population pixels are labeled urban until the proportion of individuals classified as urban matches the proportion reported in the 2018 Guinea DHS report.
We use a logistic regression model, so covariate information for the entire population is required to generate estimates for each individual. Since complete population data is not available, instead of making a separate prediction for each child, we make predictions for each pixel and aggregate these predictions to get an area level estimate of the mean outcome of interest. When aggregating,
Figure 1: Map of Guinea with Admin-1 level boundaries (thick borders) and Admin-2 level boundaries (thin borders). Points indicate enumeration area locations for which data on measles vaccination is available.
we weight each pixel by its estimated age 1-5 population [47]. We project all covariate values to the 1km by 1km grid used by WorldPop.
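A minimal sketch of this population-weighted aggregation (the array names are hypothetical; all three inputs are aligned to the 1km grid):

```python
import numpy as np

def area_means(pred_prob, pop_1to5, area_id):
    """Aggregate pixel-level predicted vaccination probabilities to
    area-level estimates, weighting each pixel by its estimated
    age 1-5 population."""
    estimates = {}
    for a in np.unique(area_id):
        mask = area_id == a
        w = pop_1to5[mask]
        estimates[a] = float(np.sum(w * pred_prob[mask]) / np.sum(w))
    return estimates
```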
Figure 2 compares point estimates of measles vaccination rates among children aged 12-23 months for Guinea's prefectures in 2018, and Figure 3 provides the lengths of the corresponding interval estimates. We provide point and interval estimates for all methods in the Appendix. In general, the point estimates produced by all methods are quite similar, but the estimates of uncertainty vary considerably. The unweighted Bayes and weighted but not rescaled Bayes estimates have the shortest prediction intervals, indicating the least uncertainty. The design-based Hájek and GREG approaches produce longer prediction intervals. The weighted and rescaled method generally produces larger estimates of uncertainty, generating intervals whose lengths more closely resemble those of the design-based approaches. Note that the direct Hájek estimate of the vaccination rate for the Admin-2 prefecture of Kouroussa is zero and thus a direct estimator of the associated variance is unavailable. As a result, we omit estimates for Kouroussa, which is depicted in gray in Figures 2 and 3.
## 7 Discussion
Pseudo-Bayesian inference enables analysts to leverage available sampling weights to adjust for features of the survey design that cannot be incorporated into a small area estimation model. However, since credible sets based on a naive pseudo-posterior exhibit undercoverage [25], the pseudo-posterior must be rescaled to produce credible sets that quantify uncertainty meaningfully. Using a rescaling post-processing adjustment proposed by Williams and Savitsky [24], we show that pseudo-Bayesian approaches can be used to generate improved point and interval estimators for small area means of continuous and binary outcomes.
Previous applications of pseudo-Bayesian approaches rely on scaling the sampling weights to sum to the sample size as an ad hoc solution for scaling the pseudo-posterior. The approach proposed by Williams and Savitsky first scales the sampling weights but then also estimates a multivariate design effect for the parameters of interest using the available data, which is subsequently used to rescale the pseudo-posterior. Both the initial scaling of sampling weights and the rescaling of parameter samples are potentially valuable. The initial scaling of sampling weights controls the degree of shrinkage induced by the Gaussian prior
Figure 2: Estimated measles vaccination rates among children aged 12-23 months for Admin-2 areas in Guinea in 2018.
Figure 3: Prediction interval lengths for estimated measles vaccination rates for Admin-2 areas in Guinea in 2018.
on the random effects. If the unscaled weights are used, then the degree of shrinkage may be too low because inference on the random effects proceeds as though a population of size \(N\) is observed. The subsequent rescaling of the samples from the pseudo-posterior is aimed at improving the coverage of credible sets for parameters of interest. Han and Wellner [23] propose a similar rescaling approach but do not explicitly encourage rescaling the sampling weights when defining the pseudo-posterior.
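A schematic of the two adjustments, with a scalar design effect standing in for the multivariate version estimated from the data:

```python
import numpy as np

def scale_weights(w):
    """Rescale sampling weights to sum to the sample size n, limiting
    the effective amount of information the pseudo-likelihood claims."""
    return w * (len(w) / np.sum(w))

def rescale_draws(draws, deff):
    """Inflate pseudo-posterior draws about their mean by sqrt(deff);
    `deff` is a scalar design effect used here purely for illustration."""
    center = draws.mean(axis=0)
    return center + np.sqrt(deff) * (draws - center)
```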
A key limitation of the approach presented here is that we focus on estimation targets that can be expressed in terms of fixed parameters for which we have asymptotically increasing amounts of data. Our approach relies upon having sufficient data to rescale model-based estimates of uncertainty, so as observed in Section 6, when we have severely limited or no data in an area, we are unable to construct valid prediction intervals. When using unit level models for small area estimation, it has become increasingly common to model outcomes of interest as continuous spatial processes and generate area level estimates by aggregating predictions made on a high-resolution spatial grid. Under such an approach, predictions may be required for each individual cluster or location. Our approach does not account for the case in which the targets of estimation are themselves observed at random. For example, we do not seek to estimate individual cluster level effects. Aggregating unit level model predictions for individual clusters or pixels to obtain area level predictions is difficult and requires careful consideration of the sampling design [48, 49].
|
2309.05465 | "Toward" Metal-Organic Framework Design by Quantum Computing | The article summarizes the study performed in the context of the Deloitte
Quantum Climate Challenge in 2023. We present a hybrid quantum-classical method
for calculating Potential Energy Surface scans, which are essential for
designing Metal-Organic Frameworks for Direct Air Capture applications. The
primary objective of this challenge was to highlight the potential advantages
of employing quantum computing. To evaluate the performance of the model, we
conducted total energy calculations using various computing frameworks and
methods. The results demonstrate, at a small scale, the potential advantage of
quantum computing-based models. We aimed to define relevant classical computing
model references for method benchmarking. The most important benefits of using
the PISQ approach for hybrid quantum-classical computational model development
and assessment are demonstrated. | Kourosh Sayar Dogahe, Tamara Sarac, Delphine De Smedt, Koen Bertels | 2023-09-11T14:07:30Z | http://arxiv.org/abs/2309.05465v1 | # "Toward" Metal-Organic Framework Design by Quantum Computing
###### Abstract
The article summarizes the study performed in the context of the Deloitte Quantum Climate Challenge in 2023 12. We present a hybrid quantum-classical method for calculating Potential Energy Surface scans, which are essential for designing Metal-Organic Frameworks for Direct Air Capture applications. The primary objective of this challenge was to highlight the potential advantages of employing quantum computing. To evaluate the performance of the model, we conducted total energy calculations using various computing frameworks and methods. The results demonstrate, at a small scale, the potential advantage of quantum computing-based models. We aimed to define relevant classical computing model references for method benchmarking. The most important benefits of using the PISQ approach for hybrid quantum-classical computational model development and assessment are demonstrated.
Footnote 1: Deloitte hosts an annual Quantum Climate Challenge aiming to encourage climate-relevant collaborations between sustainability and quantum computing experts. The QBee team won first place in the 2023 challenge among 118 registrations from 33 countries. We want to express our gratitude to Deloitte and IBM Quantum companies for their assistance in providing the necessary materials. Link: [Deloitte Challenge].
Footnote 2: We would also like to thank QBee Company for their generous support and for helping us with our Quantum Computing research and development needs. To learn more about QBee Company, please visit their website at: [QBee.eu].
_Keywords: quantum computing; quantum computational chemistry; PISQ; molecular simulation; VQE._
## 1 Introduction
With the anticipated progress in quantum technology, quantum computing could bring about significant advantages over classical computing in the future. These include the ability to tackle previously intractable problems and the potential for substantial computational speedup. Computational chemistry is anticipated to be among the first domains to benefit from quantum computing. It deals with the behavior of electrons and atoms at the quantum level, systems that classical computers struggle to simulate accurately. Therefore, it is crucial to develop quantum-technology-based computational methods in computational chemistry as well.
Quantum computing is a multidisciplinary domain that combines hardware and software development. Extensive work on quantum computing in the last decade led to the establishment of approaches like NISQ (Noisy Intermediate-Scale Quantum), which refers to a category of near-term quantum computers handling noisy qubits, resulting from different noise sources such as decoherence, gate errors, and measurement errors [1]. Going further, there is the PISQ (Perfect Intermediate-Scale Quantum) approach, which enables various scientific and industrial communities to step into the quantum computing field by setting hardware concerns aside and focusing on the quantum computing logic for their specific expert domains [2]. The "perfect qubit" is a theoretical concept, inspired by the famous talk by Richard Feynman [3], in which ideal qubit behavior is simulated on a classical computer. Perfect qubits are often used to explore the
full potential of quantum computing algorithms and protocols. PISQ and NISQ represent complementary strategies that are expected to converge into an operational field within the next 10-15 years [1, 2].
In the framework of Deloitte's Quantum Climate Challenge 2023, an interesting use case was proposed. The primary goal was to investigate how quantum computers may help to improve materials used in Direct Air Capture (DAC) of CO\({}_{2}\). We approached the challenge from the chemistry domain following the PISQ approach, with a goal to assess the structure of metal-organic frameworks (MOFs), with particular attention to MOF74 [4].
The main observation is that a potential advantage of quantum computing-based models over classical methods can be demonstrated at a small scale. Namely, quantum-computation-based results reach the accuracy of methods that incorporate an analytical approach, exceeding the accuracy obtained with conventional classical methods. Our attempt to scale up the quantum-based computation revealed some drawbacks of present quantum models and tools.
The paper is structured as follows. We first present the goal of the challenge and our approach to it in the Problem Definition Section. In the Method section, we provide detailed descriptions of all the methods and implementations. This is followed by a Results and Discussion section where the methods are benchmarked and evaluated. The article concludes by summarizing our impressions of the quantum model's application and announcing future steps.
## 2 Problem definition
MOFs consist of two key components: an inorganic metal cluster and an organic molecule. The choice of metal and organic components influences the structure and, consequently, the properties of the MOF. There is a broad range of possibilities for combining metallic cations and organic ligands to construct the MOF74 [5]. A well-designed MOF is capable of efficiently capturing CO\({}_{2}\) in DAC filters while hindering the clogging of the filter by other gases. Screening for potentially suitable MOF74 structures involves calculating the Dissociation Energy (DE) of a complex formed by MOF74 and gas molecules. To calculate the DE of a MOF-gas complex, we require Potential Energy Surface (PES) scans of both the complex and individual molecules, as depicted in Equation 1.
\[\mathrm{DE}=\mathrm{PES_{gas-ion}}-(\mathrm{PES_{gas}}+\mathrm{PES_{ion}}) \tag{1}\]
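As a minimal illustration of Eq. 1, assuming the three PES scans are evaluated on a common grid of scan coordinates:

```python
import numpy as np

def dissociation_energy(pes_complex, pes_gas, pes_ion):
    """Eq. 1 on a shared scan grid: the DE profile of the gas-ion
    complex relative to its separated fragments."""
    return np.asarray(pes_complex) - (np.asarray(pes_gas) + np.asarray(pes_ion))
```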
The method for computing PES scans was developed using the Variational Quantum Eigensolver (VQE), a hybrid quantum-classical algorithm based on the variational principle [6]. The model is exemplified using the interaction between a magnesium ion (Mg\({}^{2+}\)) and carbon dioxide gas (CO\({}_{2}\)).
Due to the structural complexity of the MOF-gas system, the latter must be down-scaled for the analysis of quantum-based results. Fig. 1 presents an example of system simplification, from a network of MOF unit cells, passing through a singular unit cell with six open reaction sites (metallic ions), and culminating in one preferential binding site (metal ion) - gas interaction, neglecting the organic part [4].
Figure 1: The MOF structure simplification scheme
The electronic structure of CO\({}_{2}-\) Mg\({}^{2+}\), modeled using the minimal basis set STO-3G, encompasses a total of 48 spin orbitals and approximately 700,000 fermionic terms. During the construction of the Electronic Hamiltonian, these spin orbitals require encoding using roughly 48 qubits, leading to the generation of over 12 million Pauli terms. To this day, quantum computing resources are constrained by a limited number of qubits and low circuit depth for both physical quantum computers and classical emulation of quantum computing processes. Consequently, the number of spin orbitals was reduced within the scope of this study using the AS Transformation, as described in the Method section.
The challenge was to demonstrate the potential benefits of a quantum computing-based method. To evaluate this, we benchmarked total energy calculations using various computing approaches and methods, including:
1. VQE algorithm as a hybrid quantum-classical computation implemented by perfect qubits;
2. VQE algorithm as a hybrid quantum-classical computation implemented by superconducting physical qubits, i.e., using quantum hardware;
3. _Ab-initio_ classical methods: the low-cost Restricted Hartree-Fock (RHF) and the higher-cost Coupled Cluster Single and Double excitation (CCSD) [7];
4. Complete Active Space (AS) Configuration Interaction (CASCI) analysis as the classical reference method [8].
A 3D potential energy surface scan in quantum mechanics offers a comprehensive exploration of molecular behavior by systematically altering atomic positions and calculating corresponding potential energies [9], [10]. Such scans are valuable for interpreting chemical reactions, reaction pathways, and mechanisms, as well as pinpointing transition states and energy barriers. The potential of VQE to produce PES scans that consider multiple degrees of freedom has also been examined through the creation of a CO\({}_{2}-\) Mg\({}^{2+}\) 3D PES scan.
Finally, a draft approach to compute the energy of a full unit cell was proposed. It is based on the classical hybrid approach, which combines low-cost computational methods applied at a larger scale with local high-cost computational methods focused on the open reaction sites of a molecule [11]. Following a similar logic, we applied a low-cost classical method to the unit cell-CO\({}_{2}\) interaction and quantum computing-based calculations to the open reaction sites, which are metal ions in this case. The obtained results are well-converged and follow the expected trend.
## 3 Method
**Geometry Optimization.** The Avogadro software was used to define the optimized geometry of CO\({}_{2}-\) Mg\({}^{2+}\). To evaluate different degrees of freedom within the molecule, we selected the Z-matrix coordinate system. The geometry was optimized with the Universal Force Field (UFF) method, executing 500 steps of steepest descent.
**Active Space Transformation.** The AS Transformation involves the construction of an AS configuration by selecting a certain number of orbitals from the full electronic structure. It is challenging to find the AS configuration that contributes the most to the total energy since it relies on varying factors such as the Highest Occupied Molecular Orbitals (HOMOs) and Lowest Unoccupied Molecular Orbitals (LUMOs) within the electronic structure [8]. Quantum computation is performed only for the defined AS. The number of HOMO and LUMO orbitals chosen to construct the AS configuration was varied. By reducing the number of contributor spin orbitals in the model, the computational resources required for the calculations were also reduced (see Table 1).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**System** & **System** & **No. Spin Orbitals** & **No. Fermionic** & **No. Pauli** & **Hamiltonian’s** \\
**Size** & **Abv.** & **= No. qubits** & **Terms** & **Terms** & **Circuit Depth** \\ \hline Full & Full & 48 & 730612 & 1220792 & 254329 \\ \hline
5 HOMO - 5 LUMO & 5h5l & 20 & 20136 & 142940 & 7147 \\ \hline
4 HOMO - 4 LUMO & 4h4l & 16 & 8292 & 46608 & 2913 \\ \hline
3 HOMO - 3 LUMO & 3h3l & 12 & 2664 & 11076 & 923 \\ \hline
2 HOMO - 2 LUMO & 2h2l & 8 & 564 & 1544 & 193 \\ \hline
1 HOMO - 1 LUMO & 1h1l & 4 & 36 & 60 & 15 \\ \hline \end{tabular}
\end{table}
Table 1: The overview of resource requirements for CO\({}_{2}-\) Mg\({}^{2+}\) at different levels of approximation
The following procedure was used to select the orbitals: the molecular orbital (MO) indices were generated using the PySCF programming tool [12], and the electronic Hamiltonian corresponding to the desired AS configuration was constructed using the AS transformation module.
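For concreteness, a minimal sketch of this reduction using the PySCF driver and the active-space transformer of Qiskit Nature (assuming qiskit-nature ≥ 0.6; the linear geometry string is a placeholder, not the UFF-optimized geometry used in the study):

```python
from qiskit_nature.second_q.drivers import PySCFDriver
from qiskit_nature.second_q.transformers import ActiveSpaceTransformer

# Placeholder geometry (Angstrom) for CO2 approaching Mg2+.
driver = PySCFDriver(
    atom="O 0 0 -1.16; C 0 0 0.0; O 0 0 1.16; Mg 0 0 3.0",
    basis="sto-3g",
    charge=2,
    spin=0,
)
problem = driver.run()

# 2h2l configuration: 4 electrons in 4 spatial orbitals (8 spin
# orbitals, i.e., 8 qubits before any reduction), cf. Table 1.
transformer = ActiveSpaceTransformer(num_electrons=4, num_spatial_orbitals=4)
as_problem = transformer.transform(problem)
```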
**Energy Computation on Hybrid Quantum-Classical Framework.** The PES computation was performed using the IBM Qiskit Nature module [13] with a PySCF interface. We employed the VQE algorithm and designed it using the Qiskit library. The VQE algorithm was executed on the _statevector_ backend using perfect qubits and also on quantum hardware using the _ibmq-nairobi_ backend via superconducting qubits. Henceforth, we will refer to simulations based on perfect qubits as "VQE - perfect q." and simulations based on physical qubits as "VQE - physical q."
The singlet Hartree-Fock state was defined as the initial state for all quantum computing-based computations. The variational form was constructed by the Unitary Coupled Cluster Single and Double excitation (UCCSD) using a fully-entangled hardware-efficient ansatz. The ansatz implemented on the _ibmq-nairobi_ backend (VQE-physical q.) was parameterized by the rotational parameters obtained from VQE-perfect q. computations. The fermionic terms of the selected AS configurations (1h1l, 2h2l, 3h3l, and 4h4l; see Table 1) were mapped to a qubit Hamiltonian in the minimal basis set (STO-3G) using the Bravyi-Kitaev transformation. The two-qubit reduction technique was employed to decrease the number of qubits by two in each generated qubit Hamiltonian circuit. The SLSQP and SPSA optimizers were used to optimize the ansatz parameters toward minimizing the electronic structure energy on the _statevector_ and _ibmq-nairobi_ backends, respectively. To mitigate errors incurred during qubit readout in the physical-qubit-based implementation, we applied readout error mitigation at _resilience level one_.
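A sketch of the corresponding perfect-qubit VQE pipeline, continuing the previous snippet (assuming the qiskit-algorithms package; the two-qubit reduction and the SPSA/hardware path are omitted for brevity):

```python
from qiskit.primitives import Estimator
from qiskit_algorithms import VQE
from qiskit_algorithms.optimizers import SLSQP
from qiskit_nature.second_q.algorithms import GroundStateEigensolver
from qiskit_nature.second_q.circuit.library import UCCSD, HartreeFock
from qiskit_nature.second_q.mappers import BravyiKitaevMapper

# `as_problem` is the reduced problem from the previous snippet.
mapper = BravyiKitaevMapper()
num_orbitals = as_problem.num_spatial_orbitals
num_particles = as_problem.num_particles

hf = HartreeFock(num_orbitals, num_particles, mapper)  # singlet HF reference
ansatz = UCCSD(num_orbitals, num_particles, mapper, initial_state=hf)

vqe = VQE(Estimator(), ansatz, SLSQP())
vqe.initial_point = [0.0] * ansatz.num_parameters      # start at the HF state

result = GroundStateEigensolver(mapper, vqe).solve(as_problem)
print(result.total_energies)
```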
**Energy Computation on Classical Framework.** The PES of the full \(\mathrm{CO_{2}-Mg^{2+}}\) electronic structure was calculated in PySCF, applying RHF and CCSD methods on the STO-3G basis set to obtain the full electronic structure energy references. To obtain a relevant total energy reference, with respect to the quantum computing-based method and AS Transformation applied, the localized Full Configuration Interaction analysis was performed. This was done by exact Diagonalization of different AS configuration matrices embedded into the Hartree-Fock spin orbitals, which is known as CASCI. The diagonalization was done using _NumpyEigensolver_.
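The classical references can be reproduced with a few lines of PySCF; the geometry is again a placeholder, and the 4-electron, 4-orbital active space mirrors the 2h2l configuration:

```python
from pyscf import cc, gto, mcscf, scf

mol = gto.M(
    atom="O 0 0 -1.16; C 0 0 0.0; O 0 0 1.16; Mg 0 0 3.0",  # placeholder
    basis="sto-3g",
    charge=2,
)
mf = scf.RHF(mol).run()       # low-cost mean-field reference
ccsd = cc.CCSD(mf).run()      # higher-cost correlated reference

# CASCI: exact diagonalization within the chosen active space.
casci = mcscf.CASCI(mf, ncas=4, nelecas=4).run()
print(mf.e_tot, ccsd.e_tot, casci.e_tot)
```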
## 4 Results and Discussion
The choice of an energy calculation method in quantum chemistry depends on the balance between computational resources and the desired level of accuracy. The system state is defined by the Schrödinger equation, and several methods of varying accuracy are employed to solve it. The mean-field approach, such as HF, provides a simplified but computationally efficient representation of electron-electron interactions, treating electrons as independent particles in an average field generated by all other electrons. Since it neglects the electron correlation effect, it yields low-accuracy results, especially for strongly correlated systems [14]. CCSD is an extension of, or correction to, HF. It accounts for some electron correlation by considering various levels of excitation and often yields highly accurate results for many molecular properties. The computational cost of CCSD is high with respect to the other classical methods [15]. The most accurate way to simulate molecular systems is to consider all possible electronic configurations and interactions, which can be done with the FCI approach. FCI becomes computationally intractable for large systems due to its exponential scaling. Local application of FCI on a subset of important configurations (CASCI) strikes a balance between accuracy and computational cost [8]. The configuration interaction size selected is dictated by the computational power of the machine used, and even the most powerful classical machines fail to perform FCI on large molecular systems.
VQE belongs to the category of variational methods, unlike FCI which is a deterministic method. VQE aims to compute an upper bound for the lowest possible expectation value of an observable with respect to a trial wave function. The objective of the VQE is therefore to find a parametrization of the trial wave function, such that the expectation value of the Hamiltonian (energy equation) is minimized. VQE is assumed to achieve higher accuracy than classical mean-field methods but at a lower cost compared to FCI. However, it is worth mentioning that its accuracy depends on factors like the choice of initial state, ansatz, optimization method, execution approach, etc. [16], [17].
**The Total Energies at the Optimized Geometry.** To compare the accuracy level of energy computation for the optimal inter-nuclear distance, the total energies of the electronic structure of \(\mathrm{CO_{2}-Mg^{2+}}\)
(ground state energy) are shown as a function of AS configuration size (number of HOMOs and LUMOs orbitals involved), see Fig. 2. Fig. 2 (A) schematically presents the different AS configurations. The HOMO and LUMO orbitals chosen to create an AS configuration are marked blue. The remaining inactive orbitals are marked red. Different markings highlight that CASCI and VQE computation were applied on AS configurations (blue orbitals), while the mean-field (HF) was applied on inactive space (red). To obtain the total energy, CASCI/ VQE was embedded into the mean field calculation.
In Fig. 2 (B), the RHF and CCSD results, shown as dotted lines, illustrate the classical method thresholds of low and higher accuracy, respectively. One should note that these two methods were applied to the entire \(\mathrm{CO_{2}-Mg^{2+}}\) electronic structure. Two different energy scales were used to evaluate the data: total energy (right axes) and total energy improvement with respect to the RHF accuracy level (left axes, \(\Delta\mathrm{E=E-E_{RHF}}\)). The lowest computed energy, alongside its corresponding method and the available computing resources, is used to represent the highest accuracy level (solid line).
It can be observed that accuracy increases significantly when more than 1h1l orbitals are involved. The AS configuration size is expected to influence the energy value [18]. The VQE-perfect q. 2h2l result is comparable to CASCI 2h2l. This demonstrates that the hybrid model performs comparably to its classical reference, and both are above the CCSD level. The deviation observed for the VQE-physical q. 2h2l implementation in comparison to the perfect qubit and classical outcomes results from erroneous hardware execution. Within the investigated range, enlarging the AS space yields no significant accuracy improvement for the quantum computing-based method. Contrarily, the energy computed with CASCI decreases with increasing AS size. The lowest energy was achieved for the CASCI implementation with 4h4l.
Figure 2: A) The schematic presentation of AS configurations B) The \(\mathrm{CO_{2}-Mg^{2+}}\) relative energy with respect to the \(\mathrm{RHF(\Delta E)}\) and absolute energy (Energy) at optimal bond length. For the methods where AS transformation was applied, \(\Delta E\) and Energy are presented as a function of AS Configuration interaction size.
**Potential Energy Surface Scan.** Fig. 3 (A-D) presents total energy evolution as a function of internuclear distance i.e. PES scan, obtained by various methods (see section Method) for 1h1l, 2h2l, 3h3l, and 4h4l configurations. RHF and CCSD are also shown in the figure representing full electronic system classical computations.
For the 1h1l computation, the VQE and CASCI results match the RHF accuracy level. This can be expected, due to the low number of orbitals considered within the 1h1l AS configuration. In general, one can notice that as the AS size expands, accuracy gradually increases. However, this trend does not hold true for the VQE calculations, with both perfect q. and physical q. Nonetheless, the data obtained by the VQE algorithm are well-converged and demonstrate a trend similar to the energy evolution calculated on classical computers. The 2h2l, 3h3l, and 4h4l PES scan trends differ from the full configuration PES scans (RHF and CCSD) at about 2.6 Å. Namely, the AS results exhibit energy decay for distances larger than \(\sim\) 2.6 Å.
The observed deviation between the VQE and CASCI results is analyzed in Fig. 3 (E-H). For a particular AS configuration, the CASCI computation is taken as a reference, and the other computations are illustrated relative to their corresponding localized CASCI. For the 1h1l configuration, it can be observed again that all the results overlap with CASCI, except the CCSD, which reaches higher accuracy. VQE-perfect q. 2h2l exhibits a reasonable overlap with CASCI 2h2l, both being more accurate with respect to RHF and CCSD. The VQE-physical q. deviation resulting from erroneous behavior of actual quantum hardware can be observed, as
Figure 3: (A), (B), (C), and (D) are the \(\mathrm{CO_{2}-Mg^{2+}}\) PES scans for the 1h1l, 2h2l, 3h3l, and 4h4l AS configurations as a function of internuclear distance, respectively, including the RHF and CCSD results for the full electronic structure. (E), (F), (G), and (H) represent the deviation of the VQE results with 1h1l, 2h2l, 3h3l, and 4h4l configurations from their corresponding localized CASCI energy references, respectively. RHF and CCSD deviations are computed relative to the full electronic structure calculation.
expected. However, the 3h3l and 4h4l configurations suffer from discrepancies between the VQE and CASCI data at low distances. It is expected that these deviations become more obvious as the AS size increases. Still, it is interesting to notice that the energies obtained by VQE, for both perfect q. and physical q., seem to follow their corresponding CASCI results. Moreover, for the STO-3G minimal basis set, these surpass the accuracy of CCSD, despite CCSD's reputation as a computationally demanding approach.
The potential of quantum computing-based methods can be envisioned through the fair agreement between the observed VQE-perfect q. and CASCI results. To further improve the energy computation, one should enlarge the AS size. However, this is computationally expensive, and classical resources would be exhausted before a significant number of orbitals is reached, especially for large and strongly correlated systems. Assuming that quantum computing resources will grow, quantum computing-based methods can be presumed to play a key role in reaching better accuracy levels for chemical system energy calculations. It is important to emphasize that the performance of quantum hardware would need to improve concurrently, as indicated by the deviation between the VQE-physical q. and VQE-perfect q. results. It was observed that the discrepancy between VQE and CASCI increases with the AS configuration size, which suggests that VQE performance degrades as the electronic structure matrix grows. This demonstrates that further algorithm improvements and development are needed in order to reach an industry-relevant quantum computing application.
**3D Potential Energy Surface Scan.** Different adsorption orientation angles for the \(\mathrm{CO_{2}-Mg^{2+}}\) interaction were combined with the existing calculations over the bond distance. This resulted in a 3D PES scan (Fig. 4), obtained by VQE-perfect q.
Results presented in Fig. 4 show that VQE can be used successfully to manipulate two degrees of freedom. Still, one should note that the calculation is not complete with respect to the range of analysis needed to form chemistry-relevant conclusions about, for example, the MOF-CO\({}_{2}\) reaction path.
**The MOF74 Unit Cell Energy Computation.** The idea behind the proposed computation is to draft a direction toward the MOF74 full unit-cell energy calculation. A local energy correction (LEC) method was applied as schematically presented in Fig. 5. The energy of the MOF74 unit cell without \(\mathrm{CO_{2}}\) gases involved, (E(RHF\({}_{\mathrm{S}}\))), was calculated with RHF, representing the low-cost method. The energy of the \(\mathrm{CO_{2}}\) interaction with the most active molecule part, which is six open reaction sites (6 \(\mathrm{Mg^{2+}}\)), was calculated with VQE-perfect q. (E(VQE\({}_{\mathrm{S}}\))). In addition, RHF was applied to the open reaction sites (E(RHF\({}_{\mathrm{C}}\))), as it is required for final energy determination. The final \(\mathrm{E_{LEC}}\) was computed as presented in Eq. 2. The existence of preferential binding sites and entanglement between chemical structures calculated by RHF and VQE were not considered within this study.
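Eq. 2 is not displayed in the text; a plausible reconstruction, assuming the standard subtractive local-correction scheme implied by Fig. 5 (the bracketed term corrects the low-cost RHF description of the open reaction sites with the VQE result), is

\[\mathrm{E_{LEC}}=\mathrm{E(RHF_{S})}+\left[\mathrm{E(VQE_{S})}-\mathrm{E(RHF_{C})}\right]\]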
Figure 4: (a) 3D PES scan of the \(\mathrm{CO_{2}-Mg^{2+}}\) obtained by variation of two degrees of freedom and, (b) Schematic of different orientations of \(\mathrm{CO_{2}}\) on a hypothetical \(\mathrm{CO_{2}-Mg^{2+}}\) embedded in a scaled-up MOF74 structure
Fig. 6 presents the total energy computation for the CO\({}_{2}\)-6Mg-MOF74 interaction and the relative energy with respect to the RHF reference value, \(\Delta\)E. In addition, the VQE-perfect q. 4h4l result is presented as a solid line to illustrate thresholds in the same way as in Fig. 2. It can be observed that the RHF accuracy level is improved by applying the quantum computing-based local energy correction.
## 5 Conclusion and future work
This work aimed to evaluate whether MOF design could be enhanced by quantum computing in the future. To investigate that, a quantum computing-based energy calculation model was constructed following the PISQ approach, and a comparison of perfect and physical q.-based implementations was done in parallel with _ab-initio_ (RHF, CCSD) and CASCI computations. The model was based on complete active space configuration interaction. The potential of quantum computing has been demonstrated through the good agreement of VQE-perfect q. results with localized FCI-based calculations (CASCI). However, an attempt to scale up the model to involve more molecular orbitals and hardware implementation revealed drawbacks of current quantum computing:
Figure 5: Schematic of LEC method application for the PES computation of 6Mg-MOF74 - CO\({}_{2}\) complex
Figure 6: (_Left Axis_:) The CO\({}_{2}\)-6Mg-MOF74 relative total energies with respect to the RHF (\(\Delta\)E) and (_Right Axis_:) absolute total energies at optimal bond length. For the methods where CASCI and VQE approaches were applied, both are presented as a function of growing AS configurations.
the low number of available qubits, erroneous qubit behavior, and algorithm design challenges. A pathway toward MOF74 full unit cell energy computation was proposed. It is based on applying low- and high-cost models to different building structures of the MOF unit cell. The well-converged results followed the expected trend. However, the energy computation needs to capture the interactions between the involved structures in a more representative manner. Perfect qubit-based simulation proved to be a valuable approach for model evaluation. In addition, VQE-perfect q. results were used to parameterize the VQE-physical q. implementation, which shortens hardware runtimes. Perfect qubit simulation was used to draft a pathway toward modeling 3D and full unit cell PES scans. These procedures do not yet reach chemistry-relevant outcomes, and implementing them on hardware brings no advantage. However, they are needed to validate the proof of concept, and the PISQ approach is well suited for that purpose. These indicate some of the most important benefits of the PISQ approach. The authors plan to expand the work by performing hardware implementation beyond the 2h2l AS configuration, to enable further analysis of the quantum potential. More investigation will be done to advance the unit cell energy computation.
|
2310.02269 | ARRQP: Anomaly Resilient Real-time QoS Prediction Framework with Graph
Convolution | In the realm of modern service-oriented architecture, ensuring Quality of
Service (QoS) is of paramount importance. The ability to predict QoS values in
advance empowers users to make informed decisions. However, achieving accurate
QoS predictions in the presence of various issues and anomalies, including
outliers, data sparsity, grey-sheep instances, and cold-start scenarios,
remains a challenge. Current state-of-the-art methods often fall short when
addressing these issues simultaneously, resulting in performance degradation.
In this paper, we introduce a real-time QoS prediction framework (called ARRQP)
with a specific emphasis on improving resilience to anomalies in the data.
ARRQP utilizes the power of graph convolution techniques to capture intricate
relationships and dependencies among users and services, even when the data is
limited or sparse. ARRQP integrates both contextual information and
collaborative insights, enabling a comprehensive understanding of user-service
interactions. By utilizing robust loss functions, ARRQP effectively reduces the
impact of outliers during the model training. Additionally, we introduce a
sparsity-resilient grey-sheep detection method, which is subsequently treated
separately for QoS prediction. Furthermore, we address the cold-start problem
by emphasizing contextual features over collaborative features. Experimental
results on the benchmark WS-DREAM dataset demonstrate the framework's
effectiveness in achieving accurate and timely QoS predictions. | Suraj Kumar, Soumi Chattopadhyay | 2023-09-22T04:37:51Z | http://arxiv.org/abs/2310.02269v1 | # ARROP: Anomaly Resilient Real-time QoS Prediction Framework with Graph Convolution
###### Abstract
In the realm of modern service-oriented architecture, ensuring Quality of Service (QoS) is of paramount importance. The ability to predict QoS values in advance empowers users to make informed decisions, ensuring that the chosen service aligns with their expectations. This harmonizes seamlessly with the core objective of service recommendation, which is to adeptly steer users towards services tailored to their distinct requirements and preferences. However, achieving accurate and real-time QoS predictions in the presence of various issues and anomalies, including outliers, data sparsity, grey sheep instances, and cold start scenarios, remains a challenge. Current state-of-the-art methods often fall short when addressing these issues simultaneously, resulting in performance degradation. In response, in this paper, we introduce an anomaly-resilient real-time QoS prediction framework (called ARRQP). Our primary contributions encompass proposing an innovative approach to QoS prediction aimed at enhancing prediction accuracy, with a specific emphasis on improving resilience to anomalies in the data. ARRQP utilizes the power of graph convolution techniques, a powerful tool in graph-based machine learning, to capture intricate relationships and dependencies among users and services. By leveraging graph convolution, our framework enhances its ability to model and capture complex relationships within the data, even when the data is limited or sparse. ARRQP integrates both contextual information and collaborative insights, enabling a comprehensive understanding of user-service interactions. By utilizing robust loss functions, the approach effectively reduces the impact of outliers during the training of the predictive model. Additionally, we introduce a method for detecting grey sheep users or services that is resilient to sparsity. These grey sheep instances are subsequently treated separately for QoS prediction. Furthermore, we address the cold start problem as a distinct challenge by emphasizing contextual features over collaborative features. This approach allows us to effectively handle situations where newly introduced users or services lack historical data. Experimental results on the publicly available benchmark WS-DREAM dataset demonstrate the framework's effectiveness in achieving accurate and timely QoS predictions, even in scenarios where anomalies abound.
QoS Prediction, Service Recommendation, Graph Convolution, Anomaly Detection
## 1 Introduction
In today's service-oriented digital landscape, ensuring Quality of Service (QoS) is a critical imperative. QoS prediction [1], the process of forecasting service performance, is indispensable for users and systems to make informed decisions. However, this task is riddled with challenges, including data sparsity, the cold start problem, the presence of outliers, and grey sheep instances.
Collaborative filtering (CF) [2] has emerged as a highly promising solution for QoS prediction. However, early CF-based methods [3, 4, 5, 6], which rely on exploring the similarity of users/services, suffer from low prediction accuracy due to their lack of consideration for data anomalies.
Despite some advancements using low-rank matrix factorization [7, 8, 9, 10, 11, 12] and factorization machines [13, 14, 15] to tackle data sparsity, the cold start problem, and scalability, these methods frequently encounter challenges in achieving satisfactory performance. This is primarily because they have limited capacity to capture higher-order features and are incapable of handling various other anomalies such as grey sheep [16] and outliers [8] present in QoS data.
Among recent advancements on QoS prediction, learning-based methods [8, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26] are particularly notable for their performance improvement, as they excel in capturing the complex features of users and services. Nevertheless, the learning-based methods that heavily rely on contextual features [18, 27, 28, 29, 30] for QoS prediction may struggle to achieve desirable prediction accuracy when collaborative QoS features, which are often more relevant, are absent or not adequately considered. To further enhance prediction accuracy, recent techniques in the literature have aimed to address anomalies in QoS prediction, including handling outliers using robust outlier-resilient loss functions [8, 22, 25, 31, 32] and addressing the cold start problem [33]. However, many of these methods tend to focus on a subset of these challenges and may fall short when multiple issues coexist simultaneously, leading to performance degradation. Consequently, there is a pressing need for an innovative approach that not only enhances prediction accuracy but also bolsters resilience to anomalies in the data, which is the primary objective of this paper.
Our earlier work on QoS prediction [23] introduced a graph-based solution to mitigate the issue of sparsity but did not address other challenges. The present research is an extension of our earlier work. Here, we not only enhance our predictive model but also prioritize the resolution of other anomalies, including outliers, the cold start problem, and the presence of grey sheep instances. Consequently, we
have attained a substantial improvement in both prediction time and accuracy compared to our earlier work.
In this paper, we propose a scalable, real-time QoS prediction framework (ARRQP) that effectively addresses multiple anomalies, including outliers, data sparsity, grey sheep instances, and cold start, ultimately enhancing prediction accuracy. Our proposed framework consists of five key components: two anomaly detection blocks responsible for identifying grey sheep users/services and outliers, and three prediction blocks. The first prediction block is engineered to withstand outliers and data sparsity for regular users and services. The other two prediction blocks are specifically dedicated to predicting QoS for grey sheep users/services and newly added users/services that lack sufficient data. The primary contributions of this paper are outlined as follows:
1. We propose a multi-layer multi-head graph convolution matrix factorization (MhGCMF) model complemented by an outlier-resilient loss function. This model is designed to capture the intricate relationships among QoS data, effectively mitigating the influence of outliers. It not only enhances prediction accuracy but also ensures minimal prediction time, making it a valuable addition to QoS prediction methodologies.
2. The synergy between contextual and QoS features, in addition to the spatial features automatically extracted by MhGCMF, significantly enhances the model's expressiveness in capturing the complex, higher-order association between user and services. This enhancement eventually leads to improved prediction performance.
3. We introduce a sparsity-resilient method for detecting grey sheep users or services.
4. Grey sheep instances possess unique characteristics, making it challenging to predict QoS values for such users or services using collaborative filtering alone. Consequently, we have devised a distinct QoS prediction model specifically tailored to address grey sheep users or services. This model incorporates a quantitative distinction measure obtained from the grey sheep detection block, allowing us to provide more accurate predictions for this particular category of users or services.
5. We address the cold start problem by designing a separate model that prioritizes contextual features over collaborative features.
6. We conducted comprehensive experiments using the benchmark WS-DREAM RT and TP datasets [34] to assess the effectiveness of each block within ARRQP, as well as to evaluate the overall performance of ARRQP.
The rest of the paper is organized as follows. Section 2 presents an overview of the problem with its formulation. Section 3 then discusses the proposed solution framework in detail. The experimental results are analyzed in Section 4, while the literature review is presented in Section 5. Finally, Section 6 concludes this paper.
## 2 Overview and Problem Formulation
In this section, we discuss an overview of the QoS prediction problem followed by our problem formulation. We are given:
* A set of \(n\) users \(\mathcal{U}\)
* Contextual information \(\mathcal{C}^{u}\) of each \(u\in\mathcal{U}\); this contextual information includes user id, user region, autonomous system, etc.
* A set of \(m\) web services \(\mathcal{S}\)
* Contextual information \(\mathcal{C}^{s}\) of each \(s\in\mathcal{S}\), where contextual information comprises service id, service region, service provider, etc.
* A QoS parameter \(q\)
* A QoS log matrix \(\mathcal{Q}\) of dimension \(n\times m\) containing past user-service interactions in terms of \(q\), as defined: \[\mathcal{Q}=\begin{cases}q_{ij}\in\mathbb{R}_{>0},&\text{value of $q$ of $s_{j}$ invoked by $u_{i}$}\\ 0,&\text{otherwise}\end{cases}\] (1) Each non-zero entry (\(q_{ij}\neq 0\)) in the matrix represents the value of \(q\) of \(s_{j}\) invoked by \(u_{i}\). Zero entries, on the other hand, denote no interactions between user-service. It may be noted that the QoS log matrix is, in general, a sparse matrix.
The objective of the QoS prediction problem is to predict the QoS value of a given target user-service pair \(u_{i}\) and \(s_{j}\), where \(q_{ij}=0\). In general, the classical QoS prediction problem aims to reduce the prediction error as much as possible. However, minimizing the prediction error is challenging due to the following:
* _Data sparsity problem_ (S): As we discussed earlier, the QoS log matrix is highly sparse. Therefore, minimizing the prediction error while predicting a missing value in the presence of other missing values is a challenging task.
* _Presence of outliers_ (O): The presence of outliers in the QoS log matrix impedes minimizing the prediction error. Therefore, identifying and handling the outliers is essential to meet the objective.
* _Presence of grey sheep users/services_ (GS): The grey sheep users/services are the ones having unique QoS invocation patterns in terms of \(q\). Therefore, predicting the QoS values of the grey sheep users/services as compared to the other users/services is difficult.
* _Cold-start problem_ (C): This is the situation when new users/services have been added to the system. Due to the absence of any past data, it is difficult to predict the QoS values of the new users/services.
**Objective**: This paper aims to design a real-time and scalable QoS prediction framework by addressing the above challenges to attain reasonably low prediction error.
## 3 Methodology
In this section, we discuss our framework, ARRQP, for QoS prediction. ARRQP comprises two major anomaly detection blocks: (a) **G**rey sheep user/service **D**etection block (GD), (b) **O**utlier **D**etection block (OD); and three major prediction blocks: (a) **S**parsity and **O**utlier **R**esilient **R**eal-time **Q**oS **P**rediction block (SORRQP), (b) **G**rey sheep users/services **R**esilient **R**eal-time **Q**oS **P**rediction block (GRRQP), (c) **C**old-start **R**esilient **R**eal-time **Q**oS **P**rediction block (CRRQP). In the following subsections, we discuss each of these blocks in detail. We begin with discussing our first prediction block, SORRQP, which primarily deals with sparsity and outliers.
### _SORRQP Block_
SORRQP leverages graph convolution [35] to deal with the data sparsity. A graph convolutional network (GCN) takes advantage of the graph structure and effectively aggregates node features through message passing between neighboring nodes. The graph representation has more expressive power [35] than most known representations of users and services. Therefore, in this paper, we adopt graph convolution as the fundamental operation in the SORRQP architecture. Here, we propose a multi-layer multi-head graph convolution matrix factorization (MhGCMF) model for QoS prediction. Fig. 1 shows the overview of the SORRQP architecture. Before discussing the further details of MhGCMF, we first define the QoS Invocation Graph (QIG), which is a basic building block of the graph convolution operation.
**Definition 3.1** (QoS Invocation Graph (QIG)).: _A QIG, \(\mathcal{G}=(V_{1}\cup V_{2},E,\mathcal{E}_{1}\cup\mathcal{E}_{2})\), is a bipartite graph, where \(V_{1}\) and \(V_{2}\) are the sets of vertices representing the set of users and services, respectively. An edge \(e_{ij}=(v_{i}^{1}\in V_{1},v_{j}^{2}\in V_{2})\in E\) exists in \(\mathcal{G}\) if \(q_{ij}\neq 0\) in the QoS log matrix \(\mathcal{Q}\). \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) represent the sets of feature embeddings for each node in \(V_{1}\) and \(V_{2}\), respectively. \(\blacksquare\)_
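A minimal sketch of this construction from the QoS log matrix (zeros denote missing invocations, per Eq. 1):

```python
import numpy as np

def build_qig(Q):
    """Bipartite QIG adjacency from the QoS log matrix Q (n x m):
    node i < n is user u_i, node n + j is service s_j, and an edge
    exists wherever q_ij != 0."""
    n, m = Q.shape
    A = np.zeros((n + m, n + m))
    A[:n, n:] = (Q != 0).astype(float)
    A[n:, :n] = A[:n, n:].T
    return A
```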
We now illustrate the details of the feature embedding used in this paper.
#### 3.1.1 Construction of the Feature Embedding
Our initial feature embedding comprises a set of QoS features along with a set of contextual features. We now briefly discuss the details of this embedding.
**QoS features:** The QoS features include three different types of features, which are discussed below.
_(i) Statistical features_ (\(\mathcal{F}_{t}^{u},\mathcal{F}_{t}^{s}\)): To capture the self characteristics of each user \(u_{i}\) and service \(s_{j}\), we compute 5 statistical features, as shown in Table I. It may be noted that since \(\mathcal{Q}(i)\) is a partially filled QoS invocation vector (QIV), the data sparsity affects the statistical features.
_(ii) Collaborative features_ (\(\mathcal{F}_{n}^{u},\mathcal{F}_{n}^{s}\)): They are derived from the QoS log matrix. We perform non-negative matrix factorization [7] of \(\mathcal{Q}\) to obtain the collaborative QoS features of \(u_{i}\) and \(s_{j}\), each with dimension \(d_{n}\).
_(iii) Similarity features_ (\(\mathcal{F}_{s}^{u},\mathcal{F}_{s}^{s}\)): They are also extracted from the QoS log matrix. We employ the cosine similarity metric (CSM) to obtain the similarity between pairwise users \((u_{i},u_{k})\) and services \((s_{j},s_{k})\), computed as:
\[CSM(u_{i},u_{k})=\frac{\mathcal{Q}(i)\cdot\mathcal{Q}(k)}{\left\|\mathcal{Q}(i )\right\|_{2}\left\|\mathcal{Q}(k)\right\|_{2}} \tag{2}\]
\[CSM(s_{j},s_{k})=\frac{\mathcal{Q}^{T}(j)\cdot\mathcal{Q}^{T}(k)}{\left\| \mathcal{Q}^{T}(j)\right\|_{2}\left\|\mathcal{Q}^{T}(k)\right\|_{2}} \tag{3}\]
It is worth noting that the similarity between two users/services is 0 if they do not have any common invocations. Therefore, in general, the similarity features remain sparse due to the sparsity of \(\mathcal{Q}\). Furthermore, depending on the number of users and services, the similarity feature vector may be a high-dimensional vector. It may be noted that the length of the similarity feature vector for each user \(u_{i}\) and service \(s_{j}\) are \(n\) and \(m\), respectively. Therefore, we employ two different auto-encoders [36] for the users and services separately to obtain the low-dimensional similarity feature vector of length \(d_{s}\).
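The three QoS feature families above can be prototyped directly from \(\mathcal{Q}\); a minimal sketch, assuming zeros denote missing invocations and omitting the auto-encoder compression step (the value of \(d_{n}\) is illustrative):

```python
import numpy as np
from sklearn.decomposition import NMF

def statistical_features(Q):
    """Table I statistics (min, max, mean, median, std) per user,
    taken over observed (non-zero) entries; apply to Q.T for services."""
    out = np.zeros((Q.shape[0], 5))
    for i, row in enumerate(Q):
        obs = row[row != 0]
        if obs.size:
            out[i] = [obs.min(), obs.max(), obs.mean(),
                      np.median(obs), obs.std()]
    return out

def collaborative_features(Q, d_n=16):
    """Non-negative factorization Q ~ W @ H: rows of W are user
    features and columns of H are service features of length d_n."""
    nmf = NMF(n_components=d_n, init="nndsvd", max_iter=500)
    W = nmf.fit_transform(Q)
    return W, nmf.components_.T

def cosine_similarities(Q):
    """Eq. 2 for all user pairs at once (pass Q.T for Eq. 3); users
    with no invocations get zero similarity to everyone."""
    norms = np.linalg.norm(Q, axis=1, keepdims=True)
    U = np.divide(Q, norms, out=np.zeros_like(Q, dtype=float), where=norms > 0)
    return U @ U.T
```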
**Contextual features** (\(\mathcal{F}_{c}^{u},\mathcal{F}_{c}^{s}\)): We utilize the contextual information associated with each \(u_{i}\) and \(s_{j}\) to prepare the contextual features. The user contextual features include the user ID, region, and autonomous system (AS) information, whereas the service contextual features comprise the service
| User statistics | Service statistics |
| --- | --- |
| \(min_{i}^{u}=min(\mathcal{Q}(i))\) | \(min_{j}^{s}=min(\mathcal{Q}^{T}(j))\) |
| \(max_{i}^{u}=max(\mathcal{Q}(i))\) | \(max_{j}^{s}=max(\mathcal{Q}^{T}(j))\) |
| \(\mu_{i}^{u}=mean(\mathcal{Q}(i))\) | \(\mu_{j}^{s}=mean(\mathcal{Q}^{T}(j))\) |
| \(med_{i}^{u}=median(\mathcal{Q}(i))\) | \(med_{j}^{s}=median(\mathcal{Q}^{T}(j))\) |
| \(\sigma_{i}^{u}=std\_dev(\mathcal{Q}(i))\) | \(\sigma_{j}^{s}=std\_dev(\mathcal{Q}^{T}(j))\) |

\(std\_dev\): standard deviation; \(\mathcal{Q}(i)\): \(i^{th}\) row of \(\mathcal{Q}\); \(\mathcal{Q}^{T}\): transpose of \(\mathcal{Q}\)

TABLE I: Feature embedding for QIG
Fig. 1: Architecture of SORRQP: (a) Details of GCMFU; (b) Architecture for MhGCMF
ID, service region, and service provider information. We use the one-hot encoding for each contextual information. Here again, the dimension of the feature vector becomes large due to the high number of users and services. Therefore, we employ auto-encoders to obtain the contextual feature vector of length \(d_{c}\).
Finally, all the above feature vectors are concatenated to construct the feature embedding \(\mathcal{E}_{1}^{i}=(\mathcal{F}_{t}^{u_{i}}\|\mathcal{F}_{n}^{u_{i}}\|\mathcal{F}_{s}^{u_{i}}\|\mathcal{F}_{c}^{u_{i}})\in\mathcal{E}_{1}\) and \(\mathcal{E}_{2}^{j}=(\mathcal{F}_{t}^{s_{j}}\|\mathcal{F}_{n}^{s_{j}}\|\mathcal{F}_{s}^{s_{j}}\|\mathcal{F}_{c}^{s_{j}})\in\mathcal{E}_{2}\) for each user \(u_{i}\) and service \(s_{j}\), respectively. Here, \(\mathcal{F}_{t}^{u_{i}}\) refers to the statistical features for \(u_{i}\); the rest are denoted similarly. The length of the initial feature embedding (say, \(f\)) for each user/service is, therefore, \((5+d_{n}+d_{s}+d_{c})\). The initial feature embedding matrix \(F^{0}\) is defined in Eq. 4.
\[F^{0}=(\rho_{i}^{0})\in\mathbb{R}^{N\times f};\;\rho_{i}^{0}=\begin{cases}\mathcal{E}_{1}^{i}\in\mathbb{R}^{f},&\text{if }i\leq n\\ \mathcal{E}_{2}^{i-n}\in\mathbb{R}^{f},&\text{otherwise}\end{cases} \tag{4}\]
The contextual features are less proficient than QoS features in capturing the intricate relationship among users and services, particularly in the context of QoS prediction. However, relying solely on QoS features often results in lower accuracy due to the sparse nature of the QoS log matrix. Therefore, in this paper, we combine both types of features to create the initial feature embedding. In addition to the initial feature embedding, we incorporate spatial collaborative features obtained from the MhGCMF, which forms the core component of SORRQP. The graph architecture in the MhGCMF enables each user/service to integrate the neighbor information into its embedding, resulting in richer representations that facilitate effective QoS prediction. We now discuss the details of MhGCMF. The primary goal of the MhGCMF is to learn the spatial features from the initial feature embedding of the users and services, with the aim of enhancing the efficiency of QoS prediction. The GCMF Unit (GCMFU) is the central component of MhGCMF. Therefore, we begin by explaining the GCMFU.
#### 3.1.2 Architecture of GCMFU
GCMFU aggregates the neighborhood features of a node \(v_{i}^{k}\in(V_{1}\cup V_{2}),\,\,k\in\{1,2\}\) from its neighbors, as modeled by the two equations shown in Fig. 1(a). Here, we leverage the adjacency matrix representation of \(\mathcal{G}\) to aggregate the neighborhood features. To preserve the self features of each node in the aggregation, we additionally set the diagonal entries of the adjacency matrix to 1. The modified adjacency matrix \(\mathcal{A}\in\mathbb{R}^{N\times N}\) is defined in Eq. 5.
\[\mathcal{A}=(a_{ij})\in\{0,1\}^{N\times N};a_{ij}=\begin{cases}1,\text{ if }i\,=\,j\text{ or }e_{ij}\in E\\ 0,\text{ otherwise}\end{cases} \tag{5}\]
where \(N=n+m\). We then normalize \(\mathcal{A}\), as shown in Eq. 6, to avoid the scaling discrepancy caused by the varying degrees of the nodes in \(\mathcal{G}\) (which arises because of the sparsity of \(\mathcal{Q}\)) and generate \(\bar{\mathcal{A}}\). The diagonal degree matrix (\(\mathcal{D}\)), as defined in Eq. 7, is used for the normalization. The normalization reduces the impact of the higher-degree nodes in QoS prediction.
\[\bar{\mathcal{A}}=\mathcal{D}^{-1/2}\,\mathcal{A}\,\mathcal{D}^{-1/2} \tag{6}\]
\[\mathcal{D}=(d_{ij})\in\mathbb{Z}^{N\times N};\,\,\,d_{ij}=\begin{cases} \sum\limits_{k=1}^{N}\mathcal{A}(i,k),\text{ if }i=j\\ 0,\text{ otherwise}\end{cases} \tag{7}\]
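As a concrete illustration, the construction of \(\bar{\mathcal{A}}\) can be sketched as follows; this is a minimal dense-numpy sketch of ours, and a sparse implementation would be preferable at WS-DREAM scale.

```python
# Sketch of Eqs. 5-7: building the self-loop-augmented bipartite adjacency A
# from Q and symmetrically normalizing it.
import numpy as np

def normalized_adjacency(Q: np.ndarray) -> np.ndarray:
    n, m = Q.shape
    N = n + m
    A = np.eye(N)                         # a_ii = 1 preserves self features
    edges = (Q != 0).astype(float)        # edge e_ij exists iff q_ij != 0
    A[:n, n:] = edges                     # user-to-service edges
    A[n:, :n] = edges.T                   # service-to-user edges (bipartite, symmetric)
    d = A.sum(axis=1)                     # node degrees, Eq. 7 (>= 1 due to self-loops)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt    # Eq. 6

A_bar = normalized_adjacency(np.array([[0.3, 0.0, 1.2],
                                       [0.0, 0.8, 0.9]]))   # (n+m) x (n+m) = 5 x 5
```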
The GCMFU receives \(\bar{\mathcal{A}}\) and a feature embedding matrix (say, \(F^{j}\)) as its input. It then performs a non-linear transformation on \(F^{j}\) with the help of a set of learnable parameters (refer to Fig. 1(a)) and produces the updated feature embedding matrix (say, \(F^{j+1}\)) by incorporating the spatial information of the neighborhood nodes.
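Reading Fig. 1(a) as the standard graph-convolution update \(F^{j+1}=\sigma(\bar{\mathcal{A}}F^{j}W)\), one GCMFU can be sketched as below; the authors' exact parameterization may differ, so this is an illustrative reading rather than a verbatim reimplementation.

```python
# One GCMFU, read as F^{j+1} = ReLU(A_bar @ F^j @ W); this parameterization is
# our assumption based on Fig. 1(a).
import tensorflow as tf

class GCMFU(tf.keras.layers.Layer):
    def __init__(self, out_dim: int):
        super().__init__()
        self.dense = tf.keras.layers.Dense(out_dim)   # learnable weights W (+ bias)

    def call(self, a_bar: tf.Tensor, f: tf.Tensor) -> tf.Tensor:
        aggregated = tf.matmul(a_bar, f)              # message passing over neighbors and self
        return tf.nn.relu(self.dense(aggregated))     # non-linear transformation
```

We now discuss the detailed architecture of the MhGCMF.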
#### 3.1.3 Architecture of MhGCMF
Fig. 1(b) summarizes the architecture of the MhGCMF. Given the normalized adjacency matrix \(\bar{\mathcal{A}}\) and the initial feature embedding matrix \(F^{0}\), the objective of the MhGCMF is to learn the spatial collaborative features for each user/service and predict the QoS value of a given user-service pair.
In MhGCMF, \(F^{0}\) first undergoes an initial transformation by passing through a dense layer, resulting in the transformed feature embedding \(F^{1}\). Subsequently, \(F^{1}\) and \(\bar{\mathcal{A}}\) are directed to the multi-head GCMFU (i.e., MhGCMFU) block. MhGCMFU comprises \(N_{h}\) number of GCMFUs followed by a \(1\times 1\) convolution layer. The "multi-head" in MhGCMFU refers to the presence of multiple GCMFUs. The outputs of these multiple GCMFUs are then concatenated channel-wise, and the resulting tensor passes through a \(1\times 1\) convolutional layer. This entire process is repeated \(t\) times, where the output of MhGCMFU is fed back into itself. The output generated by each MhGCMFU block is once again concatenated channel-wise and then forwarded to another \(1\times 1\) convolutional layer. For each \(1\times 1\) convolutional layer, there is a corresponding tuple indicating (padding, stride, and number of filters), as illustrated in Fig. 1(b). The output of the final \(1\times 1\) convolutional layer is split row-wise to produce the user and service embedding matrices. These matrices are subsequently multiplied to generate the predicted QoS log matrix.
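A hedged sketch of one MhGCMFU block and the final factorization step is given below, reusing the GCMFU class from the previous sketch; the exact filter counts and the (padding, stride) tuples of Fig. 1(b) are abstracted away here.

```python
# Sketch of one MhGCMFU block: N_h parallel GCMFUs fused channel-wise by a
# 1x1 convolution, followed by the row-wise split and matrix factorization.
import tensorflow as tf

def mh_gcmfu_block(a_bar, f, heads, conv1x1):
    outs = [h(a_bar, f) for h in heads]               # N_h head outputs, each (N, d)
    stacked = tf.stack(outs, axis=-1)                 # (N, d, N_h), channel-wise
    fused = conv1x1(stacked[tf.newaxis, ...])         # 1x1 conv over head channels
    return tf.squeeze(fused, axis=[0, -1])            # back to (N, d)

n, m, d, num_heads = 3, 4, 8, 2
a_bar = tf.eye(n + m)                                 # stand-in for the normalized adjacency
f0 = tf.random.normal((n + m, d))                     # stand-in for the transformed F^1
heads = [GCMFU(d) for _ in range(num_heads)]          # GCMFU from the sketch above
conv1x1 = tf.keras.layers.Conv2D(filters=1, kernel_size=1)

f1 = mh_gcmfu_block(a_bar, f0, heads, conv1x1)        # repeated t times in the full model
E_u, E_s = f1[:n], f1[n:]                             # row-wise split into embeddings
Q_hat = tf.matmul(E_u, E_s, transpose_b=True)         # predicted QoS log matrix (n x m)
```

We train the MhGCMF for each user-service pair \((u_{i},s_{j})\in\mathcal{U}\times\mathcal{S}\) such that \(q_{ij}\neq 0\) using the Cauchy loss function, as shown in Eq. 8.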
\[\mathcal{C}_{\mathcal{L}}=\sum\limits_{q_{ij}\neq 0}\ln{(1+\frac{||q_{ij}-\hat{q}_{ ij}||^{2}}{\gamma^{2}})} \tag{8}\]
where \(\gamma\) is a hyper-parameter that can be tuned externally, and \(\hat{q}_{ij}\) is the predicted QoS value. The outlier-resilience characteristics of the Cauchy loss function ensure that SORRQP also exhibits outlier resilience. However, SORRQP alone cannot address the other issues, such as grey sheep and cold start.
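For reference, Eq. 8 translates directly into a masked loss, as in the minimal sketch below; the default \(\gamma=0.25\) is the RT value found best in Section 4.2.5.

```python
# Masked Cauchy loss of Eq. 8: only observed entries (q_ij != 0) contribute,
# and gamma controls how strongly large (outlier-like) residuals are damped.
import tensorflow as tf

def cauchy_loss(q_true: tf.Tensor, q_pred: tf.Tensor, gamma: float = 0.25) -> tf.Tensor:
    mask = tf.cast(tf.not_equal(q_true, 0.0), q_true.dtype)
    residual_sq = tf.square(q_true - q_pred)
    return tf.reduce_sum(mask * tf.math.log(1.0 + residual_sq / gamma**2))
```

In the next section, we address the issue of grey sheep.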
### _Grey sheep Detection Block (GD)_
Collaborative Filtering (CF) highly relies on the collaborative relationship between users and services and operates on the premise that similar users and services exist within the system. However, it is worth noting that there can be a small group of users or services with QoS patterns that deviate from the norm, making them distinct from the majority. These users and services are often referred to as _grey sheep_,
representing an anomalous subset within the user or service community. The existence of grey sheep users or services may occasionally lead to substantial inaccuracies in QoS predictions. In this context, our approach begins with the task of identifying grey sheep Users/Services (GSU/GSS). Subsequently, we tackle the challenge of QoS prediction for these identified GSUs and/or GSSs. The GD block within ARRQP primarily focuses on identifying the instances of grey sheep within the QoS data.
To capture the distinct QoS pattern exhibited by a user/service, here, we introduce two concepts: _reliability score_ and _Grey sheep Anomaly (GA) score_[16] for each user/service. These two scores together aid in determining whether a particular user/service qualifies as a GSU/GSS.
**Definition 3.2** (Reliability Score).: _The reliability score of a user \(u_{i}\in\mathcal{U}\) and service \(s_{j}\in\mathcal{S}\), denoted by \(\mathcal{R}(u_{i})\) and \(\mathcal{R}(s_{j})\), respectively, are defined as:_
\[\mathcal{R}(u_{i})=1-\left(\frac{\sigma_{i}^{u}-\min\limits_{u_{k}\in\mathcal{U}}(\sigma_{k}^{u})}{\max\limits_{u_{k}\in\mathcal{U}}(\sigma_{k}^{u})-\min\limits_{u_{k}\in\mathcal{U}}(\sigma_{k}^{u})}\right) \tag{9}\]

\[\mathcal{R}(s_{j})=1-\left(\frac{\sigma_{j}^{s}-\min\limits_{s_{k}\in\mathcal{S}}(\sigma_{k}^{s})}{\max\limits_{s_{k}\in\mathcal{S}}(\sigma_{k}^{s})-\min\limits_{s_{k}\in\mathcal{S}}(\sigma_{k}^{s})}\right) \tag{10}\]

_where \(\sigma_{i}^{u}\) and \(\sigma_{j}^{s}\) represent the standard deviation of the QIVs of \(u_{i}\) and \(s_{j}\), respectively (refer to Table I)._
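A small sketch of Eqs. 9 and 10 follows; since each QIV is only partially filled, a masked standard deviation over the observed entries is assumed here, and the helper names are ours.

```python
# Sketch of the reliability scores (Eqs. 9-10): min-max-normalized standard
# deviations of the QIVs, computed over observed (non-zero) entries only.
import numpy as np

def masked_std(M: np.ndarray) -> np.ndarray:
    """Row-wise std-dev over the non-zero (observed) entries of M."""
    return np.array([row[row != 0].std() if (row != 0).any() else 0.0
                     for row in M])

def reliability(M: np.ndarray) -> np.ndarray:
    sigma = masked_std(M)
    span = sigma.max() - sigma.min()
    return 1.0 - (sigma - sigma.min()) / (span if span > 0 else 1.0)

Q = np.array([[0.3, 0.0, 1.2],
              [0.0, 0.8, 0.9],
              [0.4, 0.0, 1.0]])
R_u, R_s = reliability(Q), reliability(Q.T)   # user and service reliability scores
```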
It may be noted from Eqs. 9 and 10 that a user/service is said to be more reliable than another if the standard deviation of the QIV of the former is lower than that of the latter. We now define the GA score in Definition 3.3.
**Definition 3.3** (Grey sheep Anomaly (GA) Score).: _The GA score of a user \(u_{i}\in\mathcal{U}\) and service \(s_{j}\in\mathcal{S}\), denoted by \(\mathcal{G}(u_{i})\) and \(\mathcal{G}(s_{j})\), respectively, are defined as:_
\[\mathcal{G}(u_{i})=\left(\sum\limits_{s_{j}\in\mathcal{S}_{i}}\left(|q_{ij}-\mu_{i}^{u}-\bar{\mu}_{j}^{s}|\times\mathcal{R}(s_{j})\right)\right)/|\mathcal{S}_{i}| \tag{11}\]

\[\mathcal{G}(s_{j})=\left(\sum\limits_{u_{i}\in\mathcal{U}_{j}}\left(|q_{ij}-\mu_{j}^{s}-\bar{\mu}_{i}^{u}|\times\mathcal{R}(u_{i})\right)\right)/|\mathcal{U}_{j}| \tag{12}\]

\[\bar{\mu}_{j}^{s}=\left(\left(\sum\limits_{u_{i}\in\mathcal{U}_{j}}q_{ij}\right)-\max\limits_{u_{i}\in\mathcal{U}_{j}}\left(q_{ij}\right)-\min\limits_{u_{i}\in\mathcal{U}_{j}}\left(q_{ij}\right)\right)/\left(|\mathcal{U}_{j}|-2\right) \tag{13}\]
_\(\bar{\mu}_{i}^{u}\) is also defined similarly. \(\mathcal{S}_{i}\) denotes the set of services invoked by \(u_{i}\), and \(\mathcal{U}_{j}\) denotes the set of users that invoked \(s_{j}\). \(\mu_{i}^{u}\) and \(\mu_{j}^{s}\) represent the mean of the QIVs of \(u_{i}\) and \(s_{j}\), respectively._
It may be noted from the above equations that the GA score of each user \(u_{i}\) (service \(s_{j}\)) is computed over its respective QIV, denoted by \(\mathcal{Q}(i)\) (\(\mathcal{Q}^{T}(j)\)). For each invocation \(q_{ij}\) (\(\neq 0\)) made by \(u_{i}\) (for \(s_{j}\)), we compute the deviation with respect to the mean of the QIV of \(u_{i}\) (\(s_{j}\)) and the centralized mean of \(s_{j}\) (\(u_{i}\)). The average of these weighted deviations, computed over the QIV of \(u_{i}\) (\(s_{j}\)), is referred to as the GA score of \(u_{i}\) (\(s_{j}\)). The reliability score of the corresponding service (user) is used as the weight in this computation. Notably, if a service has a high reliability score, the corresponding deviation carries more weight than one associated with a service having a lower reliability score. It may be noted that the GA score of a user depends only on the set of services it invoked. Therefore, the GA score is not affected by the data sparsity.
We now define the Grey sheep user (GSU) and service (GSS) below.
**Definition 3.4** (Grey Sheep User (GSU) / Grey Sheep Service (GSS)).: _A user \(u_{i}\in\mathcal{U}\) (service \(s_{j}\in\mathcal{S}\)) is called a GSU (GSS) if \(\mathcal{G}(u_{i})\) (\(\mathcal{G}(s_{j})\)) is more than a given threshold \(\tau_{\mathcal{M}}^{u}\) (\(\tau_{\mathcal{M}}^{s}\))._
\(\tau_{\mathcal{M}}^{u}\) and \(\tau_{\mathcal{M}}^{s}\) are hyper-parameters. In this paper, we consider \(\tau_{\mathcal{M}}^{u}=\mu_{\mathcal{M}}^{u}+c*\sigma_{\mathcal{M}}^{u}\) (\(\tau_{\mathcal{M}}^{s}=\mu_{\mathcal{M}}^{s}+c*\sigma_{\mathcal{M}}^{s}\)), where \(\mu_{\mathcal{M}}^{u}\) and \(\sigma_{\mathcal{M}}^{u}\) (\(\mu_{\mathcal{M}}^{s}\) and \(\sigma_{\mathcal{M}}^{s}\)) are the mean and standard deviation of the GA scores of users (services), respectively. \(c\) is a hyper-parameter, which can be tuned externally.
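The detection pipeline for users can be sketched as follows (services are symmetric); it reuses \(Q\) and \(R_s\) from the previous sketch, assumes every user has at least one invocation, and all helper names are illustrative.

```python
# Sketch of Eqs. 11-13 and the thresholding of Definition 3.4 for users.
import numpy as np

def centralized_mean(v: np.ndarray) -> float:
    """Mean of a QIV's observed entries after dropping one max and one min (Eq. 13)."""
    obs = np.sort(v[v != 0])
    return obs[1:-1].mean() if obs.size > 2 else obs.mean()

def ga_scores_users(Q: np.ndarray, R_s: np.ndarray) -> np.ndarray:
    mu_u = np.array([row[row != 0].mean() for row in Q])          # mean of each user QIV
    mu_bar_s = np.array([centralized_mean(col) for col in Q.T])   # centralized service means
    scores = np.empty(Q.shape[0])
    for i, row in enumerate(Q):
        invoked = np.flatnonzero(row)                             # S_i: services invoked by u_i
        deviations = np.abs(row[invoked] - mu_u[i] - mu_bar_s[invoked])
        scores[i] = np.mean(deviations * R_s[invoked])            # Eq. 11
    return scores

g_u = ga_scores_users(Q, R_s)
c = 2.0
is_gsu = g_u > g_u.mean() + c * g_u.std()   # Definition 3.4 with tau = mu + c * sigma
```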
Once the GD block identifies the GSU/GSS, the next step involves predicting the QoS values for these identified GSUs/GSSs using our next prediction block, GRRQP. In the following section, we discuss the specifics of the GRRQP.
### _GRRQP Block_
The GRRQP is primarily designed to provide QoS predictions for GSUs/GSSs. The rationale behind introducing the GRRQP block stems from the recognition that grey sheep users/services exhibit markedly distinct QoS patterns compared to the majority of users/services. Consequently, relying extensively on collaborative features, as employed in SORRQP, may not be adequate for achieving high prediction accuracy for GSUs/GSSs. Furthermore, predicting QoS values for GSUs/GSSs jointly with other users/services may not be advisable, given that GSUs/GSSs possess distinct characteristics that set them apart from the rest. Therefore, here, we design separate architectures for QoS prediction tailored specifically for GSUs/GSSs.
The GRRQP is an enhancement of SORRQP, featuring supplementary MLPs (Multi-Layer Perceptrons) in addition to the MhGCMF model discussed previously. Here, we have three MLPs designed specifically for (a) non-grey sheep users and GSSs, (b) GSUs and non-grey sheep services, and (c) GSUs and GSSs. The feature vector for each MLP is constructed by concatenating the user features and the service features. If the user \(u_{i}\) is a non-grey sheep user, the embedding vector \(E_{i}^{u}\) (i.e., the \(i^{th}\) row of \(E^{u}\)) obtained from MhGCMF is used as its features. However, if \(u_{i}\) is a grey sheep user, its feature is generated by concatenating the following components: (i) the embedding vector \(E_{i}^{u}\) obtained from MhGCMF, (ii) the initial feature embedding vector \(\mathcal{E}_{1}^{i}\), (iii) the GA score \(\mathcal{G}(u_{i})\), and (iv) the number of invocations of \(u_{i}\). The service features are also generated similarly. The MLPs are trained for the user-service pairs in the respective categories for which \(q_{ij}\neq 0\).
In the following subsection, we illustrate our final prediction block, designed to handle the cold start issue.
### _CRRQP Block_
The term _cold start_ pertains to a scenario in which a new user/service is introduced into a system. This situation poses a significant challenge for QoS prediction due to the lack of available data. The SORRQP system is ill-suited to handle the cold start scenario for the following reasons: (a) The similarity and statistical features in the initial feature embedding cannot provide meaningful information for the newly added user/service due to the absence of historical data. (b) The newly introduced user/service forms an isolated node within the QIG, indicating no interactions among other users/services. These isolated nodes lack the
ability to capture spatial features effectively. As a result, this limitation poses a challenge for SORRQP in making accurate predictions under such circumstances. Therefore, we propose another prediction block, CRRQP, to address the above issue. CRRQP is an enhancement over SORRQP.
In addition to MhGCMF, CRRQP is also equipped with three MLPs to predict the QoS of the following target user-service pairs. (a) Cold Start User (CSU): Here, the target user is new. However, the service has sufficient past data. (b) Cold Start Service (CSS): In this case, the target service is new, while the user has historical data. (c) Cold Start Both (CSB): This scenario involves both the user and service being newly introduced entities without any historical data available for either of them.
The three MLPs are trained separately with three different sets of input features. The input features are constructed by concatenating the user and the service features. If the user/service \(u_{i}/s_{j}\) is not newly added, the embedding vector \(E_{i}^{u}/E_{j}^{s}\) obtained from MhGCMF is used as its features. However, if \(u_{i}/s_{j}\) is newly added, its features are generated by concatenating its contextual features and the collaborative features obtained using non-negative matrix decomposition.
In the next subsection, we discuss our final block for outlier detection.
### _Outlier Detection Block_
In this paper, we employ an unsupervised learning algorithm based on the isolation forest (\(i\)Forest) algorithm [37] to detect the outliers from scratch. The \(i\)Forest algorithm explicitly isolates the outliers rather than profiling the inliers, in contrast to distance- or density-based approaches for outlier detection. The strength of the \(i\)Forest approach lies in its effectiveness over different benchmarks and its high scalability. It uses an ensemble of isolation trees (\(i\)Trees), whose branches grow iteratively until every single data point is isolated. In each \(i\)Tree, the outliers are instances that are isolated near the root of the tree and have comparatively short average path lengths, whereas inliers are isolated toward the far end of the \(i\)Tree. After identifying the outliers, we proceed to eliminate them from the dataset, with the ratio of removed outliers denoted by \(\lambda\) (e.g., \(\lambda=0.1\) means that 10% of the total dataset is considered outliers). This step is essential to evaluate the performance of our framework without outliers.
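A hedged sketch of this step using scikit-learn's IsolationForest is shown below; the paper does not spell out the exact input representation given to \(i\)Forest, so scoring each observed QoS value on its own is our simplification, and the data is synthetic.

```python
# Outlier removal with iForest: contamination plays the role of lambda.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
q_obs = rng.exponential(scale=0.9, size=500)       # synthetic response-time values
q_obs[:10] = 19.0                                   # inject a few extreme outliers

lam = 0.1                                           # ratio of removed outliers (lambda)
forest = IsolationForest(contamination=lam, random_state=0)
labels = forest.fit_predict(q_obs.reshape(-1, 1))   # -1 marks isolated (outlier) points
q_clean = q_obs[labels == 1]
print(f"removed {np.sum(labels == -1)} of {q_obs.size} observations")
```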
Fig. 2 shows the overall architecture of ARRQP. In the next section, we analyze the performance of ARRQP.
## 4 Experimental Results
In this section, we present the experimental analysis of our framework. We implemented our proposed framework in TensorFlow 2.6.2 with Python 3.6.9. All experiments were executed on Linux-5.4.0-133-generic-x86_64-with-Ubuntu-18.04-bionic with an Intel(R) Core(TM) i9-10885H CPU @ 2.40GHz (x86_64 architecture, 16 cores) and 128 GB RAM, with the following caches: L1d: 32K, L1i: 32K, L2: 256K, L3: 16384K.
### _Experimental Setup_
In this subsection, we present a comprehensive overview of the datasets, performance metrics, model configurations utilized to conduct our experiment, and finally, the experimental analysis. We begin by detailing the datasets used in our study.
#### 4.1.1 Datasets
We used the benchmark and publicly available WS-DREAM [34] datasets. The dataset contains two different QoS parameters: response time (RT) and throughput (TP). Table II comprehensively provides the details of the dataset.
#### 4.1.2 Comparison Metric
To measure the performance of our framework, we used two performance metrics: Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) defined as follows:
\[MAE=\frac{1}{|TD|}\sum_{(u_{i},s_{j})\in TD}|q_{ij}-\hat{q}_{ij}| \tag{14}\]
and
\[RMSE=\sqrt{\frac{1}{|TD|}\sum_{(u_{i},s_{j})\in TD}(q_{ij}-\hat{q}_{ij})^{2}} \tag{15}\]
where \(TD\) is the test dataset and \(|TD|\) denotes the size of the test dataset. \(q_{ij}\) and \(\hat{q}_{ij}\) represent the actual and
| Statistics | RT | TP |
| --- | --- | --- |
| Number of Users | 339 | 339 |
| Number of User's Regions | 31 | 31 |
| Number of User's AS | 137 | 137 |
| Number of Services | 5825 | 5825 |
| Number of Service's Regions | 74 | 74 |
| Number of Service's Providers | 2699 | 2699 |
| Number of Valid Invocations | 1873838 | 1831253 |
| Range (min-max) | 0.0010-19.9990 | 0.0040-1000 |
| Mean | 0.9086 | 47.5617 |
| Median | 0.3200 | 13.9530 |
| Standard Deviation | 1.9727 | 110.7970 |

TABLE II: WS-DREAM datasets details for RT and TP
Fig. 2: Overall architecture of ARRQP
the predicted QoS values of a user-service pair \((u_{i},s_{j})\), respectively.
In general, MAE is the average of non-negative differences between the actual and predicted sample that usually provides equal weight to all the samples. However, RMSE measures a quadratic score which gives relatively high weight to large errors.
Furthermore, we used another measure, called _Improvement_, to show the performance improvement of our framework over the past methods, which is defined below.
**Definition 4.1** (Improvement \(\mathcal{I}(M_{1},M_{2})\)).: _Given two error values \(P_{1}\) and \(P_{2}\) (measured in terms of MAE or RMSE) obtained by two different methods \(M_{1}\) and \(M_{2}\), respectively, the
\begin{table}
\begin{tabular}{l|l|c|c|c|c|c|c|c|c} \hline Anomaly & \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{MAE} & \multicolumn{4}{c}{RMSE} \\ \cline{3-10} \multicolumn{1}{c|}{Addressed} & & \multicolumn{1}{c|}{TP-5} & \multicolumn{1}{c|}{TP-10} & \multicolumn{1}{c|}{TP-15} & \multicolumn{1}{c|}{TP-20} & \multicolumn{1}{c|}{TP-5} & \multicolumn{1}{c|}{TP-10} & \multicolumn{1}{c|}{TP-15} & \multicolumn{1}{c|}{TP-20} \\ \hline \hline \multirow{10}{*}{None} & UPCC & 27.5760 & 23.0130 & 26.8800 & 19.2940 & 63.7540 & 56.6210 & 51.9550 & 47.6510 \\ & IPCC & 26.8640 & 26.0590 & 28.5790 & 24.2610 & 65.9700 & 61.1320 & 59.1500 & 54.6970 \\ & WSRE & 26.7330 & 22.6940 & 20.5470 & 18.9640 & 63.4980 & 56.4500 & 51.7890 & 46.9510 \\ & Z (\(\zeta\)) & **50.40** & **52.53** & **52.50** & **53.97** & **36.53** & **34.39** & **32.48** & **29.73** \\ \hline \multirow{10}{*}{S + C} & CSMF [12] & 28.3080 & 26.8580 & 26.6250 & 21.7420 & 72.7040 & 70.0210 & 68.6180 & 61.6790 \\ & LBR [29] & 26.1610 & 21.0008 & 18.5840 & 15.6510 & 80.8103 & 57.7280 & 54.4410 & 51.6790 \\ & NMF [2] & 25.7529 & 27.8411 & 18.9893 & 15.2516 & 68.5173 & 53.9896 & 51.7322 & 48.6330 \\ & GeoMF [33] & 24.7465 & 22.4728 & 17.7908 & 16.2852 & 57.7842 & 49.2456 & 45.3255 & 43.9545 \\ & LACF [41] & 29.737 & 19.4498 & 17.8886 & 46.5880 & 87.8875 & 52.9207 & 49.5640 & 47.4108 \\ & RSNH [46] & 24.1420 & 17.2305 & 14.6880 & 14.3654 & 09.7948 & 50.5289 & 42.647 & 45.822 \\ & PMF [44] & 19.9034 & 16.1755 & 15.0956 & 14.6941 & 50.5408 & 46.4439 & 43.7957 & 42.4855 \\ & LEM-FM [55] & 19.8460 & 17.6410 & 17.0460 & 18.5390 & 73.8200 & 65.8608 & 67.0930 & 61.8030 \\ & LRMF [56] & 19.1090 & 15.9494 & 14.9794 & 13.9206 & 58.0719 & 48.2718 & 44.0682 & 41.7880 \\ & LNL-FM [45] & 18.6512 & 16.0634 & 14.7664 & 14.2612 & 54.3243 & 46.8756 & 44.3871 & 43.0892 \\ & LM-FP [49] & 18.3901 & 15.9125 & 17.4540 & 14.1033 & 51.7765 & 46.1418 & 49.2927 & 41.4084 \\ & NAME [47] & 18.0836 & 15.9808 & 14.6661 & 13.4661 & 33.5286 & 52.8685 & 44.0788 & 43.0206 & 40.7481 \\ & NMF [9] & 17.9297 & 16.6542 & 14.4633 & 13.7099 & 51.6783 & 45.9409 & 43.1596 & 41.1689 \\ & NAME [51] & 16.3818 & 19.33917 & 15.0712 & 17.2044 & 90.6124 & 43.9092 & 45.3519 & 99.9431 \\ & LAFHEL [28] & 13.9675 & 12.2119 & 11.3750 & 15.0924 & 95.8431 & 42.5096 & 42.2770 & 38.6575 \\ & Z (\(\zeta\)) & **14.72** & **11.79** & **14.20** & **20.213** & **16.96** & **28.15** & **13.19** & **14.46** \\ \hline \multirow{10}{*}{S + C} & DMM [27] & 18.4903 & 16.2861 & 15.1406 & 14.7953 & 63.2979 & 55.0821 & 49.9340 & 45.4284 \\ & DCAIF [52] & 17.6576 & 15.3959 & 14.3936 & 16.5971 & 54.5123 & 45.9013 & 42.6235 & 41.2194 \\ \cline{1-1} & NCF [53] & 15.4680 & 13.6160 & 12.2840 & 11.1330 & 49.7030 & 46.3400 & 42.3170 & 41.2630 \\ \cline{1-1} & SPP+LIMF [54] & 15.3820 & 13.6540 & 11.9040 & 11.0890 & 51.5660 & 45.6810 & 41.6310 & 39.5340 \\ \cline{1-1} & LOCK [25] & 13.8440 & 12.8320 & 11.2700 & 14.8470 & 43.4820 & 39.8130 & 38.9980 \\ \cline{1-1} & DCLG [32] & 12.9280 & 11.3040 & 10.4000 & 9.9640 & 43.9600 & 39.9600 & 37.6370 & 36.1760 \\ \cline{1-1} & Z (\(\%\)) & **-1.97** & **4.70** & **6.22** & **12.15** & **8.34** & **7.30** & **7.97** & **8.80** \\ \hline \(\mathbf{S+GS+O+C}\) & **ARQQF** & **13.183** & **10.722** & **9.7640** & **8.7285** & **40.3005** & **37.0415** & **34.9693** & **32.9910** \\ \hline \end{tabular}
\end{table} TABLE IV: Performance of ARRQP on Throughput (TP)
\begin{table}
\begin{tabular}{l|l|c|c|c|c|c|c|c|c} \hline Anomaly & \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{MAE} & \multicolumn{4}{c}{RMSE} \\ \cline{3-10} & & RT-5 & RT-10 & RT-15 & RT-20 & RT-5 & RT-10 & RT-15 & RT-20 \\ \hline \hline \multirow{10}{*}{None} & UPCC [38] & 0.7695 & 0.7201 & 0.6521 & 0.5921 & 1.6940 & 1.6101 & 1.4695 & 1.3993 \\ & IPCC [39] & 0.7326 & 0.7125 & 0.6783 & 0.6503 & 1.6346 & 1.5459 & 1.4104 & 1.3199 \\ & WSRec [3] & 0.6794 & 0.6211 & \(\cdots\) \end{tabular} \end{table} TABLE III: Performance of ARRQP on Response Time (RT)
improvement of \(M_{1}\) over \(M_{2}\) is defined as:_
\[\mathcal{I}(M_{1},M_{2})=((P_{2}-P_{1})/P_{2})\times 100\% \tag{16}\]
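Both metrics, together with the Improvement measure of Definition 4.1, translate directly into code; the sample call below reproduces the roughly 47.2% MAE gain of SORRQP over CMF on RT-10 reported in Table VI.

```python
# Direct implementations of Eqs. 14-16.
import numpy as np

def mae(q: np.ndarray, q_hat: np.ndarray) -> float:
    return float(np.mean(np.abs(q - q_hat)))

def rmse(q: np.ndarray, q_hat: np.ndarray) -> float:
    return float(np.sqrt(np.mean((q - q_hat) ** 2)))

def improvement(p1: float, p2: float) -> float:
    """Improvement of method M1 (error p1) over method M2 (error p2), in %."""
    return (p2 - p1) / p2 * 100.0

print(improvement(0.0930, 0.1762))   # ~47.2, matching Table VI
```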
#### 4.1.3 Parameter Configurations
To validate the performance of ARRQP, we used four different training-testing split-ups: \((5\%,95\%),(10\%,90\%),(15\%,85\%),(20\%,80\%)\). In this paper, we use the notation RT-\(x\) (TP-\(x\)) to denote the training-testing split up for \((x\%,(100-x)\%)\). In other words, for RT-\(x\), we used \(x\%\) data of the given response time values for training, while the rest was used for testing. Moreover, we used 20% of the training data as the validation dataset for our experiment. We repeated each experiment 10 times and reported the average value.
In all our experiments, unless specifically stated otherwise, we maintained consistent usage of the parameters with values as listed in Table V.
### _Experimental Results_
We now present the analysis of our experimental results.
#### 4.2.1 Comparison with State-of-the-Art Methods
We compared ARRQP with 30 and 24 major State-of-the-Art (SoA) methods on RT and TP datasets, respectively. Tables III and IV show the performance of ARRQP on RT and TP datasets for different training densities in terms of prediction accuracy, respectively. We have the following observations about the performance of ARRQP.
1. _Comparison of prediction accuracy with SoA_: As observed from Tables III and IV, the prediction error decreased as the methods handled more anomalies. It is worth noting that ARRQP outperformed the major SoA methods since it addressed more variations of anomalies compared to contemporary methods. The improvements of ARRQP over the other methods for each case (the methods are divided based on the anomalies addressed by them) are shown in the tables in bold.
2. _Comparison of prediction accuracy between ARRQP and DCLG_: It may be noted from Table IV that the performance of ARRQP degraded by \(1.97\%\) compared to DCLG [32] in terms of MAE on TP-5. However, the performance of ARRQP improved by \(8.34\%\) for the same in terms of RMSE. This shows that even though ARRQP increased the overall error compared to DCLG, it was able to reduce the large prediction errors significantly. For all the other cases, ARRQP outperformed DCLG in terms of both MAE and RMSE.
3. _Change of prediction accuracy with training density_: As observed from Tables III and IV, the performance of ARRQP improved with the increase in the training density. This is because, as the density increases, the number of neighbors of each node in the QoS invocation graph increases, which allows each user and service to have additional spatial collaborative information in the feature vector.
4. _Training time of ARRQP_: The training of ARRQP is performed in offline mode, as discussed in Section 3. The training time for ARRQP varied from 1.8 to 21.75 minutes, depending on the number of heads used in MhGCMF, which varied from 1 to 8 in our experiments.
5. _Performance of ARRQP in terms of prediction time_: The prediction time for ARRQP (including GRRQP and CRRQP) was in the order of \(10^{-6}\) seconds, while the prediction time for SORRQP was in the order of \(10^{-9}\) seconds. This difference in prediction time between SORRQP and ARRQP was because of the use of an additional MLP employed in GRRQP and CRRQP. The prediction time of ARRQP was negligible compared to the minimum service response time shown in Table II. Moreover, the prediction of ARRQP is approximately 10 times faster than that of TRQP [23], which was in the order of \(10^{-5}\) seconds and was the best-known among all the other methods reported in this paper in terms of prediction time.
In summary, ARRQP achieved better prediction accuracy compared to the major SoA methods with a reasonable learning time and negligible prediction time, enabling it to integrate with a real-time service recommendation system.
We now discuss the performance of ARRQP addressing various anomalies discussed in Section 2.
#### 4.2.2 Performance of ARRQP on Addressing Anomalies
This subsection analyzes various anomaly detection mechanisms used in this paper and the performance of ARRQP in dealing with those anomalies. We begin with discussing the performance of ARRQP handling outliers.
**Impact of Outlier Detection Method:** Table VI shows the performance of SORRQP after the removal of 10% outliers (i.e., \(\lambda=0.1\)). Here, we consider the SoA methods after removing \(10\%\) or more data with anomalies, as mentioned in the second column of Table VI. As observed from the final row of Table VI, SORRQP outperformed all the major SoA methods in terms of prediction accuracy by a significant improvement margin.
**Impact of Loss Functions:** Fig.s 4 (a) and (b) show the impact of different loss functions on the performance of ARRQP on RT and TP datasets, respectively. The MSE is an outlier-sensitive loss function; therefore, it performed the worst. MAE, Huber loss, and Cauchy loss can deal with the outliers. In our framework, we adopt the Cauchy loss to train our model since it outperformed all other loss functions, as evident from Fig.s 4 (a) and (b).
**Impact of Grey sheep Detection Algorithm:** Fig.s 5 (a) and (b) show the grey sheep detection capability of GRRQP on RT and TP datasets, respectively. We may infer the following observations from Fig.s 5 (a) and (b):
1. For \(\lambda=0\), as the value of \(c\) decreased, the number of detected GSUs and GSSs increased; removing them decreased the prediction error. Table VII shows the number of detected GSUs and GSSs for different values of \(c\).
2. When we removed 10% of the outliers along with the GSUs and GSSs for every value of \(c\), the performance improvement of GRRQP was quite significant compared to removing the GSUs and GSSs alone, without removing any outliers.
3. For \(\lambda=0.1\), the decrease in the prediction error with a decrease in the value of \(c\) was not that significant compared to the case for \(\lambda=0\). This finding suggests that the presence of a large number of outliers might contribute to the emergence of more grey sheep instances.
**Performance of GRRQP:** Table VIII presents the performance comparison between GRRQP and SORRQP to show the effectiveness of our framework in handling the GSUs and GSSs. While SORRQP only addresses the sparsity and outlier issues, GRRQP offers explicit solutions for GSUs and GSSs in QoS prediction. This experiment was divided into two cases: (i) Case-1: considering all users and GSSs (\(\mathcal{U},\mathcal{S}_{\mathcal{G}}\)), (ii) Case-2: considering GSUs and all services (\(\mathcal{U}_{\mathcal{G}},\mathcal{S}\)). The improvements of GRRQP over SORRQP for Case-1 and Case-2 are shown in the last two columns of Table VIII.
**Performance of ARRQP After Grey sheep Removal:** Table IX demonstrates the performance of ARRQP after removing the data corresponding to GSUs and GSSs for \(c=2\). Here, we compared our framework with the SoA methods that attempted to detect GSUs and GSSs and produced the results after removing them. As observed from Table IX, ARRQP outperformed all the major SoA methods reported in Table IX with a significant improvement margin, with or without removing outliers.
Moreover, it is worth mentioning that even though the SoA methods attempted to detect GSUs and GSSs, they failed to handle them. In contrast to the existing literature, ARRQP detects GSUs and GSSs and effectively handles them.
**Performance of CRROP:** Cold start scenarios occur when new users or services are added to the system. Therefore, we divided our experiment into three cases as discussed below:
1. CSU: This is the first case where the experiment was designed with a set of cold start users without any data in the system. A cold start percentage (CSP) used for the experiment determines the number of cold start users. The experiment was limited to cold start users and all the services for this case.
2. CSS: This is the second case. Here, the experiment was designed with a set of cold start services having no data in the system. Here CSP also determines the number of cold start services. The experiment was performed on all the users and cold start services for this case.
3. CSB: This is the final case, which combines CSU and CSS. In this case, if the value of CSP is considered as \(x\), the \(x\%\) of the total users and \(x\%\) of the total services
| Methods | Anomaly removed | RT-10 MAE | RT-10 RMSE | RT-20 MAE | RT-20 RMSE | TP-10 MAE | TP-10 RMSE | TP-20 MAE | TP-20 RMSE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LLMF [54] | 10% | 0.4011 | 0.6120 | 0.4037 | 0.6106 | 14.5509 | 33.8899 | 13.1105 | 27.97648 |
| DALF [57] | 10% | 0.3955 | 0.7466 | 0.3439 | 0.6779 | 13.1966 | 27.8531 | 11.9619 | 26.0299 |
| CAP [58] | 10% | 0.3603 | 0.6439 | 0.3521 | 0.6640 | 16.4269 | 32.9558 | 16.3125 | 32.9334 |
| TAP [59] | 10% | 0.3385 | 0.5512 | 0.2843 | 0.4985 | 22.1419 | 43.4987 | 19.8723 | 40.9333 |
| TRQP [23] | 13% | 0.2540 | - | 0.2520 | - | 10.5760 | - | 9.5660 | - |
| OffDQ [21] | 15% | 0.2000 | - | 0.1800 | - | 9.1600 | - | 8.6700 | - |
| CMF [8] | 10% | 0.1762 | 0.3705 | 0.1524 | 0.3399 | 8.4573 | 24.9137 | 7.2501 | 20.80927 |
| **SORRQP** | 10% | **0.0930** | **0.3556** | **0.0794** | **0.2710** | **4.9652** | **18.3332** | **4.0876** | **16.6755** |
| \(\mathcal{I}\) (%) | - | **47.21** | **4.02** | **47.90** | **24.69** | **41.29** | **26.41** | **43.62** | **20.18** |

TABLE VI: Performance of SORRQP after removal of outliers

Fig. 4: Analysis of loss functions (a) RT and (b) TP datasets

Fig. 5: Analysis of grey sheep detection metric (a) RT and (b) TP datasets
Fig. 4: Analysis of loss functions (a) RT and (b) TP datasets
Fig. 5: Analysis of grey sheep detection metric (a) RT and (b) TP datasets
| Dataset | \(c=1\) | \(c=2\) | \(c=3\) |
| --- | --- | --- | --- |
| RT | (55, 254) | (25, 150) | (8, 101) |
| TP | (24, 304) | (7, 233) | (4, 176) |

TABLE VII: Number of (GSU, GSS) for different \(c\)
are designated as the cold start users and services. The experiment here was performed jointly on all the users with cold start services and cold start users with all the services.
Here, we present the analysis for various values of CSP. Fig.s 6 (a) and (b) show the sensitivity of cold start on the performance of ARRQP on RT and TP datasets, respectively. As evident from Fig.s 6 (a) and (b), the performance of ARRQP deteriorated with the increasing value of CSP. Besides the unavailability of information on cold start users and services, the sparsity caused by the increasing CSP value was another reason for the performance degradation of ARRQP.
Fig.s 8 (c) and (d) show the model ablation study comparing the single-head and multi-head (with and without attention) models on RT and TP datasets, respectively. The following can be inferred from Fig.s 8 (c) and (d):
1. The multi-head without attention model outperformed the multi-head with attention model [63] by an average improvement of \(23.69\%\) and \(30.08\%\) on RT and TP datasets, respectively. Moreover, it also surpassed the single-head without attention model by an average improvement of \(1.34\%\) and \(4.74\%\) on RT and TP datasets, respectively.
2. A similar trend can be observed after the removal of 10\(\%\) outliers.
#### 4.2.5 Impact of Hyper-parameters
In this subsection, we study the influence of various hyper-parameters used in our framework.
**Impact of \(\gamma\):**\(\gamma\) is a hyper-parameter present in the Cauchy loss function. Fig.s 9 (a) and (b) illustrate the performance of ARRQP with the change in the value of \(\gamma\) on RT and TP datasets, respectively. Among different \(\gamma\) values, we observed that \(\gamma=0.25\) and \(\gamma=10\) achieved the best performance for all training densities on the RT and TP datasets, respectively. The same trend was followed after removing the \(10\%\) of outliers from the datasets.
**Impact of Number of Heads:** We tuned our model with different numbers of heads. The experimental results are shown in Fig.s 9 (c) and (d) for the RT and TP datasets, respectively. The following observations may be drawn from the figures:
1. For RT datasets, although there is no fixed pattern, one head is sufficient to achieve the best performance at \(10\%\) training density, while three heads performed best at \(20\%\) training density. Moreover, removing \(10\%\) of outliers enhances the model's performance.
2. For TP datasets, the best performance is observed with four heads at 10% training density and five heads at \(20\%\) training density.
#### 4.2.6 Statistical Significance Testing
To establish the reliability of ARRQP, we assess the statistical significance of our model using confidence intervals (CIs) [64]. A CI essentially provides a range around a measurement, indicating the precision of that measurement. Typically, CIs are computed for various confidence levels (CLs), such as \(90\%,95\%\), and \(99\%\). An \(x\%\) CL implies that there is a \((100-x)\%\) chance that the experimental conclusion is wrong.
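For illustration, a normal-approximation interval around the mean error over repeated runs can be computed as below; the paper does not state its exact estimator, and the per-run errors here are hypothetical.

```python
# Illustrative CI computation at the 90/95/99% confidence levels.
import numpy as np
from scipy import stats

run_errors = np.array([0.3546, 0.3548, 0.3547, 0.3549, 0.3545])  # hypothetical per-run MAEs
mean, sem = run_errors.mean(), stats.sem(run_errors)
for cl in (0.90, 0.95, 0.99):
    lo, hi = stats.norm.interval(cl, loc=mean, scale=sem)
    print(f"{int(cl * 100)}% CI: ({lo:.4f}, {hi:.4f})")
```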
The insights derived from Table XI enable us to evaluate the error precision of ARRQP under diverse conditions, facilitating well-informed decisions concerning the reliability of ARRQP. Notably, as we move from lower to higher CL, the CI tends to become wider. In summary, this comprehensive analysis strengthens the validity of ARRQP and underscores its practical significance.
## 5 Related Work
This section presents a concise overview of relevant literature to discuss various anomalies in QoS prediction. In the domain of QoS prediction, Collaborative Filtering (CF) emerges as a widely adopted method. CF-based methods predominantly rely on the collaborative relationships among users and services. They often leverage the similarity between users and/or services as a foundational element to facilitate QoS prediction. However, it is worth noting that CF-based methods often encounter various difficulties, including coping with high data sparsity, addressing cold-start issues, handling outliers, dealing with grey sheep problems, and exploiting intricate relationships among users and services. These challenges eventually result in elevated prediction errors [1, 2]. To address these challenges and enhance QoS prediction accuracy, more advanced techniques have been proposed in the literature. We now discuss various challenges and their possible solutions explored in the literature.
**Data Sparsity:** Data sparsity is a prevalent challenge in the field of QoS prediction, which occurs due to an insufficient number of interactions between users and services, making it difficult to predict accurate QoS values because of the limited available information. To tackle the sparsity problem, the literature has proposed several solutions, including: _(i) Low-rank matrix decomposition:_ It [7, 8, 9, 11, 12, 40, 44, 46, 50, 51, 54] tackles the sparsity problem by capturing underlying patterns and relationships among
Fig. 8: Feature ablation study on (a) RT, (b) TP; Model ablation study on (c) RT, (d) TP datasets
Fig. 9: Analysis of \(\gamma\) on (a) RT and (b) TP; Analysis of number of heads in multi-head without attention model on (c) RT and (d) TP datasets
users and services by decomposing the user-service interaction matrix into low-dimensional matrices and subsequently reconstructing it. However, while matrix decomposition-based methods are effective in handling sparsity, they encounter additional challenges, such as noise handling or difficulty in capturing higher-order relationships among users/services, potentially leading to inaccuracies in predictions. _(ii) Designing additional data imputation methods:_ These are sometimes used to predict the missing values [14, 20, 21, 48]. The imputed values are then utilized by the prediction module to estimate the final QoS value for the target user-service pair. However, the performance of the data imputation method directly affects the quality of predictions. Hence, the chosen data imputation method should be sophisticated enough to yield accurate results. Nonetheless, employing data imputation adds an extra layer of complexity to the prediction process, which, in turn, may result in reduced scalability and slower responsiveness of the prediction system. _(iii) Use of contextual data for prediction:_ As seen in some approaches [27, 28, 29, 30, 65], leveraging contextual data for prediction can help alleviate sparsity issues in certain cases. However, contextual data is not always readily available. Furthermore, when contextual information is used in isolation without any collaborative QoS data, it becomes challenging to capture the complex relationships between users and services. As a result, prediction accuracy may degrade.
**Presence of Outliers:** Outliers are data points that deviate significantly from the majority of the data, introducing anomalies that can hinder the performance of prediction algorithms. In the literature, outliers are predominantly addressed through two distinct approaches. _(i) Utilizing outlier-resilient loss functions:_ Some methods employ specialized loss functions that are resilient to the influence of outliers. These loss functions, such as L1 loss [22, 27, 54], Huber loss [25, 31, 32], and Cauchy loss [8, 23], have been demonstrated to be more effective than the standard L2 loss function when dealing with outliers. These robust loss functions down-weight the impact of outliers during the training process, allowing the model to focus more on learning from the majority of the data. _(ii) Detecting and removing outliers:_ Alternatively, certain methods perform explicit outlier detection [8, 21, 23, 52], followed by the subsequent removal of the detected outliers, resulting in a more reliable dataset for the prediction model.
**Presence of Grey sheep:** Grey sheep users/services refer to those with QoS invocation patterns that are sufficiently different from the others, often characterized by unique underlying behaviors. Therefore, the traditional and intuitive solutions may not effectively address them. The existing methods focus on identifying grey sheep instances and avoiding them. Here are some approaches used to detect the grey sheep. RAP [60] mitigates the data credibility problem by leveraging the user reputation ranking algorithm, which identifies untrustworthy users present in the data. TAP [59] ensures credibility by employing unsupervised K-means clustering with beta distribution to calculate the reputation of users. CAP [58] utilizes a two-phase K-means clustering-based credibility-aware QoS prediction method, where clusters with a minimum number of users are considered untrustworthy. However, despite these efforts, data sparsity remains a significant challenge that can hinder the identification of grey sheep users or services. Sparse data may not provide enough information to accurately distinguish between typical and atypical behaviors, making it challenging to address grey sheep instances effectively.
Moreover, to the best of our knowledge, prediction methods tailored specifically for grey sheep users/services are mostly unexplored in the literature. However, understanding and effectively accommodating these grey sheep users/services in prediction algorithms can be crucial for achieving more accurate and personalized QoS predictions.
**Cold Start:** The term _cold start_ is widely used to describe the scenario when new users or services are introduced to a system [33]. These freshly added users or services typically lack data in the form of invocation history or have minimal associated data, leading to subpar prediction performance. It is important to note that the solutions designed to handle data sparsity are also highly relevant for addressing cold-start scenarios. Techniques developed to mitigate sparsity-related challenges can be adapted to improve predictions for cold-start instances by leveraging available data effectively [9, 15, 25, 28, 29, 30, 41, 43, 45, 51, 47].
Table XII presents a comprehensive summary of the literature on anomaly detection and handling, encompassing 38 SoA methods and our proposed framework. Notably, our framework demonstrates its efficacy in detecting and effectively handling all the mentioned anomalies.
**Positioning Our Proposed Work:** In contrast to the approaches discussed above, this paper introduces several innovative solutions to mitigate the challenges associated with QoS prediction.
1. We propose a simple yet effective solution leveraging graph convolution [35], which excels in dealing with high sparsity by aggregating and sharing neighborhood information of the nodes.
| CL (%) | RT-5 | RT-10 | RT-15 | RT-20 | TP-5 | TP-10 | TP-15 | TP-20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 90 | (0.3545, 0.3549) | (0.3052, 0.3055) | (0.2661, 0.2664) | (0.2399, 0.2402) | (12.5458, 12.5576) | (10.4042, 10.4152) | (8.8769, 8.8863) | (8.3365, 8.3450) |
| 95 | (0.3545, 0.3549) | (0.3051, 0.3055) | (0.2661, 0.2665) | (0.2398, 0.2402) | (12.5447, 12.5587) | (10.4032, 10.4162) | (8.8759, 8.8874) | (8.3357, 8.3458) |
| 99 | (0.3544, 0.3550) | (0.3051, 0.3056) | (0.2661, 0.2665) | (0.2398, 0.2402) | (12.5425, 12.5609) | (10.4011, 10.4183) | (8.8741, 8.8892) | (8.3341, 8.3474) |

TABLE XI: Confidence Intervals
TABLE XII: Summary of anomaly detection and anomaly handling across the SoA methods and ARRQP
2. Integrating both contextual information and QoS data, along with spatial collaborative information, offers a holistic perspective on user-service interactions. This comprehensive approach enhances the system's ability to understand user behaviors and service performance, evidently resulting in more accurate predictions.
3. The outlier-resilient Cauchy loss enables us to minimize the impact of outliers during the model training.
4. The grey sheep detection block in ARRQP empirically shows its effectiveness in identifying grey sheep users/services. Importantly, it has the advantage of being able to handle the issue of data sparsity because it does not rely on measuring similarity between users/services.
5. In addition to the above, ARRQP offers an effective model specifically designed for predicting the QoS values of grey sheep users or services.
6. The CRRQP block is specifically designed to address the prediction of QoS for newly added users/services. It recognizes the challenges posed by the cold start scenario, where these newcomers lack historical data, and takes special measures to make accurate predictions despite the limited information available. This specialized attention to cold start situations ensures that ARRQP can provide meaningful QoS predictions even for users or services with minimal or no prior usage history.
In summary, ARRQP demonstrates resilience to anomalies and is a well-suited framework for integration into real-time service recommendation systems, given its high scalability and responsiveness.
## 6 Conclusion
This paper introduces an anomaly-resilient real-time QoS prediction framework (ARRQP), designed to achieve highly accurate QoS prediction in negligible time by addressing various challenges and anomalies, including the presence of outliers, data sparsity, grey sheep instances, and the cold start problem. ARRQP proposes a multi-head graph convolution matrix factorization model to capture complex relationships and dependencies among users and services. By doing so, it enhances its capacity to predict QoS accurately, even in the face of data limitations. Furthermore, ARRQP integrates contextual information and collaborative insights, enabling a holistic understanding of user-service interactions. Robust loss functions are employed to mitigate the impact of outliers during model training, improving predictive accuracy. In addition to QoS prediction, ARRQP introduces a sparsity-resilient method for detecting grey sheep users or services. These distinctive instances are subsequently handled separately for QoS prediction, ensuring tailored and accurate predictions. Moreover, the cold start problem is addressed as a distinct challenge, emphasizing the importance of contextual features. ARRQP also exhibits high responsiveness and scalability, rendering it well-suited for integration into real-time systems. This characteristic positions ARRQP as a valuable tool for applications where timely and efficient QoS prediction is essential.
The incorporation of advanced anomaly detection and mitigation algorithms holds promise for enhancing the performance of QoS prediction methods. Additionally, the development of a time-aware extension to ARRQP is a crucial area of focus for our future research endeavors. These directions represent our commitment to advancing the capabilities of QoS prediction and ensuring its relevance in dynamic and evolving systems.
|
2309.17061 | SCALE: Synergized Collaboration of Asymmetric Language Translation
Engines | In this paper, we introduce SCALE, a collaborative framework that connects
compact Specialized Translation Models (STMs) and general-purpose Large
Language Models (LLMs) as one unified translation engine. By introducing
translation from STM into the triplet in-context demonstrations, SCALE unlocks
refinement and pivoting ability of LLM, thus mitigating language bias of LLM
and parallel data bias of STM, enhancing LLM speciality without sacrificing
generality, and facilitating continual learning without expensive LLM
fine-tuning. Our comprehensive experiments show that SCALE significantly
outperforms both few-shot LLMs (GPT-4) and specialized models (NLLB) in
challenging low-resource settings. Moreover, in Xhosa to English translation,
SCALE experiences consistent improvement by a 4 BLEURT score without tuning LLM
and surpasses few-shot GPT-4 by 2.5 COMET score and 3.8 BLEURT score when
equipped with a compact model consisting of merely 600M parameters. SCALE could
also effectively exploit the existing language bias of LLMs by using an
English-centric STM as a pivot for translation between any language pairs,
outperforming few-shot GPT-4 by an average of 6 COMET points across eight
translation directions. Furthermore we provide an in-depth analysis of SCALE's
robustness, translation characteristics, and latency costs, providing solid
foundation for future studies exploring the potential synergy between LLMs and
more specialized, task-specific models. | Xin Cheng, Xun Wang, Tao Ge, Si-Qing Chen, Furu Wei, Dongyan Zhao, Rui Yan | 2023-09-29T08:46:38Z | http://arxiv.org/abs/2309.17061v1 | # SCALE: Synergized Collaboration of Asymmetric Language Translation Engines
###### Abstract
In this paper, we introduce SCALE, a collaborative framework that connects compact Specialized Translation Models (STMs) and general-purpose Large Language Models (LLMs) as one unified translation engine. By introducing translation from STM into the triplet in-context demonstrations, SCALE unlocks refinement and pivoting ability of LLM, thus mitigating language bias of LLM and parallel data bias of STM, enhancing LLM speciality without sacrificing generality, and facilitating continual learning without expensive LLM fine-tuning. Our comprehensive experiments show that SCALE significantly outperforms both few-shot LLMs (GPT-4) and specialized models (NLLB) in challenging low-resource settings. Moreover, in Xhosa to English translation, SCALE experiences consistent improvement by a 4 BLEURT score without tuning LLM and surpasses few-shot GPT-4 by 2.5 COMET score and 3.8 BLEURT score when equipped with a compact model consisting of merely 600M parameters. SCALE could also effectively exploit the existing language bias of LLMs by using an English-centric STM as a pivot for translation between any language pairs, outperforming few-shot GPT-4 by an average of 6 COMET points across eight translation directions. Furthermore we provide an in-depth analysis of SCALE's robustness, translation characteristics, and latency costs, providing solid foundation for future studies exploring the potential synergy between LLMs and more specialized, task-specific models1.
Footnote 1: Code available at: [https://github.com/Hannibal046/SCALE](https://github.com/Hannibal046/SCALE)
## 1 Introduction
Large Language Models (LLMs) have recently revolutionized the field of natural language processing (OpenAI, 2023; Touvron et al., 2023; Peng et al., 2023), significantly influencing machine translation (MT) by delivering exceptional performance without requiring a bilingual corpus, particularly in high-resource languages (Brown et al., 2020; Garcia et al., 2023). Moreover, as a unified multi-task learner, LLMs represent a substantial step towards artificial general intelligence (Bubeck et al., 2023), with the potential to overcome not only language barriers but also cultural boundaries simultaneously through a simple "translate and explain" prompt.
Despite their advancements, LLM-based translation systems still confront several challenges. Firstly, there exists a significant language bias towards English (e.g., 92.1% of the GPT-3 pre-training corpus is English, while French, the second largest, represents only 1.8%2), which significantly constrains multilingual translation performance, especially for those low-resource languages (Scao et al., 2022; Hendy et al., 2023). Secondly, as a practical approach for system improvement, fine-tuning LLMs poses great challenges. These include (1) the trade-off between speciality and generality (Cheng et al., 2023; Lin et al., 2023), and (2) the prohibitively high cost associated with tuning large-scale models (Hu et al., 2021; Dettmers et al., 2023). In contrast, traditional Specialized Translation Models (STMs)--those based on encoder-decoder architecture, trained with supervision and significantly smaller in size (Sutskever et al., 2014; Vaswani et al., 2017)--serve as specialists for specific translation tasks and could be efficiently fine-tuned. However, these models lack general
language capabilities and are potentially susceptible to parallel data bias, such as the memorization of low-quality samples (Raunak et al., 2022).
In this paper, we demonstrate for the first time the possibility of unifying these two asymmetric translation engines in a single framework. Our work, SCALE, connects LLMs and STMs by utilizing the LLM's most enigmatic capability: in-context learning. Rather than employing source-target pairs as in conventional few-shot translation (Garcia et al., 2023; Vilar et al., 2023), SCALE first samples translations from an STM and then uses triplets consisting of a source sentence, an STM-generated set and a target sentence as in-context demonstrations to unlock the refinement and pivoting abilities of LLMs. With SCALE, we can (1) mitigate both the language bias of LLMs, by utilizing an STM that concentrates on a specific language pair, and the parallel data bias of STMs, by using a general-purpose LLM as the main body of the system; (2) enhance the speciality of LLMs without compromising generality; (3) facilitate continual learning within the framework by updating only the lightweight STM, thus avoiding expensive LLM fine-tuning. By employing SCALE, we create a more efficient and effective system that combines the best of both translation engines.
Our comprehensive experiments reveal that SCALE considerably outperforms few-shot LLMs (e.g., GPT-4) and specialized models (e.g., NLLB) in the challenging low-resource setting, as depicted in Figure 1. Moreover, in Xhosa to English translation, SCALE experiences consistent improvement by a 4 BLEURT score without tuning LLM and surpasses few-shot GPT-4 by 2.5 COMET score and 3.8 BLEURT score when equipped with a compact model consisting of merely 600M parameters. Remarkably, SCALE can effectively exploit the existing language bias of LLMs by using an English-centric STM as a pivot for translation between any language pairs, outperforming few-shot GPT-4 by an average of 6 COMET points across eight translation directions. Furthermore, we conduct an in-depth analysis of the robustness, translation characteristics, and latency costs associated with SCALE. Our findings provide valuable insights and encourage further research in this field.
## 2 The SCALE Framework
In this section, we present the proposed SCALE method and provide an overview illustrated in Figure 2. Popularized by GPT-3 (Brown et al., 2020), In-context Learning (ICL) allows LLMs to perform a wide variety of tasks, even newly created ones (Bills et al., 2023), by leveraging few-shot learning with a limited number of demonstrations. For a translation task from a source language \(\mathcal{X}\) to a target language \(\mathcal{Y}\), an LLM with parameters \(\theta\) carries out ICL by conditioning on \(k\) source-target paired examples \(\mathbb{E}=(x_{1},y_{1})\oplus(x_{2},y_{2})\oplus\dots\oplus(x_{k},y_{k})\) and the test source sentence \(x\), generating the target \(y\) in an auto-regressive manner as \(y_{t}\sim p_{\theta}(y_{t}|\mathbb{E},x,y_{<t})\). In this scenario, the LLM must analyze the provided examples to discern the input distribution, output distribution, input-output mapping, and formatting to successfully complete the task (Press et al., 2022; Wei et al., 2023). Different from conventional ICL, SCALE introduces an intermediate variable \(\mathbb{Z}\) as reference
Figure 1: Translation results of few-shot LLM (GPT-4), STM (NLLB) and SCALE (ours) for six low-resource languages measured by COMET and BLEURT.
between source \(x\) and target \(y\), transforming each demonstration example into a triplet \((x,\mathbb{Z},y)\). The variable \(\mathbb{Z}\) is a generation set sampled from a specialized translation model \(\mathbf{M}_{\mathcal{X}\mapsto\mathcal{Y}}\) trained on a labeled dataset. The final input to the LLM consists of the instruction, demonstrations, and source sentence combined in a prompt template: \(\mathcal{T}((x_{1},\mathbb{Z}_{1},y_{1})\oplus(x_{2},\mathbb{Z}_{2},y_{2})\oplus\dots\oplus(x_{k},\mathbb{Z}_{k},y_{k}),(x,\mathbb{Z}))\). Unlike language understanding tasks that have a fixed label set (Xu et al., 2023), the hypothesis space of a translation model is actually infinite, so we can sample multiple generation paths from the STM for one single source sentence to provide a more comprehensive generation guide for the LLM. The SCALE framework, though conceptually straightforward, demonstrates several advantages over STMs and LLMs, as highlighted below:
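To make the triplet format concrete, below is a minimal sketch of how such a prompt could be assembled. The helper names (`build_scale_prompt`, `format_stm_set`) and the instruction wording are illustrative assumptions, not the exact template from Appendix A.1.

```python
# Illustrative sketch of SCALE prompt assembly (not the template from
# Appendix A.1). Each demonstration is a triplet (x, Z, y), where Z is the
# generation set sampled from the STM.

def format_stm_set(stm_samples):
    # Render the STM generation set Z, one candidate per line.
    return "\n".join(f"Draft {i + 1}: {s}" for i, s in enumerate(stm_samples))

def build_scale_prompt(demos, test_source, test_stm_samples,
                       src_lang="Xhosa", tgt_lang="English"):
    parts = [f"Translate the following {src_lang} sentences into {tgt_lang}, "
             f"using the draft translations as references.\n"]
    for src, stm_samples, tgt in demos:  # demonstration triplets (x, Z, y)
        parts.append(f"Source: {src}\n{format_stm_set(stm_samples)}\n"
                     f"Translation: {tgt}\n")
    # Query: the test source and its STM generation set; target left open.
    parts.append(f"Source: {test_source}\n"
                 f"{format_stm_set(test_stm_samples)}\nTranslation:")
    return "\n".join(parts)

demos = [("Molo, unjani?",
          ["Hello, how are you?", "Hi, how are you doing?"],
          "Hello, how are you?")]
print(build_scale_prompt(demos, "Enkosi kakhulu.", ["Thank you very much."]))
```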
**Refinement** For the \(\mathcal{X}\) to \(\mathcal{Y}\) translation task, when the intermediate variable \(\mathbb{Z}\) comes from \(\mathbf{M}_{\mathcal{X}\mapsto\mathcal{Y}}(x)\), SCALE essentially conducts few-shot learning in a multi-task way by introducing an additional refinement task. Refinement has long been proven effective in MT (Xia et al., 2017; Cheng et al., 2022), and this also holds true for LLM-based translation. In this refinement process, we pass sampled sentences and their confidence scores (probability scores) from the STM to an LLM. The LLM then digests the information carried by the sampled set and infers the generation space of the STM, which guides the LLM to generate output that is more consistent with the local data distribution (Xu et al., 2023). Since the final translation is delivered by an LLM, SCALE can also mitigate the parallel data bias from STMs and exhibits robustness by not merely copying and pasting the draft translation from STMs, as shown in §5.3.
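As an illustration of how the sampled set and its confidence scores might be obtained in practice, the following is a hedged sketch using the public NLLB checkpoint via Hugging Face `transformers`. The decoding settings (beam search, and using the length-normalized `sequences_scores` as confidence) are assumptions for the sketch and may differ from our actual sampling procedure.

```python
# A sketch of drawing a generation set Z with per-sequence confidence
# scores from an NLLB STM (illustrative settings, not our exact pipeline).
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "facebook/nllb-200-distilled-600M"
tok = AutoTokenizer.from_pretrained(name, src_lang="xho_Latn")
stm = AutoModelForSeq2SeqLM.from_pretrained(name)

def sample_generation_set(source, n_paths=3, tgt_lang="eng_Latn"):
    inputs = tok(source, return_tensors="pt")
    out = stm.generate(
        **inputs,
        forced_bos_token_id=tok.convert_tokens_to_ids(tgt_lang),
        num_beams=max(4, n_paths),
        num_return_sequences=n_paths,
        output_scores=True,
        return_dict_in_generate=True,
        max_new_tokens=128,
    )
    texts = tok.batch_decode(out.sequences, skip_special_tokens=True)
    # sequences_scores are length-normalized log-probabilities; exponentiate
    # to obtain a (0, 1] confidence passed to the LLM alongside each draft.
    confs = torch.exp(out.sequences_scores).tolist()
    return list(zip(texts, confs))

for text, conf in sample_generation_set("Molo, unjani?"):
    print(f"{conf:.2f}\t{text}")
```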
**Pivoting** Considering the predominantly English-centric nature of most LLMs (Brown et al., 2020; Touvron et al., 2023), SCALE can employ an intermediate variable \(\mathbb{Z}\) from \(\mathbf{M}_{\mathcal{X}\mapsto\text{English}}(x)\) where the target language \(\mathcal{Y}\) is not necessarily English. Here, \(\mathbb{Z}\) serves as a pivot point for LLMs to enhance their understanding of the source sentence and yield improved translations. This can also be regarded as a form of knowledge transfer from high-resource languages to low-resource languages (Chen et al., 2017; Kim et al., 2019; Jiao et al., 2023).
**Updating** A significant limitation of existing LLM-based translation systems is the inherent complexity of LLM continual learning. This complexity arises from several factors, including the delicate balance between speciality and generality (Lin et al., 2023), the catastrophic forgetting problem (Yong et al., 2023), and the substantial computational demands (Dettmers et al., 2023). In contrast, the SCALE framework offers a more efficient and streamlined approach to continuous updating. By exclusively updating the lightweight \(\mathbf{M}_{\mathcal{X}\mapsto\mathcal{Y}}\) component, the framework ensures that the LLM remains untouched, thus preserving its general language capabilities. This selective updating process not only mitigates the issue of catastrophic forgetting but also reduces the computational burden of fine-tuning associated with LLM-based translation systems.
## 3 Experimental Setup
### Dataset
Our evaluation datasets encompass a diverse set of languages, spanning both low- and high-resource settings and deriving from various language families. To facilitate reproducibility and data sharing,
Figure 2: The SCALE framework, comprised of a lightweight specialized model and a frozen large language model with triplet in-context demonstrations.
all our evaluation datasets come from the devtest split of Flores-200 (NLLB Team et al., 2022), a publicly available many-to-many evaluation data set covering 200 languages from all over the world.
### Translation Systems
We compare our approach with cutting-edge academic systems including both specialized models and LLMs, as well as one commercial system, Microsoft Translator3.
Footnote 3: [https://azure.microsoft.com/en-us/products/cognitive-services/translator](https://azure.microsoft.com/en-us/products/cognitive-services/translator)
We have two strong specialized models:
* **M2M100**(Fan et al., 2021) is the first multilingual encoder-decoder translation model that can translate between any pair of 100 languages without relying on English data.
* **NLLB**(NLLB Team et al., 2022) is a supervised translation model suite covering from 169M to 54.5B (MoE) parameters with encoder-decoder architecture, capable of delivering high-quality translations directly between 200 languages.
For few-shot LLMs, we consider:
* **XGLM**(Lin et al., 2022) is a suite of multilingual generative language models trained on a corpus covering a diverse set of languages; the largest, XGLM-7.5B, outperforms a comparably sized GPT-3 model in multilingual settings.
* **GPT-3.5**4 is a GPT model specially optimized for conversational purposes and shows remarkable performance in machine translation tasks (Jiao et al., 2023). Footnote 4: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
* **GPT-4**(OpenAI, 2023) is the latest and the most powerful version of GPT-series.
We use both GPT-3.5 and GPT-4 from Microsoft Azure OpenAI Service5. Unless otherwise noted, the number of few-shot examples for the LLMs and SCALE is set to 10, and the sample selection strategy follows Agrawal et al. (2022). The prompts we use can be found in Appendix A.1.
### Evaluation Metrics
Because neural metrics have shown higher correlation with human preference (Freitag et al., 2022; Rei et al., 2020) and are widely adopted in the recent literature (Hendy et al., 2023; Garcia et al., 2023), we mainly evaluate our system with (1) **COMET-22**6, a reference-based neural metric (Rei et al., 2022) combining direct assessments, sentence-level scores, and word-level tags from multidimensional quality metrics error annotations, (2) **COMETKiwi**7, a reference-free quality estimation model from Rei et al. (2022), and (3) **BLEURT**(Sellam et al., 2020), a learnable evaluation metric with a regression model trained on ratings data. For completeness, we also include the results of lexical metrics such as spBLEU (NLLB Team et al., 2022) and chrF++ (Popović, 2017).
Footnote 5: [https://azure.microsoft.com/en-us/products/ai-services/openai-service](https://azure.microsoft.com/en-us/products/ai-services/openai-service)
Footnote 6: [https://huggingface.co./Unbabel/wmt22-comet-da](https://huggingface.co./Unbabel/wmt22-comet-da)
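For reference, scoring with these neural metrics is straightforward with the open-source `unbabel-comet` package; the sketch below assumes version 2.x of the package, and the example triplet is invented.

```python
# Minimal sketch of COMET-22 scoring with the unbabel-comet package (>= 2.0).
from comet import download_model, load_from_checkpoint

ckpt = download_model("Unbabel/wmt22-comet-da")  # reference-based COMET-22
model = load_from_checkpoint(ckpt)

data = [{
    "src": "Molo, unjani?",           # source sentence (invented example)
    "mt": "Hello, how are you?",      # system translation
    "ref": "Hello, how are you?",     # human reference
}]
out = model.predict(data, batch_size=8, gpus=0)
print(out.system_score)               # corpus-level score; out.scores is per-segment
# For the reference-free COMETKiwi variant, the "ref" field is dropped and the
# corresponding quality-estimation checkpoint is loaded instead.
```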
## 4 Experimental Results
In this section, we conduct various experiments to show the effectiveness of our framework. In §4.1, we verify the effectiveness of the refinement ability within SCALE by comparing with STMs and few-shot LLMs. In §4.2, we focus on non-English pairs to test the pivoting ability of SCALE. In §4.3, we show the continual learning results of SCALE with a fixed LLM and an evolving STM.
### SCALE Refinement
To evaluate the refinement capabilities of SCALE, this section primarily concentrates on low-resource languages, which currently pose significant challenges for few-shot LLMs. Our approach
showcases its versatility by incorporating languages from diverse families and scripts, including Assamese (asm_Beng), Armenian (hye_Armn), Amharic (amh_Ethi), Xhosa (xho_Latn), Uyghur (uig_Arab), Khmer (khm_Khmr), Nepali (npi_Deva), and Sindhi (snd_Arab). For additional data details, please refer to Appendix A.2.
We adopt three kinds of baseline systems as described in §3.2. For the supervised NLLB model suite, we choose the NLLB-3.3B version; for SCALE-refine, the LLM is GPT-4 and the STM is also NLLB-3.3B for a fair comparison.
The results are displayed in Table 1. As observed, few-shot LLMs, including GPT-4, significantly trail behind specialized models in all translation directions. Even with Xhosa belonging to the same language family as English, the GPT-4 model fails to deliver results comparable to the NLLB model. In contrast, our framework, by combining LLMs and STMs, demonstrates superior performance over few-shot GPT-4 by an average of 2.96 COMET points and 5 BLEURT points, and surpasses the strong NLLB model in 8/8 directions. Interestingly, when the performance gap is substantial (e.g., SCALE-refine over GPT-4), the lexical metric spBLEU aligns with COMET and BLEURT. However, when comparing SCALE-refine with NLLB, although COMET-22, COMETKiwi, and BLEURT exhibit consistent patterns, spBLEU displays degradation with the GPT-based system in 4 out of 8 directions. Similar findings are also reported in Vilar et al. (2023); Hendy et al. (2023).
### SCALE Pivoting
In this section, we demonstrate the performance of SCALE-pivot, in which the variable \(\mathbb{Z}\) is not directly pertinent to the current translation directions but functions as a pivot. Specifically, we examine the performance of few-shot GPT-4 and SCALE-pivot on Lao\(\rightarrow\mathbb{Y}\) translations, where \(\mathbb{Y}\) represents a language set encompassing both low-resource and high-resource languages. For the low-resource languages, we include Assamese (asm_Beng), Armenian (hye_Armn), Amharic (amh_Ethi), and Xhosa (xho_Latn), and we have German (deu_Latn), Czech (ces_Latn), Bulgarian (bul_Cyrl), and Greek (ell_Grek) for the high-resource setting.
\begin{table}
\begin{tabular}{l c c c c|c c c c} \hline \hline
 & **COMET-22** & **COMETKiwi** & **BLEURT** & **spBLEU** & **COMET-22** & **COMETKiwi** & **BLEURT** & **spBLEU** \\ \hline
 & \multicolumn{4}{c|}{asm\_Beng} & \multicolumn{4}{c}{hye\_Armn} \\
NLLB & 85.6 & 82.8 & 72.1 & 33.9 & 88.3 & 87.5 & 77.0 & 43.0 \\
M2M100 & n/a & n/a & n/a & n/a & 75.9 & 76.5 & 58.9 & 23.7 \\
Microsoft & 83.5 & 81.7 & 68.8 & 29.6 & 85.2 & 85.0 & 71.5 & 34.6 \\
XGLM & 62.7 & 57.8 & 38.8 & 3.7 & 43.9 & 50.2 & 20.5 & 0.2 \\
GPT-3.5 & 78.6 & 76.7 & 61.0 & 18.1 & 77.0 & 77.2 & 60.5 & 19.4 \\
GPT-4 & 83.9 & 80.9 & 69.1 & 27.9 & 86.2 & 86.0 & 73.1 & 35.6 \\
SCALE-refine & **86.6** & **83.2** & **73.8** & 34.1 & **88.8** & **88.0** & **77.8** & 42.3 \\ \hline
 & \multicolumn{4}{c|}{amh\_Ethi} & \multicolumn{4}{c}{xho\_Latn} \\
NLLB & 86.9 & 84.5 & 73.6 & 36.4 & 80.7 & 65.8 & 74.0 & 40.1 \\
M2M100 & 72.3 & 72.0 & 54.8 & 18.5 & 68.0 & 62.1 & 59.0 & 25.7 \\
Microsoft & 87.5 & 84.6 & 74.7 & 41.9 & n/a & n/a & n/a & n/a \\
XGLM & 50.2 & 43.9 & 17.8 & 0.1 & 39.6 & 41.7 & 37.1 & 1.6 \\
GPT-3.5 & 58.8 & 54.2 & 31.7 & 3.4 & 69.1 & 65.5 & 58.3 & 21.9 \\
GPT-4 & 83.2 & 81.9 & 67.3 & 27.1 & 78.8 & 67.1 & 70.8 & 34.5 \\
SCALE-refine & **88.0** & **85.3** & **75.7** & 37.6 & **82.1** & **67.3** & **75.7** & 40.0 \\ \hline
 & \multicolumn{4}{c|}{uig\_Arab} & \multicolumn{4}{c}{khm\_Khmr} \\
NLLB & 85.4 & 84.4 & 70.4 & 27.5 & 86.1 & 85.4 & 72.2 & 35.4 \\
M2M100 & n/a & n/a & n/a & n/a & 69.6 & 71.6 & 54.0 & 17.6 \\
Microsoft & 82.7 & 81.7 & 66.2 & 21.6 & 80.2 & 80.5 & 63.3 & 25.6 \\
XGLM & 37.1 & 52.8 & 16.9 & 0.2 & 48.6 & 53.7 & 21.6 & 0.7 \\
GPT-3.5 & 73.7 & 74.2 & 53.0 & 11.6 & 73.3 & 73.0 & 53.2 & 13.9 \\
GPT-4 & 83.7 & 82.8 & 67.4 & 23.1 & 84.6 & 84.0 & 69.9 & 29.1 \\
SCALE-refine & **86.4** & **85.0** & **72.2** & 27.9 & **87.1** & **85.9** & **73.9** & 34.7 \\ \hline
 & \multicolumn{4}{c|}{npi\_Deva} & \multicolumn{4}{c}{snd\_Arab} \\
NLLB & 90.4 & 88.3 & 77.1 & 45.0 & 86.9 & 72.5 & 75.5 & 44.4 \\
M2M100 & 75.2 & 73.6 & 55.1 & 21.2 & 49.8 & 47.2 & 39.2 & 6.4 \\
Microsoft & 89.8 & 88.2 & 75.3 & 42.8 & 83.6 & 77.4 & 70.4 & 38.5 \\
XGLM & 72.9 & 67.0 & 48.8 & 8.3 & 53.8 & 45.1 & 29.8 & 1.8 \\
GPT-3.5 & 87.2 & 85.4 & 69.9 & 29.3 & 75.6 & 68.1 & 58.8 & 17.3 \\
GPT-4 & 90.2 & 88.1 & 76.3 & 40.8 & 83.2 & 75.3 & 69.9 & 32.3 \\
SCALE-refine & **91.1** & **88.8** & **78.1** & 44.0 & **87.5** & **79.5** & **76.6** & 42.9 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Translation results of eight low-resource languages to English. The best results are in **bold** and the second best are underlined. SCALE-refine is compared with specialized models (NLLB, M2M100), a commercial system (MS Translator), and few-shot LLMs (XGLM, GPT-3.5, GPT-4).
The results are presented in Figure 3. Firstly, from the GPT-4 results alone, we observe that the language bias of the LLM heavily affects translation performance: the few-shot GPT-4 model typically excels in the high-resource setting but struggles in the low-resource one. Furthermore, it is evident that SCALE-pivot can enhance the performance of GPT-4 in both low- and high-resource settings, with a more significant gain in the high-resource setting (an average improvement of 6.8 COMET-22 points for high-resource versus 5.2 for low-resource).
### SCALE Updating
In this section, we explore the potential enhancement of our framework by keeping the LLM fixed and solely updating the STM. Specifically, we use M2M100-12B and the NLLB model suite, ranging from 600M to 3.3B parameters, as our evolving STMs. We conduct experiments on the Xhosa \(\rightarrow\) English direction and adopt the prompt format of SCALE-refine. The experimental results are displayed in Figure 4, leading to the following observations:
(1) The overall framework can be consistently improved with a fixed LLM and a continuously evolving STM; (2) SCALE, when equipped with a small model containing only 600M parameters, can outperform GPT-4 by an absolute 2.5 COMET-22 points and 3.8 BLEURT points; (3) Equipped
Figure 4: Translation results from Xhosa to English with evolving STMs in the SCALE framework.
Figure 3: Translation results from Lao to both low- and high-resource languages, where GPT-4 uses few-shot prompting and SCALE-pivot uses English as the pivot language.
with an STM (M2M100) of relatively lower performance than the original few-shot GPT-4, SCALE demonstrates strong robustness by not merely copying and pasting the less satisfactory reference answers provided by M2M100, which we investigate in detail in §5.3.
Interestingly, we also observe that the growth patterns exhibited by lexical metrics and neural semantic metrics differ. With M2M100 and NLLB-600M as the STM, both kinds of metrics experience substantial improvement, while with NLLB-1.3B and 3.3B as the STM, SCALE maintains the same lexical accuracy while continually enhancing translation performance as measured by neural semantic metrics.
## 5 Further Analysis
### Translation Characteristics
To gain a deeper understanding of the translation characteristics of different systems (few-shot LLMs, STMs, and SCALE) beyond overall translation quality, we employ the following measurements, as suggested by Hendy et al. (2023):
1. **Translation Fluency:** Since LLMs are optimized by predicting the next token, their translations tend to display a language modeling bias that favors fluency over adequacy. To investigate this, we utilize an independently trained open-source language model (GPT2-XL (Radford et al., 2019)) to measure the perplexity score of the translation output (a minimal scoring sketch follows this list).
2. **Translation Non-Monotonicity:** This metric evaluates the extent to which a translation adheres to the source sentence's structure, calculating the deviation from the diagonal in the word-to-word alignment. Translations that are more paraphrastic or less literal tend to deviate from closely tracking the source word order across language pairs (Hendy et al., 2023). We apply the non-monotonicity metric proposed by Schioppa et al. (2021).
3. **Unaligned Source Words:** Another measure of literalness is the count of unaligned source words (Hendy et al., 2023; Raunak et al., 2023). When accounting for quality, less literal translations are likely to include more words that do not align with those in the source sentence.
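The fluency measurement in particular is easy to reproduce. Below is a minimal sketch that scores a single (invented) sentence with GPT2-XL via Hugging Face `transformers`; the model's mean token-level negative log-likelihood is exponentiated to obtain perplexity.

```python
# Minimal sketch of the fluency measurement: perplexity of a translation
# under an independently trained LM (GPT2-XL, per the setup above).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2-xl")
lm = GPT2LMHeadModel.from_pretrained("gpt2-xl").eval()

@torch.no_grad()
def perplexity(text):
    ids = tok(text, return_tensors="pt").input_ids
    # Passing labels=input_ids makes the model return the mean token NLL.
    loss = lm(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("The committee approved the proposal without amendment."))
```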
We present the **Translation Fluency** results of \(\mathbb{X}\rightarrow\) English translation in Figure 5, where \(\mathbb{X}\) remains the same as used in Section 4.1. It is evident that regardless of the translation quality delivered by the LLM, whether superior (SCALE) or inferior (GPT-4) compared to the STM (NLLB), the LLM translation generally demonstrates higher fluency than the STM. Additionally, in 6 out of the 8 languages examined, SCALE produces lower perplexity scores than the original GPT-4 output. This suggests that the STM-generated variable \(\mathbb{Z}\) can effectively aid the GPT-4 model in further decreasing its generation uncertainty.
For **Non-Monotonicity** and **Unaligned Source Words**8, we choose Xhosa\(\rightarrow\)English translation with different STMs, and the results are shown in Figure 6. We also include PPL score for completeness. We find that both the USW and NM scores for STM are higher than those of GPT-4. This
Figure 5: Perplexity score from \(\mathbb{X}\rightarrow\)English translation measured by GPT2-XL.
indicates that even though STM provides higher translation quality, it results in less literal translations. However, for SCALE, it effectively reduces GPT-4's NM score while maintaining a moderate USW score. This suggests that during the SCALE refinement process, the model primarily adheres to the original LLM output structure while taking cues from STM's word selection. We show several concrete cases in Appendix A.3.
### Multipath Sampling
In this section, we list the results of the multiple-path sampling strategy in Table 2. We test Xhosa\(\rightarrow\)English with one-shot SCALE-refine. The results show that, without increasing the shot number in few-shot learning, using the STM to sample more generation paths consistently improves the overall performance, which could be useful in extremely low-resource settings where demonstration samples are hard to acquire.
### Ablation
In this section, we conduct an ablation study for each key design in our framework. We examine the following variants: (1) without confidence: This model follows the same setting as SCALE-refine in §4.1, except that we do not pass the confidence score of each token as input. (2) zero-shot: This variant removes all in-context-learning examples, keeping only the translation instruction and the reference answer from the STM. (3) one-shot: This model utilizes only one shot, in contrast to the ten-shot results presented in §4.1. (4) zero-shot-M2M: This model also implements zero-shot, but the STM used is M2M100, a less performant model than the original few-shot GPT-4. This is employed to assess the robustness of our framework.
The outcomes of our ablation study are showcased in Table 3. It is evident that each component in our framework performs effectively, with the in-context-learning setting providing the largest performance gain. This indicates that simply offering a reference answer to the LLM without in-context samples does not adequately guide the model in utilizing those references effectively. Furthermore, the number of ICL examples is also an essential factor in the process.
Regarding the SCALE zero-shot-M2M variant, its performance is significantly inferior to that of the few-shot LLM due to the poor quality of the M2M100 output. From this observation, we can
\begin{table}
\begin{tabular}{c c c c} \hline \hline \# Path & **COMET-22** & **BLEURT** & **spBLEU** \\ \hline
1 & 80.4 & 73.2 & 35.6 \\
2 & 81.2 & 74.3 & 37.1 \\
3 & 81.4 & 74.7 & 38.0 \\
4 & 81.5 & 74.8 & 38.3 \\
5 & 81.4 & 74.9 & 38.4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Translation results from Xhosa to English with multi-path sampling. All the experiments are conducted by one-shot SCALE-refine and only differ in the number of sampled paths from STM.
Figure 6: Perplexity, Unaligned Source Words percentage and Non-Monotonicity score from Xhosa\(\rightarrow\)English translation.
conclude that the robustness of SCALE, as illustrated in Figure 4, primarily stems from the power of in-context learning. This learning approach informs the LLM about which elements to trust and which to disregard, ultimately improving the overall translation performance and robustness.
### Generation Latency
In this section, we conduct a detailed evaluation of the overhead introduced by SCALE in comparison to a conventional few-shot LLM. The additional latency arises from two factors: first, the time required to generate the variable \(\mathbb{Z}\) for the current source sentence \(x\) using the STM, and second, the increased latency caused by the LLM due to the extended context. Since the response time from the GPT API may not accurately represent the actual latency of the LLM, we utilize one of the largest open-source LLMs (BLOOM-176B) for this analysis. As shown in Table 4, we observe that the incurred latency can be primarily attributed to the extended context window, due to the quadratic time complexity of the transformer architecture. Exploring methods to accelerate this process based on STM-generated output using speculative decoding techniques remains a topic for future work (Xia et al., 2022; Chen et al., 2023; Yang et al., 2023).
## 6 Related Work
The use of LLMs for translation tasks has garnered significant interest in recent times. Brown et al. (2020) initially demonstrated the efficacy of prompting an LLM with a few examples to achieve noteworthy results, particularly in high-resource languages (Vilar et al., 2023; Lin et al., 2022). Following the release of ChatGPT, several studies have examined its overall translation performance (Jiao et al., 2023; Hendy et al., 2023), along with works focusing on the issues of hallucination (Guerreiro et al., 2023), literalness (Raunak et al., 2023), multilinguality (Zhu et al., 2023) and the incidental bilingualism problem (Briakou et al., 2023). A comprehensive analysis conducted by Garcia et al. (2023) revealed the unreasonable effectiveness of few-shot LLMs. Furthermore, a diverse range of research has attempted to enhance LLM-based translation systems through cultural awareness (Yao et al., 2023), refinement (Chen et al., 2023; Cheng et al., 2023), retrieval-augmentation (Cheng et al., 2023), post-editing (Raunak et al., 2023), and comparison (Zeng et al., 2023).
Our work also shares similarities with a series of studies that aim to build collaboration between LLMs and other systems. Luo et al. (2023) propose equipping LLMs with a knowledge-guiding module to access relevant information without altering the LLMs' parameters. Hendy et al. (2023) propose to use Microsoft Translator system as the primary translation system, and then use GPT as
\begin{table}
\begin{tabular}{l c c|c c c c} \hline \hline & \multicolumn{2}{c|}{**few-shot LLM**} & \multicolumn{4}{c}{**SCALE**} \\ & avg. \#length & total & avg. \#length & STM & LLM & total \\ \hline
0-shot & 101.37 & 7.19 & 161.13 & 1.87 & 7.43 & 9.3 \\
1-shot & 198.00 & 7.46 & 516.92 & 1.87 & 8.33 & 10.2 \\
10-shot & 951.91 & 9.52 & 2489.72 & 1.87 & 14.17 & 16.04 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Generation latency results of the LLM (BLOOM-176B) and SCALE (BLOOM-176B + NLLB-3.3B) measured in seconds (s).
\begin{table}
\begin{tabular}{l c c c} \hline \hline
 & **COMET-22** & **COMETKiwi** & **BLEURT** \\ \hline
M2M100 & 68.0 & 62.1 & 59.0 \\
NLLB & 80.7 & 65.8 & 74.0 \\
GPT-4 & 78.8 & 67.1 & 70.8 \\
SCALE & 82.1 & 67.3 & 75.7 \\
w/o confidence & 81.6 & 67.6 & 74.9 \\
zero-shot & 81.4 & 66.4 & 74.8 \\
one-shot & 81.7 & 66.7 & 75.3 \\
zero-shot-M2M & 76.4 & 66.8 & 68.2 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Ablation study for SCALE with Xhosa\(\rightarrow\)English translation. |
2309.10068 | A Unifying Perspective on Non-Stationary Kernels for Deeper Gaussian
Processes | The Gaussian process (GP) is a popular statistical technique for stochastic
function approximation and uncertainty quantification from data. GPs have been
adopted into the realm of machine learning in the last two decades because of
their superior prediction abilities, especially in data-sparse scenarios, and
their inherent ability to provide robust uncertainty estimates. Even so, their
performance highly depends on intricate customizations of the core methodology,
which often leads to dissatisfaction among practitioners when standard setups
and off-the-shelf software tools are being deployed. Arguably the most
important building block of a GP is the kernel function which assumes the role
of a covariance operator. Stationary kernels of the Mat\'ern class are used in
the vast majority of applied studies; poor prediction performance and
unrealistic uncertainty quantification are often the consequences.
Non-stationary kernels show improved performance but are rarely used due to
their more complicated functional form and the associated effort and expertise
needed to define and tune them optimally. In this perspective, we want to help
ML practitioners make sense of some of the most common forms of
non-stationarity for Gaussian processes. We show a variety of kernels in action
using representative datasets, carefully study their properties, and compare
their performances. Based on our findings, we propose a new kernel that
combines some of the identified advantages of existing kernels. | Marcus M. Noack, Hengrui Luo, Mark D. Risser | 2023-09-18T18:34:51Z | http://arxiv.org/abs/2309.10068v2 | # A Unifying Perspective on Non-Stationary Kernels for Deeper Gaussian Processes
###### Abstract
The Gaussian process (GP) is a popular statistical technique for stochastic function approximation and uncertainty quantification from data. GPs have been adopted into the realm of machine learning in the last two decades because of their superior prediction abilities, especially in data-sparse scenarios, and their inherent ability to provide robust uncertainty estimates. Even so, their performance highly depends on intricate customizations of the core methodology, which often leads to dissatisfaction among practitioners when standard setups and off-the-shelf software tools are being deployed. Arguably the most important building block of a GP is the kernel function which assumes the role of a covariance operator. Stationary kernels of the Matérn class are used in the vast majority of applied studies; poor prediction performance and unrealistic uncertainty quantification are often the consequences. Non-stationary kernels show improved performance but are rarely used due to their more complicated functional form and the associated effort and expertise needed to define and tune them optimally. In this perspective, we want to help ML practitioners make sense of some of the most common forms of non-stationarity for Gaussian processes. We show a variety of kernels in action using representative datasets, carefully study their properties, and compare their performances. Based on our findings, we propose a new kernel that combines some of the identified advantages of existing kernels.
## 1 Introduction
The Gaussian process (GP) is arguably the most popular member of the large family of stochastic processes and provides a powerful and flexible framework for stochastic function approximation in the form of Gaussian process regression (GPR) [74]. A GP is characterized as a Gaussian probability distribution over (model or latent) function values \(\{f(\mathbf{x}_{1}),f(\mathbf{x}_{2}),f(\mathbf{x}_{3}),...\}\) and therefore over the subspace \(\{f:f(\mathbf{x})=\sum_{i=1}^{m}\alpha_{i}k(\mathbf{x},\mathbf{x}_{i};h)\ \forall\ \mathbf{x}\in \mathcal{X},\ m\in\mathbb{N},\ \alpha_{i}\in\mathbb{R}\}\), called a reproducing kernel Hilbert space (RKHS), where \(k(\mathbf{x},\mathbf{x}_{i};h)\) is the kernel function, \(h\in\Theta\) is a vector of hyperparameters, and \(\mathcal{X}\) is the input space. The RKHS -- as the name would suggest -- is directly influenced by the choice of the kernel function, which assumes the role of the covariance function in the GP framework. This double role makes the kernel an important building block when it comes to optimizing the flexibility, accuracy, and expressiveness of a GP.
In a recent review [50], it was pointed out that the vast majority of applied studies using GPs employ the radial basis function (RBF) kernel (also referred to as the squared exponential or Gaussian kernel). The fraction is even higher when we include other stationary kernels. Stationary kernels are characterized by \(k(\mathbf{x}_{i},\mathbf{x}_{j})=k(|\mathbf{x}_{i}-\mathbf{x}_{j}|)\), i.e., covariance matrix entries only depend on some distance between data points in the input domain, not on their respective location. Stationary kernels are popular because they carry little risk -- in terms of model misspecification -- and come with only a few hyperparameters that are easy to optimize or train. However, it has been shown that the stationarity assumption can lead to poor prediction performance and unrealistic uncertainty quantification that is affected mostly by point geometry [41]. In
other words, uncertainty will increase when moving away from data points at a constant rate across the input space. To overcome these limitations, significant research attention has been drawn to non-stationary Gaussian process regression [56, 47, 48]. Non-stationary kernels depend on the respective locations in the input space explicitly, i.e., \(k(\mathbf{x}_{i},\mathbf{x}_{j})\neq k(|\mathbf{x}_{i}-\mathbf{x}_{j}|)\). This makes them significantly more flexible and expressive leading to higher-quality estimates of uncertainty across the domain. Even so, non-stationary kernels are rarely used in applied machine learning (ML) due to the inherent difficulty of customization, optimizing hyperparameters, and the associated risks of model misspecification (wrong hyperparameters), overfitting, and computational costs due to the need to find many hyperparameters [65, 44]. When applied correctly, non-stationary GPs have been shown to provide significant advantages over their stationary counterparts, especially in scenarios where the data exhibit non-stationary behavior [59].
This paper aims to bring some structure to the use of non-stationary kernels and related techniques to make it more feasible for the practitioner to use these kernels effectively. Throughout this paper -- in the hope of covering the most practical set of available options -- we focus on and compare four ways to handle non-stationarity in a dataset:
1. Ignore it: Most datasets and underlying processes exhibit some level of non-stationarity which is often simply ignored. This leads to the use of stationary kernels of the form \(k_{stat}=k(|\mathbf{x}_{i}-\mathbf{x}_{j}|)\); \(|\cdot|\) is a norm appropriate to the properties of the input space. Given the key takeaways in [50], this option is chosen by many which serves as the main motivation for the authors to write this perspective.
2. Parametric non-stationarity: Kernels of the form \(k(\mathbf{x}_{i},\mathbf{x}_{j})=\sum_{d=1}^{N}g_{d}(\mathbf{x}_{i})g_{d}( \mathbf{x}_{j})k_{stat}(|\mathbf{x}_{i}-\mathbf{x}_{j}|)\), where \(g_{d}(\mathbf{x})\) can be any parametric function over the input space and \(N\) is some positive integer.
3. Deep kernels: Kernels of the form \(k_{stat}(|\boldsymbol{\phi}(\mathbf{x}_{i})-\boldsymbol{\phi}(\mathbf{x}_{j} )|)\), where \(\boldsymbol{\phi}\) is a neural network, and \(k_{stat}\) is any stationary kernel. This kernel was introduced by Wilson et al. [75] and was quickly established as a powerful technique, even though potential pitfalls related to overfitting were also discovered [46].
4. Deep GPs: Achieving non-stationarity not by using a non-stationary kernel, but by stacking stationary GPs -- meaning the output of one GP becomes the input to the next, similar to how neural networks are made up of stacked layers -- which introduces a non-linear transformation of the input space and thereby non-stationarity [12].
Non-stationarity can also be modeled through the prior-mean function of the Gaussian process but, for the purpose of this paper, we use a constant prior mean and focus on the four methods described above. From the brief explanations of the three types of non-stationarity, it is instantly clear that deriving, implementing, and deploying a non-stationary kernel and deep GPs (DGPs) can be an intimidating endeavor. The function \(g_{d}\) (in #2 above), for instance, can be chosen to be any sum of local basis functions -- linear, quadratic, cubic, piecewise constant, etc. -- any polynomial or sums thereof, and splines, just to name a few. Given the vast variety of neural networks (in #3), choosing \(\boldsymbol{\phi}\) is no easier. DGPs involve stacking GPs which increases flexibility but also leads to a very large number of hyperparameters, or latent variables, making optimization and inference a big challenge. In addition, a more complex kernel generally results in reduced interpretability -- a highly valued property of GPs.
This paper is concerned with shining a light on the characteristics and performance of different kernels and experiencing them in action. In a very pragmatic approach, we study the performance of the three methodologies above to tackle non-stationarity and compare them to a stationary Matern kernel to create a baseline. Our contributions can be summarized as follows:
1. We strategically choose a representative set of test kernels and investigate their properties.
2. We choose test datasets to study the performance of the chosen kernels.
3. We derive performance and (non)-stationarity metrics that allow fair comparisons of the non-stationary kernels and the selected deep GP implementation.
4. We compare the performance of the candidate kernels tasked with modeling the test datasets and present the uncensored results.
5. We use the gathered insights to draw some conclusions and suggest a new kernel design for further investigation.
We identify and exploit two primary pathways to achieve non-stationary properties: through the application of deep architectures -- deep kernels or deep GPs -- and the utilization of parametric non-stationary kernels. Our work delves into the role of parametric non-stationary kernels, arguing that these kernels offer behavior similar to a deep-kernel architecture without any direct notion of deepness, hinting at the possibility that deepness and flexibility are practically synonymous. This provides a valuable computational perspective to the broader dialogue surrounding deep architectures and non-stationarity, offering insights that may be instrumental in future research.
The remainder of this paper is organized as follows. The next section introduces some minimal but necessary theory. There, we also introduce some measures that will help us later when it comes to investigating the performance of different kernels. Then we give an overview of the relevant literature and some background. Subsequently, we introduce a range of different kernels and some test datasets, which positions the reader to see each kernel in action in different situations. We then discuss the key takeaways; we give some pointers toward improved kernel designs, make remarks about the connection between non-stationary kernels and multi-task learning, and conclude with a summary of the findings.
## 2 Preliminaries
The purpose of this section is to give the reader the necessary tools to follow along with our computational experiments. It is not intended to be a complete or comprehensive overview of the GP theory or related methodologies. For the remainder of this paper, we consider some input space \(\mathcal{X}\subset\mathbb{R}^{n}\) with elements \(\mathbf{x}\). While we often think of the input space as a subspace of the Euclidean space, this is not a necessary assumption of the framework in general. We assume data \(\mathcal{D}\) as a set of tuples \(\{\mathbf{x}_{i},\mathbf{y}_{i}\}\)\(\forall\)\(i=\{1,2,3,...,|\mathcal{D}|\}\). In this work, except for a remark on multivariate GPs at the end, we will assume scalar \(y_{i}\).
### GPs in a Nutshell
The GP is a versatile statistical learning tool that can be applied to any black-box function. The ability of GPs to estimate both the mean and the variance of the latent function makes it an ideal ML tool for quantifying uncertainties in the function approximation. The prior Gaussian probability distribution is optimized or inferred, and then conditioned on the observational data \(\mathcal{D}\) to yield a posterior probability density function (PDF). GPs assume a model of the form \(y(\mathbf{x})~{}=~{}f(\mathbf{x})+\epsilon(\mathbf{x})\), where \(f(\mathbf{x})\) is the unknown latent function, \(y(\mathbf{x})\) is the noisy function evaluation (the measurement), \(\epsilon(\mathbf{x})\) is the noise term, and \(\mathbf{x}\) is an element of the input space \(\mathcal{X}\). We define a prior over function values
\[p(\mathbf{f})=\frac{1}{\sqrt{(2\pi)^{|\mathcal{D}|}|\mathbf{K}|}}\exp\left[- \frac{1}{2}(\mathbf{f}-\mathbf{m})^{T}\mathbf{K}^{-1}(\mathbf{f}-\mathbf{m}) \right], \tag{1}\]
where \(\mathbf{K}\) is the covariance matrix defined by the kernel function \(K_{ij}=k(\mathbf{x}_{i},\mathbf{x}_{j})\), and \(\mathbf{m}\) is the prior mean obtained by evaluating the prior-mean function at points \(\mathbf{x}_{i}\). We also define a likelihood \(p(\mathbf{y}|\mathbf{f})\) which allows us to perform Bayesian inference, typically (but not necessarily) using a multivariate Gaussian distribution
\[p(\mathbf{y}|\mathbf{f})=\frac{1}{\sqrt{(2\pi)^{|\mathcal{D}|}|\mathbf{V}|}} \exp\left[-\frac{1}{2}(\mathbf{y}-\mathbf{f})^{T}\mathbf{V}^{-1}(\mathbf{y}- \mathbf{f})\right], \tag{2}\]
where the covariance matrix \(\mathbf{V}\) is used to describe measurement error arising from \(\epsilon(\mathbf{x})\). Training the hyperparameters, extending the prior over latent function values at points of interest, marginalizing over \(\mathbf{f}\), and conditioning on the data \(\mathbf{y}\) yield the posterior PDF for the requested points [74]. For example, for a Gaussian process with Gaussian likelihood, marginalization over \(\mathbf{f}\) can be obtained in closed form
\[p(\mathbf{y}|h)=\int_{\mathbb{R}^{|\mathcal{D}|}}p(\mathbf{y}|\mathbf{f})p( \mathbf{f})d\mathbf{f}=\frac{1}{\sqrt{(2\pi)^{|\mathcal{D}|}|\mathbf{K}+ \mathbf{V}|}}\exp\left[-\frac{1}{2}(\mathbf{y}-\mathbf{m})^{T}(\mathbf{K}+ \mathbf{V})^{-1}(\mathbf{y}-\mathbf{m})\right], \tag{3}\]
where we suppress the implicit conditioning on the hyperparameters. Training the GP is done by maximizing the log marginal likelihood -- i.e., the log of Equation (3) taken as a function of the hyperparameters \(h\) when the data \(\mathbf{y}\) is given --
\[\ln(p(\mathbf{y}|h))\propto-\frac{1}{2}(\mathbf{y}-\mathbf{m}(h))^{T}(\mathbf{ K}(h)+\mathbf{V}(h))^{-1}(\mathbf{y}-\mathbf{m}(h))-\frac{1}{2}\ln(|\mathbf{K}(h)+ \mathbf{V}(h)|) \tag{4}\]
Figure 1: The key concept and essence of non-stationary kernels. A synthetic function — that is also later used for our computational experiments — was sampled at 40 equidistant points. The function is comprised of high-frequency regions (far left and right), and near-constant-gradient regions (center, green circle). A Gaussian process (GP) is tasked with interpolating the data using a stationary (top) and a non-stationary (bottom) kernel. For each case, the function approximation and the prior covariance matrix are presented. While the posterior mean is similar in both cases, the posterior variance differs substantially. Focusing on the central region, the uncertainty increases between data points, even though the function is very well-behaved there. The covariance matrix can deliver clues as to why this might happen. The matrix is constant along diagonals, which translates into uncertainties that depend on the distance from surrounding data points only, independent of where in the domain they are located. The non-stationary kernel has no such restriction and provides more realistic estimates of the uncertainty. The covariance entries are not constant along diagonals but correspond to different regions of the function (blue line connections).
with respect to the hyperparameters \(h\). After the hyperparameters are found, the posterior is defined as
\[p(\mathbf{f}_{0}|\mathbf{y},h) = \int_{\mathbb{R}^{|\mathcal{D}|}}p(\mathbf{f}_{0}|\mathbf{f}, \mathbf{y},h)\ p(\mathbf{f}|\mathbf{y},h)\ d\mathbf{f} \tag{5}\] \[\propto\mathcal{N}(\mathbf{m}_{0}+\boldsymbol{\kappa}^{T}\ (\mathbf{K}+ \mathbf{V})^{-1}\ (\mathbf{y}-\mathbf{m}),\boldsymbol{\mathcal{K}}-\boldsymbol{\kappa}^{T}\ (\mathbf{K}+\mathbf{V})^{-1}\ \boldsymbol{\kappa}),\]
where \(\boldsymbol{\kappa}=k(\mathbf{x}_{0},\mathbf{x}_{j})\), \(\boldsymbol{\mathcal{K}}=k(\mathbf{x}_{0},\mathbf{x}_{0})\), and \(\mathbf{m}_{0}\) is the prior mean function evaluated at the prediction points \(\mathbf{x}_{0}\). If, in a Bayesian setting, we want to further incorporate uncertainties in the hyperparameters \(h\in\Theta\), Equation (5) becomes
\[p(\mathbf{f}_{0}|\mathbf{y})\ =\ \int_{\Theta}p(\mathbf{f}_{0}|\mathbf{y},h)\ p(h| \mathbf{y})\ dh, \tag{6}\]
where \(p(\mathbf{f}_{0}|\mathbf{y},h)\) is available in closed form (5). This basic framework can be extended by more flexible mean, noise, and kernel functions [49, 50, 42, 43].
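For concreteness, the following is a minimal numpy sketch of this pipeline: a zero prior mean, the RBF kernel of Equation (8), i.i.d. noise for \(\mathbf{V}\), and fixed (untrained) hyperparameters. A full implementation would maximize Equation (4) over \(h\); the data here are synthetic.

```python
# Compact numpy sketch of Equations (3)-(5): zero prior mean, RBF kernel,
# V = sigma_n^2 I; hyperparameters fixed for brevity rather than trained.
import numpy as np

def rbf(X1, X2, sig_s=1.0, l=0.2):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sig_s * np.exp(-0.5 * d2 / l**2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (30, 1))                        # data locations
y = np.sin(6 * X[:, 0]) + 0.05 * rng.normal(size=30)  # noisy measurements
X0 = np.linspace(0, 1, 201)[:, None]                  # prediction points

K = rbf(X, X) + 1e-4 * np.eye(len(X))                 # K + V
kap = rbf(X, X0)                                      # cross-covariances kappa
alpha = np.linalg.solve(K, y)
mean = kap.T @ alpha                                  # posterior mean, Eq. (5)
cov = rbf(X0, X0) - kap.T @ np.linalg.solve(K, kap)   # posterior covariance
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Log marginal likelihood (Eq. 4), the objective one would maximize over h:
sign, logdet = np.linalg.slogdet(K)
lml = -0.5 * y @ alpha - 0.5 * logdet - 0.5 * len(y) * np.log(2 * np.pi)
print(f"posterior near x=0.5: {mean[100]:.3f} +/- {std[100]:.3f}; lml = {lml:.2f}")
```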
### The Covariance Operator of a GP: The Kernel or Covariance Function
In this work, the focus is on the kernel function -- also called covariance function -- of a GP, denoted \(k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\). In the GP framework, the kernel function assumes the role of a covariance operator, i.e., the elements of the covariance matrix \(\mathbf{K}\) in Equation (1) are defined by \(K_{ij}=k(\mathbf{x}_{i},\mathbf{x}_{j})\). Because of that, kernel functions have to be symmetric and positive semi-definite (psd); a complete characterization of the class of valid kernel functions is given by Bochner's theorem [6, 1]. At the same time, the kernel uniquely defines the underlying reproducing kernel Hilbert space (RKHS), which is the function or hypothesis space of the Gaussian process posterior mean.
Kernels are called "stationary" if the function only depends on the distance between the two inputs, not their respective locations, i.e., \(k(\mathbf{x}_{i},\mathbf{x}_{j})=k(|\mathbf{x}_{i}-\mathbf{x}_{j}|)\). The arguably most prominent members are kernels of the Matern class (see for instance [66]), which includes the exponential
\[k(\mathbf{x}_{i},\mathbf{x}_{j})=\sigma_{s}e^{-0.5\frac{||\mathbf{x}_{i}-\mathbf{x}_{j}||_{2}}{l}} \tag{7}\]
and the squared exponential (RBF) kernel
\[k(\mathbf{x}_{i},\mathbf{x}_{j})=\sigma_{s}e^{-0.5\frac{||\mathbf{x}_{i}-\mathbf{x}_{j}||_{2}^{2}}{l^{2}}}, \tag{8}\]
by far the most-used kernel among practitioners [50]; here, \(||\cdot||_{2}\) is the Euclidean norm, which makes the kernel function both stationary and isotropic. These two special cases of the Matern kernel class have two hyperparameters: \(\sigma_{s}\), the signal variance, and \(l\), the length scale, both of which are constant scalars applied to the entire domain \(\mathcal{X}\). Stationary and isotropic kernels can be advanced by allowing anisotropy in the norm, i.e., \(((\mathbf{x}_{i}-\mathbf{x}_{j})^{T}\mathbf{M}(\mathbf{x}_{i}-\mathbf{x}_{j}))^{1/2}\), where \(\mathbf{M}\) is some symmetric positive definite matrix whose entries are typically included in the vector of hyperparameters that need to be found. Incorporating anisotropy allows the kernel function to stretch distances such that the implied covariances have ellipsoidal patterns (as opposed to spherical patterns for isotropic kernels). When \(\mathbf{M}=\frac{1}{l}\mathbf{I}_{n}\) the isotropic kernels in Equations (7) and (8) are recovered; more generally, when \(\mathbf{M}\) is diagonal with different elements along the diagonal, automatic relevance determination is recovered [76]. Other stationary kernel designs include the spectral kernel, the periodic kernel, and the rational quadratic kernel.
Non-stationary kernels do not have the restriction of only depending on the distance between the input points, but depend on their explicit positions \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\), i.e., \(k(\mathbf{x}_{i},\mathbf{x}_{j})\neq k(|\mathbf{x}_{i}-\mathbf{x}_{j}|)\). Formulating new non-stationary kernels comes with the difficulty of proving positive semi-definiteness which is often challenging. However, the statistics and machine learning literature has a variety of general approaches for applying valid non-stationary kernels; this is the primary topic of this paper which is discussed in Section 3. For now, it is important to keep in mind that the essence of stationary versus non-stationary kernels -- due to the way they deal with the locations in the input space -- manifests itself in the covariance matrix which, in the stationary case, has to be constant along all bands parallel to the diagonal for sorted and equidistant input points, a property that is not observed in the non-stationary case (see Figure 1). Therefore, non-stationary kernels lead to more descriptive, expressive, and flexible encodings of covariances and therefore uncertainties. Of course, in addition, the function space (the RKHS) contains a much broader class of functions as well when non-stationary kernels are used.
### A Note on Scalability
Computational complexity is a significant challenge for GPs, for both stationary and non-stationary kernels. The need to calculate the log-determinant of and invert the covariance matrix -- or solve a linear system instead -- results in computational complexity of \(\mathcal{O}(|\mathcal{D}|^{3})\), where \(|\mathcal{D}|\) is the number of data points [74, 39]. This complexity limits the application of GPs for large-scale datasets. Several methods have been proposed to overcome this issue. Sparse methods [64, 39] and scalable GPR approaches [23] have been developed for stationary GPs. For non-stationary GPs, methods such as local GPs [19] and the Bayesian treed GP [20] have been proposed to tackle this issue. These methods have an origin in divide-and-conquer methods attempting to break down the problem into smaller, manageable pieces that can be solved independently [37], thereby reducing the computational complexity for each piece. However, these approaches can lead to a loss of global information, so finding the right balance between computational efficiency and model accuracy remains a key challenge. A recent approach can scale GPs to millions of data points without using approximations by allowing a compactly supported, non-stationary kernel to discover naturally occurring sparsity [44].
## 3 Non-Stationary Kernels
Stationary kernels are widely used primarily because they are easy and convenient to implement, even though the implied assumption of translation-invariant covariances is almost never exactly true for real-world data sets. As mentioned in Section 2, there are serious challenges associated with both deriving non-stationary kernels and choosing an appropriate and practical non-stationary kernel from the valid options for any given implementation of a Gaussian process. We now provide a brief overview of the literature on non-stationary kernels, including both a historical perspective of early developments followed by greater detail on three modern frameworks for non-stationary kernels as well as metrics for quantifying non-stationarity in data sets.
### Historical Perspective
It has now been over three decades since the first paper on non-stationary kernels via "deformations" or warping of the input space appeared [56]. Since then, the statistics literature has developed a number of approaches for non-stationary kernels, mostly in the context of modeling spatially-referenced data. These methods can broadly be categorized as basis function expansions and kernel convolutions, in addition to the aforementioned deformation approach. We now briefly summarize each method, focusing on aspects that apply directly to kernel functions for Gaussian processes.
The fundamental idea underpinning the deformation or warping approach [56] is that instead of deriving new classes of non-stationary kernels one can keep isotropic kernels but obtain non-stationarity implicitly by rescaling interpoint distances in a systematic way over the input space. In other words, this approach transforms \(\mathcal{X}\) to a new domain, say \(\mathcal{X}^{*}\), wherein stationarity holds. The transformation, say \(\boldsymbol{\phi}:\mathbb{R}^{n}\to\mathbb{R}^{n^{*}}\), is a (possibly nonlinear) mapping applied to elements of \(\mathcal{X}\) to yield a non-stationary kernel via
\[k(\mathbf{x}_{i},\mathbf{x}_{j})=k_{stat}(||\boldsymbol{\phi}(\mathbf{x}_{i}) -\boldsymbol{\phi}(\mathbf{x}_{j})||), \tag{9}\]
where \(k_{stat}\) is an arbitrary stationary kernel function. Two extensions were later proposed to this approach [11, 61] that supposed the mapping \(\boldsymbol{\phi}(\cdot)\) was itself a stochastic process. For example, [61] placed a Gaussian process prior on \(\boldsymbol{\phi}(\cdot)\) -- essentially coming up with the idea of deep kernels more than a decade before related ideas appeared in the machine learning literature. In some cases \(n^{*}>n\), i.e., the mapping involves dimension expansion [8]. Ultimately, early approaches to warping the input space were largely unused due to a lack of computational tools for optimizing the mapping function \(\boldsymbol{\phi}(\cdot)\) in a reliable and robust manner.
In contrast to deformations, basis function expansion methods provide constructive approaches for developing non-stationary kernel functions. The main idea for this approach arises from the Karhunen-Loeve Expansion [31, 36] of a (mean-zero) stochastic process in terms of orthogonal eigenfunctions \(E_{m}(\cdot)\) and weights \(w_{m}\):
\[f(\mathbf{x})=\sum_{m=1}^{\infty}w_{m}\,E_{m}(\mathbf{x}). \tag{10}\]
This framework defines a Gaussian process if the weights have a Gaussian distribution; the implied kernel
function is
\[k(\mathbf{x}_{i},\mathbf{x}_{j})=\sum_{m=1}^{\infty}v_{m}E_{m}(\mathbf{x}_{i})E_{ m}(\mathbf{x}_{j}),\]
where the eigenfunctions and weight variances \(v_{m}\) come from the Fredholm integral equation
\[\int_{\mathcal{X}}k(\mathbf{x}_{i},\mathbf{x}_{j})E_{m}(\mathbf{x}_{i})d\mathbf{x}_{i}=v_{m}E_{m}(\mathbf{x}_{j}). \tag{11}\]
If the infinite series in Equation 10 is truncated to the leading \(M\) terms, the finite sum approximation to the kernel can be used instead and is optimal in the sense that it minimizes the variance of the truncation error for all sets of \(M\) basis functions when the \(E_{m}(\cdot)\) are the exact solutions to the Fredholm equation [73]. The main task is then to model the weight-eigenfunction pairs \(\{w_{m},E_{m}(\cdot)\}\), which can be done empirically using singular value decomposition [27] or parametrically using, e.g., wavelets [45].
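A discrete analogue of this construction is easy to demonstrate: eigendecompose a kernel matrix evaluated on a grid (the empirical counterpart of Equation (11)) and keep the leading \(M\) terms. The kernel and truncation level below are illustrative choices.

```python
# Discrete analogue of Eqs. (10)-(11): rank-M reconstruction of a kernel
# matrix, k_M(x_i, x_j) = sum_m v_m E_m(x_i) E_m(x_j).
import numpy as np

x = np.linspace(0, 1, 200)
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2)  # RBF kernel matrix

vals, vecs = np.linalg.eigh(K)            # eigenpairs (ascending order)
order = np.argsort(vals)[::-1]            # sort descending
vals, vecs = vals[order], vecs[:, order]

M = 10                                    # truncation level
K_M = (vecs[:, :M] * vals[:M]) @ vecs[:, :M].T
print("relative truncation error:",
      np.linalg.norm(K - K_M) / np.linalg.norm(K))
```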
Like basis function expansions, the kernel convolution approach is useful in that it provides a constructive approach to specifying both stochastic models and their covariance functions. The main result is that a stochastic process can be defined by the kernel convolution
\[f(\mathbf{x})=\int_{\mathcal{X}}\kappa_{\mathbf{x}}(\mathbf{u})dW(\mathbf{u}) \tag{12}\]
[69, 70], where \(W(\cdot)\) is a \(n\)-dimensional stochastic process and \(\kappa_{\mathbf{x}}(\cdot)\) is a kernel function that depends on input location \(\mathbf{x}\). [24] summarizes the extremely flexible class of stochastic models defined using Equation (12): see, for example, [4, 28], and [72]. The popularity of this approach is due largely to the fact that it is much easier to specify (possibly parametric) kernel functions than a covariance function directly since the kernel functions only require \(\int_{\mathbb{R}^{d}}\kappa_{\mathbf{x}}(\mathbf{u})d\mathbf{u}<\infty\) and \(\int_{\mathbb{R}^{d}}\kappa_{\mathbf{x}}^{2}(\mathbf{u})d\mathbf{u}<\infty\). The process \(f(\cdot)\) in Equation 12 is a Gaussian process when \(W(\cdot)\) is chosen to be Gaussian, and the associated covariance function is
\[k(\mathbf{x}_{i},\mathbf{x}_{j})=\int_{\mathcal{X}}\kappa_{\mathbf{x}_{i}}( \mathbf{u})\kappa_{\mathbf{x}_{j}}(\mathbf{u})d\mathbf{u}, \tag{13}\]
which cannot be written in terms of \(||\mathbf{x}_{i}-\mathbf{x}_{j}||\) and is hence non-stationary. Various choices can be made for using this general framework in practice: replace the integral in Equation (12) with a discrete sum approximation [26] or choose specific \(\kappa_{\mathbf{x}}(\cdot)\) such that the integral in Equation (13) can be evaluated in closed form [25]. The latter choice can be generalized to yield a closed-form kernel function that allows all aspects of the resulting covariances to be input-dependent: the length-scale [47, 48], the signal variance [52], and even the differentiability of realizations from the resulting stochastic process when the Matern kernel is leveraged [67]. This approach is often referred to as "parametric" non-stationarity since a non-stationary kernel function is obtained by allowing its hyperparameters to depend on input location. In practice, some care needs to be taken to ensure that the kernel function is not too flexible and can be accurately optimized [48, 3]. We return to a version of this approach in the next section.
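The discrete-sum route is simple enough to sketch directly. Below, an entirely illustrative Gaussian convolution kernel whose width grows with the input location is plugged into a quadrature version of Equation (13); pairs of points with equal separation then receive different covariances, which is precisely the non-stationary behavior.

```python
# Sketch of Eq. (13) via a discrete-sum approximation: non-stationary
# covariances from input-dependent convolution kernels kappa_x(u).
import numpy as np

u = np.linspace(-1.0, 2.0, 400)          # quadrature grid
du = u[1] - u[0]

def kappa(x, u):
    # Gaussian convolution kernel with input-dependent width (illustrative).
    w = 0.05 + 0.2 * x
    return np.exp(-0.5 * (x - u)**2 / w**2)

def k_nonstat(xi, xj):
    return np.sum(kappa(xi, u) * kappa(xj, u)) * du   # Eq. (13), discretized

# Same separation |xi - xj| = 0.05, different locations -> different values:
print(k_nonstat(0.10, 0.15), k_nonstat(0.80, 0.85))
```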
In conclusion, the statistics literature contains a broad set of techniques (only some of which are summarized here) for developing non-stationary kernel functions. However, historically speaking, these techniques were not widely adopted because, in most of the cases described here, the number of hyperparameters is on the same order as the number of data points. This property makes it very difficult to apply the kernels to real-world data sets due to the complex algorithms required to fit or optimize such models.
Nonetheless, the potential benefits of applying non-stationary kernels far outweigh the risks in our opinion, and this perspective is all about managing this trade-off. To do so, we now introduce three modern approaches to handle non-stationarity in datasets in order to later test them and compare their performance (Section 4).
### Non-Stationarity via a Parametric Signal Variance
This class of non-stationary kernels uses a parametric function as the signal variance. The term \(g(\mathbf{x}_{i})g(\mathbf{x}_{j})\) is always symmetric and psd [42] and is, therefore, a valid kernel function. Also, any product of kernels is a valid kernel, which gives rise to kernels of the form
\[k(\mathbf{x}_{i},\mathbf{x}_{j})=g(\mathbf{x}_{i})g(\mathbf{x}_{j})k_{stat}( \mathbf{x}_{i},\mathbf{x}_{j}). \tag{14}\]
This can be seen as a special case of the non-stationary kernel derived in [48] and [52] wherein the length-scale is taken to be a constant. [52], in particular, consider parametric signal variance and (anisotropic) length scale. In an extension of (14), any sum of kernels is a valid kernel which allows us to write
\[k(\mathbf{x}_{i},\mathbf{x}_{j})=\sum_{l=1}^{N}g_{l}(\mathbf{x}_{i})g_{l}( \mathbf{x}_{j})k_{stat}(\mathbf{x}_{i},\mathbf{x}_{j}). \tag{15}\]
The function \(g\) can be any function defined on the input domain, but we will restrict ourselves to functions of the form
\[g(\mathbf{x})=\sum_{k=1}^{N_{2}}c_{k}\beta(\mathbf{x}_{k},\mathbf{x}), \tag{16}\]
where \(c_{k}\) are some coefficients (or parameters), and \(\beta(\mathbf{x}_{k},\mathbf{x})\) are basis functions centered at \(\mathbf{x}_{k}\). For our computational experiments, we use radial basis functions of the form
\[\beta(\mathbf{x}_{k},\mathbf{x})=e^{-\frac{||\mathbf{x}_{k}-\mathbf{x}||^{2} }{w}}, \tag{17}\]
where \(w\) is the width parameter.
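Putting Equations (14), (16), and (17) together, a minimal numpy sketch of the parametric signal-variance kernel could look as follows; the basis centers, coefficients, and the Matern base kernel below are illustrative assumptions rather than a trained model.

```
import numpy as np

def matern32(d, length_scale=0.2):
    # Stationary Matern (nu = 3/2) base kernel k_stat as a function of distance d.
    s = np.sqrt(3.0) * d / length_scale
    return (1.0 + s) * np.exp(-s)

def g(x, centers, coeffs, w=0.1):
    # Equations (16)-(17): radial-basis expansion of the parametric function g.
    return sum(c * np.exp(-np.abs(xk - x) ** 2 / w)
               for c, xk in zip(coeffs, centers))

def k_para(xi, xj, centers, coeffs):
    # Equation (14): parametric signal variance multiplying a stationary kernel.
    return g(xi, centers, coeffs) * g(xj, centers, coeffs) * matern32(abs(xi - xj))

centers = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # illustrative basis centers
coeffs = np.array([0.5, 1.0, 2.0, 1.0, 0.5])     # hyperparameters to be trained
print(k_para(0.1, 0.2, centers, coeffs))
```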
### Deep Kernels
Our version of parametric non-stationarity operates on the signal variance only. This is by design so that we can separate the effects of the different kernels later in our tests. This next approach uses a constant signal variance but warps the input space to yield flexible, non-constant length scales. The set of valid kernels is closed under transformations of the input space: composing a kernel with any map \(\boldsymbol{\phi}\) of the inputs yields a symmetric, psd, and therefore valid kernel. This motivates the definition of kernels of the form
\[k(\mathbf{x}_{i},\mathbf{x}_{j})=k_{stat}(||\boldsymbol{\phi}(\mathbf{x}_{i}) -\boldsymbol{\phi}(\mathbf{x}_{j})||_{2}), \tag{18}\]
where \(\boldsymbol{\phi}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n^{*}}\) can again be any scalar or vector function on the input space. Deep neural networks have been established as a preferred choice for \(\boldsymbol{\phi}\) due to their flexible approximation properties, which gives rise to so-called deep kernels (see Algorithm 1 for an implementation example). For our tests, we define a 2-deep-layer network with varying layer widths. While it is possible through deep kernels to perform dimensionality reduction, in this work we map the original input space into a linear space of the same dimensionality, i.e., \(n=n^{*}\). Care must be taken not to use neural networks whose weights and biases, given the dataset size, are underdetermined. That is why comparatively small networks are commonly preferred. We use ReLU as the activation function. The neural network weights and biases are treated as hyperparameters and are trained accordingly. We set \(k_{stat}\) in (18) to be the Matern kernel with \(\nu=3/2\).
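The following self-contained numpy sketch shows the structure of Equation (18) with a small two-layer ReLU network; in the experiments the weights and biases are hyperparameters optimized via the log marginal likelihood, while here they are random for illustration only.

```
import numpy as np

rng = np.random.default_rng(0)
# Two hidden layers of width 5 for a 1-D input; weights and biases are
# hyperparameters in practice, drawn randomly here for illustration.
W1, b1 = rng.normal(size=(5, 1)), rng.normal(size=5)
W2, b2 = rng.normal(size=(5, 5)), rng.normal(size=5)
W3, b3 = rng.normal(size=(1, 5)), rng.normal(size=1)

def phi(x):
    # The warping phi: R -> R, a small fully connected ReLU network.
    h = np.maximum(W1 @ np.atleast_1d(x) + b1, 0.0)
    h = np.maximum(W2 @ h + b2, 0.0)
    return W3 @ h + b3

def k_deep(xi, xj, sigma2=1.0, length_scale=1.0):
    # Equation (18): a stationary Matern (nu = 3/2) applied to warped inputs.
    d = np.linalg.norm(phi(xi) - phi(xj))
    s = np.sqrt(3.0) * d / length_scale
    return sigma2 * (1.0 + s) * np.exp(-s)

print(k_deep(0.1, 0.9))
```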
Our deep kernel construction shares the perspective of using a neural network to estimate a warping function [56]. In [77], the authors propose an approach of deep compositional spatial models that differs from traditional warping methods in that it models an injective warping function through a composition of multiple elemental injective functions in a deep-learning framework. This allows for greater flexibility in capturing non-stationary and anisotropic spatial data and is able to provide better predictions and uncertainty quantification than other deep stochastic models of similar complexity. This uncertainty quantification is point-wise, similar to the deep GPs we introduce next.
### Deep GPs
Deep Gaussian process (DGP) models are hierarchical extensions of Gaussian processes where GP layers are stacked -- similar to a neural network -- enhancing modeling flexibility and accuracy [12, 14, 30] (more details can be found in Appendix D). The DGP model is one of the deep hierarchical models [51, 53, 68] and consists of a number of variational Gaussian process layers defined by a mean function and a stationary covariance (kernel) function. The first layer uses constant zero means for a lower-dimensional representation of the input data. The second layer uses constant zero means and takes the first layer's output, generating the final model output. Each layer's forward method applies the mean and covariance functions to input data and returns a multivariate normal distribution. This output serves as the next layer's input. An additional method allows for the implementation of concatenation-based skip connections. We can impose more than two hidden layers for a single DGP according to our needs. However, we match the 2-layer structure used in our deep kernel and point out that the complexity of the neural architecture may not always lead to better performance.
Although each layer of DGP is equipped with stationary kernels, the output of one GP layer becomes the input to the next GP layer, hence the final output will not be stationary. For a DGP with \(L\) layers, we can represent the model as follows:
\[f^{(1)}(\mathbf{x})\sim GP(m^{(1)}(\mathbf{x}),k^{(1)}(\mathbf{ x},\mathbf{x}^{\prime}))\] \[f^{(2)}(\mathbf{x})\sim GP(m^{(2)}(f^{(1)}(\mathbf{x})),k^{(2)}( f^{(1)}(\mathbf{x}),f^{(1)}(\mathbf{x}^{\prime})))\] \[\ldots\] \[f^{(L)}(\mathbf{x})\sim GP(m^{(L)}(f^{(L-1)}(\mathbf{x})),k^{(L) }(f^{(L-1)}(\mathbf{x}),f^{(L-1)}(\mathbf{x}^{\prime})))\]
Then, an optimizer and the variational Evidence Lower Bound (ELBO) are used for training of the DGP model. Using a variational approximation in the ELBO, instead of exact inference, leads to manageable computational complexity of deeper GP models [71]. Deep GPs can be perceived as hierarchical models whose kernel does not admit a closed form. Crucially, this "hierarchy of means" is constructed via the means of the layer distributions in the deep GP, but not via higher moments as in [2, 13, 40]. Only the mean functions at each layer of the deep GP are contingent upon computations from preceding layers, signifying hierarchies that rely on the first-order structure at every layer of the model.
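To make the layered construction tangible, the following toy numpy sketch draws one realization from a two-layer composition with zero mean functions; an actual DGP is fit with the variational ELBO as described above rather than sampled this way.

```
import numpy as np

rng = np.random.default_rng(1)

def se_cov(x, length_scale=0.2):
    # Squared-exponential covariance matrix over a 1-D input vector.
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * d ** 2 / length_scale ** 2)

x = np.linspace(0.0, 1.0, 200)
jitter = 1e-8 * np.eye(len(x))

# Layer 1: a stationary GP draw over the original inputs.
f1 = rng.multivariate_normal(np.zeros(len(x)), se_cov(x) + jitter)
# Layer 2: a stationary GP evaluated at the warped inputs f1(x); the
# composition f2(f1(x)) is non-stationary as a function of x.
f2 = rng.multivariate_normal(np.zeros(len(x)), se_cov(f1) + jitter)
```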
Using a slightly different framework based on the Vecchia approximation, [29] introduced a "deep Vecchia ensemble," a hybrid method combining Gaussian processes (GPs) and deep neural networks (DNNs) for regression. This model builds an ensemble of GPs on DNN hidden layers, utilizing Vecchia approximations for GP scalability. Mathematically, the joint distribution of variables is decomposed using Vecchia's approximation, and predictions are combined using a generalized product of experts. This approach offers both representation learning and uncertainty quantification. As described in the last section, a GP model can utilize a deep kernel, constructively combining the neural network's (NN) and the GP's strength, leading to a model that benefits from the GP's interpretability and the NN's flexibility [75].
Our Bayesian DGP architecture follows [57] and includes a two-layer neural network, applied as a transformation to the input data. The first layer uses a rectified linear unit (ReLU) activation function and the second employs a sigmoid activation. This non-linear feature mapping expresses complex input space patterns (see Appendix C).
Contrasting DGPs with deep-kernel GPs, DGPs use multiple GP layers to capture intricate dependencies, whereas deep-kernel GPs employ a NN for input data transformation before GP application. Essentially, while DGPs exploit GP layering to manage complex dependencies, deep kernel learning leverages NNs for non-linear input data transformation, enhancing the GP's high-dimensional function representation ability.
### Measuring Non-Stationarity of Datasets
When it comes to characterizing non-stationarity, some methods focus on non-stationarity in the mean function (e.g., polynomial regression), while others concentrate on the non-stationarity in the variance (e.g., geographically weighted regression [16]). Non-stationarity is typically characterized by a change in statistical properties over the input space, e.g., changes in the dataset's mean, variance, or other higher moments. Quantifying non-stationarity is an active area of research; in this paper, we merely introduce a particular kind of non-stationarity measure for the purpose of judging our test kernels when applied to the test datasets, without claiming to propose a new method to measure it. Overall, measuring a given dataset's non-stationarity properties is an important ingredient in understanding the performance of a particular kernel. For the reader's convenience, we offer our non-stationarity measure as pseudocode (see Algorithm 2). For the theoretical motivation of the non-stationarity measure we use in this work, please refer to Appendix A.
To avoid bias through user-based subset selection, we draw the location and width from a uniform distribution over the domain \([0,1]^{n}\) with dimensionality \(n\). We then draw data points randomly from this distribution 100 times and use MCMC to get a distribution for the length scale and the signal variance in each iteration. The distribution of the means of the distributions of signal variances and length scales is then assessed to measure non-stationarity (see Algorithm 2).
```
1:procedureMeasureNonStationarity(\(\mathbf{X}\), \(\mathbf{y},m\)) \(\triangleright\)\(\mathbf{X}\) is the set of data points, \(\mathbf{y}\) is the collected data, \(m\) is the number of iterations
2: length scale list = []
3: signal variance list = []
4:for\(i\) in \(0\) to \(m\)do
5:\(mean\sim U([0,1])\)
6:\(standard\)\(dev\sim U((0,1])\)\(\triangleright\) alternatively draw size of subdomain.
7: draw \(m\) test data points:
8:\(\mathbf{X}_{t}\sim\mathcal{N}(mean,standard\)\(dev)\)\(\triangleright\) alternatively draw from uniform distr. over subdomain
9: Find associated \(\mathbf{y}_{t}\) in dataset
10: initialize a stationary GP
11: mean signal variance, mean length scale = run_mcmc()
12: append new mean signal variance to signal variance list
13: append new mean length scale to length scale list
14:endfor
15:return signal variance list, length scale list
16:endprocedure
```
**Algorithm 2** Measuring non-stationarity via local, stationary-GP hyperparameter distributions.
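For readers who prefer runnable code, the following is a minimal Python sketch of Algorithm 2. The subset size of 50 and the function `fit_local_gp`, which stands in for an MCMC run over the stationary GP's hyperparameters, are assumptions for illustration.

```
import numpy as np

def measure_non_stationarity(X, y, m, fit_local_gp):
    # X: (N, n) inputs in [0, 1]^n, y: (N,) data values, m: iterations.
    # fit_local_gp(Xs, ys) must return (mean signal variance, mean length
    # scale), e.g., from an MCMC run; it is a placeholder here.
    rng = np.random.default_rng()
    signal_variance_list, length_scale_list = [], []
    for _ in range(m):
        center = rng.uniform(0.0, 1.0, size=X.shape[1])
        width = rng.uniform(1e-3, 1.0)
        # Draw test locations around the random window and map each to its
        # nearest data point (lines 5-9 of Algorithm 2).
        locs = rng.normal(center, width, size=(50, X.shape[1]))
        idx = np.unique([np.argmin(np.linalg.norm(X - p, axis=1)) for p in locs])
        sv, ls = fit_local_gp(X[idx], y[idx])
        signal_variance_list.append(sv)
        length_scale_list.append(ls)
    return signal_variance_list, length_scale_list
```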
To test our non-stationarity measure, we applied it to three synthetic functions (see Figure 2). In the first scenario (first row in Figure 2), where the signal is purely linear, the algorithm's behavior leads to a high concentration of points in a single compact cluster when plotting the estimated length scale versus the signal variance. This concentration reflects the inherent stationarity of a linear signal, where the statistical properties do not change over the input space. The unimodal and concentrated distributions of both the estimated parameters in the violin plots further corroborate this observation. The consistency in the length scale and signal variance across multiple iterations of MCMC indicates that the underlying data structure is stationary. This case demonstrates the effectiveness of the proposed non-stationarity measure in detecting the stationary nature of a linear signal.
In the second case (second row in Figure 2), the signal is a trigonometric curve with local oscillations. The algorithm's response to this signal structure results in a single cluster when plotting the estimated length scale versus the signal variance but with a highly linear correlation. This linear correlation suggests a consistent relationship between the length scale and signal variance across different local oscillations. The unimodal concentrated distributions in the violin plots, coupled with a smaller variance in the estimated length scale, reflect a degree of stationarity within the local oscillations. The algorithm's ability to capture this nuanced behavior underscores its sensitivity to variations in non-stationarity, even within a seemingly stationary pattern.
The third scenario (third row in Figure 2) presents a more complex signal structure, a trigonometric curve with varying amplitude and frequency. The algorithm's reaction to this non-stationary signal is manifested in the clustering of points into two less concentrated clusters when plotting the estimated length scale versus the signal variance. This bimodal behavior in the clustering, as well as in the violin plots for both estimated parameters, reveals the underlying non-stationarity in the data. The varying magnitude of the signal introduces changes in statistical properties over space, leading to a broader distribution of the hyperparameters. The algorithm's ability to discern this complex non-stationary pattern and reflect it in the clustering and distribution of the estimated hyperparameters illustrates its robustness and adaptability in measuring non-stationarity across diverse data structures.
The three cases in Figure 2 demonstrate the algorithm's capability to measure non-stationarity through the local, stationary GP-hyperparameter distributions. The varying behaviors in clustering and distribution of the estimated parameters across the three cases provide insights into the underlying stationarity or non-stationarity of the signals. The algorithm's sensitivity to these variations underscores its potential as a valuable tool for understanding and characterizing non-stationarity in different contexts.
Figure 2: A way of measuring the non-stationarity of a dataset or a synthetic function. When data is drawn randomly and a GP using a stationary kernel is trained via MCMC repeatedly, the distribution of the mean of the hyperparameters — here signal variance and isotropic length scale — can be used to measure non-stationarity. The stationary function (top) leads to a narrow distribution of the length scale and the signal variance. For the non-stationary function (bottom), the distribution for both hyperparameters is broader.
### Performance Measures
Throughout our computational experiments, we will measure the performance via three different error metrics as a function of training time. We argue that this allows us to compare methodologies across different implementations as long as all tests are run on the same computing architecture with similar hardware utilization. As for error metrics, we utilize the log marginal likelihood (Equation 4), the root mean square error (RMSE), and the Continuous Ranked Probability Score (CRPS). The RMSE is defined as
\[RMSE=\sqrt{\frac{\sum_{i=1}^{N}(y_{i}-f_{0}^{i})^{2}}{N}}, \tag{19}\]
where \(y_{i}\) are the data values of the test dataset and \(f_{0}^{i}\) are the posterior mean predictions. The RMSE metric provides a measure of how closely the model's predictions align with the actual values -- approaching zero as fit quality improves -- while the log marginal likelihood evaluates the fit of the Gaussian Process model given the observed data. The log marginal likelihood will increase as the model fits the data more accurately. The CRPS is defined as
\[CRPS(f_{0},y_{i})=\sigma\big{(}\frac{1}{\sqrt{\pi}}-2\psi\big{(}\frac{y_{i}-\mu}{\sigma}\big{)}-\frac{y_{i}-\mu}{\sigma}\big{(}2\Psi\big{(}\frac{y_{i}-\mu}{\sigma}\big{)}-1\big{)}\big{)}, \tag{20}\]
where \(\psi\) is the probability density function of a standard normal distribution and \(\Psi\) is the associated cumulative distribution function. For a GP, \(f_{0}\) is Gaussian with mean \(\mu\) and variance \(\sigma^{2}\). The CRPS is negative and approaches zero as fit quality improves. The CRPS is arguably the more important score compared to the RMSE because it is _uncertainty aware_. In other words, if the prediction accuracy is low but the reported uncertainty in those regions is correspondingly high -- the algorithm is aware of its inaccuracy -- the score improves.
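Both error metrics are straightforward to compute; the snippet below is a direct numpy/scipy transcription of Equations (19) and (20), following the sign convention of the text in which the CRPS is negative and approaches zero for better fits.

```
import numpy as np
from scipy.stats import norm

def rmse(y, f0):
    # Equation (19).
    return np.sqrt(np.mean((y - f0) ** 2))

def crps_gaussian(y, mu, sigma):
    # Equation (20); negative by this paper's convention, zero is a
    # perfect probabilistic fit.
    z = (y - mu) / sigma
    return sigma * (1.0 / np.sqrt(np.pi)
                    - 2.0 * norm.pdf(z)
                    - z * (2.0 * norm.cdf(z) - 1.0))

y = np.array([0.1, 0.5])
f0, sig = np.array([0.12, 0.45]), np.array([0.05, 0.05])
print(rmse(y, f0), np.mean(crps_gaussian(y, f0, sig)))
```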
Computational cost is becoming a main research topic in recent studies in large-scale non-parametric models [37, 62, 23], especially GPs [35, 18, 34, 39]. In our analysis, we sought to examine the progression of the optimization process. To achieve this, we established a callback function during the optimization phase, tracking the RMSE, the log marginal likelihood, and the CRPS as a function of compute time. In terms of interpretation, ideally, we expect the RMSE and the CRPS to decrease over time, suggesting that the model's predictive accuracy and estimation of uncertainties are improving. On the other hand, the log marginal likelihood should increase, indicating a better fit of the model to the observed data. This analysis gives us a summary of the model's learning process and helps us understand the progression of the optimization, thus providing valuable insights into the efficacy of our model and the optimization strategy employed.
## 4 Computational Experiments
The purpose of this section is to see how different kernels and a deep GP deal with non-stationarity in several datasets and to compare the characteristics and properties of the solutions. To make this comparison fair and easier, we ran all tests on the same Intel i9 CPU (Intel Core i9-9900KF CPU @ 3.60GHz \(\times\) 8) and used the total compute time as the cost. As a performance metric, we calculate and observe the RMSE (Root Mean Squared Error, Equation 19), the CRPS (continuous rank probability score, Equation 20) of the prediction, and the log marginal likelihood of the observational data (Equation 2). We attempted to run fair tests in good faith; this means the effort spent to set up each kernel or methodology was roughly proportional to a method's perceived complexity, within reasonable bounds, similar to the effort expended by an ML practitioner -- this meant minutes of effort for stationary kernels and hours to days for non-stationary kernels and DGPs. The optimizer used to reach the final model was _scipy_'s differential evolution. We used an in-house MCMC (Markov Chain Monte Carlo) algorithm to create the plots showing the evolution of the performance metrics over time. In cases where our efforts did not lead to satisfactory performance, we chose to present the result "as is" to give the reader the ability to judge for themselves. To further the hands-on aspect of this section, we also included the used algorithms in the Appendix and on a specifically designed website, together with links to download the data. The performance-measure-over-time plots were created without considering deep GPs due to incompatible differences in the implementations (see Appendix D). All computational experiments were run multiple times to make sure we showed representative results.
This section, first, introduces three datasets we use later to evaluate the performance of the test methodologies. Second, we present the unredacted, uncensored results of the test runs. The purpose is not to judge some kernels or methodologies as better or worse universally, but to evaluate how these techniques perform when tested under certain well-defined conditions and under the described constraints. We encourage the reader to follow our tests, to rerun them if desired, and to judge the performance of the methods for themselves.
### Introducing the Test Datasets
We will consider three test datasets. All datasets are normalized such that the input domain and the image -- the set of all measured function values -- are in \([0,1]\).
For the first dataset, we define a one-dimensional synthetic function
\[f(x)=(\sin(5x)+\cos(10x)+(2(x-0.4)^{2})\cos(100x)+2.597)/3.94 \tag{21}\]
from which the data are drawn. 50 data points are drawn randomly and the noise \(\epsilon\sim\mathcal{N}(0,0.001)\) is added. Figure 3 presents the function and its non-stationarity measures.
Figure 3: Test dataset 1 and its non-stationarity measures visualized as distributions in the hyperparameters of a stationary kernel trained on local subsets of that data. The dataset is derived from a one-dimensional synthetic function (see Equation (21)). Non-stationarity appears to be present in the length scale and the signal variance.
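The dataset is easy to reproduce; a short numpy sketch is given below. We read \(\mathcal{N}(0,0.001)\) as a noise variance of 0.001, which is an assumption, and the random seed is arbitrary.

```
import numpy as np

def f(x):
    # Equation (21); the constants normalize the image to roughly [0, 1].
    return (np.sin(5 * x) + np.cos(10 * x)
            + 2 * (x - 0.4) ** 2 * np.cos(100 * x) + 2.597) / 3.94

rng = np.random.default_rng(42)
x_train = rng.uniform(0.0, 1.0, 50)
y_train = f(x_train) + rng.normal(0.0, np.sqrt(0.001), 50)
```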
Second, we consider a three-dimensional climate dataset that is available online ([https://www.ncei.noaa.gov/data/global-historical-climatology-network-daily/](https://www.ncei.noaa.gov/data/global-historical-climatology-network-daily/)), consisting of in situ measurements of daily maximum surface air temperature (\({}^{\circ}\)C) collected from weather stations across the contiguous United States (geospatial locations defined by longitude and latitude) over time (the third dimension). The data and its non-stationarity measures are presented in Figure 4.
Third, we consider a dataset that was collected during an X-ray scattering experiment at the CMS beamline at NSLSII, Brookhaven National Laboratory. The dataset originated from an autonomous exploration of multidimensional material state-spaces underlying the self-assembly of copolymer mixtures. Because the scientific outcome of this experiment has not been published yet, all scientific insights have been obscured by normalization and the removal of units. The dataset is presented in Figure 5.
### Results
In this section, we present quantitative results of how the different kernels and deep GPs performed when tasked to learn the underlying data-generating latent functions that produced the test datasets introduced in the previous section. For each test, we show the model and its uncertainties across the domain or a subdomain after convergence of a global evolutionary optimization of the log marginal likelihood, and the performance measures as a function of the compute time of an MCMC algorithm. Code snippets can be found in the Appendix and on our website (see the Code Availability paragraph at the end).

Figure 4: Test dataset 2 and its non-stationarity measures visualized as distributions in the hyperparameters of a stationary kernel trained on local subsets of that data. The dataset consists of recorded temperatures across the United States and a period of time. Weak non-stationarity appears to be present in the length scale and the signal variance.

To reiterate, for all tests, we put ourselves in the position of an ML practitioner setting up each algorithm under a _reasonable-effort_ constraint -- we did not optimize each method to its full extent because this would not lead to a fair comparison. However, we used our best judgment and followed online documentation closely. The same thought process was put into the exact design of the kernels; of course, one could always argue that a particular model would have performed better using more hyperparameters. For the fairness of the comparison, we kept the number of hyperparameters similar across the kernels for a particular computational experiment and increased the number of kernel parameters in a near-proportional fashion for the non-stationary kernels as we moved to higher dimensions. See Table 1 for the number of hyperparameters for the different experiments. All non-stationary kernels were implemented in the open-source GP package fvGP ([https://github.com/lbl-camera/fvGP](https://github.com/lbl-camera/fvGP)). We tried two different deep GPs, the gpflux package ([https://github.com/secondmind-labs/GPflux](https://github.com/secondmind-labs/GPflux)) and the Bayesian deep GP (BDGP) by [57, 58] ([https://cran.r-project.org/web/packages/deepgp/index.html](https://cran.r-project.org/web/packages/deepgp/index.html)). We selected the latter in its two-layer version for our final comparisons because of performance issues with the gpflux package (see Figure 7).
Figure 5: Test dataset 3 and its non-stationarity measures visualized as distributions in the hyperparameters of a stationary kernel trained on local subsets of that data. The dataset consists of analyzed X-ray scattering signals over \([0,1]^{3}\). Non-stationarity appears to be very weak but existent in both length scale and signal variance.
#### 4.2.1 One-Dimensional Synthetic Function
Our one-dimensional synthetic test function was introduced in Section 4.1. The stationary reference kernel (\(k_{stat}\)) is a Matern kernel with \(\nu=3/2\)
\[k_{stat}(x_{i},x_{j})=\sigma^{2}\left(1+\frac{\sqrt{3}d}{l}\right)\exp\left(- \frac{\sqrt{3}d}{l}\right), \tag{22}\]
where \(l\) is the length scale, and \(d=||x_{i}-x_{j}||_{2}=|x_{i}-x_{j}|\). The parametric non-stationary kernel in this experiment was defined as
\[k_{para}(x_{i},x_{j})=\big{(}g_{1}(x_{i})g_{1}(x_{j})+g_{2}(x_{i})g_{2}(x_{j}) \big{)}k_{stat}, \tag{23}\]
where
\[g_{a}(x)=\sum_{b=1}^{6}c_{a}^{b}\exp\left[-0.5(||\tilde{x}_{b}-x||^{2})/w_{a} \right], \tag{24}\]
\(\tilde{x}=\{0,0.2,0.4,0.6,0.8,1\}\), leading to a total of 15 hyperparameters -- counting two \(g_{a}\) functions in the sum, a constant width of the radial basis functions for each \(g_{a}\), and a constant length scale. The deep kernel is
\[k_{nn}(x_{i},x_{j})=k_{stat}(\phi(x_{i}),\phi(x_{j})), \tag{25}\]
where \(\phi\) is a fully connected neural network mapping \(\mathbb{R}\rightarrow\mathbb{R}\), with ReLU activation functions and two hidden layers of width five, which led to a total of 48 hyperparameters (weights, biases, and one constant signal variance). The results, presented in Figure 6, show a gradual improvement in approximation performance as more flexible kernels are used. The stationary kernel (\(k_{stat}\)) stands out through its fast computation time. However, the keen observer notices similar uncertainties independent of local properties of the latent function; only point spacing is considered in the uncertainty estimate. That is in stark contrast to the parametric non-stationary kernel (\(k_{para}\)) and the deep kernel (\(k_{nn}\)), which both predict lower uncertainties in the well-behaved center region of the domain. This is a very desirable characteristic of non-stationary kernels. The deep kernel and parametric non-stationary kernel reached very similar approximation performance, but the deep kernel was by far the most costly to train. The BDGP predicts a very smooth model with subpar accuracy compared to the other methods. We repeated the experiment with different values for the nugget, and let the algorithm choose the nugget, without further success. This is not to say the method cannot perform better, but we remind the reader that we are working under the assumption of reasonable effort, which, in this case, was insufficient to reach a better performance. We share the code with the reader in the Appendix (see Appendix C) for reproducibility purposes. The MCMC sampling runs revealed what was expected: the stationary kernel converges most robustly; however, all kernels led to a stable convergence within a reasonable compute time.
| **Kernel functions** | _1D Synthetic_ | _3D Climate_ | _3D X-ray_ |
| --- | --- | --- | --- |
| Stationary, \(k_{stat}\) | 2 | 3 | 3 |
| Parametric non-stationary, \(k_{para}\) | 15 | 58 | 58 |
| Deep kernel, \(k_{nn}\) | 48 | 186 | 186 |

Table 1: Number of hyperparameters for our computational experiments.
Figure 6: Performance overview for dataset 1. From the top-left: stationary reference kernel, parametric non-stationary kernel, deep kernel, and deep GP. In this test, the parametric non-stationary kernel and the deep kernel reached similar approximation performances, while the former used significantly fewer hyperparameters, which is why it can be trained significantly faster. The deep GP (see Appendix D for the algorithm), over-smoothed the model and took a long time to train. We note that the deep-GP result highly depended on the specified nugget — too small, and the algorithms produced NaNs, too large, and the presented smoothing was observed. The result was the same when we allowed the algorithm to choose its own nugget. Note the asterisk next to the deep-GP compute time, denoting that this is a separate software written in a different language (R), and compute times can therefore not be directly compared.
#### 4.2.2 The Climate Model
Our three-dimensional climate dataset was introduced in Section 4.1. The stationary reference kernel (\(k_{stat}\)) is a Matern kernel with \(\nu=3/2\) (Equation 22). In all cases, stationary and non-stationary, we added to the kernel matrix the noise covariance matrix \(\mathbf{V}=\sigma_{n}^{2}\mathbf{I}\), where \(\sigma_{n}^{2}\) is the nugget variance, treated as an additional hyperparameter. The parametric non-stationary kernel is similar to Equation (23); however, we place radial basis functions (24) at \(\{0,0.5,1\}^{3}\) and added a nugget variance, leading to a total of 58 hyperparameters. The deep kernel \(k_{nn}(\boldsymbol{\phi}(\mathbf{x}_{i}),\boldsymbol{\phi}(\mathbf{x}_{j})), \ \boldsymbol{\phi}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}\) has two hidden layers of width 10, but is otherwise identical to (25), yielding 186 hyperparameters. The results, two-dimensional slices through the three-dimensional input space, are presented in Figure 8. For completeness, we included the deep GP result in Figure 10, which, however, in our run was not competitive. Once again, the stationary kernel (\(k_{stat}\)) delivers fast and robust results; however, lacks accuracy compared to the parametric non-stationary kernel (\(k_{para}\)) and the deep kernel (\(k_{nn}\)). The performance of the two non-stationary kernels is on par with a slight advantage in CRPS for the deep kernel and a significant advantage in compute time for the parametric non-stationary kernel. The MCMC (Figure 9) sample runs revealed stable convergence, however, at significantly different time scales.
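For reference, the following snippet shows one decomposition of the hyperparameter budget that is consistent with the stated total of 58; the exact packing order used in the implementation is our assumption.

```
import itertools
import numpy as np

# Radial-basis centers at {0, 0.5, 1}^3, per the text: 27 centers.
centers = np.array(list(itertools.product([0.0, 0.5, 1.0], repeat=3)))

# Hyperparameter count for k_para in 3-D:
# 2 g-functions x 27 coefficients + 2 widths + 1 length scale + 1 nugget.
n_hyper = 2 * len(centers) + 2 + 1 + 1
print(len(centers), n_hyper)  # 27 58
```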
Figure 7: Approximation result for the one-dimensional synthetic dataset using the gpflux DGP. Despite following the online documentation closely, the model appears to show artifacts. This is the reason why we did not use the gpflux DGP for our higher-dimensional test cases. For reproducibility purposes, we publish our exact run script in the Appendix (see B).
Figure 8: Performance overview for the climate model. From the top: \(k_{stat}\), \(k_{para}\), and \(k_{nn}\). The posterior mean is displayed in the left column. The posterior variance is on the right. The model function is defined over \([0,1]^{2}\times\{0.5\}\). This model is obtained after the convergence of a global evolutionary optimizer. Notable is the fast computing time of the stationary kernel due to the fact that only three hyperparameters have to be found: a signal variance, one isotropic length scale, and the nugget (i.i.d. noise). This parametric non-stationary kernel led to a total number of 58 hyperparameters, which increases the computing time significantly compared to the stationary kernel. The accuracy, in this case, is slightly higher. It is debatable and highly application-driven whether the parametric non-stationary kernel should be used for this dataset. If time is not an issue, the increased accuracy can pay off in downstream operations. The deep kernel contains 186 hyperparameters which are identified robustly. The accuracy of the prediction and the UQ is similar to the parametric non-stationary kernel, with a slight advantage in the CRPS but approximately twice the compute time. As with all our test runs, this run was repeated several times with the same result.
Figure 10: Performance overview for the climate model, computed with the BDGP. The model function — posterior mean (left) and variance (right) — is defined over \([0,1]^{2}\times\{0.5\}\). This result is only presented for completeness. Clearly, our BDGP setup is not competitive compared to the tested kernels. Our script is presented in Appendix C.
Figure 9: MCMC sampling convergence for the climate model and all kernels. From the top: \(k_{stat}\), \(k_{para}\), and \(k_{nn}\). The MCMC is converging robustly but at vastly different time scales; the deep kernel is by far the slowest to converge.
#### 4.2.3 The X-Ray Scattering Model
Our three-dimensional X-ray scattering dataset was introduced in Section 4.1. The kernel setup is identical to our climate example (see Section 4.2.2). The results are presented in Figure 11. This experiment shows improving performance as kernel complexity increases; however, the improvements are moderate. As before, if accuracy is the priority, non-stationary kernels should be considered. Again, the deep kernel (\(k_{nn}\)) performed very competitively -- with a significant advantage in terms of RMSE and CRPS -- but did not reach the same log marginal likelihood as the competitors, which can be traced back to solving a high-dimensional optimization due to a large number of hyperparameters. This opens the door to an even better performance if more effort and time are spent in training the model. What stands out in Figure 11 are the smaller and more detailed predicted uncertainties for the deep kernel, which would affect decision-making in an online data acquisition context. Also, the posterior mean has more intricate details compared to the stationary and the parametric non-stationary kernel. Figure 12 reveals stable convergence of the MCMC sampling for all kernels at similar time scales, supporting the deep kernel as the superior choice for this model. We again included the BDGP, which, however, was not competitive (Figure 13).
Figure 11: Performance overview for the X-ray model. From the top: \(k_{stat}\), \(k_{para}\), and \(k_{nn}\). The posterior mean is displayed in the left column. The posterior variance is on the right. The model function is defined over \([0,1]^{2}\times\{0.24\}\). This model is obtained after the convergence of a global evolutionary optimizer. Once again, the stationary kernel stands out due to fast computing speeds. The compute time increases significantly for the parametric non-stationary kernel, due to the 58 hyperparameters that have to be found. Since the CRPS is our most relevant performance metric, the approximation and the UQ are better than for the stationary kernel. The deep kernel, for this dataset, performs best with respect to the RMSE and CRPS while achieving lower log marginal likelihood. The training time increases again because 186 hyperparameters have to be found.
Figure 12: MCMC sampling convergence for the X-ray model and all kernels. From the top: \(k_{stat}\), \(k_{para}\), and \(k_{nn}\). The MCMC is converging robustly and, in this case, at similar time scales, making the case for the deep kernel that reached the best CRPS.
Figure 13: Performance overview for the X-ray scattering model, computed with the BDGP. The model function — posterior mean (left) and variance (right) — is defined over \([0,1]^{2}\times\{0.24\}\). This result is, again, included for completeness; it is not competitive compared to the tested kernels. Our script is presented in Appendix C.
### Interpretations Test-by-Test
The one-dimensional synthetic function (21) exhibits a complex behavior that is captured by our stationarity analysis through the clustering of the length scale versus signal variance and the characteristics observed in the violin plots. The clustering patterns reveal a multifaceted behavior, with different regions of the data exhibiting distinct statistical properties. The linear correlation within one cluster (lower-left, scatter plot, Figure 3) and the dispersion in the other cluster capture the interplay between the sinusoidal and quadratic components of the function. The linear cluster indicates a linear correlation between these two hyperparameters within a specific region of input data. This pattern may correspond to the sinusoidal components of the function, where local oscillations exhibit a consistent relationship between the length scale and signal variance. The additional dispersed points in the scatter plot likely correspond to the regions influenced by the quadratic term in the function, where the statistical properties vary, leading to a broader distribution of the hyperparameters. The bimodal distributions observed in the violin plots for both the length scale and signal variance further corroborate the non-stationarity. These distributions reflect the complexity of the signal, with different modes corresponding to different patterns within the data. The primary modes near zero may correspond to the high-frequency components of the sinusoidal terms, while the secondary mode captures the broader trend introduced by the quadratic term. The signal variance violin plot's very weak bimodal pattern with a larger variance spread compared to the length scale violin plot reflects the varying magnitude and complexity of the signal. The larger spread in the signal variance captures the diverse behaviors within the synthetic function, including both oscillatory and quadratic patterns. The results provide insights into the intricate interplay between the sinusoidal and quadratic components of the function. The analysis underscores the algorithm's robustness and sensitivity in measuring non-stationarity across complex data structures, demonstrating its potential as a valuable tool for understanding diverse and multifaceted latent functions. Due to the strong non-stationarity in the data, non-stationary kernels performed extremely well in this case. Due to the low dimensionality, the number of hyperparameters is low in all cases, leading to robust training. It is, in our opinion, safe to conclude that in one-dimensional cases with moderate dataset sizes and suspected non-stationarity in the data, non-stationary kernels are to be preferred. Both our parametric and deep non-stationary kernels performed well with a slight edge in accuracy for the parametric kernel (see Figure 6). Our two tested deep GP setups (Figures 6 and 7) were not competitive given our reasonable-effort constraint, which in this case was on the order of days.
Moving on to the climate data example, Figure 4 shows a concentrated cluster near \((0,0)\) and some dispersed points in the length-scale-signal-variance scatter plot, which may indicate a strong stationarity in large parts of the input space. This concentration suggests that the statistical properties are consistent across this region, possibly reflecting a dominant pattern or behavior in the data. The presence of fewer dispersed points, forming another much less concentrated cluster, reveals some underlying non-stationarity. In the violin plot, we see a near-unimodal distribution in the signal variance with some weak non-stationarity in the length scale. The computational experiments (see Figures 8 and 9) reveal a trade-off between accuracy and compute time. One has to put much more effort -- number of hyperparameters and time -- into the computation for a relatively small gain in accuracy. Both the parametric non-stationary kernel and the deep kernel achieve higher accuracy than the stationary kernel but are costly. In time-sensitive situations, the stationary kernel is likely to be preferred; for best accuracy, the parametric non-stationary kernel or the deep kernel is the superior choice. Looking at the CRPS, the deep kernel has a slight edge in accurately estimating uncertainties over the parametric kernel; however, the parametric non-stationary kernel reaches the highest log marginal likelihood. For this dataset, we tested the BDGP without much success under our reasonable-effort constraint (see Figure 10).
Finally, for the X-ray scattering data, Figure 5 shows the presence of a very concentrated cluster near \((0,0)\) in the length-scale-signal-variance scatter plot. This concentration continues to indicate strong stationarity, reflecting consistent statistical properties in much of the domain. However, the inability of the scattered points to form even a weak second cluster represents a significant departure from the climate dataset. The unimodal distributions in the violin plots, with the mode near zero, support the presence of a strong stationary pattern. This leads to similar performances across our test kernels (see Figure 11), with the stationary kernel showing a high RMSE but the worst CRPS, and the deep kernel showing a slight edge in CRPS over its competitors -- since stationary kernels only allow us to estimate uncertainties based on global properties of the data and local geometry, it is expected that non-stationary kernels estimate uncertainties more accurately, which manifests itself in a better CRPS. The parametric non-stationary kernel leads the field in log marginal likelihood. Surprisingly, among the three tested kernels, the deep kernel leads to by far the lowest log marginal likelihood, which, again, suggests a better optimizer might lead to a much-improved performance. Once again, our setup of the BDGP was not competitive (see Figure 13).
### Key Takeaways from the Computational Experiments
While we included as much information in the computational experiments and the Appendix as possible to give the reader a chance to make up their own minds, here we summarize some key takeaways.
1. Stationary kernels proved to be surprisingly competitive in terms of the RMSE and are unbeatable when evaluating accuracy per time. It seems worth it in most cases to run a stationary GP first for comparison.
2. Uncertainty is estimated more accurately by non-stationary kernels; if UQ-driven decision-making is the goal, non-stationary kernels are to be preferred. This is not a surprise since, given a constant (possibly anisotropic) length scale and a signal variance, the posterior variance only depends on data-point geometry.
3. The parametric non-stationary kernel encodes a flexible non-stationarity while maintaining interpretability. The involved parametric functions over the input space can be visualized and interpreted.
4. Deep kernels are some of the most flexible kernels but interpretation is lost in all but the simplest cases, which can easily lead to model misspecifications (wrong model class and hyperparameters). Our models took a fair amount of experimenting before an acceptable performance was achieved. In online applications, in which testing beforehand is not possible, deep kernels should be used with caution.
5. Non-stationarity in the covariance structure appears in signal variance and length scale; ideally a kernel addresses both (see Equation 26).
6. While not included in the tests, experimenting with prior mean functions has shown that non-stationarity properties highly depend on the prior mean function of the GP. This is especially true for the non-stationary signal variance.
7. Extrapolation and non-stationary kernels are difficult to combine. While the parametric non-stationary kernel can be set up for extrapolation, traditional neural networks are poorly equipped for that task.
8. We should think of the number of hyperparameters conservatively; too many bear the risk of model misspecification (through local minima) and overfitting.
9. The parametric non-stationary kernel achieved overall better RMSE; the deep kernel led to better uncertainty quantification as indicated by the CRPS.
### A Parametric Deep Kernel
Given the observation that non-stationarity in the covariance structure of a dataset originates from a non-constant signal variance and length scale, one might argue that both should be addressed in the kernel design. Our parametric non-stationary kernel attempts to account for all non-stationarity purely through the signal variance; it leaves the length scale constant -- however, implementations exist that allow non-constant and anisotropic length scales [48, 52], generally in the same flavor as the parametric non-stationary signal variance. The deep kernel, on the other hand, only acts on the length scale by warping the input space. It seems logical to ask what happens if we combine the two concepts. The kernel
\[k(\mathbf{x}_{i},\mathbf{x}_{j})=\sum_{d=1}^{2}g_{d}(\mathbf{x}_{i})g_{d}( \mathbf{x}_{j})k(\boldsymbol{\phi}(\mathbf{x}_{i}),\boldsymbol{\phi}( \mathbf{x}_{j})) \tag{26}\]
achieves just that; it is a combination of our parametric non-stationary kernel and the deep kernel. Modeling our synthetic dataset (see Figure 14), we see good performance with moderate improvements compared to our earlier tests (Figure 6). The kernel might constitute somewhat of an overkill for such a simple problem but might lead to more significant gains in real applications. We encourage the reader to give this kernel a try in their next application.
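A minimal sketch of Equation (26) is given below; the warping \(\phi\), the functions \(g_{d}\), and the Matern base kernel are stand-in assumptions that would be replaced by their trained counterparts in practice.

```
import numpy as np

def k_para_deep(xi, xj, g_funcs, phi, k_stat):
    # Equation (26): parametric signal variance on top of a deep kernel.
    warped = k_stat(np.linalg.norm(phi(xi) - phi(xj)))
    return sum(g(xi) * g(xj) for g in g_funcs) * warped

# Illustrative placeholders (assumptions, not the trained model):
k_stat = lambda d: (1 + np.sqrt(3) * d / 0.2) * np.exp(-np.sqrt(3) * d / 0.2)
phi = lambda x: np.atleast_1d(np.tanh(3.0 * x))           # stand-in warping
g_funcs = [lambda x: 1.0 + x, lambda x: 0.5 * np.cos(x)]  # stand-in g_d
print(k_para_deep(0.1, 0.3, g_funcs, phi, k_stat))
```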
### Connection between Multi-Task Learning and Non-Stationary Kernels
Multi-task learning offers a powerful paradigm to leverage shared information across multiple related tasks, thereby enhancing the predictive performance of each individual task [38, 63, 78]. This is particularly beneficial when data for some tasks are sparse, as information from data-rich tasks can be used to inform predictions for data-scarce tasks. Flexible non-stationary kernels offer an interesting benefit for multi-task learning: instead of employing specialized techniques, such as coregionalization, one can reformulate the problem to a single task problem and let a flexible non-stationary kernel learn intricate correlations between input (\(\mathcal{X}_{i}\)) and output (\(\mathcal{X}_{o}\)) space locations. By transforming the multi-task learning problem over \(\mathcal{X}_{i}\) to a single-task learning problem over \(\mathcal{X}_{i}\times\mathcal{X}_{o}\), no further changes to the core algorithm are required. This has been known for a long time and is referred to as problem-transformation methods [7]. These methods were originally dismissed as not being able to capture intricate correlations between the tasks; however, this is only true if stationary, separable kernels are used. A flexible non-stationary kernel is able to flexibly encode covariances across the input and the output domain, independent of the indexing of the tasks. This makes it possible to transfer all complexities of multi-task learning to the kernel design and use the rest of the GP framework as-is, inheriting the superior robustness and interpretability properties of single-task GPs.
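The problem transformation itself is a one-liner in practice: the output index is appended to the inputs and the data are flattened, as in the following sketch (the array layout is our choice).

```
import numpy as np

def flatten_tasks(X, Y):
    # Turn multi-task data (N points, T tasks) into single-task data over
    # the joint space X_i x X_o by appending the task index as a coordinate.
    N, T = Y.shape
    X_joint = np.hstack([np.repeat(X, T, axis=0),
                         np.tile(np.arange(T), N)[:, None]])
    y_joint = Y.reshape(-1)
    return X_joint, y_joint

X = np.random.rand(100, 3)  # 100 points in a 3-D input space
Y = np.random.rand(100, 4)  # 4 tasks/outputs per point
X_joint, y_joint = flatten_tasks(X, Y)
print(X_joint.shape, y_joint.shape)  # (400, 4) (400,)
```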
## 6 Summary and Conclusion
In this paper, we put on the hat of a machine learning practitioner trying to find the best kernel or methodology within the scope of a Gaussian process (GP) to address non-stationarity in various datasets. We introduced three different datasets -- one synthetic, one climate dataset, and one originating from an X-ray scattering experiment at the CMS beamline at NSLSII, Brookhaven National Laboratory. We introduced a non-stationarity measure and studied each dataset to be able to judge their non-stationarity properties quantitatively. We then presented four different methodologies to address the non-stationarity: ignoring it by using a stationary kernel, a parametric non-stationary kernel that uses a flexible non-constant signal variance, a deep kernel that uses a neural network to warp the input space, and a deep GP. We set all methodologies up under reasonable-effort constraints to allow for a fair comparison, just as a practitioner might. In our case, that meant minutes of setup time for the stationary kernels and hours to days for the non-stationary kernels and the deep GPs. After the methodologies were set up, we ran our computational tests and presented the results unredacted and uncensored. This way, we hope, the reader gets the best value out of the comparisons. This is also to ensure that the reader has the chance to come up with conclusions different from ours. To further the readers' ability to double-check and learn, we are publishing all our codes online.
Our tests have shown that even weak non-stationarity in a dataset motivates the use of non-stationary kernels if training time is not an issue of concern. If training time is very limited, stationary kernels are still the preferred choice. We have discovered that non-stationarity in the covariance comes in two flavors, in the signal variance and the length scale, and, ideally, both should be addressed through novel kernel designs; we drew attention to one such kernel design. However, non-stationary kernels come with a great risk of model misspecification. If a new model should be relied on out-of-the-box, a stationary kernel might be the preferred choice.

Figure 14: Approximation result for the one-dimensional synthetic dataset using the parametric deep non-stationary kernel (26). The kernel combines non-stationarity in the signal variance and the length scale by leveraging both the parametric non-stationarity kernel design and a deep kernel. The approximation is on par with our previously tested methods but the reached likelihood is higher. This kernel might perform very well when strong non-stationarity is present in length scale and signal variance.
We hope that this paper motivates more practitioners to deploy and experiment with non-stationary kernels but to also be aware of some of the risks.
**Acknowledgements.** The work was supported by the Laboratory Directed Research and Development Program of Lawrence Berkeley National Laboratory under U.S. Department of Energy Contract No. DE-AC02-05CH11231. This work was further supported by the Regional and Global Model Analysis Program of the Office of Biological and Environmental Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain the correct information, neither the United States Government nor any agency thereof, nor the Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or the Regents of the University of California. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof or the Regents of the University of California.
We want to thank Kevin G. Yager from Brookhaven National Laboratory for providing the X-ray scattering dataset. This data collection used resources of the Center for Functional Nanomaterials (CFN), and the National Synchrotron Light Source II (NSLS-II), which are U.S. Department of Energy Office of Science User Facilities, at Brookhaven National Laboratory, funded under Contract No. DE-SC0012704.
**Conflict of Interest.** The authors declare no conflict of interest.
**Data Availability.** All data and the Jupyter notebook that runs all experiments can be found on gpcam.lbl.gov/examples/non_stat_kernels (available upon publication).
**Code Availability.** All experiments (except deep GPs) were run using the open-source Python package fvGP (github.com/lbl-camera/fvGP), which is available stand-alone and within gpCAM (gpcam.lbl.gov). The run scripts are available at gpcam.lbl.gov/examples/non_stat_kernels (available upon publication).
**Author Contributions.** M.M.N. originally decided to write this paper, led the project, developed the test scripts and software with help from H.L. and M.R., and ran the computational experiments. H.L. suggested the non-stationarity measure and improved its practicality with help from M.M.N. and M.R. H.L. also did the majority of the work regarding deep GPs, both regarding the algorithms and the manuscript. M.R. oversaw all developments, especially regarding parametric non-stationary kernels, and further assisted with writing and editing the manuscript. All authors iteratively refined the core ideas, algorithms, and methods. All decisions regarding the work were made in agreement with all authors. All authors iteratively revised the manuscript. |
2308.00152 | A Hybrid Optimization and Deep Learning Algorithm for Cyber-resilient
DER Control | With the proliferation of distributed energy resources (DERs) in the
distribution grid, it is a challenge to effectively control a large number of
DERs resilient to the communication and security disruptions, as well as to
provide the online grid services, such as voltage regulation and virtual power
plant (VPP) dispatch. To this end, a hybrid feedback-based optimization
algorithm along with deep learning forecasting technique is proposed to
specifically address the cyber-related issues. The online decentralized
feedback-based DER optimization control requires timely, accurate voltage
measurement from the grid. However, in practice such information may not be
received by the control center or even be corrupted. Therefore, the long
short-term memory (LSTM) deep learning algorithm is employed to forecast
delayed/missed/attacked messages with high accuracy. The IEEE 37-node feeder
with high penetration of PV systems is used to validate the efficiency of the
proposed hybrid algorithm. The results show that 1) the LSTM-forecasted lost
voltage can effectively improve the performance of the DER control algorithm in
the practical cyber-physical architecture; and 2) the LSTM forecasting strategy
outperforms other strategies of using previous message and skipping dual
parameter update. | Mohammad Panahazari, Matthew Koscak, Jianhua Zhang, Daqing Hou, Jing Wang, David Wenzhong Gao | 2023-07-31T21:07:11Z | http://arxiv.org/abs/2308.00152v1 | # A Hybrid Optimization and Deep Learning Algorithm for Cyber-resilient DER Control
###### Abstract
With the proliferation of distributed energy resources (DERs) in the distribution grid, it is a challenge to effectively control a large number of DERs resilient to the communication and security disruptions, as well as to provide the online grid services, such as voltage regulation and virtual power plant (VPP) dispatch. To this end, a hybrid feedback-based optimization algorithm along with deep learning forecasting technique is proposed to specifically address the cyber-related issues. The online decentralized feedback-based DER optimization control requires timely, accurate voltage measurement from the grid. However, in practice such information may not be received by the control center or even be corrupted. Therefore, the long short-term memory (LSTM) deep learning algorithm is employed to forecast delayed/missed/attacked messages with high accuracy. The IEEE 37-node feeder with high penetration of PV systems is used to validate the efficiency of the proposed hybrid algorithm. The results show that 1) the LSTM-forecasted lost voltage can effectively improve the performance of the DER control algorithm in the practical cyber-physical architecture; and 2) the LSTM forecasting strategy outperforms other strategies of using previous message and skipping dual parameter update.
_Index Terms_--cyber-resilient algorithm, distributed energy resources (DERs), DER control, LSTM, deep learning.
## I Introduction
The distribution grid is undergoing 1) a proliferation of distributed energy resources (DERs), including utility-level DERs and behind-the-meter (BTM) DERs, 2) more and faster data streaming from sensor networks, 3) underpinning data-driven methods, and 4) local energy market design. This raises the open research question of how future developments in synchronized sampling data and data analytics technology may contribute to grid visibility and to reliable and resilient operation of the integrated grid. In particular, geographically dispersed DERs can be coordinated at scale with two basic core functions: a) DER production scheduling, the dispatch of active and reactive power to address stochastic and dynamic challenges; and b) DER ancillary services provision, including frequency and voltage regulation [1]. However, coordinating a large number of DERs heavily depends on access to reliable and secure data, sensing, communications, and computing at multiple operational timescales spanning milliseconds to hours [2]. Therefore, since the grid is a typical cyber-physical system, the development of DER management systems (DERMS) and of scalable, cyber-resilient DER monitoring and control algorithms for the distribution grid with a proliferation of heterogeneous grid-edge resources still remains unsolved.
The existing research work related to DER coordination focuses on 1) DERMS platforms [3, 4], 2) optimal voltage regulation of virtual power plants (VPPs) [5, 6], and 3) communications architectures for DER coordination [2, 7, 8]. However, very little attention has been paid to the development of scalable cyber-physical DER control algorithms resilient to the asynchronous data flow resulting from real communication networks. Therefore, novel cyber-resilient DER control algorithms are in critical need to address communication and security issues.
To fill in this gap, this study further proposes a hybrid feedback-based optimization and deep learning algorithm for DER control at the grid edge, with the incentive of utilizing more sampled grid data and the underpinning data-driven methods, and of providing a guideline for DERMS deployment. This work is based on the existing optimal regulation of virtual power plant (VPP) algorithm [5] and the cyber-physical DER control algorithm [9]. The challenge in developing cyber-resilient DER control algorithms is how to handle delayed or lost voltage measurements. To this end, the long short-term memory (LSTM) deep learning algorithm is employed to forecast delayed/missed messages with high accuracy, which is the main contribution of this paper.

Fig. 1: Cyber-resilient DER Control Architecture of Hybrid Feedback-based Optimization and Deep Learning Algorithm
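As a sketch of the forecasting component, the following minimal PyTorch model predicts the next voltage-magnitude samples from a window of past measurements; the window length, hidden size, and node count are illustrative assumptions, not the authors' configuration. When a measurement is delayed, missed, or flagged as corrupted, such a forecast can take its place in the feedback loop of the controller.

```
import torch
import torch.nn as nn

class VoltageLSTM(nn.Module):
    # Sketch of an LSTM that forecasts the next voltage-magnitude sample
    # for each measured node from a window of past samples.
    def __init__(self, n_nodes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_nodes, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_nodes)

    def forward(self, v_window):        # (batch, window, n_nodes)
        out, _ = self.lstm(v_window)
        return self.head(out[:, -1, :]) # forecast for the next time step

model = VoltageLSTM(n_nodes=16)
v_hat = model(torch.randn(8, 20, 16))   # 8 windows of 20 past samples
print(v_hat.shape)                      # torch.Size([8, 16])
```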
## II Problem Recap of DER Control
A DER-penetrated distribution feeder with \(N+1\) nodes, \(\mathcal{N}\cup\{0\},\mathcal{N}:=\{1,...,N\}\), is considered. The feeder head is denoted as Node \(0\). Define the \(N\)-dimensional phasor voltage vector as \(\mathbf{v}:=[V_{1},...,V_{N}]^{T}\in\mathbb{C}^{N}\). \(P_{0}\) and \(Q_{0}\) denote the active and reactive powers at the feeder head, and \(P_{l,n}\) and \(Q_{l,n}\) are the load at the \(n\)th node. Let \(\mathcal{G}:=\{1,...,G\}\subseteq\mathcal{N}\) be a set of nodes equipped with DERs, and \(P_{i}\) and \(Q_{i}\) be the DER powers at Node \(i\in\mathcal{G}\). For each PV system with capacity \(S_{i}\), \(\mathcal{Y}_{i}=\{(P_{i},Q_{i}):0\leq P_{i}\leq P_{i}^{av},P_{i}^{2}+Q_{i}^{2}\leq S_{i}^{2}\}\subset\mathbb{R}^{2}\) denotes the feasible range of \(P_{i},Q_{i}\), where \(P_{i}^{av}\) is the available power. The injection power at nodes \(\mathcal{N}\) is denoted as \(\mathbf{s}_{inj}:=[S_{1},...,S_{N}]\in\mathbb{C}^{N}\), where \(S_{i}=-P_{l,i}-jQ_{l,i}\) for \(i\in\mathcal{N}\backslash\mathcal{G}\), and \(S_{i}=P_{i}-P_{l,i}+j(Q_{i}-Q_{l,i})\) for \(i\in\mathcal{G}\). Denoting \(\mathbf{v}_{nom}\) as the equilibrium point of the nominal-voltage vector, the "LinDistFlow" approach is employed to obtain the approximate linear power flow equations, where \(|\mathbf{v}|\) and \(P_{0},Q_{0}\) are functions of the real and reactive injection powers:
\[\begin{split}|\mathbf{v}|&\approx\mathbf{A}\mathbf{p}_{inj}+\mathbf{B}\mathbf{q}_{inj}+\mathbf{c},\\ [P_{0},Q_{0}]^{T}&\approx\mathbf{M}\mathbf{p}_{inj}+\mathbf{N}\mathbf{q}_{inj}+\mathbf{o};\end{split} \tag{1}\]
where \(\mathbf{p}_{inj}:=\Re\{\mathbf{s}_{inj}\}\) and \(\mathbf{q}_{inj}:=\Im\{\mathbf{s}_{inj}\}\). Suitable linearization methods for the AC power-flow equations can be employed to obtain the model parameters \(\mathbf{A}\in\mathbb{R}^{N\times N},\mathbf{B}\in\mathbb{R}^{N\times N},\mathbf{M}\in\mathbb{R}^{2\times N},\mathbf{N}\in\mathbb{R}^{2\times N},\mathbf{c}\in\mathbb{R}^{N},\mathbf{o}\in\mathbb{R}^{2}\) [5].
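For concreteness, a minimal numerical sketch of evaluating the linearized model (1) is given below; the matrices \(\mathbf{A},\mathbf{B},\mathbf{M},\mathbf{N}\) and offsets \(\mathbf{c},\mathbf{o}\) would come from a linearization as in [5], and the random values used here are placeholders only.

```python
import numpy as np

N = 36  # number of non-slack nodes (placeholder)
rng = np.random.default_rng(0)

# Placeholder model parameters; in practice these come from linearizing
# the AC power-flow equations around a nominal operating point.
A, B = rng.normal(size=(N, N)), rng.normal(size=(N, N))
M, Nmat = rng.normal(size=(2, N)), rng.normal(size=(2, N))
c, o = np.ones(N), np.zeros(2)

p_inj = rng.normal(size=N)  # real injections at nodes 1..N
q_inj = rng.normal(size=N)  # reactive injections

v_mag = A @ p_inj + B @ q_inj + c       # approximate |v|, Eq. (1)
P0, Q0 = M @ p_inj + Nmat @ q_inj + o   # approximate feeder-head powers
```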
Each DER dispatch happens in a discrete-time fashion. For each time instant \(t_{k},k\in\mathbb{N}\), let the functions \(f_{i}^{t_{k}}(\cdot)\) capture the objectives of the different DER owners and the utility, and let \(P_{0,set}^{t_{k}}\) be the setpoint at the feeder head. Denote by \(\mathcal{M}:=\{1,...,M\}\subset\mathcal{N}\) the set of nodes where voltage measurements are available and where voltage regulation within \([V^{min},V^{max}]\) is required. The DER dispatch problem is then formulated as a time-varying optimization problem with the operational objectives and constraints at \(t_{k}\), as below:
\[\begin{split}\min_{P_{i},Q_{i}}&\quad\sum_{i\in\mathcal{G}}f_{i}^{t_{k}}(P_{i},Q_{i})\\ \text{s.t.}&\quad P_{i},Q_{i}\in\mathcal{Y}_{i}^{t_{k}},\qquad\text{(2a)}\\ &\quad P_{0}^{t_{k}}(P_{i},Q_{i})-P_{0,set}^{t_{k}}\leq E^{t_{k}},\qquad\text{(2b)}\\ &\quad-(P_{0}^{t_{k}}(P_{i},Q_{i})-P_{0,set}^{t_{k}})\leq E^{t_{k}},\qquad\text{(2c)}\\ &\quad V^{min}-|V_{n}^{t_{k}}|(P_{i},Q_{i})\leq 0,\;\forall n\in\mathcal{M},\qquad\text{(2d)}\\ &\quad|V_{n}^{t_{k}}|(P_{i},Q_{i})-V^{max}\leq 0,\;\forall n\in\mathcal{M}\qquad\text{(2e)}\end{split} \tag{2}\]
Lagrangian multipliers \(\lambda^{t_{k}}\) and \(\zeta^{t_{k}}\) are associated with the setpoint-tracking constraints (2b)-(2c), and the dual variables \(\boldsymbol{\gamma}^{t_{k}}:=[\gamma_{1}^{t_{k}},...,\gamma_{M}^{t_{k}}]^{T}\) and \(\boldsymbol{\mu}^{t_{k}}:=[\mu_{1}^{t_{k}},...,\mu_{M}^{t_{k}}]^{T}\) with the voltage regulation constraints (2d)-(2e). The DER control problem is then reformulated through the regularized Lagrangian with \(\mathbf{d}:=\{\boldsymbol{\gamma},\boldsymbol{\mu},\lambda,\zeta\}\), as below,
\[\begin{split}\mathcal{L}^{t_{k}}(\mathbf{p},\mathbf{q},\mathbf{d}):=&\sum_{i\in\mathcal{G}}f_{i}^{t_{k}}(P_{i},Q_{i})\\ &+\sum_{n\in\mathcal{M}}\big[\gamma_{n}(V^{min}-|V_{n}^{t_{k}}|(P_{i},Q_{i}))\\ &\quad+\mu_{n}(|V_{n}^{t_{k}}|(P_{i},Q_{i})-V^{max})\big]\\ &+\lambda\big[P_{0}^{t_{k}}(P_{i},Q_{i})-P_{0,set}^{t_{k}}-E^{t_{k}}\big]\\ &+\zeta\big[P_{0,set}^{t_{k}}-P_{0}^{t_{k}}(P_{i},Q_{i})-E^{t_{k}}\big]\\ &+\frac{\nu}{2}\sum_{i\in\mathcal{G}}(P_{i}^{2}+Q_{i}^{2})-\frac{\epsilon}{2}\|\mathbf{d}\|_{2}^{2}\end{split} \tag{3}\]
where \(\mathbf{p}:=[P_{1},...,P_{G}]^{T}\), \(\mathbf{q}:=[Q_{1},...,Q_{G}]^{T}\), the tracking-error bound is \(E^{t_{k}}>0\), and \(\nu\) and \(\epsilon\) are regularization coefficients.
## III Hybrid Optimization and Deep Learning Algorithm for Cyber-resilient DER Control
To solve the DER control problem described in (3) considering data loss and network issues, a new cyber-resilient algorithm is proposed in this section.
### _Distributed DER Control_
A distributed architecture improves the reliability of DER control at scale. The hierarchical and distributed control framework proposed in [5, 10] consists of three main steps, shown in Fig. 1: **Step 1**, collecting the voltage magnitude measurements from each node \(n\in\mathcal{M}\) and the measurement \(\widehat{P}_{0}^{t_{k}}\) from the feeder head at the control center (e.g., the DERMS software); **Step 2**, updating the dual parameter set \(\mathbf{d}^{t_{k+1}}=[\gamma_{n}^{t_{k+1}},\mu_{n}^{t_{k+1}},\lambda^{t_{k+1}},\zeta^{t_{k+1}}]\) as follows and then broadcasting it to each DER controller/node:
\[\begin{split}\gamma_{n}^{t_{k+1}}&=proj_{\mathbb{R}_ {+}}\left\{\gamma_{n}^{t_{k}}+\alpha(V^{min}-|\widehat{V}_{n}^{t_{k}}|{-} \epsilon\gamma_{n}^{t_{k}})\right\},\\ \mu_{n}^{t_{k+1}}&=proj_{\mathbb{R}_{+}}\left\{ \mu_{n}^{t_{k}}+\alpha(|\widehat{V}_{n}^{t_{k}}|{-}V^{max}-\epsilon\mu_{n}^{t_{k} })\right\},\\ \lambda^{t_{k+1}}&=proj_{\mathbb{R}_{+}}\left\{ \lambda^{t_{k}}+\alpha(\widehat{P}_{0}^{t_{k}}-P_{0,set}^{t_{k}}-E^{t_{k}}- \epsilon\lambda^{t_{k}})\right\},\\ \zeta^{t_{k+1}}&=proj_{\mathbb{R}_{+}}\left\{ \zeta^{t_{k}}+\alpha(P_{0,set}^{t_{k}}-\widehat{P}_{0}^{t_{k}}-E^{t_{k}}- \epsilon\zeta^{t_{k}})\right\};\end{split} \tag{4}\]
**Step 3**, calculating and updating the new \(P_{i}^{t_{k+1}},Q_{i}^{t_{k+1}}\) at each DER agent as follows, after receiving \(\widehat{P}_{i}^{t_{k}},\widehat{Q}_{i}^{t_{k}}\) locally and \(\mathbf{d}^{t_{k+1}}\) remotely from the control center:
\[\begin{split}[P_{i}^{t_{k+1}},Q_{i}^{t_{k+1}}]^{T}& =proj_{\mathcal{Y}_{i}^{t_{k}}}\{[P_{i}^{t_{k}},Q_{i}^{t_{k}}]^{T}\\ &\quad-\alpha\nabla_{[P_{i},Q_{i}]}\mathcal{L}^{t_{k}}(\mathbf{p}, \mathbf{q},\mathbf{d})|_{\widehat{P}_{i}^{t_{k}},\widehat{Q}_{i}^{t_{k}}, \mathbf{d}^{t_{k+1}}}\};\end{split} \tag{5}\]
In the cyber-physical system, **Step 1** and **Step 2** are implemented in the control center located at the feeder head, and **Step 3** is conducted in the individual DER controllers at the grid edge.

Two baseline strategies are considered for dealing with delayed messages in both the uplink and the downlink. The first strategy uses the previous measurement of a delayed/missed message to continue the DER control procedure; the other skips the update of the dual parameters or of the newly dispatched power for the corresponding delayed messages. With these two strategies, we evaluated the impact of individual uplink/downlink communication conditions on the algorithm performance, using the feeder head's power-setpoint tracking error as the metric. The sensitivity analysis shows that, for both strategies, delayed voltage measurements in the uplink degrade the algorithm performance far more severely than delayed dual variables in the downlink. Fig. 2 shows this sensitivity for the previous-message strategy at different message loss rates.
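The updates (4)-(5) are projected gradient steps on the regularized Lagrangian (3). The Python sketch below illustrates them for a single node/DER; the gradient of \(\mathcal{L}\) with respect to \((P_{i},Q_{i})\) is passed in (it would be assembled from the sensitivity matrices of Eq. (1)), and the projection onto \(\mathcal{Y}_{i}\) is a simple clip-and-rescale approximation, not an exact Euclidean projection.

```python
import numpy as np

def proj_nonneg(x):
    """Projection onto the nonnegative reals, proj_{R+} in Eq. (4)."""
    return max(x, 0.0)

def dual_update(gam_n, mu_n, lam, zet, v_n, P0, P0_set, E,
                vmin=0.95, vmax=1.05, alpha=0.1, eps=1e-4):
    """Eq. (4) for one measured node n (gam_n, mu_n are per-node;
    lam, zet are global and would be updated once per iteration)."""
    gam_n = proj_nonneg(gam_n + alpha * (vmin - v_n - eps * gam_n))
    mu_n = proj_nonneg(mu_n + alpha * (v_n - vmax - eps * mu_n))
    lam = proj_nonneg(lam + alpha * (P0 - P0_set - E - eps * lam))
    zet = proj_nonneg(zet + alpha * (P0_set - P0 - E - eps * zet))
    return gam_n, mu_n, lam, zet

def proj_Y(P, Q, P_av, S):
    """Approximate projection onto Y_i = {0 <= P <= P_av, P^2+Q^2 <= S^2}
    by clipping P and rescaling onto the apparent-power disc."""
    P = float(np.clip(P, 0.0, P_av))
    r = np.hypot(P, Q)
    if r > S:
        P, Q = P * S / r, Q * S / r
    return P, Q

def primal_update(P, Q, P_av, S, dL_dP, dL_dQ, alpha=0.1):
    """Eq. (5): projected gradient step; dL_dP, dL_dQ are the partial
    derivatives of the Lagrangian (3), built from the sensitivities (1)."""
    return proj_Y(P - alpha * dL_dP, Q - alpha * dL_dQ, P_av, S)
```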
### _LSTM Network_
The above sensitivity analysis indicates that a more intelligent and effective method is critically needed to deal with delayed/lost voltage magnitude measurements. Data-driven methods have achieved great success in anomaly detection and in the estimation of missing features and data [11]. Moreover, given the time-series nature of the collected voltage magnitude measurements, the state-of-the-art long short-term memory (LSTM) network is adopted to estimate the delayed/lost data. The LSTM is an extended and advanced version of the traditional recurrent neural network (RNN) [12].
The LSTM network, depicted in Fig. 3, contains a cell state, an input gate, an output gate, and a forget gate. The cell state \(c(t)\) is the key concept of the LSTM model: it retains the important parts of the historical data. The input gate selects the parts of the input that are relevant to the current state of the system and allows them to pass through the gate. This is implemented by considering the previous output \(h_{t-1}\) and the current input \(x_{t}\) together, as below:
\[i_{t}=\sigma(W_{i}.[h_{t-1},x_{t}]+b_{i}), \tag{6}\]
where \(i_{t}\), \(\sigma\), \(W_{i}\), and \(b_{i}\) are the output of the input gate, the sigmoid function, and the weight matrix and bias vector of the input gate, respectively. The output of the sigmoid function lies in the range (0,1): a value close to 1 means that the input is highly relevant to the current cell state, while a value close to 0 means there is little coherence between the input and the current cell state. Then, to filter the desired part of the input, a \(\tanh\) layer creates a vector of new candidate values, \(\tilde{C}_{t}\), which will be used to build the new cell state. \(\tilde{C}_{t}\) is computed as follows:
\[\tilde{C}_{t}=\tanh(W_{C}.[h_{t-1},x_{t}]+b_{C}), \tag{7}\]
where \(W_{C}\) and \(b_{C}\) are the weight matrix and bias vector of the candidate layer. The forget gate decides which parts of the previous state should be forgotten. The procedure is similar to that of the input gate:
\[f_{t}=\sigma(W_{f}.[h_{t-1},x_{t}]+b_{f}), \tag{8}\]
where \(W_{f}\) and \(b_{f}\) are the weight matrix and bias vector of the forget layer. The forget gate uses a sigmoid function to choose which parts of the previous cell state are retained. Combining the new data from the input gate with the retained data from the previous cell state, the new cell state is calculated as:
\[C_{t}=f_{t}*C_{t-1}+i_{t}*\tilde{C}_{t}. \tag{9}\]
The output gate decides what is reported as the output, which is based on the new cell state:
\[\begin{gathered} o_{t}=\sigma(W_{o}[h_{t-1},x_{t}]+b_{o}),\\ h_{t}=o_{t}*\tanh C_{t}.\end{gathered} \tag{10}\]
This LSTM network will be used to forecast the delayed or missed voltage measurement messages in the uplink.
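A minimal Keras sketch of such a single-node voltage forecaster is given below; the look-back window of 10 samples follows Section IV, while the layer width and optimizer settings are illustrative assumptions.

```python
from tensorflow import keras

LOOK_BACK = 10  # historical voltage samples fed to the LSTM (Section IV)

def build_lstm(units=32):
    """One-step-ahead forecaster for the per-unit voltage of a node."""
    model = keras.Sequential([
        keras.Input(shape=(LOOK_BACK, 1)),  # window of past |V_n| samples
        keras.layers.LSTM(units),           # gated recurrence, Eqs. (6)-(10)
        keras.layers.Dense(1),              # predicted next sample
    ])
    # MSE training loss; RMSE is tracked as the reported accuracy metric.
    model.compile(optimizer="adam", loss="mse",
                  metrics=[keras.metrics.RootMeanSquaredError()])
    return model
```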
### _Hybrid Cyber-resilient DER Control Algorithm_
The key idea of the hybrid cyber-resilient DER control algorithm is to employ a deep-learning forecasting technique to build resilience against cyber issues such as delayed, lost, or tampered messages. The proposed algorithm therefore integrates the LSTM-based delayed-message forecast model into the original optimization-based DER control framework. This forecast model consists of four components, shown in Fig. 1: the Data collector and validator, the Historical data store, the LSTM network training, and the Online voltage forecast. At each iteration, the Data collector and validator module collects the voltage measurements and checks whether each measurement arrives within the threshold. All received messages are stored in the Historical data block for training purposes. Backpropagation through time with a gradient-based optimizer is used for LSTM network training; the offline LSTM network is retrained periodically so that updated and more accurate network parameters are passed to the Online voltage forecast module. Once the Online voltage forecast module is informed that a message is delayed, it forecasts the delayed message in real time to ensure the DER control algorithm keeps running properly.

Fig. 3: LSTM Network Structure

Fig. 2: Tracking Error-based Sensitivity Analysis
```
procedure DERMS(\(\nu,\epsilon,\alpha\))
    initialization: \(t_{k}=1\), \(d^{*}\), \(V^{min}\), \(V^{max}\), \(n\in\mathcal{M}\)
    repeat
        update \(E^{t_{k}}\)
        wait
        receive the setpoint: \(\widehat{P}_{0,set}^{t_{k}}\)
        receive measurements: \(|\widehat{V}_{n}^{t_{k}}|\), \(\widehat{P}_{0}^{t_{k}}\)
    until timer \(\geq d^{*}\) or all measurements received
    if \(|\widehat{V}_{n}^{t_{k}}|\) received within \(d^{*}\) then
        update \(\mathbf{d}^{t_{k+1}}\) by (4)
    else
        call the LSTM forecast model to estimate \(|\widehat{V}_{n}^{t_{k}}|\)
        wait
        receive the estimated \(|\widehat{V}_{n}^{t_{k}}|\) to update \(\mathbf{d}^{t_{k+1}}\)
    end if
    broadcast \(\mathbf{d}^{t_{k+1}}\) to all DERs at the grid edge
    \(t_{k}=t_{k}+1\)
end procedure

procedure Local DER agent \(i\)
    initialization: \(t_{k}=1\), \(\mathcal{Y}_{i}^{t_{k}}\)
    repeat
        receive \(\widehat{P}_{i}^{t_{k}},\widehat{Q}_{i}^{t_{k}}\)
        wait
    until receive \(\mathbf{d}^{t_{k+1}}\)
    update \(P_{i}^{t_{k+1}},Q_{i}^{t_{k+1}}\) by (5)
    dispatch \(P_{i}^{t_{k+1}},Q_{i}^{t_{k+1}}\) to the DER device
    \(t_{k}=t_{k}+1\)
    send \(|V_{n}^{t_{k}}|\) to the DERMS
end procedure

procedure Local non-DER grid edge \(n\)
    initialization: \(t_{k}=1\)
    while true do
        send \(|\widehat{V}_{n}^{t_{k}}|\) to the DERMS
        \(t_{k}=t_{k}+1\)
    end while
end procedure

procedure LSTM Estimator(\(\tilde{V}_{n}^{t_{k-len}},...,\tilde{V}_{n}^{t_{k-1}}\))
    normalize the input voltage vector to per-unit values
    predict using the historical voltages: [\(\tilde{V}_{n}^{t_{k-len}},...,\tilde{V}_{n}^{t_{k-1}}\)]
    de-normalize the predicted voltage value
    report the predicted voltage value: \(|\tilde{V}_{n}^{t_{k}}|\)
end procedure
```
**Algorithm 1** Hybrid Optimization and Deep Learning Algorithm for Cyber-resilient DER Control
We define a deadline, or _delay threshold_, \(d^{*}>0\) in milliseconds, for the uplink messages. Under normal operating conditions, the Data collector and validator module collects and validates the nodal voltages, and the DER control module generates the dual variables to update the DERs' setpoints (see Fig. 1). If a local voltage measurement \(|\tilde{V}_{n}^{t_{k}}|\) does not arrive at the DERMS within time \(d^{*}\), the LSTM forecast model predicts the delayed voltage \(\tilde{V}_{n}^{t_{k}}\) from the previous voltages \(\tilde{V}_{n}^{t_{k-len}},...,\tilde{V}_{n}^{t_{k-1}}\) of Node \(n\), so that the dual parameters \(\gamma_{n}^{t_{k+1}},\mu_{n}^{t_{k+1}}\) can still be computed; here \(len\) is the length of the historical data window at Node \(n\) that serves as the input of the LSTM forecast model. The resulting hybrid DER control algorithm is described in detail in Algorithm 1.
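A compact sketch of the DERMS-side deadline logic and of the LSTM Estimator procedure (normalize, predict, de-normalize) is shown below; `V_BASE_KV` is a hypothetical per-unit base voltage, and the message format is illustrative.

```python
import numpy as np

V_BASE_KV = 4.8   # hypothetical base voltage for the per-unit conversion
LOOK_BACK = 10    # history window length, as in the model sketch above

def lstm_estimate(model, v_hist_kv):
    """LSTM Estimator of Algorithm 1: normalize, predict, de-normalize."""
    x = np.asarray(v_hist_kv[-LOOK_BACK:], dtype=float) / V_BASE_KV
    v_pu = float(model.predict(x[None, :, None], verbose=0)[0, 0])
    return v_pu * V_BASE_KV

def collect_or_forecast(received, history, model, d_star_ms, nodes):
    """Assemble a complete voltage vector; nodes whose measurement missed
    the deadline d* are filled in by the LSTM forecast."""
    voltages = {}
    for n in nodes:
        msg = received.get(n)                       # None if lost
        if msg is not None and msg["delay_ms"] <= d_star_ms:
            voltages[n] = msg["value"]
        else:
            voltages[n] = lstm_estimate(model, history[n])
        history[n].append(voltages[n])              # keep the window current
    return voltages
```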
## IV Validation and Results
To validate the proposed hybrid algorithm, we consider a modified single-phase IEEE 37-node test feeder; please refer to [9] for the detailed configuration data, and the topology is shown in Fig. 5. The generation profile, shown in Fig. 4(a), is derived from real solar radiation data for Sacramento, CA on August 15, 2012 from the NREL Measurement and Instrumentation Data Center (MIDC), processed to a granularity of 1 second and scaled to a capacity of 50 kW. The other parameters are set to \(V^{min}=0.95\), \(V^{max}=1.05\), \(\nu=10^{-3}\), \(\epsilon=10^{-4}\), \(E^{t_{k}}=0.001\), and step size \(\alpha=0.1\). The PV system objective in (3) is set to \(f_{i}^{t_{k}}(P_{i},Q_{i})=c_{p}(P_{av,i}^{t_{k}}-P_{i}^{t_{k}})^{2}+c_{q}(Q_{i}^{t_{k}})^{2}\), where \(c_{p}=3,c_{q}=1,i\in\mathcal{G}\). We consider setpoints \(P_{0,set}^{t_{k}}\) from 12:00 to 14:00, consisting of 5-minute economic dispatch commands, 1-minute automatic generation control setpoints, ramp signals, and a 65-minute constant command, depicted by the red line in Fig. 4(b).
The LSTM network is implemented using the Keras library. To generate the training data set of voltage values, randomly generated \(P_{0}\) setpoint curves are used to run the algorithm over an ideal cyber network. The look-back time window size is set to 10 for training the LSTM model of each node, and the root mean square error (RMSE) is adopted as the loss function for optimizing the trained model.
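The supervised training pairs can be built with a simple sliding window; the sketch below assumes per-unit voltage series recorded from runs over an ideal network, as described above.

```python
import numpy as np

def make_windows(series, look_back=10):
    """Slice a 1-D per-unit voltage series into supervised (X, y) pairs."""
    X = np.stack([series[i:i + look_back]
                  for i in range(len(series) - look_back)])
    return X[..., None], series[look_back:]  # add a feature axis for Keras

# v_pu: per-unit voltage history of one node (1-D numpy array)
# X, y = make_windows(v_pu)
# model = build_lstm()          # from the sketch in Section III-B
# model.fit(X, y, epochs=50, batch_size=64, validation_split=0.1)
```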
To validate the performance of the proposed hybrid optimization and deep learning cyber-resilient DER control algorithm, we compare it with the two other commonly used strategies for delayed messages: 1) using the previous voltage measurement to update the dual parameters, and 2) skipping the update of the corresponding dual parameters. The delay model described in [9] is applied to generate delays in the uplink; setting \(d^{*}=6.675\) ms leads to 1% of messages being delayed. To better expose the impact of delayed messages on the DER control performance, the communication delay model is applied only from 12:30 to 13:30. We implemented the IEEE 37-node test case in OpenDSS, the cyber-physical DER control together with the two above-mentioned strategies in Matlab, and the LSTM-based voltage forecast model in Python, all with a granularity of 1 second.
Testing the trained LSTM model confirms its high accuracy in predicting missed voltage values: the RMSE for our test case is 0.00065 kV. The tracking and voltage regulation performance is shown in Fig. 4. From Fig. 4(b) and (f) we observe that the strategy of using the previous message for delayed measurements fails to keep the setpoint tracking and voltage regulation convergent; even after the communication asynchrony is removed, the algorithm cannot track the setpoint. The performance of the skipping strategy, shown in Fig. 4(c) and (g), indicates that it outperforms the previous-message strategy, although its overall performance is still not acceptable in practice. Fig. 4(d) and (h) show that the LSTM forecast strategy tracks \(P_{0,set}^{t_{k}}\) with an RMSE of 3.685 kW and regulates the nodal voltages properly; it clearly has the best, and an acceptable, performance among the three strategies at the 1% delay rate.
## V Conclusion
In this paper, we developed a hybrid feedback-based optimization and deep learning algorithm for cyber-resilient DER control, enhancing the resilience of the DER control system to cyber issues such as delayed or lost voltage measurements. The well-trained LSTM forecast model estimates the delayed voltage data with high accuracy. The experimental results show that the proposed algorithm clearly outperforms both the previous-message and the skipping strategies for delayed messages.
## Acknowledgement
Matthew Koscak was partially supported by NSF OAC-1852102.
|
2309.16419 | Magnetic structure and phase diagram of the Heisenberg-Ising spin chain
antiferromagnetic PbCo$_{2}$V$_{2}$O$_{8}$ | The effective spin-1/2 antiferromagnetic Heisenberg-Ising chain materials,
ACo$_2$V$_2$O$_8$, A = Sr, Ba, are a rich source of exotic fundamental
phenomena and have been investigated for their model magnetic properties both
in zero and non-zero magnetic fields. Here we investigate a new member of the
family, namely PbCo$_2$V$_2$O$_8$. We synthesize powder and single crystal
samples of PbCo$_2$V$_2$O$_8$ and determine its magnetic structure using
neutron diffraction. Furthermore, the magnetic field/temperature phase diagrams
for magnetic field applied along the c, a, and [110] crystallographic
directions in the tetragonal unit cell are determined via magnetization and
heat capacity measurements. A complex series of phases and quantum phase
transitions are discovered that depend strongly on both the magnitude and
direction of the field. Our results show that PbCo$_2$V$_2$O$_8$ is an effective spin-1/2
antiferromagnetic Heisenberg-Ising chain with properties that are in general
comparable to those of SrCo$_2$V$_2$O$_8$ and BaCo$_2$V$_2$O$_8$. One
interesting departure from the results of these related compounds, is however,
the discovery of a new field-induced phase for the field direction $H\|$[110]
which has not been previously observed. | K. Puzniak, C. Aguilar-Maldonado, R. Feyerherm, K. Prokeš, A. T. M. N. Islam, Y. Skourski, L. Keller, B. Lake | 2023-09-28T13:11:58Z | http://arxiv.org/abs/2309.16419v1 | Magnetic structure and phase diagram of the Heisenberg-Ising spin chain antiferromagnetic PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\)
###### Abstract
The effective spin-1/2 antiferromagnetic Heisenberg-Ising chain materials, ACo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), A = Sr, Ba, are a rich source of exotic fundamental phenomena and have been investigated for their model magnetic properties both in zero and non-zero magnetic fields. Here we investigate a new member of the family, namely PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). We synthesize powder and single crystal samples of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and determine its magnetic structure using neutron diffraction. Furthermore, the magnetic field/temperature phase diagrams for magnetic field applied along the **c**, **a**, and [110] crystallographic directions in the tetragonal unit cell are determined via magnetization and heat capacity measurements. A complex series of phases and quantum phase transitions is discovered that depends strongly on both the magnitude and direction of the field. Our results show that PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) is an effective spin-1/2 antiferromagnetic Heisenberg-Ising chain with properties that are in general comparable to those of SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). One interesting departure from the results of these related compounds is, however, the discovery of a new field-induced phase for the field direction \(H\|[110]\) which has not been previously observed.
## I Introduction
Quantum phase transitions (QPTs) have attracted considerable interest due to their relevance to the fundamental processes of quantum magnetism [1]. Unlike a classical phase transition driven by thermal fluctuations, a QPT arises at \(T\) = 0 K when the system is tuned by a non-thermal external parameter such as pressure, magnetic field, or chemical doping. The spin-1/2 chain with Heisenberg-Ising (XXZ) exchange anisotropy in a magnetic field applied transverse to the Ising direction provides one of the canonical examples of a QPT [1]. The most famous experimental realization of this model was the quasi-one-dimensional (quasi-1D) spin-1/2 Ising ferromagnet CoNb\({}_{2}\)O\({}_{6}\) [2].
More recently, the quasi-1D antiferromagnetic materials AM\({}_{2}\)V\({}_{2}\)O\({}_{8}\) have been found to harbor a wealth of exotic phases, including QPTs. Here the M-sites are filled by a magnetic transition metal ion such as Cu\({}^{2+}\), Ni\({}^{2+}\), Co\({}^{2+}\) or Mn\({}^{2+}\), while the divalent A-site ion and V\({}^{5+}\) are non-magnetic. Depending on the nature of the magnetic ion, different spin moments and anisotropies can be explored. Of particular interest are the members ACo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) where A = Sr, Ba, which give rise to effective 1D spin-1/2 antiferromagnets with Heisenberg-Ising (or XXZ) exchange anisotropy due to the Co\({}^{2+}\) ions, which form 4-fold screw chains along the tetragonal **c**-axis. The intrachain coupling is strong and antiferromagnetic, while the interchain coupling is weak and eventually gives rise to long-range antiferromagnetic Néel order at sufficiently low temperatures.
In zero magnetic field, these compounds have a spinon continuum above the Neel temperature \(T_{N}\approx\) 5 K, and were used to demonstrate spinon confinement on cooling below \(T_{N}\) where the continuum is replaced by sharp bound-spinon modes [3; 4; 5]. A longitudinal magnetic field applied parallel to the easy axis which is the **c**-axis also shows exotic physics. Above a critical field, the antiferromagnetic order is suppressed to much lower temperatures (\(T<\) 1 K) and the systems undergo a transition to a longitudinal spin density wave and then a transverse canted antiferromagnet with increasing magnetic field [6; 7; 8; 9]. In the excitation spectrum, bound states of magnons known as Bethe strings, which were predicted by Hans Bethe in 1931 [10], were observed for the first time in SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\)using terahertz spectroscopy [11; 12] and inelastic neutron scattering [9].
The behavior of these chains in a transverse magnetic field along the **a**-axis (perpendicular to the Ising anisotropy) is equally fascinating. Recent NMR measurements reveal two QPTs for SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) as the magnetic order is suppressed by field [13]. Neutron diffraction and inelastic neutron measurements for BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) revealed a quantum phase transition between two different types of solitonic topological objects [14], where the excitations can be described as collective solitonic modes superimposed on a continuum [15]. At a lower magnetic field of 4.7 T, within the ordered phase, a hidden 1D quantum phase transition was identified by NMR [16]. It has a universality class described by the exceptional \(E_{8}\) Lie algebra, which is characterized by eight gapped excitations whose gaps are theoretically predicted to have precise
values [17]. These excitations were measured by inelastic neutron scattering [16; 18] and terahertz spectroscopy [19] and compared successfully to theory [20]. A magnetic field applied along the other transverse direction ([110]) was shown to give a very different phase diagram from that of the **a**-axis, with the Néel antiferromagnetic order found in zero field maintained to very high fields [21; 22; 23; 24]. The reason why the transverse [110]- and **a**-field directions differ is the complex g-tensor of the Co\({}^{2+}\) ions [22; 23], which is responsible for many of the unique properties of these magnets.
The topic of this paper is a new and unexplored member of the ACo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) family, namely PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), with tetragonal space group \(I4_{1}cd\) (# 110) [25], is isostructural to SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and very similar to BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) (which has space group \(I4_{1}/acd\) (# 142)). As for SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), the magnetic Co\({}^{2+}\) ions are arranged in edge-sharing CoO\({}_{6}\) octahedra forming 4-fold screw chains running along the **c**-axis, which are well separated by non-magnetic V\({}^{5+}\) and Pb\({}^{2+}\) ions. There are four screw chains per unit cell, two rotating clockwise and the other two anticlockwise. Powder samples of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) were synthesized previously [25], and magnetic and thermodynamic measurements reveal long-range Néel order below \(T_{\rm N}\approx 4\) K [25; 26]; however, its magnetic structure has not been investigated. Under an external field, the powder sample shows a broad transition at \(\mu_{0}H\approx 4\) T.
In this paper, we undertake the first detailed investigation of the magnetic properties of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). We synthesize powder samples and perform powder neutron diffraction to determine the magnetic structure. We also synthesize a large single crystal which allows the anisotropic magnetism to be studied as a function of magnetic field applied along the **c**, **a**, and [110] directions. Using a combination of heat capacity and magnetization measurements we construct the magnetic field/temperature phase diagrams for these three directions. While the properties of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) are rather similar to those of SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) for the **c**- and **a**-axes, we discover a completely new phase for the field parallel to [110].
## II Experimental details
Powder and single crystal samples of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) were synthesized at the Core Lab Quantum Materials (CLQM), Helmholtz-Zentrum Berlin für Materialien und Energie (HZB), Germany. The powder was prepared by the solid-state reaction of high-purity powders of PbO (99.99%, Alfa Aesar), Co\({}_{2}\)O\({}_{4}\cdot 2\)H\({}_{2}\)O (99.995%, Alfa Aesar), and V\({}_{2}\)O\({}_{5}\) (99.99%, Alfa Aesar), which were thoroughly mixed in the 1:1:2 molar ratio in ethanol and then sintered at 930\({}^{\circ}\)C three to four times for 12 hours each, with grindings performed after each sintering. For the crystal growth, a dense feed rod was prepared from the stoichiometric powder pressed under 2000 bars in a cold-isostatic-pressure (CIP) machine and subsequently sintered. Since PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) was found to melt incongruently, the traveling-solvent floating-zone technique was applied, using a solvent with excess V\({}_{2}\)O\({}_{5}\) at the tip of the feed rod. The crystal growth was carried out in a 4-mirror optical floating-zone furnace (Crystal Systems Corp., FZ-T 10000-H-VI-VPO) with 150 W tungsten halide lamps. It was performed in a 0.2 MPa argon atmosphere at a growth rate between 0.5 and 1.0 mm/h. The as-grown single crystal was about 35 mm in length and about 5 mm in diameter. As far as we are aware, these are the first reported crystals of this compound.
The crystal quality was checked using X-ray powder diffraction. A small piece of the crystal was crushed and ground into a powder. The powder diffraction pattern was collected at room temperature on a Bruker D8 diffractometer (Cu \(K_{\alpha}\), energy 8.0478 keV, wavelength 1.5406 A). A long 2\(\theta\) scan was done from 10 to 100 degrees with a step size of 0.0014 degrees counting 7 seconds per point. X-ray Laue diffraction was also used to check the crystal quality and prepare oriented samples for thermodynamic and magnetic measurements.
The field and temperature dependence of the magnetization were also measured at the CLQM, HZB. Measurements were performed using the Physical Properties Measurement System (PPMS 14 T Quantum Design) in magnetic fields up to \(\mu_{0}H\) = 14 T over the temperature range from 1.8 K to 400 K, with field applied along the **c**-, **a**-, and [110]-axes. Measurements were also carried out using a Magnetic Property Measurement System (MPMS 7 T, Quantum Design), equipped with a \({}^{3}\)He insert in magnetic fields up to \(\mu_{0}H\) = 7 T and over the temperature range from 0.4 K to 1.8 K for fields applied parallel to the **c**- and **a**-axes. High-field magnetization was measured at \(T=1.5\) K in pulsed magnetic fields up to \(\mu_{0}H\) = 58 T generated by the induction method using a coaxial pick-up coil system [27] at the Hochfeld Magnetlabor Dresden in the Helmholtz-Zentrum Dresden Rossendorf (HZDR). The sample was cooled in zero field and when the desired temperature was stable, a magnetic field pulse of a total duration of 25 ms was applied. Measurements took place with field applied along the **c**-, **a**-, and [110]-axes. Normalization to absolute units was achieved by calibrating the data with the lower field PPMS magnetization obtained in static fields at the CLQM.
The specific heat measurements were performed at the CLQM, by means of a relaxation method using the PPMS 14 T equipped with a \({}^{3}\)He insert. Magnetic fields up to 14 T were applied, and the temperature was varied between 0.4 K and 5 K (except in the case of zero field where the minimum temperature was 0.8 K because the low value of thermal coupling prevented lower temperature measurements). For each of the heat capacity scans, measured at different magnetic fields, an addenda measurement collected at 0 T was subtracted from the signal to obtain the sample heat capacity (note that the addenda of the used puck does not show any magnetic field dependence). Three crystal pieces were measured with
the **c**- (6.14 mg), **a**- (5.10 mg), and [110]- (4.22 mg) axes respectively, parallel to the applied magnetic field.
Finally, the crystal and magnetic structures in zero magnetic field were investigated by neutron powder diffraction using the cold neutron diffractometer DMC at the SINQ facility of the Paul Scherrer Institute (PSI), Switzerland. The low-temperature measurements were performed using a \({}^{3}\)He stick inserted into an orange cryostat. A sample of mass \(\simeq\) 3 g of the powder prepared by solid-state reaction was sealed in a copper can which was attached to the cold finger of the cryostat. Diffraction patterns were collected using a wavelength of \(\lambda=2.46\) Å for temperatures in the range 0.3 K to 120 K, with typical counting times of six hours for 0.3 K (\(\ll T_{N}\)), 4 K (\(\approx T_{N}\)) and 120 K; all other temperatures were counted for 2 hours. An additional high-temperature measurement at 120 K was performed to study the structure of the sample; to reduce the background, the \({}^{3}\)He insert was not used and the sample was loaded in a vanadium can.
## III Results
### Temperature-dependence of the magnetization
The static susceptibility and the temperature dependence of the magnetization of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) were measured to explore its magnetic properties. Figure 1(a) shows the temperature dependence of the DC magnetic susceptibility of single crystal PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) measured in a field of \(\mu_{0}H=0.10\) T applied parallel to \(\mathbf{c}\) (\(\chi\|\mathbf{c}\)), \(\mathbf{a}\) (\(\chi\|\mathbf{a}\)) and [110] (\(\chi\|[110]\)) in the temperature range from 2 K to 400 K. The temperature derivative of \(\chi\|\mathbf{c}\) is also shown.

Figure 1: The DC magnetic susceptibility and the magnetization of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) at low magnetic fields and temperatures. (a) Temperature dependence of the susceptibility measured in a magnetic field of \(\mu_{0}H\) = 0.10 T applied parallel to the **c**- (\(\chi\|\)**c**, red line), **a**- (\(\chi\|\)**a**, blue line) and [110]- (\(\chi\|\)[110], green line) directions. The solid red line gives the temperature derivative of \(\chi\|\)**c** and the vertical black dashed line indicates the Néel temperature, \(T_{N}\) = 3.80 K. The solid black line shows the fit to Eq. (1). (b) Field dependence of the magnetization measured for \(H\|\)**c** at \(T\) = 0.4, 0.8, and 1.2 K and also for \(H\|\)**a** at \(T\) = 0.4, 0.8, 1.2, and 1.6 K. The green and black solid lines show d\(M\)/d\(\mu_{0}H\) at \(T\) = 0.4 K for \(H\|\)**c** and \(H\|\)**a**, respectively. (c) The low temperature susceptibility \(\chi\|\)**c**, measured for magnetic fields from \(\mu_{0}H\) = 0.25 T to 5 T applied along the **c**-axis. The inset shows the temperature dependence of the temperature derivative of the susceptibility for magnetic fields from 0.25 T to 4 T. (d) Low temperature susceptibility \(\chi\|\)**a** measured for magnetic fields from \(\mu_{0}H\) = 0.5 T to 4.5 T applied along the **a**-axis.
The \(H\|\mathbf{c}\) data show a sudden drop below \(\approx 4\) K, and the temperature derivative of the susceptibility \(d\chi\|\mathbf{c}/dT\) shows a sharp peak at 3.80 K, indicating a transition to long-range antiferromagnetic order at \(T_{N}=3.80\) K. The magnetic susceptibilities \(\chi\|\mathbf{a}\) and \(\chi\|[110]\) show a peak and then a similar drop at \(T_{N}\), which appears in their temperature derivatives as a peak (not shown), confirming this transition. While \(\chi\|\mathbf{c}\) tends towards a very small value at low temperatures, \(\chi\|\mathbf{a}\) and \(\chi\|[110]\) tend to constant high values. This indicates that the **c**-axis is the easy axis, or Ising axis. At higher temperatures a broad hump around 40 K was observed in \(\chi\|\mathbf{c}\), a clear sign of short-range magnetic order, probably due to the strong intrachain interactions expected in this compound, which would give rise to quasi-one-dimensional behavior. The significant difference between \(\chi\|\mathbf{c}\) and both \(\chi\|\mathbf{a}\) and \(\chi\|[110]\), which persists even up to 400 K, is evidence for a large magnetic anisotropy (as was observed for BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [28] and SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [24]).
Our susceptibility data are in general agreement with the previous powder susceptibility measurements for PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), which found a transition at \(T_{N}=4\) K [25; 26]. In order to estimate the intrachain interaction in PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), the \(\chi\|\mathbf{c}\) susceptibility curve was fitted with the Bonner-Fisher model for uncoupled Ising chains [29; 30; 31]:
\[\chi(T)=\chi_{0}+\frac{N_{A}\mu_{B}^{2}g_{\parallel}^{2}}{4k_{B}T}\frac{0.25 +0.15x+0.30x^{2}}{1+1.98x+0.68x^{2}+6.06x^{3}}, \tag{1}\]
where \(N_{A}\) is Avogadro's number, \(k_{B}\) the Boltzmann constant, and \(\mu_{B}\) the Bohr magneton. The term \(g_{\parallel}\) is the Landé factor parallel to the Ising axis, \(J\) is the intrachain exchange constant, \(x=J/k_{B}T\), and \(\chi_{0}\) is a constant term. To avoid the effects of long-range magnetic order, which are not included in this model, the fitted temperature range should start well above \(T_{N}\). It should, however, include the characteristic broad hump at around 40 K, which indicates the energy scale of the system. For the range 10 to 200 K, the fitted parameter values are \(J=32.10\pm 0.06\) K, \(g_{\parallel}=5.30\pm 0.01\), and \(\chi_{0}=0.0088\pm 0.0001\) cm\({}^{3}\)/mol, and the fitted curve is given by the solid black line in Fig. 1(a). These values change little if the lower limit is increased to 25 K, which gives \(J=32.80\pm 0.07\) K, \(g_{\parallel}=5.38\pm 0.01\), and \(\chi_{0}=0.0075\pm 0.0001\) cm\({}^{3}\)/mol, demonstrating the reliability of the results and the applicability of the model.
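As an illustration, the fit of Eq. (1) can be reproduced with standard nonlinear least squares; in the sketch below the susceptibility is in cm\({}^{3}\)/mol, the prefactor \(N_{A}\mu_{B}^{2}/4k_{B}\approx 0.0938\) cm\({}^{3}\)K/mol is in cgs units, and the starting values are indicative only.

```python
import numpy as np
from scipy.optimize import curve_fit

PREF = 0.375 / 4.0  # N_A mu_B^2 / (4 k_B) in cm^3 K mol^-1 (cgs units)

def chi_ising_chain(T, J, g_par, chi0):
    """Eq. (1): susceptibility of uncoupled Ising chains (T and J in K)."""
    x = J / T
    pade = (0.25 + 0.15 * x + 0.30 * x**2) / (
        1.0 + 1.98 * x + 0.68 * x**2 + 6.06 * x**3)
    return chi0 + PREF * g_par**2 / T * pade

# T_data, chi_data: measured chi||c (cm^3/mol) restricted to 10 K - 200 K
# (J, g_par, chi0), _ = curve_fit(chi_ising_chain, T_data, chi_data,
#                                 p0=(30.0, 5.0, 0.01))
```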
Figure 1(c) shows the temperature dependence of the low-temperature DC magnetic susceptibility \(\chi\|\mathbf{c}\) for several different magnetic field strengths from \(\mu_{0}H=0.25\) T to 5 T applied parallel to the \(\mathbf{c}\)-axis. For \(0.25\text{ T}<\mu_{0}H\|\mathbf{c}<2.7\) T, a strong decrease in susceptibility is observed at low temperatures. The temperature where the rapid drop occurs, which indicates \(T_{N}(H\|\mathbf{c})\), shifts with increasing magnetic field towards lower temperatures. This is more clearly visible in the inset of the figure as a peak in the temperature derivative of the magnetic susceptibility. This anomaly starts to disappear at 3 T and is gone above 4 T, although a low-temperature transition is still visible as a small kink in the susceptibility and a very small maximum in the temperature derivative of \(\chi\|\mathbf{c}\).
Figure 1(d) shows the low-temperature magnetic susceptibility \(\chi\|\mathbf{a}\) at several different magnetic fields from \(\mu_{0}H\)= 0.5 T to 4.5 T applied along the \(\mathbf{a}\)-axis. The rapid drop in the magnetic susceptibility, which indicates \(T_{N}(H\|\mathbf{a})\), shifts toward lower temperatures with increasing magnetic field and disappears completely at 4 T, revealing a phase transition at around this field. Higher magnetic field measurements show no further transitions, suggesting the absence of a long-range magnetically ordered state.
### Field dependence of the magnetization
We now investigate the magnetic field dependence of the magnetization, which provides information about the field-induced transitions. Figure 1(b) shows the low-temperature magnetization up to 7 T for \(H\|\mathbf{c}\) at \(T=0.4\), 0.8, and 1.2 K and also for \(H\|\mathbf{a}\) at \(T=0.4\), 0.8, 1.2, and 1.6 K. The field derivative of the magnetization at \(T=0.4\) K for \(H\|\mathbf{c}\) and \(H\|\mathbf{a}\) is also presented. For \(H\|\mathbf{c}\) one can see two critical fields at \(\mu_{0}H_{c1}^{c}\)(0.4 K) = 2.70 T and \(\mu_{0}H_{c2}^{c}\)(0.4 K) = 4.68 T, suggesting the presence of three distinct magnetic phases for fields up to 7 T at the lowest temperatures. For \(H\|\mathbf{a}\) there is one critical field at \(\mu_{0}H_{c1}^{a}\)(0.4 K) = 3.80 T. These transitions appear almost independent of temperature in the studied temperature range.
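In practice such critical fields are read off as extrema of the numerical derivative d\(M\)/d\(\mu_{0}H\); a minimal sketch, assuming magnetization data on a monotonic field grid, is given below.

```python
import numpy as np
from scipy.signal import find_peaks

def critical_fields(H, M, **peak_kwargs):
    """Return candidate critical fields as peaks of dM/dH.
    H in T (monotonic grid), M in mu_B per Co ion."""
    dMdH = np.gradient(M, H)                  # numerical field derivative
    idx, _ = find_peaks(dMdH, **peak_kwargs)  # e.g. prominence=...
    return H[idx], dMdH

# Hc, dMdH = critical_fields(H_data, M_data, prominence=0.05)
```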
Figures 2(a), 2(b), and 2(c) show the magnetization curves, along with their magnetic field derivatives, at \(T=2\) K as a function of applied magnetic field up to 14 T for \(H\|\mathbf{c}\), \(H\|\mathbf{a}\), and \(H\|[110]\), respectively. An abrupt increase in the magnetization is observed at \(\mu_{0}H_{c1}^{c}\)(2 K) = 2.65 T for \(H\|\mathbf{c}\), which is clearly visible in d\(M\)/d\(\mu_{0}H\) in Fig. 2(a), indicating a field-induced transition. The second transition for this field direction, which was found at lower temperatures at \(\mu_{0}H_{c2}^{c}\)(0.4 K) = 4.68 T (see Fig. 1(b)), is not observed here at 2 K. While no magnetization jump is seen for \(H\|\mathbf{a}\) (see Fig. 2(b)) and \(H\|[110]\) (see Fig. 2(c)), a peak for \(H\|\mathbf{a}\) is visible in the derivative d\(M\)/d\(\mu_{0}H\) data, indicating a transition at \(\mu_{0}H_{c1}^{a}\)(2 K) = 3.86 T.
We also measured the magnetization of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) at much higher fields, up to 58 T, using a pulsed field magnet at \(T=1.5\) K. These data had to be calibrated, for which the DC magnetization curves up to 14 T at \(T=2\) K were used (Figs. 2(a), (b), and (c)). This approach is reasonable as the magnetization at 14 T does not change much with temperature in the range between 1.5 and 2 K. The normalized magnetization curves are shown along with their field derivatives for \(H\|\mathbf{c}\) (Fig. 2(d)), \(H\|\mathbf{a}\) (Fig. 2(e)) and \(H\|[110]\) (Fig. 2(f)).
The magnetization curve for \(H\|\mathbf{c}\) shows the first transition at \(\mu_{0}H_{\mathrm{c1}}^{c}(1.5\) K) = 2.95 T, while \(\mu_{0}H_{c2}^{c}\) is not observable at this temperature. In the high field region, the magnetization is strongly nonlinear and appears to saturate above \(\approx 30\) T. This is confirmed by the field derivative of the magnetization: a peak is observed in \(\mathrm{d}M^{c}/\mathrm{d}\mu_{0}H\), giving the saturation field along the \(\mathbf{c}\)-direction as \(\mu_{0}H_{s}^{c}(1.5\) K) = 30.7 T. This peak has a shoulder indicating a third transition at the slightly lower field of \(\mu_{0}H_{\mathrm{c3}}^{c}(1.5\) K) = 28 T. Above the saturation field we see a linear increase of the magnetization. This increase is related to the van Vleck contribution to the magnetization, which is linear in field. The van Vleck contribution was fitted and extrapolated to zero magnetic field (red dashed line). It is estimated to be \(\chi_{VV}\) = 0.012 \(\mu_{B}\)/T per Co\({}^{2+}\), thus giving the saturation value of the magnetization as \(M_{s}^{c}=2.75\)\(\mu_{B}\). Assuming effective spin\(-1/2\) moments on the Co\({}^{2+}\) ions, this value of \(M_{s}^{c}\) suggests that \(g_{\parallel}\) = 5.5, in agreement with the value found from the static susceptibility.
The magnetization curve for PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) with \(H\|\mathbf{c}\) is similar to that found previously for SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [24] and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [21]. For SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) the saturation field is 28.3 T and the saturation magnetization is 3 \(\mu_{B}\) [24]. A high field transition just below saturation, at 23.7 T, was also observed [24], comparable to the transition at \(\mu_{0}H_{\mathrm{c3}}^{c}(1.5\) K) = 28 T found in PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). For BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) the critical fields are at 19.5 T (transition) and 22.7 T (saturation), and the saturation magnetization is \(2.5-3.2\)\(\mu_{B}\) [21; 32; 12].
The high field magnetization curve for \(H\|\mathbf{a}\) shows the first transition as a change of slope, and as a peak in \(\mathrm{d}M^{a}/\mathrm{d}\mu_{0}H\), occurring at \(\mu_{0}H_{\mathrm{c1}}^{a}(1.5\) K) = 3.52 T. The magnetization then increases approximately linearly up to 40 T. Above 45 T a slight rounding of the magnetization curve is observed. A possible tendency towards saturation of the high field magnetization was also reported for BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [22; 24].
The magnetization curve for \(H\|[110]\) is approximately linear and does not show any transitions at 1.5 K up to 40 T. Above 40 T its slope increases, and at 45 T it becomes flatter, suggesting saturation at \(\mu_{0}H_{s}^{[110]}(1.5\) K) = 45 T, as indicated by the field derivative. As for the \(H\|\mathbf{c}\) direction, the van Vleck contribution for the \(H\|[110]\) direction was fitted and extrapolated to zero magnetic field. It is estimated to be \(\chi_{VV}\) = 0.025 \(\mu_{B}\)/T per Co\({}^{2+}\), thus giving the saturation value of the magnetization as \(M_{s}^{[110]}=0.68\)\(\mu_{B}\). Comparable behavior was observed for SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [24], for which the saturation field is \(\mu_{0}H_{s}^{[110]}(1.4\) K) = 45.7 T. An additional high field transition, not observed in PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), was found at 33.0 T. For BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) saturation is observed at 40.9 T with a value of \(\approx 1.35\)\(\mu_{B}\), and the additional transition is found at 30.8 T [21; 22; 23].

Figure 2: Field dependence of the magnetization of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). The top panels show the magnetization curves measured at \(T=2\) K in the PPMS for magnetic fields up to 14 T and their field derivatives for (a) \(H\|\mathbf{c}\) (red line), (b) \(H\|\mathbf{a}\) (blue line), and (c) \(H\|[110]\) (green line). The lower panels give the high field magnetization curves collected using pulsed fields up to 58 T at \(T=1.5\) K, along with their field derivatives for (d) \(H\|\mathbf{c}\) (red line), (e) \(H\|\mathbf{a}\) (blue line) and (f) \(H\|[110]\) (green line). The vertical dashed black lines indicate the critical fields. The red dashed line in panel (d) and the green dashed line in panel (f) extrapolate the saturation magnetization to zero field and show the van Vleck contribution to the field dependence of the magnetization.
By analogy with BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), the difference between the magnetization curves for \(H\|\mathbf{a}\) and \(H\|[110]\) in PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) may be explained by the 4-fold screw chain structure of the CoO\({}_{6}\) octahedra. These octahedra are distorted and their apical bond is tilted away from the **c**-axis by a few degrees in a direction that follows the screw chain rotation. These features give rise to a complicated g-tensor in both BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [22] and SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [13]. As a result, a magnetic field applied along the **a**-axis gives rise to an effective field parallel to the **b**-axis, which is staggered along the chain, driving the spin-flop transition of the spins from pointing along the **c** to the **b** direction at \(\mu_{0}H_{c1}^{a}\) [14; 22]. In contrast, no such staggered field occurs when the field is applied along the [110] direction, and therefore no spin-flop transition is observed. It should be noted that our diffraction results described in Section III.5 find that PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) also has the moments canted by a small angle away from the **c**-axis, implying that a similar mechanism could apply here.
### Heat capacity
PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) was also investigated using heat capacity measurements, which provide a very accurate way to identify the phase transitions. Both temperature- and field-dependent measurements were performed, in zero magnetic field and with the field applied along the \(\mu_{0}H\|\mathbf{c}\), \(\mu_{0}H\|\mathbf{a}\) and \(\mu_{0}H\|[110]\) directions. Figure 3(a) shows the temperature dependence of the heat capacity \(C_{p}\) from \(T\) = 280 K down to \(T\) = 1.7 K measured in zero field. The data above 50 K, up to 280 K, where the magnetic contribution becomes negligible, were fitted by Einstein and Debye terms to model the phononic contribution, which was then extrapolated down to base temperature as shown by the solid red line. The magnetic heat capacity, \(C_{m}(\mu_{0}H=0\) T), was extracted by subtracting this contribution. \(C_{m}(\mu_{0}H=0\) T)/\(T\) is presented in Fig. 3(b) by the black dots and shows a sharp \(\lambda\)-type anomaly at \(T_{N}(\mu_{0}H=0\) T) = 3.80 K. Finally, we integrate the magnetic heat capacity divided by temperature over temperature to obtain the magnetic entropy, shown by the red dots. The magnetic entropy saturates above 40 K at \(\approx\) 6.65 JK\({}^{-1}\) per mole of Co\({}^{2+}\), a value slightly larger than \(S_{mag}\) = R ln(2) = 5.76 JK\({}^{-1}\) per mole of Co\({}^{2+}\), where R is the gas constant, expected for a doublet ground state. This result suggests that we can assign an effective spin-1/2 moment to the Co\({}^{2+}\) ions.
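Numerically, the entropy follows from the cumulative integral \(S_{mag}(T)=\int_{0}^{T}(C_{m}/T^{\prime})\,dT^{\prime}\); a sketch using the trapezoidal rule, assuming \(C_{m}/T\) has been smoothly extrapolated to \(T=0\), is given below.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def magnetic_entropy(T, Cm):
    """Cumulative integral of Cm/T' from 0 to T, in the units of Cm.
    T in K, Cm in J K^-1 mol^-1; assumes Cm/T is extrapolated to T=0."""
    return cumulative_trapezoid(Cm / T, T, initial=0.0)

# Saturation should approach R ln 2 = 5.76 J K^-1 per mole of Co^2+
```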
Figure 4(a) shows the temperature dependence of the heat capacity at low temperatures for a single crystal of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) under longitudinal magnetic field \(\mu_{0}H\|\mathbf{c}\) from \(\mu_{0}H=0\) T to 11.5 T. The measurements started from 0.4 K, except for zero field where the lowest temperature was 0.8 K, as explained in Section II. At \(\mu_{0}H=0\) T the heat capacity curve shows a sharp \(\lambda\)-type anomaly at \(T_{N}(\mu_{0}H=0\) T) = 3.80 K, indicative of the second-order phase transition between the paramagnetic and Néel phases. These data are in general agreement with the magnetic heat capacity divided by temperature presented in Fig. 3(b), which was collected on another sample, and with previous measurements on a powder sample of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) which found the transition at \(T_{N}=4\) K [25; 26]. Below the transition, the heat capacity decreases very rapidly with decreasing temperature, suggesting that the magnetic excitations are gapped, as would be expected in the presence of Ising anisotropy. To estimate the gap size, a Schottky term was fitted to the zero field data in the range from 0.8 K to 2 K (well below the transition) and yielded the energy gap \(\Delta E=0.94\pm 0.04\) meV.
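A gap of this kind can be extracted with a fit of the following form; the sketch below assumes a standard two-level Schottky expression with an adjustable amplitude (the exact functional form used in the analysis is not spelled out in the text).

```python
import numpy as np
from scipy.optimize import curve_fit

KB_MEV = 8.617e-2  # Boltzmann constant in meV/K

def schottky(T, A, gap_mev):
    """Two-level Schottky anomaly with level splitting gap_mev (meV)."""
    x = gap_mev / (KB_MEV * T)
    return A * x**2 * np.exp(x) / (1.0 + np.exp(x))**2

# T_fit, C_fit: zero-field heat capacity between 0.8 K and 2 K
# (A, gap), _ = curve_fit(schottky, T_fit, C_fit, p0=(10.0, 1.0))
```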
Figure 3: (a) Temperature-dependence of the heat capacity measured in zero field from 1.7 K to 280 K (black dots). The data above 50 K which is dominated by the lattice contribution were fitted to a combination of Einstein and Debye terms (red line). (b) The temperature-dependence of the magnetic heat capacity divided by temperature \(C_{m}(\mu_{0}H=0\) T)/\(T\), obtained after subtraction of the phononic contribution, as well as the magnetic entropy \(S_{mag}\) at \(\mu_{0}H=0\) T are shown by the black and red dots respectively. The horizontal red dashed line marks the value \(S_{mag}=\) R ln(2), expected for the effective spin-1/2 magnetic moments.
With increasing longitudinal magnetic field (\(H\|\mathbf{c}\)), the peak shifts to lower temperatures and decreases rapidly in amplitude. It almost disappears at \(\mu_{0}H_{c1}^{c}\approx 2.75\) T, which is close to the value of \(\mu_{0}H_{c1}^{c}(0.4\) K) = 2.70 T found from magnetization. The peak reappears at 3 T at 1.25 K, suggesting the appearance of a new phase at \(\mu_{0}H>3\) T. It becomes weak and broad again at \(\mu_{0}H_{c2}^{c}\approx 4.5\) T, implying a second phase boundary. Interestingly, at 5 T the peak reappears again and shifts to higher temperatures with increasing field, reaching 1.4 K at 11.5 T. This peak marks the upper temperature boundary of a high field phase, which is discussed in the next section. The two field-induced phase transitions can be seen more clearly in the field dependence of the heat capacity for \(H\|\mathbf{c}\) at \(T=0.4\) K presented in Fig. 4(d). Two anomalies are observed, giving the critical fields \(\mu_{0}H_{c1}^{c}(T=0.4\) K) = 2.79 T and \(\mu_{0}H_{c2}^{c}(T=0.4\) K) = 4.60 T, in good agreement with the low temperature magnetization results.
Figure 4(b) shows the temperature dependence of the heat capacity under transverse field \(\mu_{0}H\|\mathbf{a}\), from \(\mu_{0}H=0\) T to 5 T. The amplitude of the \(\lambda\)-anomaly gradually decreases as the magnetic field is increased, and the peak shifts to lower temperature, finally disappearing at around \(\mu_{0}H_{c1}^{a}\approx 4.0\) T. No further anomalies were observed at higher fields. This result is consistent with the magnetization data for \(\mu_{0}H\|\mathbf{a}\) (see Fig. 1(d)). The field dependence of the heat capacity for \(\mu_{0}H\|\mathbf{a}\) for temperatures from 0.8 K to 2.0 K is presented in Fig. 4(e). A peak is found whose amplitude decreases with decreasing temperature; it is very weak at 0.8 K. The position of the peak is almost independent of temperature and indicates the phase transition at \(\mu_{0}H_{c1}^{a}(0.8\) K) \(\approx 4.0\) T.
For the case of \(\mu_{0}H\|[110]\) the temperature dependence of the heat capacity was measured for various magnetic field values up to \(\mu_{0}H=14\) T (see Fig. 4(c)). The critical temperature and the amplitude of the \(\lambda\)-anomaly decrease as \(\mu_{0}H\) increases up to 7 T. Starting from 7 T we observe a subtle splitting of the peak, which increases up to 9 T. For \(\mu_{0}H>10\) T, the splitting vanishes. To follow this feature more carefully, the heat capacity was scanned over the field range from 6 T to 12 T for several temperatures from 0.6 K to 1.8 K (see Fig. 4(f)). At 1.8 K, we observe a cusp at \(\mu_{0}H_{c1}^{[110]}(1.8\text{ K})\approx 8.5\text{ T}\) that moves to higher fields with decreasing temperature. This marks the start of a new phase boundary which can be traced to \(\mu_{0}H_{c1}^{[110]}(0.6\text{ K})\approx 10.4\text{ T}\). This critical field was not observed in our DC magnetization because that was measured at 2 K.

Figure 4: Heat capacity of single crystalline PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). Temperature scans \(C_{p}(T)\) were performed under various values of magnetic field applied (a) longitudinally \(H\|\mathbf{c}\), (b) transversely \(H\|\mathbf{a}\), and (c) transversely \(H\|[110]\). Panels (d), (e), and (f) show field scans \(C_{p}(H)\) for \(H\|\mathbf{c}\), \(H\|\mathbf{a}\), and \(H\|[110]\), respectively, at various temperatures. Heat capacity data at non-zero field were collected in the temperature range above 0.4 K; however, for zero field the data were collected starting from 0.8 K, as explained in Section II. An offset proportional to the applied magnetic field, with a proportionality factor of 1 JK\({}^{-1}\)mol\({}^{-1}\)/T, has been added to each heat capacity curve in panels (a), (b), and (c) for clarity. In panels (d), (e), and (f) no offset has been added. The heat capacity data are presented as raw data without subtraction of the phononic contribution, which is very small at these low temperatures.
### Magnetic phase diagram
The magnetic and heat capacity measurements were used to extract the magnetic field/temperature phase diagrams of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), which are presented in Fig. 5 for the field directions \(\mu_{0}H\|\mathbf{c}\), \(\mu_{0}H\|\mathbf{a}\), and \(\mu_{0}H\|[110]\). The phase boundaries are obtained from the positions of the extrema in the field derivatives of \(M(H)\) and the temperature derivatives of \(\chi(T)\) (red circles), and from the positions of the peaks in \(C_{p}(T)\) and \(C_{p}(H)\) (black circles).
Figure 5(a) shows the phase diagram for magnetic field applied along the \(\mathbf{c}\)-axis, where three magnetic phases are present up to 11.5 T. The boundaries between these phases appear to extend down to zero Kelvin. In order to estimate the critical fields at absolute zero, the first phase boundary was fitted with the empirical power-law formula \(T_{c}=A(H_{c}-H)^{\phi}\) over the field range from 2 to 2.6 T [13], which yielded the critical field value \(\mu_{0}H_{c1}^{c}(0\text{ K})=2.73\pm 0.02\text{ T}\) and exponent \(\phi=0.27\pm 0.01\). The extrapolation of the second phase boundary gives the value of the second critical field \(\mu_{0}H_{c2}^{c}(0\text{ K})=4.66\text{ T}\). This phase diagram resembles the \(H\|\mathbf{c}\) phase diagrams found for SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [8; 33; 9] and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [23; 28; 27; 34], where three magnetic phases were also found. These phases were identified, using neutron diffraction, as Néel antiferromagnetic (low fields), longitudinal spin density wave (intermediate fields), and transverse antiferromagnet (high fields) [6; 7; 8; 35]. As for PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), the second and third phases have significantly lower ordering temperatures compared to the Néel phase. Just above these ordering temperatures a quantum critical regime is predicted. While qualitatively similar to SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) appears to have a smaller energy scale, with the transitions occurring at somewhat lower fields.
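The same empirical power law can be fitted directly to the extracted boundary points; a minimal sketch, with indicative starting values for the \(H\|\mathbf{c}\) boundary, is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

def boundary(H, A, Hc, phi):
    """Empirical phase boundary T_c = A (H_c - H)^phi, for H < H_c."""
    return A * np.clip(Hc - H, 0.0, None)**phi

# H_pts, Tc_pts: (field, temperature) points on the measured boundary
# (A, Hc, phi), _ = curve_fit(boundary, H_pts, Tc_pts, p0=(2.0, 2.8, 0.3))
```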
Figure 5(b) shows the magnetic phase diagram of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\)for field applied along the \(\mathbf{a}\)-axis. Only one phase boundary is visible in this case. This boundary was fitted with the power law function over the temperature range from 3 to 4 T and yielded the value of critical field at \(T=0\text{ K}\) of \(\mu_{0}H_{c1}^{a}(0\text{ K})=4.01\pm 0.02\text{ T}\) and field exponent \(\phi=0.30\pm 0.03\). The \(H\|\mathbf{a}\) phase diagrams for SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\)and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\)are similar with a single
Figure 5: Phase diagram of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\)as a function of magnetic field and temperature for (a) longitudinal field \(H\|\mathbf{c}\), (b) transverse field \(H\|\mathbf{a}\), and (c) transverse field \(H\|[110]\). The black circles represent the critical fields and temperatures determined by the heat capacity measurements and the red circles represent the critical fields and temperatures extracted from the magnetization measurements. The different colored regions indicate the different magnetic phases. The phase colored gray is identified as a Néel antiferromagnetic phase in III.5. By analogy with SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\)and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\)the phase colored pink might have longitudinal spin density wave order, whereas the phase colored blue might be a transverse antiferromagnetic phase, although these field-induced phases have yet to be confirmed by neutron diffraction.
phase that terminates at the critical fields of 7 T [13; 36] and \(\approx 10\) T [22; 14; 23], respectively. This transition was described as a quantum phase transition from Néel order to a quantum disordered regime [13], or as a topological quantum phase transition between two different types of solitonic topological objects [14].
Figure 5(c) shows the magnetic phase diagram of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) with the uniform magnetic field applied along the \([110]\) direction. The critical temperature decreases smoothly with increasing field up to 8 T. Starting from 8 T one can clearly see a phase boundary to a new ordered phase, whose critical field increases to 10.4 T as the temperature is reduced to 0.6 K. By extrapolation, we estimate that this new phase boundary may persist up to \(\mu_{0}H_{c1}^{[110]}(0\) K) = 10.9 T. Interestingly, the transition temperature increases above 8 T as this new phase emerges. This phase was not observed in this field range in the two sister compounds SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), which show no phase transitions below 30 T. However, a transition is observed in SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) at the much higher field of 33.0 T [24] and in BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) at \(\approx 30\) T [21; 22; 23]. The nature of this transition and the high-field phase have never been explored, and it is not clear whether this transition is related to the \(\mu_{0}H_{c1}^{[110]}(0\) K) = 10.9 T transition in PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), which occurs at much lower fields. In any case, the relatively low critical field of the transition makes it accessible to experimental techniques such as neutron scattering, providing an opportunity for its exploration.
### Magnetic structure at zero magnetic field
This section focuses on the magnetic structure in the low-temperature, low-magnetic-field phase of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) by analysing neutron powder diffraction data collected in zero magnetic field. First, however, we verify the crystal structure and the quality of the sample using X-ray and neutron powder diffraction experiments at high temperatures. The X-ray powder diffraction data were obtained from a crushed single crystal at room temperature (see Fig. 6(a)), while the neutron powder diffraction was performed on the powder sample at \(T=120\) K, measured without the \({}^{3}\)He insert to avoid extra background peaks from the equipment (see Fig. 6(b)). The two methods are complementary: unlike X-rays, neutrons are sensitive to light elements such as oxygen, whereas vanadium, which scatters neutrons only incoherently and therefore does not contribute to the neutron Bragg peaks, is observable by X-rays.
A combined Rietveld refinement of the X-ray and neutron diffraction patterns was performed, in which all the atomic positions, the lattice parameters at 300 and 120 K, and isotropic Debye-Waller factors were included in the fit (the occupancies were constrained to be stoichiometric). The results of this combined Rietveld refinement (see Fig. 6(a) and (b)) confirm the suitability of the \(I4_{1}cd\) space group, with lattice parameters \(a=b=12.2610(12)\) Å, \(c=8.3741(10)\) Å found from the neutron diffraction pattern at \(T=120\) K, and \(a=b=12.3580(8)\) Å, \(c=8.4444(5)\) Å found from the X-ray diffraction pattern at room temperature. The resulting atomic positions, which are listed in Table 1, are in good agreement with Ref. [25]. The reliability factors are \(R_{F}=6.32\%\) and \(R_{wp}=8.76\%\) for the neutron data and
\begin{table}
\begin{tabular}{c c c c c c c} Atom & Site & \(x\) & \(y\) & \(z\) & Biso(Å\({}^{2}\)) & Occ \\ \hline \hline Pb & 8a & 0 & 0 & 0 & 0.76(7) & 0.5 \\ Co & 16b & 0.3322(13) & 0.3303(12) & 0.1707(10) & 0.76(7) & 1.0 \\ V & 16b & 0.2606(10) & 0.0716(7) & 0.0464(25) & 0.76(7) & 1.0 \\ O1 & 16b & 0.1458(5) & 0.4958(13) & -0.0556(9) & 0.90(17) & 1.0 \\ O2 & 16b & 0.3398(13) & 0.6710(15) & 0.4273(9) & 0.90(17) & 1.0 \\ O3 & 16b & 0.1594(17) & 0.6816(10) & 0.6643(10) & 0.90(17) & 1.0 \\ O4 & 16b & 0.3220(6) & 0.4987(11) & 0.1415(7) & 0.90(17) & 1.0 \\ \hline \end{tabular}
\end{table}
Table 1: Atomic coordinates and isotropic Debye-Waller factors obtained from the combined refinement of the neutron powder diffraction pattern of the crushed single crystal at \(T=120\) K (using DMC, PSI) and the X-ray powder pattern at \(T=300\) K from the powder sample (using Bruker D8, HZB). The refinement was performed in space group \(I4_{1}cd\) (# 110) yielding lattice parameters \(a=b=12.3580(8)\) Å, \(c=8.4444(5)\) Å at room temperature with \(R_{F}=10.5\%\) and \(R_{wp}=13.5\%\) from the X-ray data, and \(a=b=12.2610(12)\) Å, \(c=8.3741(10)\) Å at 120 K with \(R_{F}=6.32\%\) and \(R_{wp}=8.76\%\) from the neutron data.
Figure 6: Powder patterns of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) collected at high temperatures in zero magnetic field showing (a) the X-ray powder diffractogram for the crushed single crystal at room temperature, and (b) the neutron powder diffraction pattern from the powder sample at \(T=120\) K. In both cases the observed and calculated intensities are represented by the open red circles and solid black line respectively. The difference between them is given by the blue solid line and the Bragg peak positions for the \(I4_{1}cd\) space group are given by the vertical green lines.
\(R_{F}=10.50\%\) and \(R_{wp}=13.50\%\) for the X-ray pattern. As expected, the crystal structure of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) is very similar to those of SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), with the magnetic Co\({}^{2+}\) ions forming 4-fold screw chains running along the **c**-axis, with four chains per unit cell, two rotating clockwise and two anticlockwise.
For the magnetic structure determination, neutron powder diffraction patterns were collected using the \({}^{3}\)He insert at \(T=120\) K (\(\gg T_{N}\)) (see Fig. 7(a)) and \(T=0.3\) K (\(<T_{N}=3.80\) K) (see Fig. 7(b)). Additional Bragg reflections, which are magnetic in origin, appear at integer \(hkl\) positions below \(T_{N}\). The nuclear reflections occur at \(h+k+l=2n\), where \(n\) is an integer, due to the body-centered (\(I\)) symmetry. The magnetic peaks occur when \(h+k+l=2n+1\) and can be indexed by the propagation vector \(\kappa=(0,0,1)\) (see Fig. 7(c)). This is in agreement with reports on the isostructural compound SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [37] and also with the related compound BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [35].
To obtain the magnetic structure of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), space group analysis was performed using the BasiReps software from the Fullprof suite [38; 39]. The crystal structure is described within space group \(I4_{1}cd\) (\(\#\) 110), which has two centering operations and eight symmetry operations that leave the propagation vector invariant. The magnetic representation decomposes into four one-dimensional irreducible representations (IReps) \(\Gamma_{i=1-4}\) repeated three times each, and one two-dimensional IRep (\(\Gamma_{5}\)) repeated six times, _i.e._ \(\Gamma_{m}=3\Gamma_{1}^{1}+3\Gamma_{2}^{1}+3\Gamma_{3}^{1}+3\Gamma_{4}^{1}+6\Gamma_{5}^{2}\). The basis vectors of each irreducible representation for this space group with the propagation vector \(\kappa=(0,0,1)\) and Wyckoff site (16\(b\)) for Co\({}^{2+}\) are presented in Table 2. All possible magnetic structures were compared to the data collected at \(T=0.3\) K, and only \(\Gamma_{5}\) gave good fitting parameters (\(R_{F}=17.6\%\)). The resulting fit is shown in Fig. 7(b).
At \(T=0.3\) K the magnetic moment components on the Co\({}^{2+}\) ions of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) along the three directions are \(m_{a}=-0.104\pm 0.07\)\(\mu_{B}\), \(m_{b}=0.000\)\(\mu_{B}\), and \(m_{c}=1.436\pm 0.03\)\(\mu_{B}\), showing that the moments point predominantly along the **c**-axis with a small canting away from this axis of \(4^{\circ}\pm 3^{\circ}\). Within the error bar, this canting is the same size as the \(3.4^{\circ}\) canting of the principal (compressed) axis of the CoO\({}_{6}\) octahedra away from the **c**-axis. The projection of this structural canting onto the
Figure 8: The magnetic structure of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), showing only the CoO\({}_{6}\) octahedra. (a) The projection on the **b**-**c**-plane shows two of the four 4-fold screw chains for clarity; the spins (blue arrows) point almost parallel to the **c**-axis and are aligned antiferromagnetically along each chain. (b) The projection on the **a**-**b**-plane shows all four screw chains, where dark blue (light blue) circles represent spin up (spin down). Black arrows show the sense of rotation of each screw chain; the Co\({}^{2+}\) ions of each chain are numbered 1 to 4 according to their height along the chain, thus ions with the same number are located in the same **c**-plane. The spins of neighboring chains are aligned ferromagnetically (antiferromagnetically) along the **a** (**b**) axes respectively.
Figure 7: Neutron powder diffraction patterns from the powder sample at (a) \(T=120\) K and (b) \(T=0.3\) K (red dots). The solid black line gives the Rietveld refinement and the solid blue line gives the difference between the data and refinement. The top row of green vertical lines indicates Bragg peak positions for \(I4_{1}cd\). The next two rows correspond to the Bragg peaks of Cu (sample can) and Al (shielding). The last row in (b) indicates the magnetic Bragg peak positions corresponding to the ordering wavevector \(\kappa=(0,0,1)\), which are also labelled in pink. (c) The neutron diffractograms for both temperatures are overplotted for comparison in the region from \(20^{\circ}\) to \(60^{\circ}\) to highlight the magnetic peaks.
**a**-**b** plane follows the 4-fold rotation of the screw chains. It therefore seems natural to associate the canting of the Co\({}^{2+}\) spins with this structural canting, where the spins are constrained to point along the principal octahedral axis due to anisotropy. This was the situation found in BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), where the 4-fold canting gives rise to a complex g-tensor [13; 22].
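The quoted canting angle follows directly from the refined moment components:

\[\theta=\arctan\!\left(\frac{|m_{a}|}{m_{c}}\right)=\arctan\!\left(\frac{0.104}{1.436}\right)\approx 4.1^{\circ}.\]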
The magnitude of the ordered magnetic moment obtained from the refinement, \(\mu=1.44\)\(\mu_{B}\) at \(T=0.3\) K, is considerably smaller than the 3 \(\mu_{B}\) expected for Co\({}^{2+}\) (spin only, high-spin state \(S=3/2\)) and is also lower than the saturation magnetization of 2.75 \(\mu_{B}\) we observed for magnetic field along the **c**-axis. An explanation is that the Co\({}^{2+}\) moments are not fully ordered, as is frequently observed in quasi-one-dimensional antiferromagnets, where magnetic order is partially suppressed by quantum fluctuations.
Figure 8 shows the magnetic structure. The Co\({}^{2+}\) moments are aligned antiferromagnetically with respect to each other along each of the four 4-fold screw chains that run along the **c**-axis. Figure 8(a) gives a projection onto the **b**-**c**-plane, showing just two chains for clarity. Within the basal **a**-**b**-plane, the spins of neighboring chains are aligned ferromagnetically (antiferromagnetically) along the **a** (**b**) axes respectively (see Fig. 8(b)). This magnetic structure is very similar to that observed in BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [6; 7; 14; 35] and SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [8; 36; 37; 40].
One interesting point is that the space group of BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) (\(I4_{1}/acd\)) is centrosymmetric, while that of SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) (\(I4_{1}cd\)) is noncentrosymmetric and polar. This means that some ions must be displaced from their original positions in \(I4_{1}/acd\) to off-centered positions in \(I4_{1}cd\) in order to produce the polarity. Indeed, the symmetry of the Co\({}^{2+}\) site in SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) is reduced, and all the Co\({}^{2+}\)-O\({}^{2-}\) bonds to the surrounding octahedron are different, compared to BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) where there are only three unique bonds. Such additional displacements should affect the magnetic properties of these materials, resulting in changes in the superexchange interactions. Despite this, the properties of all three compounds are remarkably similar in terms of their magnetic structures and phase diagrams, and thus differences arising from the different space groups are very subtle.
To investigate the temperature dependence of the magnetic reflections, measurements at various temperatures were performed. The integrated intensities are shown in Fig. 9 and reveal that the magnetic intensity disappears above 4 K due to the loss of long-range magnetic order. The data were compared to the power law function \(I=I_{0}+A(1-\frac{T}{T_{N}})^{2\beta}\), where \(A\) is a proportionality constant. Magnetic systems are classified into several categories with different values of the critical exponent \(\beta\), such as \(\beta=0.326\) (3D Ising), \(\beta=0.35\) (3D XY), \(\beta=0.367\) (3D Heisenberg), and \(\beta=0.50\) (mean-field) [41; 42]. For PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), this equation was fitted to the data over the temperature range of 2.5 K to 3.5 K, keeping the critical temperature fixed at \(T_{N}=3.80\) K, the value obtained from our heat capacity (Section III.3) and magnetization (Section III.1) measurements. The fit yielded the exponent \(\beta=0.316\pm 0.06\). This value of \(\beta\) may indicate 3D Ising behavior, although the limited amount of data significantly reduces the reliability of this result. For comparison, the value \(\beta=0.33\pm 0.03\) was found for SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) [37] and \(\beta=0.307-0.328\) [6] or \(\beta=0.28\) [35] for BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\).
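The factor of \(2\beta\) in this fitting function follows from the fact that the magnetic Bragg intensity is proportional to the square of the ordered moment, which itself vanishes as a power law at \(T_{N}\):

\[M(T)\propto\left(1-\frac{T}{T_{N}}\right)^{\beta}\quad\Longrightarrow\quad I(T)\propto|M(T)|^{2}\propto\left(1-\frac{T}{T_{N}}\right)^{2\beta}.\]

This is also why Fig. 9 plots the square root of the integrated intensity, which is directly proportional to the order parameter.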
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline IR & \multicolumn{8}{c}{Symmetry Elements} \\ & \(\{1|000\}\) & \(\{2_{00z}|ppp\}\) & \(\{4^{+}_{00z}|0ps\}\) & \(\{4^{-}_{00z}|p0t\}\) & \(\{m_{x0z}|00p\}\) & \(\{m_{0yz}|pp0\}\) & \(\{m_{x-xz}|0pt\}\) & \(\{m_{xxz}|p0s\}\) \\ \hline \(\Gamma_{1}\) & 1 & -1 & i & -i & 1 & -1 & i & -i \\ \(\Gamma_{2}\) & 1 & -1 & i & -i & -1 & 1 & -i & i \\ \(\Gamma_{3}\) & 1 & -1 & -i & i & 1 & -1 & -i & i \\ \(\Gamma_{4}\) & 1 & -1 & -i & i & -1 & 1 & i & -i \\ \(\Gamma_{5}\) & \multicolumn{8}{c}{two-dimensional representation (\(2\times 2\) matrices, not reproduced here)} \\ \hline \hline \end{tabular}
\end{table}
Table 2: Irreducible representations (IRs) of the space group \(I4_{1}cd\) for the propagation vector \(\kappa\) = (0,0,1). The symmetry elements are written in Seitz notation.
Figure 9: The temperature-dependence of the normalized square root of the integrated intensity of the (110), (\(21\bar{1}\)), (\(200\)) magnetic Bragg peaks of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). The solid red line corresponds to the fitted curve of the critical exponent equation.
## Summary
To conclude, we have synthesized powder and, to our knowledge, the first single crystal samples of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), which we have investigated using magnetization, specific heat, and neutron diffraction. The crystal structure of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) gives rise to 4-fold screw chains of magnetic Co\({}^{2+}\) ions along the \(\mathbf{c}\)-axis. In zero magnetic field, long-range magnetic order takes place at \(T_{N}=3.80\) K and the moments order antiferromagnetically along the chains, with the spins canted by a small amount from the \(\mathbf{c}\)-axis. We confirm the presence of a Heisenberg-Ising (XXZ-type) anisotropy, and the application of a magnetic field gives rise to a complex series of new phases that are different for the \(H\|\mathbf{c}\), \(H\|\mathbf{a}\), and \(H\|[110]\) directions. We have constructed detailed phase diagrams for all three directions up to 11.5 T and have also explored the behavior at high fields. Apart from having a slightly smaller energy scale, PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) shows many similarities to SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) and BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). The resemblance of their phase diagrams for \(H\|\mathbf{c}\) and \(H\|\mathbf{a}\) suggests that the same magnetic phases occur in all three compounds for these directions. However, the phase diagram for the \(H\|[110]\) field direction reveals a new phase for PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) at \(\mu_{0}H\approx 10\) T, which has not been previously reported in either SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) or BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\). The nature of this phase is unknown, and it is not clear why it appears in PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) but not in the other compounds. One possibility is that the phase is in fact present in SrCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) or BaCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), but that their higher energy scale drives it to higher fields, making it inaccessible because the low temperatures necessary for its observation are unavailable in high-field magnets; thus it has been missed until now.
As model spin chain materials with XXZ anisotropy and complex g-tensors, the ACo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) (A = Sr, Ba) compounds display remarkably rich phenomena and have been used to test and explore several different fundamental physics ideas. Finding a new member of this family raises the possibility of discovering new phenomena and further refining theories. To this end, we plan to investigate the origin of the new phase for \(H\|[110]>10\) T in PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\), starting with single crystal neutron diffraction in the near future.
###### Acknowledgements.
We acknowledge the Core Lab Quantum Materials (CLQM), Helmholtz-Zentrum Berlin für Materialien und Energie (HZB), Germany, where the powder and single crystal samples of PbCo\({}_{2}\)V\({}_{2}\)O\({}_{8}\) were synthesized and measured. The authors acknowledge the support of the Hochfeld-Magnetlabor Dresden at Helmholtz-Zentrum Dresden-Rossendorf (HLD-HZDR), a member of the European Magnetic Field Laboratory (EMFL). This work is also partly based on experiments performed at the Swiss spallation neutron source SINQ, Paul Scherrer Institute, Villigen, Switzerland.
|
2308.16730 | Proof of Deep Learning: Approaches, Challenges, and Future Directions | The rise of computational power has led to unprecedented performance gains
for deep learning models. As more data becomes available and model
architectures become more complex, the need for more computational power
increases. On the other hand, since the introduction of Bitcoin as the first
cryptocurrency and the establishment of the concept of blockchain as a
distributed ledger, many variants and approaches have been proposed. However,
many of them have one thing in common, which is the Proof of Work (PoW)
consensus mechanism. PoW is mainly used to support the process of new block
generation. While PoW has proven its robustness, its main drawback is that it
requires a significant amount of processing power to maintain the security and
integrity of the blockchain. This is due to applying brute force to solve a
hashing puzzle. To utilize the computational power available in useful and
meaningful work while keeping the blockchain secure, many techniques have been
proposed, one of which is known as Proof of Deep Learning (PoDL). PoDL is a
consensus mechanism that uses the process of training a deep learning model as
proof of work to add new blocks to the blockchain. In this paper, we survey the
various approaches for PoDL. We discuss the different types of PoDL algorithms,
their advantages and disadvantages, and their potential applications. We also
discuss the challenges of implementing PoDL and future research directions. | Mahmoud Salhab, Khaleel Mershad | 2023-08-31T13:49:04Z | http://arxiv.org/abs/2308.16730v1 | # Proof of Deep Learning: Approaches, Challenges, and Future Directions
###### Abstract
The rise of computational power has led to unprecedented performance gains for deep learning models. As more data becomes available and model architectures become more complex, the need for more computational power increases. On the other hand, since the introduction of Bitcoin as the first cryptocurrency and the establishment of the concept of blockchain as a distributed ledger, many variants and approaches have been proposed. However, many of them have one thing in common, which is the Proof of Work (PoW) consensus mechanism. PoW is mainly used to support the process of new block generation. While PoW has proven its robustness, its main drawback is that it requires a significant amount of processing power to maintain the security and integrity of the blockchain. This is due to applying brute force to solve a hashing puzzle. To utilize the computational power available in useful and meaningful work while keeping the blockchain secure, many techniques have been proposed, one of which is known as Proof of Deep Learning (PoDL). PoDL is a consensus mechanism that uses the process of training a deep learning model as proof of work to add new blocks to the blockchain. In this paper, we survey the various approaches for PoDL. We discuss the different types of PoDL algorithms, their advantages and disadvantages, and their potential applications. We also discuss the challenges of implementing PoDL and future research directions.
Proof-of-Work, consensus mechanism, Proof-of-Deep-Learning, machine learning, blockchain.
## I Introduction
Blockchain technology at its core is a type of distributed ledger that enables multiple users to achieve consensus without the need for a central authority. Additionally, it ensures the immutability of the stored records, thereby providing tamperproofness. Bitcoin [8] is the most famous application that uses Proof of Work (PoW) as its consensus mechanism. The consensus algorithm is a crucial component of a blockchain network system. It helps make the system decentralized, transparent, auditable, secure, and tamper-resistant. The algorithm works by offering incentives to users who participate in the network, ensuring that everyone follows the rules and works towards the same goal [37].
Miners of Bitcoin consume a lot of computational resources due to the large amount of hash calculation required by PoW [28, 29]. According to [1], the annual electrical energy consumed by the Bitcoin network is estimated to be 87.2 TWh as of 2019, which is similar to the consumption of a country such as Belgium. To mitigate the amount of energy required, various solutions have been proposed, such as using Application-Specific Integrated Circuit (ASIC) machines [33] or using different consensus algorithms instead of PoW, such as Proof of Stake (PoS) [28], Proof of Activity (PoA) [46], and Proof of Useful Work (PoUW) [33].
After the introduction of the PoUW concept, which aims to redirect the computational power otherwise wasted on solving hash puzzles in the PoW algorithm toward meaningful work, researchers have been exploring ways to employ these resources for useful tasks. For example, Primecoin [35] is a type of altcoin that utilizes a unique mining process where miners are required to find specific prime number sequences known as Cunningham chains, as an alternative to the usual hash puzzles. While the discovery of Cunningham chains through this mining process holds mathematical and research significance, its practical applications in the real world remain uncertain [12]. Similarly, Proof of Exercise (PoX) is a mechanism outlined in [36] that falls under the PoUW category, requiring miners to perform specific exercises and provide proof of their outcomes. However, one significant limitation of the PoX mechanism is its reliance on an outsourced centralized board, which diminishes the decentralization of the blockchain.
Developing high-performance deep learning (DL) models often requires significant computational and memory resources [19, 34]. To address this, researchers have proposed a novel approach in which the computational power traditionally spent on proof of work in blockchain is used to train DL models instead. In Proof of Deep Learning [11], the supervised training of DL models is used to secure the blockchain. However, the model proposed in [11] relies heavily on the integrity of the data provider. The authors in [2] surveyed the benefits of integrating deep learning and blockchain for data security, automatic decision-making, cumulative judgments, and enhanced robustness. They also touched on the concept of proof of deep learning but did not go into much detail.
In this study, we explore the nascent idea of using deep learning as proof of work, a field still in its formative years with substantial potential for growth. Our research delves into the latest advances in this area and examines the various types of PoDL algorithms, including their benefits, drawbacks, and potential applications. Furthermore, we investigate the obstacles associated with implementing PoDL and identify potential areas for future research. To the best of our knowledge, this is the first paper that provides a comprehensive overview of the latest advances in PoDL. The paper is intended to be a valuable resource for researchers who are interested in learning more about this rapidly evolving field.
The paper is structured as follows: we start in Section II by giving a high-level background on blockchain and deep learning. In Section III, we delve into the details of PoDL. Following this, Section IV examines the challenges associated with PoDL systems. In Section V, we discuss the current state and potential future directions of PoDL. Finally, we summarize our findings and conclude the paper in Section VI.
## II Background
### _Cryptocurrencies and Blockchain_
The blockchain is a distributed network of interconnected nodes that operate in a peer-to-peer manner, enabling the transfer of digital assets without the need for any intermediaries [3]. Blockchain was originally built to support the work of the most famous cryptocurrency, Bitcoin [4].
Blockchain is a publicly distributed ledger that comprises a chain of blocks that store transactions. The chain expands continuously when a new block is added to it [5]. All the transactions are processed in a decentralized way, which eliminates the need to have intermediaries to validate and verify them [6].
Decentralization, transparency, immutability, and auditability are among the essential features of blockchain technology, as outlined in [7].
In the Bitcoin blockchain, the system comprises a list of blocks where each block consists of three main items [8]:
* A Merkle tree representing the transaction data.
* The previous block's cryptographic hash; for the first block (the "Genesis Block") this value is hard-coded.
* A nonce number used for consensus and validation.
For a blockchain to be considered valid, all of its blocks must be valid. A block, in turn, is considered valid if:
* Each transaction in the block is valid.
* Contains the hash of the previous block (except for the first block).
* The hash of the entire block (i.e., the hash of the concatenation of the nonce, the hash of the previous block, and the Merkle tree) is less than a pre-defined value, which is called the target difficulty.
The target difficulty of a blockchain is frequently updated to maintain a consistent block generation rate (for example, 10 minutes in Bitcoin) based on the network hash rate [8].
Bitcoin leverages the Hashcash algorithm [43], which entails the miners competing to validate a block by solving a hash puzzle. Miners compete to find the nonce value that meets the target difficulty. The hash function SHA-256 is utilized in Bitcoin due to its puzzle-friendliness property [33].
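A minimal sketch of this hash puzzle is given below; the header layout and the placeholder hash strings are illustrative assumptions, not Bitcoin's actual serialization format.

```python
import hashlib

def mine(prev_hash: str, merkle_root: str, target: int):
    """Brute-force a nonce so that SHA-256(header) falls below the target."""
    nonce = 0
    while True:
        header = f"{prev_hash}{merkle_root}{nonce}".encode()
        digest = hashlib.sha256(header).hexdigest()
        if int(digest, 16) < target:  # puzzle solved: block hash meets the difficulty
            return nonce, digest
        nonce += 1

# toy difficulty: the hash must start with four hex zeros (~16^4 tries on average)
target = int("0" * 4 + "f" * 60, 16)
nonce, digest = mine("00ab" + "0" * 60, "3f9c" + "0" * 60, target)
print(nonce, digest)
```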
The process of hash puzzle solving, which is known as Proof of Work (PoW), is expensive and requires substantial computational power as well as specialized hardware. All miners compete to propose the next block, and the first node that solves the puzzle is compensated for its effort. The reward is paid in bitcoins, where the winning node receives two types of rewards, which are [44]:
* **Transaction fees**: These fees are paid by the creators of the block transactions to the miners.
* **Block reward**: The miner of the block mints a number of bitcoins, which decreases over time.
In the absence of a central node that everybody trusts to ensure that all nodes hold the same ledger, there is a need for a process by which nodes agree on the truth in such a trustless distributed network. In blockchain, this process is known as consensus. Below are some of the conventional methods for achieving consensus in the blockchain:
* **Proof of Work (PoW)**: Participants, also known as miners, perform an exhaustive search (i.e., brute force) to find a nonce value that meets the target hash. Once it is found, the node that solved the puzzle disseminates the block throughout the network, where it is validated and, once validated, added to the blockchain. The primary concern with the PoW approach is that miners are required to expend substantial resources in order to solve the puzzle and generate a block.
* **Proof of Stake (PoS)**: In this method, miners do not have to solve a mathematical puzzle, hence it is computationally efficient. Validation is performed by specifically selected nodes, and the chance of a node being selected is mainly correlated with its stake, i.e., its wealth.
* **Delegated proof-of-stake (DPoS)**: In this elective consensus process, nodes that hold a stake in the network have the option to delegate transaction validation to another node through a voting process.
* **Byzantine Fault Tolerance**: Involves a group of nodes agreeing on a collective course of action such as validating a transaction, and was developed based on research on the Byzantine fault [39].
* **Proof of Authority**: Similar to BFT, PoA involves delegating specific nodes in the network with authoritative control to achieve consensus based on majority votes when validating a block.
* **Proof of Elapsed Time**: Similar to PoA, PoET chooses a leader to generate new blocks in the blockchain by linking response time to a timer and selecting the node with the shortest expiration time as the leader.
* **Proof of Burn**: Requires validators to spend or burn their coins to create a new block and receive a reward. This process enhances the validator's stake in the network, while also reducing the number of coins in circulation and increasing the value of the remaining coins due to the coin-burning mechanism.
* **Proof of Importance**: Similar to PoA, this consensus mechanism selects validating nodes based on their stake
in the network, but in this case, the stake is determined by their history of successful transactions and validations.
* **Proof of Capacity**: Also known as Proof of Space or Proof of Storage, it operates as an alternative to PoW by storing all possible nonce values on the hard drives of participating nodes.
### _Deep Learning_
As a result of the increase in computational power and data accessibility, deep learning [32] has achieved significant success across a range of application domains.
There are four categories into which various forms of deep learning can be grouped, as noted by [47]:
* Deep Supervised Learning, which utilizes labeled data for training purposes.
* Deep Semi-supervised Learning, which operates on partially labeled data.
* Deep Unsupervised Learning, which does not rely on labeled data during the training process.
* Deep Reinforcement Learning, which is a technique used for learning in unfamiliar environments.
Consider a model that maps input features X to a target Y, where X represents the input data and Y is the target or label of X, which can be continuous in the case of regression tasks and discrete in the case of classification tasks. Different optimization methods, such as SGD, Adagrad, AdaDelta, RMSprop, and Adam, can be used to train the model on the given data [48]. These optimization techniques minimize an objective function, which measures how good the model's predictions are. It is important to note that this function must be differentiable, as these optimization methods use gradients to minimize (or maximize) the objective function.
The training process is done by iteratively performing the below steps:
1. In the forward pass, the neural network processes the input data and generates predictions by applying its weights and activation functions to the input.
2. The cost function evaluates the disparity between the predicted and actual outputs. This is used as a measure of how well the neural network is performing.
3. Gradients of the cost function with respect to the network weights are calculated via backpropagation (the backward pass).
4. The gradients calculated in step 3 are used to update the weights of the neural network, which helps enhance the network's performance. This process is repeated many times until the network reaches an acceptable level of accuracy.
Training a model on a large dataset can be challenging as it may not be feasible to pass the entire dataset at once. Therefore, the dataset is split into smaller batches that can be used iteratively to train the model. An epoch refers to the point where all batches have passed through the model once during the training process. Typically, model training involves multiple epochs rather than a single epoch.
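The four steps above, organized into minibatches and epochs, can be condensed into a minimal sketch; the synthetic data and logistic-regression model are illustrative stand-ins for a real deep network.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                                    # input features
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.0]) > 0).astype(float)  # labels

w, lr, batch_size, epochs = np.zeros(5), 0.1, 32, 5
for epoch in range(epochs):                  # one epoch = every batch seen once
    for i in range(0, len(X), batch_size):
        xb, yb = X[i:i + batch_size], y[i:i + batch_size]
        p = 1 / (1 + np.exp(-(xb @ w)))      # step 1: forward pass
        loss = -np.mean(yb * np.log(p + 1e-9)
                        + (1 - yb) * np.log(1 - p + 1e-9))  # step 2: cost
        grad = xb.T @ (p - yb) / len(xb)     # step 3: gradients of the cost
        w -= lr * grad                       # step 4: weight update
    print(f"epoch {epoch}: last-batch loss {loss:.3f}")
```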
In order to prevent overfitting, where the model memorizes the training data, it is essential to split the data into training and testing sets. In practical scenarios, the data is further split into three sets:
* **Training set**: This is the data that the model will be trained on.
* **Validation set**: Used during the training process to select the parameters that lead to the best performance.
* **Test set**: Used to test the actual model performance.
The training and validation data are utilized to train the model and fine-tune the hyperparameters. On the other hand, the test data is leveraged to evaluate how the model performs on unseen data.
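A minimal sketch of such a three-way split (assuming scikit-learn is available; the 70/15/15 proportions and random data are arbitrary examples):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(1000, 5), np.random.randint(0, 2, 1000)
# first carve off 30%, then halve it into validation and test sets
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)
print(len(X_train), len(X_val), len(X_test))  # 700 150 150
```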
A diverse set of metrics are available for evaluating the efficacy of models, such as Accuracy, Precision, Recall, and F1-score, among others.
## III Proof of Deep Learning (PoDL)
In this section, we present a summary of how PoDL operates and the key participants involved in the PoDL-based network. We conducted a thorough investigation of existing research and examined the strategies employed to address the following aspects:
* Approaches for rewarding honest mining participants.
* Methodologies for task initiation and model submission.
* Block validation and acceptance criteria, along with techniques for verifying the entire blockchain.
* Techniques employed to handle short model training time and enable incremental training across blocks.
* Approaches used to tackle challenges such as double spending, self-publishing, data privacy, and model size.
The original PoW technique requires nodes to compete to find a nonce value such that the resulting block hash meets certain criteria. This process is computationally expensive and does not put the computational resources to useful work. To address this issue, PoDL was proposed. In PoDL, nodes compete to train deep learning models and use the trained models as evidence of the work they have done. This puts the computational resources to productive use while keeping the blockchain robust against tampering, since it remains computationally infeasible for malicious nodes to tamper with any block.
Most of the proposed techniques such as in [10, 11, 12, 13, 14, 15, 16, 17] rely on three main participants for achieving consensus in PoDL, which are:
* **Miners**: The nodes within the network contend with one another to append new blocks onto the chain. In the original PoW consensus mechanism, miners compete to resolve a computationally difficult hash puzzle. In the PoDL consensus mechanism, miners contend to submit trained models as proof of work. For this reason, some researchers refer to them as "trainers" (e.g., [10]).
* **Full nodes**: These nodes uphold the blockchain and authenticate the work of miners by scrutinizing their performance (e.g., accuracy, precision, recall, etc. of the
submitted models). Some researchers such as in [10] call these nodes "validators" because they handle the validation of the submitted models.
* **Task/Data publishers**: These are the nodes in the network that are responsible for publishing machine learning training tasks to the network. For this reason, some researchers refer to them as "suppliers" (e.g., [10]) while others call them "model requesters" such as in [11]. These tasks include the training data, testing data, metrics, the minimum threshold for each metric, the model architecture to be trained, and any other hyper-parameters.
PoDL was originally proposed in [11] as a method for training deep learning models on decentralized networks. The workflow of PoDL can be divided into seven phases:
1. **Training data release**: The data publisher releases the training data, hyperparameters, pre-trained model (if any), and any other necessary requirements for conducting the training to all miners.
2. **Training**: The miners train the released model (if there is any) on the released training data.
3. **Block header submission**: Once the miners finish the training, they compute the hash of the trained model's parameters and then send the block header to the full nodes.
4. **Testing data release**: The data publisher releases the test data (or part of it) to all nodes.
5. **Assessing model performance**: The miners then use the test data to calculate their trained model's accuracy.
6. **Model submission**: Once the miners have finished calculating the accuracy, the trained model, together with the block, is submitted for validation.
7. **Block selection**: The validators then assess the submitted models and sort them by accuracy, accepting the model with the highest accuracy. In the event of a tie, the block sent first is chosen, or, as some researchers prefer, the one with the smallest model size, as in [10].
Figure 1 illustrates an overview of the workflow described above from the task generation by the task publisher till the block gets added to the blockchain by the full nodes.
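The commit-then-reveal structure of phases 3-6 can be sketched as follows; the header fields, the stand-in training and evaluation routines, and all hash strings are hypothetical, not the exact format used in [11].

```python
import hashlib
import pickle
import numpy as np

def params_hash(params: np.ndarray) -> str:
    """Commitment to the trained model: hash of its serialized parameters."""
    return hashlib.sha256(pickle.dumps(params)).hexdigest()

# stand-ins for phase 2 (training) and phase 5 (evaluation)
rng = np.random.default_rng(1)
trained_params = rng.normal(size=100)       # pretend output of model training
def accuracy(params, test_set):             # pretend test-set evaluation
    return 0.91

header = {
    "prev_block_hash": "00ab" + "0" * 60,   # hypothetical placeholder
    "merkle_root": "3f9c" + "0" * 60,       # hypothetical placeholder
    "model_hash": params_hash(trained_params),
}
# phase 3: broadcast `header` to the full nodes *before* the test data exists,
#          binding the model to this block and preventing later substitution
# phase 4: the task publisher releases the test set
claimed_acc = accuracy(trained_params, test_set=None)    # phase 5
submission = {"header": header, "params": trained_params,
              "claimed_acc": claimed_acc}                # phase 6: broadcast
```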
### _Design of Reward_
In the original PoW-based blockchain, the miners are rewarded with transaction fees and block rewards for successfully solving a computationally difficult puzzle [30].
In PoDL, several approaches for reward distribution have been proposed. For example, in [10], the authors proposed a PoDL blockchain where miners are rewarded by task publishers for training the best-performing model for a given task. The PoDL blockchain validators are compensated equally with a transaction fee provided by the task publisher, as well as with new WekaCoins. The proposed method also prevents the task publisher from participating in the competition (e.g., by training on the test data), because the task publisher would have to pay both itself and the validators, which is infeasible from the task publisher's point of view.
In [16], the authors designed a reward mechanism that incentivizes miners to train models for task publishers. In this system, task publishers pay miners to train a model on their data. The miners are rewarded with the publisher's payment. No new coins are generated in this process, which prevents the task publisher from participating in the competition.
### _Task initiation process_
Task initiation is the process of requesting a model to be trained on a specific dataset. In [10], researchers proposed a special transaction called a "task publication transaction" while in [17] the authors call it "data transaction". In this transaction, the task publisher proposes a challenge to train a model. This transaction contains the information required to complete the task, such as the dataset, the desired performance, and the reward for completing the task.
Once the task transaction is ready to be instantiated, it is important to digitally sign both the transaction and the data [10]. This prevents any malicious intermediary in the network from manipulating the data in ways that cause other nodes to work on corrupted tasks. For example, an attacker could modify the data in a way that causes other nodes to train on incorrect data. This would waste the resources of the other nodes and increase the attacker's chance of winning the next block.
Fig. 1: Proof of deep learning (PoDL) Workflow.
### _Model submission process_
Once a miner node completes the training process, it informs the network that a trained model is ready; for this purpose, the authors in [10] used a special transaction called a "model transaction," wherein the trainer puts forth a solution for a specific task. The model transaction is digitally signed to prevent any manipulation by intermediaries, similar to the task initiation transaction, which is also digitally signed. The digital signatures ensure that the transactions have not been tampered with and that they were created by an authorized sender. This contributes to the fairness and integrity of the competition data [31].
### _Validation and Block Acceptance Criteria_
In the original PoW-based blockchain, the first miner that proposes a valid block gets its block appended to the blockchain [8]. The work acceptance criterion in PoDL instead depends on the metrics specified by the task publisher. Once the task publisher releases the testing data, the miners send the calculated metrics, together with the model, to the full nodes. The latter check the metrics and sort all of the submitted models by accuracy; the most accurate model is then accepted by the full nodes. Furthermore, to ensure that no node steals a trained model from other nodes during the model submission process, all miners are first required to submit the block header, which includes the hash of the model parameters as well as, in some cases, the claimed result. This ensures that even if a node steals a trained model from another, the theft is detected: when the stolen model is submitted, the validator checks the submitted header as well as the model, by passing the test set to the model and checking whether the output matches the results claimed in the block header and whether the parameters hash to the committed value. If either check fails, or if a miner submits a trained model without having submitted the block header, the block is rejected. Additionally, since the test data is published only after the block headers are submitted by the miners, any model subsequently trained on the test data will produce a mismatch between its actual results and the ones claimed in the block header [18].
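A minimal validator-side sketch of these checks, reusing the (hypothetical) `params_hash` helper and `submission` layout from the miner sketch above:

```python
def verify_submission(sub, test_set, tol=1e-6) -> bool:
    """Reject stolen or substituted models and unreproducible accuracy claims."""
    if params_hash(sub["params"]) != sub["header"]["model_hash"]:
        return False                         # parameters don't match the commit
    measured = accuracy(sub["params"], test_set)
    return abs(measured - sub["claimed_acc"]) <= tol

# keep only valid submissions, then accept the most accurate one
valid = [s for s in [submission] if verify_submission(s, test_set=None)]
best = max(valid, key=lambda s: s["claimed_acc"])  # assumes at least one is valid
```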
In [13], an extra layer called the secure mapping layer (SML) was used to prevent model theft. This layer allows nodes to share their models in the network without having to submit the block header beforehand as described above, since the SML is treated as part of the model. The SML takes the input data concatenated with the current and previous block hashes as the input to the model. Consequently, the input features of the model depend on miner-specific information via the coinbase. Thus, model stealing degrades model performance, as the node attempting to steal the model would introduce features unseen during training, leading to poor results.
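A minimal sketch of this idea is shown below; the bit-extraction scheme is an illustrative assumption, not the exact construction of [13].

```python
import hashlib
import numpy as np

def hash_bits(block_hash: str, n_bits: int = 32) -> np.ndarray:
    """Turn a block hash into a fixed-length bit vector of extra input features."""
    digest = hashlib.sha256(block_hash.encode()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bits[:n_bits].astype(float)

def sml_input(x: np.ndarray, cur_hash: str, prev_hash: str) -> np.ndarray:
    # the model only ever sees data features concatenated with hash-derived bits,
    # so its learned weights become tied to the miner's own block hashes
    return np.concatenate([x, hash_bits(cur_hash), hash_bits(prev_hash)])

x = np.random.rand(10)
features = sml_input(x, "miner-specific-coinbase-hash", "00ab" + "0" * 60)
print(features.shape)  # (74,) = 10 data features + 2 * 32 hash bits
```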
### _Short time training handling and incremental training_
In many blockchain systems, the pace of block creation is fixed; for example, the Bitcoin network generates a new block every 10 minutes [8]. However, complex deep learning models can take days or even weeks to train, especially if the model or the dataset it is trained on is large, or if the desired accuracy is high. As a result, miners may not be able to complete more than a single epoch of training during this time period. This can lead to low-accuracy models, as miners are not able to train their models on enough data or for long enough periods of time.
In order to address this issue, researchers have proposed a number of solutions. One approach, proposed in [11] and adopted in [18], is to allow miners to train their models over multiple blocks. In this approach, the task publisher does not collect the trained model until the accuracy of the model has stopped changing over a number of blocks. This ensures that miners have enough time to train their models on enough data to achieve the desired accuracy. However, there is a potential for cheating in this approach. Since the task publisher exposes the test data when the first block is mined, a malicious node could cheat by training its model on data from the test set and achieving the highest accuracy in the next block. To prevent this, it is important for the task publisher to publish a fresh test set for each block. This allows validators to validate the work at each block: miners keep training their models on the same training set, but they are evaluated on a fresh test set each time. This helps ensure that miners do not cheat by fitting their models to previously released test data.
In [14], the authors used a similar approach, but instead of relying on the accuracy settling and not changing over a certain number of blocks, they used short-term target accuracies and a desired accuracy. The desired accuracy takes longer to achieve than the block generation rate, and it is reached by incremental improvements based on short-term goals. These short-term targets can be achieved within a pre-defined time window that is less than or equal to the block generation rate, and they are based on the results of the last task. For example, if the last task achieved 90% accuracy while the desired accuracy is 95%, the next short-term target might be 92%.
Other researchers have made assumptions in their designs that prioritize the block generation rate over the model training. For example, in [12, 15], the authors prioritize the block generation rate by assuming that the model training can be interrupted. This assumption is based on the fact that many machine learning models are trained using gradient-based optimization, which can be interrupted without losing much progress as long as the miners save the best checkpoint.
### _Double Spending Prevention_
Double spending happens when the same amount of money is spent more than once and the transaction is successfully processed for each spending. This is done by transferring a certain amount of coins (in the case of cryptocurrencies) to user A, and then transferring the same coins to user B [26, 27]. Double spending attacks are deterred by the block acceptance policy mentioned previously, whereby only the block created by the trainer that trained the most accurate model is accepted by the validators. The authors in [11] used the same techniques and found that double spending is unlikely to succeed even when the majority (i.e., 51%) of full nodes are controlled or owned by a single entity. This is because the global optimum is not known in advance, so when only the most accurate models are accepted, it becomes difficult to improve upon the best-performing model. Their claim is supported by the fact that the model's accuracy depends on randomly chosen hyperparameters and initial weights.
In DLchain [14], an attacker can launch a double spending attack by forking the blockchain several blocks behind and then building a longer chain above that forked block. However, this attack is unlikely to be successful because the honest miners will be able to build a longer chain on top of the main chain faster than the attacker can; this is similar to the analysis conducted in the original Bitcoin paper [8]. As a consequence, the attacker will never be able to successfully double-spend any transactions.
The authors in [10, 12, 13, 16] did not consider a double spending attack that could be carried out by exploiting the test data released by the task publisher. Since the test data is released to the network by the task publisher to allow the validators to validate the results, a malicious node could perform a double-spending attack by training a model on data from the test set. The number of examples in the test set is always substantially smaller than in the training data, so the computational resources needed to train the model on it are negligible. If the attacker can create a fork in the blockchain, block generation on the forked chain will be faster than that of the honest miners who generate blocks on the primary chain. Furthermore, this becomes critical if the intruder controls the majority of the network's nodes. This kind of attack is also possible even if the training process is carried out over multiple blocks; in this case, the attacker might only train on a subset of the test data, to ensure that its test result is higher than that of the best-performing model. To prevent such an attack, the authors in [11] mandated that all models submitted to the competition must be reproducible. This means that, in order to verify the block, any full node must be able to retrain the winning model on the training set.
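A minimal sketch of this reproducibility check, with a deterministic stand-in for the real training routine:

```python
import hashlib
import pickle
import numpy as np

def train(seed: int, steps: int) -> np.ndarray:
    """Deterministic stand-in for training: a fixed seed fixes the outcome."""
    rng = np.random.default_rng(seed)
    w = np.zeros(4)
    for _ in range(steps):
        w += 0.01 * rng.normal(size=4)
    return w

def phash(w: np.ndarray) -> str:
    return hashlib.sha256(pickle.dumps(w)).hexdigest()

committed = phash(train(seed=42, steps=100))          # miner's published result
assert phash(train(seed=42, steps=100)) == committed  # full node reproduces it
```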
### _Blockchain verification_
In the original blockchain proposed by Nakamoto [8], the blockchain can be validated by verifying the proof of work by recalculating the hash of each block.
In PoDL, to check the validity of the entire blockchain (e.g., by newcomer full nodes), full nodes require access to the trained models as well as the data. The chain is checked block by block: for each block, the node verifies the claimed accuracy, the model trained by the winning miner, and the generated block itself. To make the verification process easy for any new node, it is necessary for the task publisher and full nodes to upload the required data and models so that new full nodes can download them. This approach was adopted by several authors, such as in [12, 15].
In addition to the above criteria, in the approach proposed in [11], the miners are required to provide all the parameters and configurations used, such as hyperparameters, initial weights, number of epochs, etc. This is to give the full nodes the flexibility and ability to verify their work by reproducing and retraining the model.
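Putting these pieces together, whole-chain verification can be sketched as a walk over consecutive blocks; `block_hash`, the chain layout, and the injected `verify_submission` callback are all hypothetical.

```python
import hashlib
import json

def block_hash(header: dict) -> str:
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def verify_chain(chain, verify_submission, test_sets) -> bool:
    for i in range(1, len(chain)):
        hdr = chain[i]["header"]
        if hdr["prev_block_hash"] != block_hash(chain[i - 1]["header"]):
            return False      # broken linkage: the chain has been tampered with
        if not verify_submission(chain[i]["submission"], test_sets[i]):
            return False      # the winning model's claim does not reproduce
    return True
```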
### _Model size handling_
Deep learning models can vary in size from a few kilobytes to several gigabytes, so storing their parameters can consume significant storage. However, there are a variety of methods that can be used to minimize the total storage required, such as:
* **Model compression**: Many techniques have been used to minimize the trained model size without impacting the accuracy of the models such as in [20, 21, 22, 23, 24, 25].
* **Limiting the model size**: The miners or task publishers are limited to a certain model size (e.g., 10 MB per model in [20, 21]). This can help to reduce the overall storage needed.
* **Removing low-performing models**: This can be done if the model training process is done over multiple blocks. If a model is not performing well, it can be removed without jeopardizing the integrity of the blockchain. This is because the tamper-proofing of the blockchain is guaranteed by the high-accuracy models. This approach was used in [11, 12, 15].
### _Data privacy_
According to [38], individuals have the right to regulate the collection, utilization, and distribution of their personal information, which is also known as data privacy, data protection, or information privacy.
The training and testing data published by the task publisher may contain sensitive information. The authors in [13] proposed a solution to this problem by using data encryption. They utilized two forms of ciphertext: 1) inner-product functional encryption (IPFE) [40] and 2) IPFE with function hiding (IPFE-FH) [41]. This protects the privacy of the data and prevents it from being abused.
### _Selfish Publisher Attack_
This kind of attack was first discussed in [14]. Since the identities of the miners and the task publisher are not known, there is nothing to prevent the task publisher from participating as a miner node.
This attack is common when the training process is done over multiple blocks and the desired accuracy is achieved by setting short-term goals for each block. The attack is carried out when the task publisher has already pretrained a model in order to win the block reward. The publisher can carry out the attack in two different ways:
1. Set the short-term goal to be easy to win the reward using its pre-trained model.
2. Set the short-term goal to a high value in such a way the training takes a long time to reach the desired target, and the publisher can generate valid blocks with its pre-trained model.
For the first attack, the authors in [14] addressed it by discarding blocks that are generated too quickly, since overly short block generation intervals indicate that the difficulty of the task is too low.
The second attack is very difficult to carry out after a certain accuracy has been reached. This is because there is competition between nodes, and the attacker must itself produce a well-trained model, which requires substantial resources. As a result, it may be infeasible to carry out the attack.
If task publishers pay nothing or only a small amount of money for the submitted tasks while there is a block reward and transaction fees, this type of attack may be feasible. However, Section III-A discusses how the reward structure can be designed to prevent such attacks.
## IV PoDL Challenges
The authors in [42] studied the challenges of proof-of-useful-work (PoUW) systems. These challenges can be summarized and reflected in PoDL as follows:
* **Block sensitivity and non-reusability**: Without these properties, it would be possible to pre-calculate future blocks using existing proofs-of-work. To retain block sensitivity, the hash from the preceding block must be used in the current block. To retain non-reusability, it is necessary to bind the validity of a PoW to the block it validates.
* **Adjustable problem hardness**: The hardness of the given problems can be adjusted to meet the target difficulty, ensuring a consistent block generation rate. This is crucial in PoDL: with a large dataset and a complex model, even the best-performing node (in terms of computational power) may not be able to complete a short-term goal, or even a single epoch.
* **Fast verification**: A full node needs to have the capability to quickly verify the validity of a block proposed by the miners, which is done by checking the proof-of-work. In the case of PoDL, requiring the full nodes to retrain the model to verify the work done may be impractical: with a complex model and large data, verification slows down, which in turn allows attackers to perform spam attacks on the network.
* **Problem is parallelizable**: Miners should be able to utilize their full computational power to propose the next block. In PoDL, this can be done by utilizing vectorization to optimize and speed up the training process.
## V Discussion and Future Directions
Although PoDL is a growing field of interest in the research community and not much work has been done in it yet, it is promising: on the one hand, it can maintain the immutability and security of the blockchain network, while on the other hand, it makes the process more efficient and supports the development of other fields such as AI.
PoDL enables researchers and developers to submit a training task and obtain the results without the need to manually set up a training environment on the cloud or on internal infrastructure. It benefits both parties, the researchers and developers on one hand and the miners and the blockchain on the other, by providing a trained model to researchers and developers and compensating miners for their work, thereby aiding the blockchain network's security. Furthermore, researchers and developers can conduct numerous experiments with various model architectures and hyperparameters utilizing the blockchain network. Different nodes can execute a diverse set of experiments, saving individuals time and resources.
Despite the progress that has been made, and the advantages that PoDL brings, there is still a lot of work to be done to improve the current systems. Here are some of the aspects that need further investigation and research:
* **Data privacy**: As mentioned earlier, data may contain sensitive information. However, little work has been done in this area to protect the data submitted by the task publisher.
* **Double spending attack**: There is still no fully mature system that prevents such attacks. As discussed earlier, an attacker could train the model on the test data to create a fork in the blockchain and double-spend.
* **Continuous task suppliers**: It is critical to have a continuous stream of tasks that can be published to the miners to train models. If there are no tasks, the network will be jeopardized.
* **ASIC hardware**: The Application-Specific Integrated Circuit (ASIC) hardware is a chip created for a specific task. In cryptocurrency mining, ASICs are preferred over general-purpose CPUs and GPUs as they offer better efficiency. However, there is limited research on the impact of ASIC hardware designed for Bitcoin on PoDL. It may be necessary to use new hardware like Tensor Processing Units (TPUs) for PoDL [11].
* **Network roles**: Further work is needed on the division of labor and the interactions between the different network participants: validators, trainers, and suppliers.
## VI Conclusion
In this work, we reviewed the latest approaches for PoDL as an example of PoUW. We have discussed the workflow of different PoDL algorithms and their advantages and disadvantages, the network participants, and their interactions. We have also discussed the challenges of implementing PoDL and future research directions.
PoDL is a promising new consensus mechanism that has the potential to shift the resources used in the Proof of Work (PoW) to be more useful. However, some challenges must still be overcome before PoDL can be widely used. One challenge is that PoDL requires the model publishers to continuously provide the network with new tasks. Another challenge is data privacy.
Despite these challenges, PoDL is a promising new technology, and we believe that PoDL has the potential to make PoW-based blockchains more efficient. |
2309.11376 | Harnessing quantum emitter rings for efficient energy transport and
trapping | Efficient transport and harvesting of excitation energy under low light
conditions is an important process in nature and quantum technologies alike.
Here we formulate a quantum optics perspective to excitation energy transport
in configurations of two-level quantum emitters with a particular emphasis on
efficiency and robustness against disorder. We study a periodic geometry of
emitter rings with subwavelength spacing, where collective electronic states
emerge due to near-field dipole-dipole interactions. The system gives rise to
collective subradiant states that are particularly suited to excitation
transport and are protected from energy disorder and radiative decoherence.
Comparing ring geometries with other configurations shows that the former
are more efficient in absorbing, transporting, and trapping incident light.
Because our findings are agnostic as to the specific choice of quantum
emitters, they indicate general design principles for quantum technologies with
superior photon transport properties and may elucidate potential mechanisms
resulting in the high energy transport efficiencies in natural
light-harvesting systems. | Raphael Holzinger, Jonah Peter, Stefan Ostermann, Helmut Ritsch, Susanne Yelin | 2023-09-20T14:56:51Z | http://arxiv.org/abs/2309.11376v2 | # Harnessing quantum emitter rings for efficient energy transport and trapping
###### Abstract
Efficient transport and harvesting of excitation energy under low light conditions is an important process in nature and quantum technologies alike. Here we formulate a quantum optics perspective to excitation energy transport in configurations of two-level quantum emitters with a particular emphasis on efficiency and robustness against disorder. We study a periodic geometry of emitter rings with subwavelength spacing, where collective electronic states emerge due to near-field dipole-dipole interactions. The system gives rise to collective subradiant states that are particularly suited to excitation transport and are protected from energy disorder and radiative decoherence. Comparing ring geometries with other configurations shows that the former are more efficient in absorbing, transporting, and trapping incident light. Because our findings are agnostic as to the specific choice of quantum emitters, they indicate general design principles for quantum technologies with superior photon transport properties and may elucidate potential mechanisms resulting in the high energy transport efficiencies in natural light-harvesting systems.
In quantum optics, ordered quantum emitter lattices with subwavelength spacing have emerged as a resourceful platform for near-term quantum technologies [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. Here long-range interactions between light-induced dipoles lead to highly modified optical properties of the quantum emitter ensemble, including Dicke superradiance [14] and the emergence of collective long-lived subradiant states [15, 16]. Applications range from single photon switch gates [17] to enhanced single photon detection for biomedical applications [18, 19] and topological edge state lasing [20, 21]. Likewise, uncovering design principles underlying biological systems and applying this understanding to synthetic systems is crucial for near-term quantum technologies. Ring geometries of quantum emitters promise to enhance single photon sensing, transport, storage, and light generation in engineered nanoscale systems [22, 23, 24]. In photosynthetic energy transfer, as it occurs in nature, organisms utilize ring-shaped antennae that increase the photon scattering cross-section of a single reaction center: the site where photosynthesis takes place. This transfer process occurs at near unit efficiency, and understanding the mechanisms behind this remarkable feat is an outstanding scientific challenge [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35].
Taking inspiration from biological systems, we examine the long-range excitation transport between a donor and an acceptor emitter through a lattice of quantum emitter rings [Fig. 1(a)]. As a main result, we show that efficient excitation transport at low trapping rates preferentially occurs for ring geometries, as compared to other lattices. This property has important consequences for devising artificial light harvesting and transport systems, and may be relevant for understanding the excellent excitation transport capabilities of biological systems [36]. We also highlight that for ring lattices, the trapping of light at an acceptor site under low-light conditions is enhanced by many orders of magnitude as compared to other geometries and to independent emitters. By choosing an
Figure 1: **Lattices of nanoscopic quantum emitter rings.****(a)** Each ring is composed of two-level quantum emitters with resonance frequency \(\omega_{0}\) and separation \(d<\lambda_{0}\), where \(\lambda_{0}=2\pi c/\omega_{0}\) is the wavelength of light. The excited state \(\ket{e}\) spontaneously decays with rate \(\Gamma_{0}\) to the ground state \(\ket{g}\) and the emitters are coupled via long-range dipole-dipole interactions with nearest-neighbor coupling strength \(J\). Emitters acting as donor and acceptor are shown in yellow and the acceptor features an additional trapping state to which excitations irreversibly decay with rate \(\Gamma_{\mathrm{T}}\). **(b)** More detailed sketch illustrating the inter-ring separation \(d_{R}\). The ring radius \(R\) and the emitter spacing \(d\) are related via \(d=2R\sin(\pi/N_{R})\), with \(N_{R}\) emitters per ring. **(c)** Excitation transport efficiency according to Eq. 1 for a chain of 10 rings and various \(N_{R}\). Parameters: \(d/\lambda_{0}=0.05\), \(d_{R}/d=0.9\), \(\Gamma_{\mathrm{T}}/\Gamma_{0}=2\) and \(\Delta=0\).
optimal detuning for the donor and acceptor with respect to the lattice, radiative losses are strongly suppressed, and excitations are protected during the transport by subradiance [15, 16]. While the influence of the excitation trapping rate on the transport efficiency in other geometries has been explored in other works [37, 38, 39, 40], as have certain design principles for bio-inspired artificial solar-harvesting devices [25, 27, 28, 41, 42, 43, 44, 45], our findings specifically highlight the advantages of the rotationally symmetric ring geometry. This special feature of ring configurations is particularly intriguing for its close connection to natural photosynthetic complexes found in biological systems. Our work therefore opens the possibility of exploiting quantum effects in bio-inspired configurations of quantum emitters for near-term optical technologies that enable quantum-enhanced light-matter coupling on the nanoscale.
As a paradigmatic quantum optical model to simulate excitation energy transport, we consider a one-dimensional lattice of \(M\) rotationally symmetric rings, each composed of \(N_{R}\) identical two-level emitters with a ground state \(|g\rangle\) and an excited state \(|e\rangle\). The two states are connected via the transition operator \(\hat{\sigma}_{n}=|g_{n}\rangle\langle e_{n}|\) for the \(n\)-th emitter. Additional emitters acting as donor and acceptor sites are placed in the center of two rings at either end of the lattice, as illustrated in Fig. 1(a). The transport efficiency between these two sites is the core quantity of interest and is defined as
\[\eta_{t}=\Gamma_{\mathrm{T}}\int_{0}^{t}dt^{\prime}\langle\Psi(t^{\prime})| \hat{\sigma}_{a}^{\dagger}\hat{\sigma}_{a}|\Psi(t^{\prime})\rangle. \tag{1}\]
Here \(t\) is the integration time over which excitation can accumulate in the trap state [see Fig. 1(a)]. \(\eta_{t}\) can take values between \(0\) and \(1\), where \(0\) corresponds to no transport at all and \(1\) identifies maximal transport efficiency. \(\hat{\sigma}_{a}^{\dagger}\hat{\sigma}_{a}=|e_{a}\rangle\langle e_{a}|\) corresponds to the projector onto the excited state of the acceptor. The trap population accumulates over time and reaches a steady state value at large times \(t\) when the total excited state population is either dissipated via radiative losses or accumulated in the trap. The transition frequencies and decay rates of the ring emitters are assumed to be equal and given by \(\omega_{0}=2\pi c/\lambda_{0}\) and \(\Gamma_{0}\), respectively, whereas the donor/acceptor transitions may be detuned by \(\Delta=\omega_{\mathrm{d,a}}-\omega_{0}\) with respect to the ring emitter frequencies. We assume \(\omega_{\mathrm{d}}=\omega_{\mathrm{a}}\) for the remainder of this work. The acceptor features an extra trapping channel through which excitations are extracted from the system at a rate \(\Gamma_{\mathrm{T}}\). Furthermore, the quantum emitters are confined in the \(x\)-\(y\) plane with intra-ring separation \(d=2R\sin(\pi/N_{R})\) and inter-ring separation \(d_{R}\), where \(R\) is the ring radius, as illustrated in Fig. 1(b). To reduce the number of free parameters, all dipole emitters are assumed to be circularly polarized, namely \((1,i,0)^{T}/\sqrt{2}\). However, qualitatively similar results can be obtained for geometries consisting of linearly polarized emitters.
We model the system within the Born-Markov approximation [15], and only consider the quantum emitters' internal degrees of freedom. Furthermore, we assume the weak excitation regime, where at most a single excitation is present in the system (see Methods), and therefore the system can be described (in the rotating frame with \(\omega_{0}\)) by the non-Hermitian Hamiltonian \(\hat{\mathcal{H}}_{\mathrm{eff}}=\hat{\mathcal{H}}_{\mathrm{ad}}+\hat{\mathcal{H}}_{\mathrm{lattice}}+\hat{\mathcal{H}}_{\mathrm{int}}\). Here \(\hat{\mathcal{H}}_{\mathrm{ad}}=(\Delta-\frac{i}{2}\Gamma_{0})(\hat{\sigma}_{a}^{\dagger}\hat{\sigma}_{a}+\hat{\sigma}_{d}^{\dagger}\hat{\sigma}_{d})-\frac{i}{2}\Gamma_{\mathrm{T}}\hat{\sigma}_{a}^{\dagger}\hat{\sigma}_{a}\) is the bare Hamiltonian of the donor and acceptor, \(\hat{\mathcal{H}}_{\mathrm{lattice}}\) describes the emitters in the ring lattice, and \(\hat{\mathcal{H}}_{\mathrm{int}}\) describes the interaction between the ring emitters and the donor/acceptor,
\[\hat{\mathcal{H}}_{\mathrm{lattice}} =\sum_{n,m}\Big{(}J_{nm}-i\frac{\Gamma_{nm}}{2}\Big{)}\hat{ \sigma}_{n}^{\dagger}\hat{\sigma}_{m}, \tag{2a}\] \[\hat{\mathcal{H}}_{\mathrm{int}} =\sum_{n;k=\mathrm{a,d}}\Big{(}J_{nk}-i\frac{\Gamma_{nk}}{2} \Big{)}(\hat{\sigma}_{n}^{\dagger}\hat{\sigma}_{k}+\hat{\sigma}_{k}^{\dagger} \hat{\sigma}_{n}). \tag{2b}\]
All emitters interact via vacuum-mediated dipole-dipole interactions in free space. The pairwise coherent and dissipative interactions are given by \(J_{nm}=-3\pi\Gamma_{0}/k_{0}\operatorname{Re}(G_{nm})\) and \(\Gamma_{nm}=6\pi\Gamma_{0}/k_{0}\operatorname{Im}(G_{nm})\) respectively, with \(G_{nm}\) being the free space Green's function (see Methods). The Green's function depends only on the separations between the emitters and their dipole orientation. The time evolution of the system is described by the effective Hamiltonian in Eq. (2) via the Schrödinger equation \(i\partial_{t}|\Psi(t)\rangle=\hat{\mathcal{H}}_{\mathrm{eff}}|\Psi(t)\rangle\). Since the Hamiltonian is non-Hermitian, the amplitude of the wavefunction for the quantum emitters decreases with time, which is a direct manifestation of the dissipative nature of the system.
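As a numerical illustration of these couplings, the following minimal sketch (in units \(\lambda_{0}=1\), \(\Gamma_{0}=1\), with the divergent self-shift \(J_{nn}\) set to zero, i.e., absorbed into \(\omega_{0}\)) assembles \(J_{nm}\), \(\Gamma_{nm}\), and the single-excitation effective Hamiltonian for one ring of circularly polarized emitters and extracts the collective decay rates. The closed form used for the projected Green's function follows from contracting the free-space dyadic Green's function with the dipole vector \((1,i,0)^{T}/\sqrt{2}\) for in-plane separations:

```python
import numpy as np

k0 = 2 * np.pi      # wavevector for lambda_0 = 1
Gamma0 = 1.0        # single-emitter decay rate

def g_proj(r):
    """Free-space Green's function projected onto circular in-plane dipoles
    (1, i, 0)/sqrt(2), for two emitters separated in-plane by distance r."""
    x = k0 * r
    return np.exp(1j * x) / (4 * np.pi * r) * 0.5 * (1 - 1j / x + 1 / x**2)

def couplings(positions):
    """Pairwise coherent shifts J_nm and dissipative rates Gamma_nm."""
    N = len(positions)
    J = np.zeros((N, N))
    G = Gamma0 * np.eye(N)          # Gamma_nn = Gamma_0; J_nn set to zero
    for n in range(N):
        for m in range(n + 1, N):
            g = g_proj(np.linalg.norm(positions[n] - positions[m]))
            J[n, m] = J[m, n] = -3 * np.pi * Gamma0 / k0 * g.real
            G[n, m] = G[m, n] = 6 * np.pi * Gamma0 / k0 * g.imag
    return J, G

# single ring of N_R emitters with spacing d = 0.05 lambda_0
N_R, d = 9, 0.05
R = d / (2 * np.sin(np.pi / N_R))
phi = 2 * np.pi * np.arange(N_R) / N_R
pos = np.stack([R * np.cos(phi), R * np.sin(phi)], axis=1)

J, G = couplings(pos)
H_ring = J - 0.5j * G                        # single-excitation effective Hamiltonian
decay = -2 * np.linalg.eigvals(H_ring).imag  # collective decay rates
print("most subradiant / superradiant rate:", decay.min(), decay.max())
```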
As discussed in previous works [15, 16, 22], a subwavelength-spaced ring of quantum emitters exhibits guided eigenmodes that are extremely subradiant, with a single-excitation lifetime that increases exponentially with the number of emitters, \(\tau\Gamma_{0}\sim\exp(N_{R})\) [22]. Aside from the bright symmetric superposition state, the fields of the remaining eigenmodes vanish at the center of the ring due to symmetry. Thus, they are decoupled from any emitter at the center. Here we demonstrate that a donor/acceptor at the center of the ring that is dipole-dipole coupled to the symmetric ring mode can form a subradiant state with a majority of the excitation concentrated in the donor/acceptor. We start by analyzing a single ring of \(N_{R}\) emitters with a single donor in the center. For a single ring, where the dipole orientations preserve the discrete rotational invariance, the collective eigenmodes of the effective Hamiltonian are spin waves of the form \(|\Psi_{m}\rangle=\hat{S}_{m}^{\dagger}|G\rangle\), where
\[\hat{S}_{m}=\frac{1}{\sqrt{N_{R}}}\sum_{j=1}^{N_{R}}e^{im\varphi_{j}}\hat{ \sigma}_{j} \tag{3}\]
and \(|G\rangle\) denotes all emitters in the ground state. Here \(\varphi_{j}=2\pi j/N_{R}\) is the angle between neighboring emitters along the ring and \(m=0,\pm 1,\cdots,[\pm(N_{R}-1)/2]\) is the angular momentum of the collective mode. The associated energy shifts and decay rates of these spin waves are given by \(\tilde{J}_{m}=\sum_{j}e^{im\varphi_{j}}J_{1j}\) and \(\tilde{\Gamma}_{m}=\sum_{j}e^{im\varphi_{j}}\Gamma_{1j}\), respectively. In such a configuration, all ring emitters
couple equally to the central donor, which restricts the spectrum to the \(m=0\) mode. This system features two eigenstates \(|\Psi_{\pm}\rangle\), that are symmetric/anti-symmetric superpositions of the symmetric ring mode and the central donor. The anti-symmetric state can be extremely sub-radiant depending on the detuning \(\Delta\) of the donor with respect to the ring emitters, resulting in a vanishingly small net dipole strength [24]. This leads to an optimal detuning \(\Delta_{\text{sub}}\approx J_{\text{d}}(\tilde{\Gamma}_{0}-\Gamma_{0})-\tilde{ J}_{0}\) that maximizes the subradiance of the donor with an effective decay rate \(\Gamma_{\text{eff}}/\Gamma_{0}\lesssim 10^{-3}\) (see Methods). Here, \(J_{\text{d}}\) is the coherent coupling between the donor and a ring emitter.
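A short follow-up sketch (reusing `N_R`, `phi`, `J`, and `G` from the coupling sketch above) evaluates the spin-wave shifts \(\tilde{J}_{m}\) and collective decay rates \(\tilde{\Gamma}_{m}\) of Eq. (3), making the superradiance of the \(m=0\) mode and the subradiance of the high-\(|m|\) modes explicit:

```python
# reuses N_R, phi, J, G from the sketch above
import numpy as np

for m in range(-(N_R // 2), N_R // 2 + 1):
    phase = np.exp(1j * m * phi)
    J_m = np.real(phase @ J[0])     # coherent shift of the spin wave with angular momentum m
    G_m = np.real(phase @ G[0])     # collective decay rate of that mode
    print(f"m = {m:+d}:  shift = {J_m:9.2f} Gamma_0,  decay = {G_m:8.4f} Gamma_0")
```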
Likewise, a chain of quantum emitter rings features a rich collective eigenmode structure. In particular, the subradiance of the eigenmodes protects the excitations from radiative decoherence and leads to efficient excitation transport [46, 16]. As shown above, eigenmodes of a rotationally symmetric ring carry angular momentum \(m\). Similarly, the eigenmodes of a linear chain of quantum emitters carry linear momentum \(k\)[5, 15]. This leads to an ansatz wavefunction for the eigenmodes of a ring chain, \(|\Psi_{m,k}\rangle\), with an angular and linear quasi-momentum pair \((m,k)\), associated eigenenergies \(\omega_{0}+J_{\text{m,k}}\) and decay rates \(\Gamma_{\text{m,k}}\)[47] (details provided in the Methods). The translational distance between adjacent ring centers along the chain is given by \(\tilde{d}=2R+d_{R}\). Fig. 2(a) shows the energy bands for \(N_{R}=9\) and \(d_{R}/d=0.9\). The band structure exhibits a nontrivial topology with a non-zero Zak phase \(\varphi=i\int_{BZ}dk\,\langle\Psi_{m=0,k}|\partial_{k}|\Psi_{m=0,k}\rangle\)[48] as well as gapped edge states between the energy bands of the \(m=0\) and \(|m|=1\) eigenmodes. The edge states emerge for decreasing inter-ring spacings \(d_{R}\), illustrated in Fig. 2(c), and become more pronounced until a critical spacing of \(d_{R}/d\approx 0.58\) and \(d_{R}/d\approx 0.34\) for \(N_{R}=8,9\) emitters per ring, respectively. The edge states are energetically degenerate and detuned by \(\Delta_{\text{gap}}^{(0,1)}\) from the lower/upper
Figure 2: **Band structure, edge states, and excitation transport for a chain of rings.****(a)** Eigenmodes of the effective Hamiltonian \(\mathcal{\hat{H}}_{\text{lattice}}\) in Eq. (2a) can be cast into \(N_{R}=9\) energy bands with an angular momentum projection \(|m|\) using Eq. (9); the translation along the ring chain axis is given by \(\tilde{d}=2R+d_{R}\), the spacing between adjacent ring centers. (Decay rates of all eigenmodes are color coded). A band gap emerges with two edge states residing inside with an energy separation \(\Delta_{\text{gap}}^{(0,1)}\) to the nearest lower/upper band edge respectively. **(b)** The minimal energy gap \(\Delta_{\text{gap}}^{(0,1)}\) to the nearest lower/upper band edge normalized by the maximal nearest-neighbor coupling \(J\) shows distinct maxima as a function of \(d_{R}\). These maxima correspond to an optimal transport efficiency \(\eta_{t}\) as shown in (e) for \(N_{R}=8,9\). Furthermore, smaller emitter numbers per ring \(N_{R}\) are affected more by randomly rotated rings except for even \(N_{R}\). (The average was taken over 50 random realizations). **(c)** For \(N_{R}=9\) edge states appear with maximum amplitude on the left/right end of the ring chain and a superradiant decay rate for decreasing inter-ring spacings \(d_{R}/d\). Notably, topological edge states have been observed in zigzag chains of gold nano-rings [20, 21] with lasing from the edge rings. **(d)** With decreasing inter-ring spacing \(d_{R}\) edge states become more pronounced with distinct minima where the bulk amplitude vanishes leading to a suppression of excitation transport. **(e)** Excitation transport between a donor and acceptor site placed in the center of the edge rings. The transport efficiency \(\eta_{t}\) is evaluated after a time \(t\Gamma_{0}=150\) with the donor/acceptor detuning \(\Delta\) optimized for maximum transport. Suppression appears when the edge states become too pronounced with vanishing amplitude in the bulk rings as shown in (d). **(f)** Time dynamics of the excitation transport process between a donor and acceptor site with \(\Delta=0\). The eigenstate fidelity \(|\langle\Psi(t)|\Psi_{\text{edge}}\rangle|^{2}\) for \(N_{R}=9\) demonstrates the importance of edge states at early times. Parameters: \(d=0.05\lambda_{0}\), \(\Gamma_{\text{T}}/\Gamma_{0}=1\) in (e)-(f) and 10 rings in (b)-(f).
band edge respectively. Fig. 2(b) shows the minimum distance of the edge states to the nearest band edge as a function of the inter-ring spacing \(d_{R}\). Topologically protected edge states are crucial for resilient excitation transport in disordered systems [4] and \(\min(\Delta_{\text{gap}}^{(0,1)})\) can serve as a figure of merit in this regard. Specifically for lattices of rings, the band gap remains finite in the presence of rotational disorder and exhibits a distinct maximum, e.g. \(d_{R}/d\approx 0.9\) for \(N_{R}=9\), as shown in Fig. 2(b). Edge states also become superradiant at the critical distance where excitation transport is suppressed, as is shown in Fig. 2(c). This points to the possibility of edge mode lasing, already observed in gold nano-rings arranged in zigzag chains [20, 21, 49].
Figs. 2 (e) and (f) demonstrate the fundamental influence of the edge states on the transport dynamics between a donor and acceptor site for 10 rings with \(N_{R}=9\). At the critical spacings, transport is either completely or strongly suppressed because the edge states possess no amplitudes in the bulk rings. Conversely, at other spacings \(d_{R}\), edge states are crucial during the early times of the transport process, as demonstrated by the eigenstate fidelities \(\mathcal{F}(t)=|\langle\Psi(t)|\Psi_{\text{eig}}\rangle|^{2}\) for \(N_{R}=9\). Qualitatively similar results hold for other \(N_{R}\). Indeed, edge states have been thoroughly studied in dimerized chains (\(N_{R}=2\)), which reproduce a long-range generalization of the well-known Su-Schrieffer-Heeger (SSH) model [21]. The emergence of edge and corner states [49] in two-dimensional ring lattices is briefly discussed in Methods and warrants further study.
We now focus on the excitation transport dynamics and discuss the time evolution of a single initially excited donor. We find that excitation transport is optimized at particular donor/acceptor detunings, and that efficient transport occurs only for ring emitter numbers \(N_{R}\geq 6\). In particular, rings with 8-, 9- and 10-fold symmetry seem to be optimal. This is particularly intriguing because 8-, 9- and 10-fold rings [50, 51], the most abundant types occurring in natural light-harvesting antennae [52], show the highest resilience when rings are randomly rotated with respect to each other. In Fig. 3 the donor excited state populations and the trap populations \(\eta_{t}\) are shown after a time \(t\Gamma_{0}=150\) for a chain of 10 rings with various inter-ring spacings \(d_{R}\). The detuning \(\Delta\) where the donor excited state population is maximized (i.e., most subradiant) follows the optimal detuning \(\Delta_{\text{sub}}\) for the single ring case. Here, the donor excitation largely remains trapped in the subradiant state discussed above, even for small inter-ring spacings.
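This transport simulation can be sketched by propagating the single-excitation Schrödinger equation with the non-Hermitian Hamiltonian and accumulating the trap population of Eq. (1). The snippet below reuses `couplings` from the Hamiltonian sketch above and adopts the parameters of Fig. 1(c) (\(d=0.05\lambda_{0}\), \(d_{R}/d=0.9\), \(\Gamma_{\mathrm{T}}=2\Gamma_{0}\), \(\Delta=0\)); it is an illustrative sketch, not the authors' code:

```python
import numpy as np
from scipy.linalg import expm

def transport_eta(H, i_d, i_a, Gamma_T, T=150.0, dt=0.05):
    """Eq. (1): Gamma_T * integral of the acceptor population, donor initially excited."""
    U = expm(-1j * H * dt)
    psi = np.zeros(H.shape[0], dtype=complex)
    psi[i_d] = 1.0
    eta = 0.0
    for _ in range(int(T / dt)):
        psi = U @ psi
        eta += Gamma_T * abs(psi[i_a]) ** 2 * dt
    return eta

# chain of M rings; donor/acceptor at the centers of the end rings
M, N_R, d = 10, 9, 0.05
d_R = 0.9 * d
R = d / (2 * np.sin(np.pi / N_R))
phi = 2 * np.pi * np.arange(N_R) / N_R
ring = np.stack([R * np.cos(phi), R * np.sin(phi)], axis=1)
centers = np.arange(M)[:, None] * np.array([2 * R + d_R, 0.0])
pos = np.concatenate([ring + c for c in centers] + [centers[:1], centers[-1:]])

J, G = couplings(pos)            # from the Hamiltonian sketch above
Gamma_T, Delta = 2.0, 0.0
i_d, i_a = M * N_R, M * N_R + 1  # donor and acceptor indices
H = J - 0.5j * G
H[i_d, i_d] += Delta
H[i_a, i_a] += Delta - 0.5j * Gamma_T   # irreversible trapping channel
print("eta_t =", transport_eta(H, i_d, i_a, Gamma_T))
```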
A crucial element of excitation energy transport is robustness against energy disorder. We provide a comparison of ring lattices with other lattice geometries, including the influence of static frequency disorder in the lattice emitters. This is achieved by taking emitter frequencies \(\omega_{m}\) from a Gaussian distribution around the unperturbed emitter frequency \(\omega_{0}\) with a standard deviation \(\delta\omega\) and adding the term \(\sum_{m}(\omega_{m}-\omega_{0})\hat{\sigma}_{m}^{\dagger}\hat{\sigma}_{m}\) to the Hamiltonian in Eq. (2). The donor/acceptor detuning \(\Delta\) remains unchanged and is chosen such that unperturbed excitation transport is maximized. Fig. 4 shows various geometries, many of which have also been studied previously for atoms trapped in optical lattices [4, 5, 10, 12, 22, 46]. The nearest-neighbor distance is kept at \(d=0.06\lambda_{0}\) and the donor-acceptor distance at \(\sim\lambda_{0}\) to establish a uniform comparison between the different geometries. Figs. 4(a) and 4(b) show a hexagonal lattice with \(\Delta=0\) and a honeycomb lattice with \(\Delta=4.5\,\Gamma_{0}\). In Fig. 4 (c) a 1D ring chain and 2D hexagonal ring lattice with \(N_{R}=9\) are shown with \(\Delta=\Gamma_{0}\). The fluctuation in the emitter frequencies \(\delta\omega\) is set to \(|J|/4\) and \(|J|/2\), where \(J\approx-8.4\Gamma_{0}\). Altogether, the different geometries show a similar reduction in the maximal transport efficiencies under disorder but behave quite differently in the range of trapping rates \(\Gamma_{\text{T}}\) where maximal transport occurs. Whereas the hexagonal lattice exhibits peak transport at a trapping rate far above the optical decay rate, namely at \(\Gamma_{\text{T}}/|J|\sim 2\), the ring lattices demonstrate efficient transport over a large range \(\Gamma_{\text{T}}^{\text{opt}}/|J|\sim 0.01-1\), even in the moderately disordered case. Qualitatively similar conclusions also apply to ring lattices with \(N_{R}\neq 9\). Just as importantly, the ring lattices in Fig. 4(c) show significant transport enhancement for \(\Gamma_{\text{T}}\ll|J|\) compared to the independent case where no lattice is present. In summary, the ring lattices show significantly better transport
Figure 3: **(a)-(c) Excitation transport in quantum emitter rings between a donor and acceptor site.** Scan over the number of emitters per ring \(N_{R}\) and donor/acceptor detuning \(\Delta\) for 10 rings with decreasing inter-ring spacings \(d_{R}\) after a time \(t\Gamma_{0}=150\). Efficient transport emerges only with \(N_{R}\geq 6\), irrespective of \(d_{R}\). The excited state population in the donor can get trapped in a subradiant state involving the ring surrounding the donor and follows the detuning \(\Delta_{\text{sub}}\) (white dashed line) derived in the main text. For 9-fold symmetric rings the donor/acceptor detuning, that optimizes transport, is given by \(\Delta\approx 0\) for all inter-ring spacings. Additional parameters are: \(\Gamma_{\text{T}}=2\Gamma_{0}\), \(d=0.05\lambda_{0}\).
capability and robustness against disorder.
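Static frequency disorder can be added to the same sketch by shifting the diagonal entries of the lattice sites; `H`, `transport_eta`, and the geometry are reused from the transport sketch above, and the disorder strength \(|J|/4\) with \(J\approx-8.4\Gamma_{0}\) follows the main text (the geometry and averaging here remain illustrative):

```python
# reuses M, N_R, H, i_d, i_a, Gamma_T, and transport_eta from the sketch above
rng = np.random.default_rng(0)
delta_omega = 8.4 / 4                       # |J|/4 in units of Gamma_0
lattice = np.arange(M * N_R)                # disorder the lattice emitters only
etas = []
for _ in range(25):
    Hd = H.copy()
    Hd[lattice, lattice] += rng.normal(0.0, delta_omega, M * N_R)
    etas.append(transport_eta(Hd, i_d, i_a, Gamma_T))
print("disorder-averaged eta_t:", np.mean(etas), "+/-", np.std(etas))
```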
So far we have assumed that a single donor is initially excited, and we have quantified the transport behavior by calculating the fraction of the excitation that accumulates in a trap state via Eq. (1) after a waiting time \(t\). However, in many realistic scenarios, a perfectly excited donor is rather unlikely, and emitters close to the donor will be excited too. This motivates the study of the trapping rate at which the excitation ends up in the trap state under continuous coherent illumination in the form of a Gaussian laser beam with finite beam waist \(w\). The continuous coherent drive is modeled by
\[\hat{\mathcal{H}}_{\text{laser}}=\Omega_{0}\sum_{i}\exp\Big{(}-\frac{|\vec{r} _{d}-\vec{r}_{i}|^{2}}{2w^{2}}\Big{)}(\hat{\sigma}_{i}^{\dagger}+\hat{\sigma}_ {i}), \tag{4}\]
where \(\Omega_{0}\) is the laser Rabi frequency, \(\vec{r}_{d}\) is the position of the donor, and the sum runs over all emitters. The driving rate of the laser is kept small (\(\Omega_{0}\ll\Gamma_{0}\)) to ensure that the system stays in the single-excitation regime and the model remains valid. As a figure of merit for energy transport efficiency, we define \(\Gamma_{\text{T}}\langle\hat{\sigma}_{a}^{\dagger}\hat{\sigma}_{a}\rangle_{\text{st}}/(4\Omega_{0}^{2})\) as the steady-state trapping rate at the acceptor emitter. The effective trapping rate is normalized by the trapping rate of a single acceptor, given by \(\sigma_{0}\Gamma_{0}\Gamma_{\text{T}}/(\Gamma_{0}+\Gamma_{\text{T}})^{2}\), where \(\sigma_{0}=6\pi/k_{0}^{2}\) is the single emitter scattering cross-section [24]. In Fig. 5 a hexagonal lattice (a), a honeycomb lattice (b), and a hexagonal ring lattice with \(N_{R}=9\) (c) are compared for two laser beam waists under continuous driving. By choosing a beam waist of \(w/\lambda_{0}=0.3\), most of the incoming light is focused around the donor emitter while the acceptor emitter remains mostly undriven. For \(w/\lambda_{0}=3\) the whole lattice is uniformly driven, a scenario more applicable to deeply subwavelength lattices under illumination from a non-directional light source. Natural light-harvesting antennae in purple bacteria offer an example [25]. In all cases, the donor/acceptor detuning \(\Delta\) is chosen optimally such that the trapping rate is maximized. We find that the ring lattice is many orders of magnitude more efficient in trapping incident light as compared to both the triangular and honeycomb lattices as well as the single emitter at trapping rates much below the nearest-neighbor coherent transfer rate \(J\). In particular, for \(\Gamma_{\text{T}}/|J|\lesssim 0.01\) the ring lattice exhibits an almost \(100\times\) higher trapping efficiency compared to an independent emitter and the other lattices.
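In the weak-drive limit the steady state follows from linear response, \(\hat{\mathcal{H}}_{\mathrm{eff}}\psi=-\Omega\), so the normalized trapping rate can be sketched as below (reusing `pos`, `H`, `i_d`, `i_a`, and `Gamma_T` from the transport sketch above for a resonant beam centered on the donor; the single-emitter reference is quoted per \(4\Omega_{0}^{2}\), i.e., without the cross-section prefactor \(\sigma_{0}\Gamma_{0}\) of the main text):

```python
# reuses pos, H, i_d, i_a, Gamma_T from the transport sketch above
Omega0, w = 1e-3, 0.3      # weak Rabi drive and beam waist (units of lambda_0)
Omega = Omega0 * np.exp(-np.sum((pos - pos[i_d]) ** 2, axis=1) / (2 * w**2))

psi_ss = np.linalg.solve(H, -Omega.astype(complex))  # steady state of i dpsi/dt = H psi + Omega
rate = Gamma_T * abs(psi_ss[i_a]) ** 2 / (4 * Omega0**2)
single = Gamma_T / (1.0 + Gamma_T) ** 2              # isolated acceptor with Gamma_0 = 1
print("trapping-rate enhancement over a single emitter:", rate / single)
```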
Figure 4: **Excitation transport in ring geometries exhibits superior robustness against disorder.** Comparison of transport efficiency in **(a)** chain and hexagonal lattices and in the absence of a lattice (grey line), **(b)** honeycomb, and **(c)** ring lattices as a function of the trapping rate \(\Gamma_{\text{T}}\) and frequency disorder. Lattice emitter frequencies are randomly fluctuating by \(\delta\omega\) around the resonance frequency \(\omega_{0}\). The donor-acceptor distance is approximately the wavelength of light \(\lambda_{0}\). Also shown in the dashed-dotted lines is the case of a donor-acceptor pair separated by \(d\) in the absence of any lattice. Although frequency disorder decreases the long-range transport capacity, it prevails remarkably well even at large frequency fluctuations. In particular, the ring lattices exhibit high transport efficiencies (close to \(90\%\)) over a wide range of trapping rates as compared to the other geometries. At trapping rates much below the magnitude of the coherent transfer rate \(J\), ring-based lattices are superior to any other lattice in our study. Additional parameters: \(d_{R}/d=0.9\), \(d/\lambda_{0}=0.06\), \(J/\Gamma_{0}\approx-8.4\), \(t\Gamma_{0}=150\). An average over \(25\) random realizations with standard deviation \(\delta\omega\) was performed in all plots. Donor/acceptor detunings in (a) \(\Delta=0\), (b) \(\Delta=4.5\Gamma_{0}\), and (c) \(\Delta=-\Gamma_{0}\).
In conclusion, we have demonstrated intriguing optical properties of quantum emitter ring lattices, including the emergence of topological edge states. Furthermore, we have shown based on general symmetry principles that ring lattices form a superior platform for transporting and trapping excitations. We have also elucidated the guiding principles that govern optimal donor/acceptor tunings, trapping rates, and geometric arrangements with robustness against static energy disorder. Under more realistic conditions of weak coherent light illumination, we have shown that ring lattices are orders of magnitude more efficient at trapping the absorbed light when the trapping rate is much smaller than the nearest-neighbor coherent coupling rate. This result is thought-provoking since natural light-harvesting systems also operate with trapping mechanisms that are orders of magnitude slower than the coherent transfer time between neighboring chromophores [53, 54]. Measurements performed on natural light-harvesting complexes show that the coherent energy transfer time between neighboring chromophores during photosynthesis is of the order of \(\sim 0.1-10\) ps whereas the trapping time in the reaction center is typically \(\sim 0.1-10\) ns [53, 36, 54]. For a pre-existing trapping structure, this could help explain why nature utilizes ring geometries as a moderating mechanism to trap absorbed sunlight in reaction centers. Other studies have focused on molecular emitters in ambient conditions with vibrational degrees of freedom and multiple decoherence channels [55, 56]. The impact of these additional effects on the results presented here is an exciting avenue for future research [57, 58, 59]. Nevertheless, our results suggest that there exist general and platform-agnostic design principles that govern the efficient transport of excitation energy at the nanoscale. These geometrical considerations may have played a role in evolutionary design and warrant further study.
**Acknowledgments** - R. H. acknowledges funding from the Austrian Science Fund (FWF) doctoral college DK-ALM W1259-N27. S.O. is supported by the Harvard Quantum Initiative (HQI). S.F.Y. thanks the AFOSR and the NSF (through the CUA PFC and QSense QLCI).
|
2303.17984 | Models as Agents: Optimizing Multi-Step Predictions of Interactive Local
Models in Model-Based Multi-Agent Reinforcement Learning | Research in model-based reinforcement learning has made significant progress
in recent years. Compared to single-agent settings, the exponential dimension
growth of the joint state-action space in multi-agent systems dramatically
increases the complexity of the environment dynamics, which makes it infeasible
to learn an accurate global model and thus necessitates the use of agent-wise
local models. However, during multi-step model rollouts, the prediction of one
local model can affect the predictions of other local models in the next step.
As a result, local prediction errors can be propagated to other localities and
eventually give rise to considerably large global errors. Furthermore, since
the models are generally used to predict for multiple steps, simply minimizing
one-step prediction errors regardless of their long-term effect on other models
may further aggravate the propagation of local errors. To this end, we propose
Models as AGents (MAG), a multi-agent model optimization framework that
reversely treats the local models as multi-step decision making agents and the
current policies as the dynamics during the model rollout process. In this way,
the local models are able to consider the multi-step mutual influence between each
other before making predictions. Theoretically, we show that the objective of
MAG is approximately equivalent to maximizing a lower bound of the true
environment return. Experiments on the challenging StarCraft II benchmark
demonstrate the effectiveness of MAG. | Zifan Wu, Chao Yu, Chen Chen, Jianye Hao, Hankz Hankui Zhuo | 2023-03-31T11:42:04Z | http://arxiv.org/abs/2303.17984v1 | Models as Agents: Optimizing Multi-Step Predictions of Interactive Local Models in Model-Based Multi-Agent Reinforcement Learning
###### Abstract
Research in model-based reinforcement learning has made significant progress in recent years. Compared to single-agent settings, the exponential dimension growth of the joint state-action space in multi-agent systems dramatically increases the complexity of the environment dynamics, which makes it infeasible to learn an accurate global model and thus necessitates the use of agent-wise local models. However, during multi-step model rollouts, the prediction of one local model can affect the predictions of other local models in the next step. As a result, local prediction errors can be propagated to other localities and eventually give rise to considerably large global errors. Furthermore, since the models are generally used to predict for multiple steps, simply minimizing one-step prediction errors regardless of their long-term effect on other models may further aggravate the propagation of local errors. To this end, we propose Models as AGents (MAG), a multi-agent model optimization framework that reversely treats the local models as multi-step decision making agents and the current policies as the dynamics during the model rollout process. In this way, the local models are able to consider the multi-step mutual influence between each other before making predictions. Theoretically, we show that the objective of MAG is approximately equivalent to maximizing a lower bound of the true environment return. Experiments on the challenging StarCraft II benchmark demonstrate the effectiveness of MAG.
## 1 Introduction
Model-Based Reinforcement Learning (MBRL) [11, 12] aims to improve the sample efficiency of model-free methods by learning an approximate world model and then using it to aid policy learning. Despite the success in single-agent settings, there is still only limited work on MBRL in multi-agent systems. In these systems, the exponential dimension growth of the joint state-action space dramatically increases the complexity of the environment dynamics, making it infeasible to learn an accurate global model [13, 14]. Thus, a common practice is to make use of local agent-wise models which only require partial information and then predict the most relevant information for policy learning of each corresponding agent, so as to alleviate the issue of high dimensionality and avoid modelling the whole complicated dynamics [13, 14].
One of the most commonly used paradigms in Multi-Agent Reinforcement Learning (MARL) is Centralized Training with Decentralized Execution (CTDE) [10, 11, 12, 13, 14, 15], which allows the use of global information in the policy training phase, yet retains local observability of each agent during execution. MAMBA [1], a recently proposed multi-agent MBRL method under the CTDE paradigm, achieves state-of-the-art sample efficiency in several challenging benchmarks, especially the StarCraft II challenge [14]. To take full advantage of the centralized training phase, MAMBA utilizes the Attention mechanism [12] to extract information for each local model from the global information, i.e., \((f^{1},\ldots,f^{N})=\text{Attention}(l^{1},\ldots,l^{N})\), where \(N\) is the number of agents, \(l^{i}:=(o^{i},a^{i})\) denotes the local information of agent \(i\), \(o^{i}\) and \(a^{i}\) are the observation and action of agent \(i\) respectively, and \(f^{i}\) denotes the extracted feature for the local model of agent \(i\). However, since the Attention block fuses all local predictions obtained from the local models, the prediction of each local model, i.e., \(\hat{P}^{i}(o^{i^{\prime}}|f^{i})\), can affect the subsequent predictions of other local models in the next rollout step.
Furthermore, while generally trained to simply minimize one-step prediction errors, the local models are usually not able to take into account the aforementioned multi-step er
Figure 1: Intuition for multi-step interactions between the local models and the policies.
rors induced by the interactions between the local models and policies. As a result, prediction errors of one local model can be propagated to the others and eventually induce large accumulative global errors during multi-step model rollouts, which would hinder the learning of policies.
Figure 1 gives an intuitive illustration of the above discussion: Given local features \((f_{t}^{1},f_{t}^{2})\) w.r.t. agent \(1\) and \(2\) at step \(t\), 1) the local models predict \(\hat{P}^{1}(o_{t+1}^{1}|f_{t}^{1})=1\) and \(\hat{P}^{2}(o_{t+1}^{2}|f_{t}^{2})=\hat{P}^{2}(\tilde{o}_{t+1}^{2}|f_{t}^{2})=50\%\); and 2) under the previous joint policy \(\pi_{\text{old}}^{1,2}\), both predictions of the next joint observation, i.e., \((o_{t+1}^{1},o_{t+1}^{2})\) and \((o_{t+1}^{1},\tilde{o}_{t+1}^{2})\), will lead the trajectory to go into regions with low values predicted by the value function, hence \(\pi_{\text{old}}^{1,2}\) is updated to \(\pi_{\text{new}}^{1,2}\) to explore regions with potentially high values. But under the updated joint policy \(\pi_{\text{new}}^{1,2}\), the subsequent rollout trajectory starting from \((o_{t+1}^{1},\tilde{o}_{t+1}^{2})\) would lead to considerably larger model errors compared to the trajectory starting from \((o_{t+1}^{1},o_{t+1}^{2})\). Thus, to reduce accumulative model errors along rollout trajectories, the local models should learn to coordinate with each other while quickly adapting to the current joint policy. Formally, as will be shown in Section 3, smaller accumulative model errors could provide a stronger performance guarantee.
In this work, we propose Models as AGents (MAG), a multi-agent model optimization framework which considers the interactions between local models during multi-step model rollout. Based on the MAMBA framework, the whole environment dynamics is decomposed into agent-wise local models, and our key idea lies in reversely considering the local models as multi-step decision makers while fixing the current joint policy to serve as the environment. During model learning, the local models perform multi-step interactions with each other as well as with the policies, so as to take the long-term global effect of immediate local predictions into account and generate trajectories with less accumulative errors. Theoretically, we show the necessity of considering the local model interactions and minimizing the multi-step accumulative errors. Empirically, the results on several challenging tasks in the StarCraft II benchmark demonstrate that MAG significantly outperforms MAMBA in the low-data regime, and the model error analysis further verifies the effectiveness of our model learning mechanism.
## 2 Background
In this section, we first introduce the problem setting of MARL and MBRL, and then give a brief description of MAMBA, the aforementioned state-of-the-art model-based MARL method.
**MARL** In this work, we focus on the fully cooperative multi-agent systems that can be formalized as Dec-POMDPs [10], which are defined by tuple \((N,S,\Omega,O,A,R,P,\gamma)\), where \(N\) is the number of agents, \(S\) the set of global states, \(\Omega\) the observation space shared by the agents, \(O(s,i)\) the function deriving partial observations for each agent \(i\) from a global state \(s\in S\), \(A\) the action space, \(R(s,a^{1},\ldots,a^{N})\) a shared scalar reward function that takes \(s\in S\) and \(a^{i}\in A,i\in\{1,\ldots,N\}\) as input, and \(\gamma\in[0,1)\) the discount factor. Each agent has an action-observation history \(\tau^{i}\in T\equiv(\Omega\times A)^{*}\). We use the bold symbol \(\mathbf{o},\mathbf{a},\mathbf{\pi}\) to denote the joint observation \(\{o^{1},\ldots,o^{N}\}\), action \(\{a^{1},\ldots,a^{N}\}\) and policy \(\{\pi^{1},\ldots,\pi^{N}\}\), respectively. At each timestep, agent \(i\) chooses an action \(a^{i}\in A\) according to its policy \(\pi^{i}(a^{i}|\tau^{i})\) (We replace \(\tau^{i}\) by \(o^{i}\) in our analysis for brevity). The environment then returns the reward signal \(R(s,\mathbf{a})\) and shifts to the next state according to the transition function \(P(s^{\prime}|s,\mathbf{a})\). The expected return of joint policy \(\mathbf{\pi}\) is defined by \(J(\mathbf{\pi}):=\mathbb{E}_{\mathbf{\pi}}[\sum_{t^{\prime}=0}^{\infty}\gamma^{t^{\prime}}R_{t+t^{\prime}}|s_{t},\mathbf{a}_{t}]\). Some previous works [23, 10] have shown that it is possible to significantly reduce the state space in large environments to only relevant information for the agents' decision making. Hence, in this paper we assume that the joint observation-action, i.e., \((\mathbf{o},\mathbf{a})\), is sufficient to predict the next joint observation \(\mathbf{o}^{\prime}\) and the global reward \(R\). Serving as a special case of Dec-POMDPs, MMDPs [11] assume global observability of each agent and are adopted to reformulate the model rollout process in our work.
**MBRL** MBRL methods learn a model \(\hat{P}\) that approximates the unknown dynamics \(P\), and then use this model to assist policy learning. While the model can be utilized in various ways [14, 1, 13, 15], this work focuses on one of the most common usages, i.e., generating pseudo samples to enrich the dataset, so as to accelerate policy learning and reduce interactions with the true environment [22, 16, 17, 18]. The expected return of policy \(\mathbf{\pi}\) predicted by model \(\hat{P}\) is denoted as \(J^{\hat{P}}(\mathbf{\pi}):=\mathbb{E}_{\mathbf{\pi},\hat{P}}[\sum_{t^{\prime}=0}^{ \infty}\gamma^{t^{\prime}}R_{t+t^{\prime}}|s_{t},\mathbf{a}_{t}]\). As a state-of-the-art MBRL method in discrete environments, Dreamer V2 [10] makes use of the RSSM model [10] to learn the dynamics of the environment in the latent space by minimizing the evidence lower bound [19].
**MAMBA** Building upon Dreamer V2, MAMBA [1] also learns the environment dynamics in the latent space, and makes use of the Attention mechanism [20] to extract features for each local models from global information. To disentangle the agents' latent space and encourage the local models to be mutually independent when making predictions, MAMBA proposes to maximize the mutual information between the latent state and the previous action of the corresponding agent. In addition, the method allows communicating with the neighbouring agents via discrete messages to sustain world models during the execution phase, thus regarding world models as an instance of communication. To the best of our knowledge, MAMBA is the first model-based MARL method that improves the sample efficiency of model-free methods by an order of magnitude on the challenging StarCraft II benchmark. Nevertheless, compared to the performance of Dreamer V2 in Atari games [10] and MBPO [11] in the MuJoCo [12] benchmark, the overall improvement of sample efficiency, as well as the asymptotic performances
in some difficult tasks achieved by MAMBA are still relatively limited, which may be due to the high complexity of the dynamics of multi-agent systems.
## 3 Method
In this section, we first propose a theoretical result of how the prediction errors of agent-wise local models affect the overall policy performance, based on which we reformulate the model rollout process as a multi-agent sequential decision making problem. In the last subsection, we present the practical implementation of MAG and further detail some important steps in the algorithm.
### Theoretical Result
Since general MBRL methods optimize the policy by maximizing the expected return predicted by the model, one of the most crucial theoretical problems for MBRL is to bound the gap between the model predicted return and the true environment return. Our major theoretical result is the following theorem that bounds the performance gap:
**Theorem 1**.: _Denoting the set of local models by \(\hat{P}:=\{\hat{P}^{i}\}_{i=1}^{N}\) and the data-collecting policy obtained in the last iteration by \(\boldsymbol{\pi}_{D}\), the gap between the expected return of the model and the environment can be bounded as follows 1:_
Footnote 1: In our theoretical analysis, the reward function is assumed to be known. Note that this is a commonly adopted assumption since the sample complexity of learning the reward function with supervised learning is a lower order term compared to the one of learning the transition model [11].
\[\left|J(\boldsymbol{\pi})-J^{\hat{P}}(\boldsymbol{\pi})\right|\leq\frac{R_{ max}}{(1-\gamma)^{2}}\left(2\epsilon_{\boldsymbol{\pi}}+(1-\gamma)\sum_{t=1}^{ \infty}\gamma^{t}\epsilon_{m_{t}}\right), \tag{1}\]
_where \(\epsilon_{\boldsymbol{\pi}}:=\max_{\boldsymbol{o}}D_{TV}(\boldsymbol{\pi}_{D}(\cdot|\boldsymbol{o})\|\boldsymbol{\pi}(\cdot|\boldsymbol{o}))\) denotes the distribution shift of the joint policy between two consecutive iterations, \(\epsilon_{m_{t}}:=\mathbb{E}_{\boldsymbol{o}\sim\hat{P}_{t-1}(\cdot;\boldsymbol{\pi})}\left[\max_{\boldsymbol{a}}\sqrt{2\sum_{i=1}^{N}\mathbb{E}_{\boldsymbol{o}^{\prime}\sim\hat{P}(\cdot|\boldsymbol{o},\boldsymbol{a})}\left[\log\frac{\hat{P}^{i}(o^{i\prime}|\boldsymbol{o},\boldsymbol{a})}{\sqrt[N]{P(\boldsymbol{o}^{\prime}|\boldsymbol{o},\boldsymbol{a})}}\right]}\right]\) denotes the upper bound of the joint error of the local models at timestep \(t\) of the model rollout trajectory, \(\hat{P}_{t-1}(\boldsymbol{o};\boldsymbol{\pi})\) denotes the distribution of the joint observation at step \(t-1\) under \(\hat{P}\) and \(\boldsymbol{\pi}\), and \(R_{max}:=\max_{s,\boldsymbol{a}}R(s,\boldsymbol{a})\)._
Proof.: Please refer to Appendix A.
It is worth noting that Theorem 1 is not simply a multi-agent version of the results that have been derived in the single-agent setting [19, 10]. The key difference is that Theorem 1 does not scale up the step-wise model prediction errors (i.e., \(\epsilon_{m_{t}}\)) to their maximum over timesteps, which not only leads to a tighter bound that provides a stronger guarantee for policy improvement (see Appendix A for proof), but also indicates how the interactions between local models affect the overall performance error bound: Note that by definition the model error at step \(t\), i.e., \(\epsilon_{m_{t}}\), depends on the distribution of the joint observation at the last timestep, i.e., \(\hat{P}_{t-1}(\boldsymbol{o};\boldsymbol{\pi})\), and except for the first step of rollout trajectories, this distribution further depends on the current policies \(\boldsymbol{\pi}\) and the prediction of other local models at the last timestep, i.e., \(\hat{P}_{t}(\boldsymbol{o};\boldsymbol{\pi})=\mathbb{E}_{\boldsymbol{o}\sim\hat{P}_{t-1}(\cdot;\boldsymbol{\pi}),\boldsymbol{a}\sim\boldsymbol{\pi}(\cdot|\boldsymbol{o})}[\prod_{i=1}^{N}\hat{P}^{i}(o^{i}|\boldsymbol{o},\boldsymbol{a})]\). Thus, the errors of the local models can affect each other during multi-step model rollout, and this mutual influence can largely determine the tightness of the overall error bound.
Based on this result, a performance lower bound with regard to the policy shift and the model error can be written as: \(J(\boldsymbol{\pi})\geq J^{\hat{P}}(\boldsymbol{\pi})-C(\hat{P},\boldsymbol{\pi})\), where \(C(\hat{P},\boldsymbol{\pi})\) denotes the right hand side of Eq. (1). Then, in an ideal manner, applying the following update rule repeatedly can **guarantee the monotonic improvement** of the joint policy:
\[\hat{P},\boldsymbol{\pi}\leftarrow\operatorname*{arg\,max}_{\hat{P}, \boldsymbol{\pi}}J^{\hat{P}}(\boldsymbol{\pi})-C(\hat{P},\boldsymbol{\pi}). \tag{2}\]
The update rule in Eq. (2) is often impractical since it involves an exhaustive search in the joint state-action space to compute \(C\), and requires full-horizon rollouts in the model for estimating the accumulative model errors. Thus, similar to how algorithms like TRPO [10] approximate their theoretically monotonic version, this update rule can be approximated by maximizing the expected model return (i.e., \(J^{\hat{P}}(\boldsymbol{\pi})\)) while keeping the accumulative model error (i.e., \(\sum_{t}\gamma^{t}\epsilon_{m_{t}}\)) small. As for the policy shift term \(\epsilon_{\pi}\), though the bound suggests that this term should also be constrained, we found empirically that it is sufficient to only control the model error. This may be explained by the relatively small scale of policy shift w.r.t. the model error, as observed in [10].
By treating \(-\epsilon_{m_{t}}\) as the "reward" shared by the local models at timestep \(t\), the learning of the local models can be regarded as an optimization process of multi-step predictions of the local models, where the objective is to minimize the global prediction errors accumulated along the model rollout trajectories. Note that the definition of \(\epsilon_{m_{t}}\) involves the expectation under the current joint policy \(\boldsymbol{\pi}\), thus during model learning, the joint policy can be fixed to serve as a background environment, while the local models reversely play the role of decision-makers that should learn to maximize the "expected return" (i.e., \(-\mathbb{E}_{\boldsymbol{\pi},\hat{P}}[\sum_{t}\gamma^{t}\epsilon_{m_{t}}]\)) under the current joint policy. Building on the above theoretical intuition, we now propose the MAG framework in the next subsection.
### Problem Reformulation
To formalize the intuition of reversing the roles of the models and the agents during model learning, we first define the _model MMDP_ to reformulate the model rollout process and then outline the overall model optimization of MAG as a generic solution to the reformulated problem.
**Definition 1**.: _The model MMDP is defined by tuple \((N,\gamma,S_{m},A_{m},P_{m},R_{m})\), where \(N\) is the number of local models, \(\gamma\) is the discount factor, \(S_{m},A_{m},P_{m}\) and \(R_{m}\) are the model-state space, the model-action space, the model-transition function and the scalar model-reward function, respectively._
_At each timestep \(t\), each local model \(\hat{P}^{i}\) receives model-state \(s_{m_{t}}:=(\mathbf{o}_{t},\mathbf{a}_{t})\in S_{m}\), then takes a model-action \(a_{m_{t}}^{i}:=o_{t+1}^{i}\) according to its "policy" \(\hat{P}^{i}(a_{m_{t}}^{i}|s_{m_{t}})\). After that, the model-transition function returns the next model-state by \(P_{m}(s_{m_{t+1}}|s_{m_{t}},\mathbf{a}_{m_{t}}):=\mathbf{\pi}(\mathbf{a}_{t+1}|\mathbf{o}_{t+1})\prod_{i=1}^{N}\hat{P}^{i}(o_{t+1}^{i}|\mathbf{o}_{t},\mathbf{a}_{t})\), while the model-reward function returns a scalar reward by \(R_{m}(s_{m_{t}},\mathbf{a}_{m_{t}}):=-\sum_{i=1}^{N}\log\frac{\hat{P}^{i}(o_{t+1}^{i}|\mathbf{o}_{t},\mathbf{a}_{t})}{\sqrt[N]{P(\mathbf{o}_{t+1}|\mathbf{o}_{t},\mathbf{a}_{t})}}\), i.e., the negative of the per-step joint model error with respect to the true dynamics \(P\)._
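A gym-style sketch of Definition 1 is given below; the callables `policy` and `r_m_hat` are placeholders (assumptions, not the authors' interfaces) standing in for the frozen joint policy and the learned model-reward network of Section 3.3, and the model-reward is taken as the negative predicted error, consistent with treating \(-\epsilon_{m_{t}}\) as the reward:

```python
import numpy as np

class ModelMMDP:
    """Sketch of Definition 1: local models act, the frozen joint policy is the dynamics."""

    def __init__(self, policy, r_m_hat):
        self.policy = policy      # o -> joint action a (frozen during model learning)
        self.r_m_hat = r_m_hat    # (o, a, o_next) -> predicted global model error

    def reset(self, o0):
        self.o = np.asarray(o0)
        self.a = self.policy(self.o)
        return (self.o, self.a)   # model-state s_m = (o, a)

    def step(self, model_actions):
        """model_actions: the joint next observation predicted by the local models."""
        o_next = np.asarray(model_actions)
        reward = -self.r_m_hat(self.o, self.a, o_next)  # negative predicted error
        self.o, self.a = o_next, self.policy(o_next)    # the policy reacts to the predictions
        return (self.o, self.a), reward

# toy usage with stand-in callables
env = ModelMMDP(policy=lambda o: -o,
                r_m_hat=lambda o, a, o2: float(np.sum((o2 - o) ** 2)))
s_m = env.reset(np.zeros(4))
s_m, r_m = env.step(np.ones(4))
print(r_m)   # -4.0: four unit-sized prediction errors
```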
Using the _model MMDP_ formulation, the model learning phase can be viewed as a multi-agent learning problem, where the current joint policy is fixed to serve as the environment dynamics and the local models, now as the decision makers, interact with each other and learn to minimize the accumulative prediction error under the current joint policy. From this perspective, the local models trained by minimizing one-step prediction errors for each individual can be intuitively interpreted as greedy independent learners, which are often considered shortsighted and may struggle to learn cooperative behaviors. To minimize the accumulative global errors, the local models must instead consider the long-term global effect of immediate local predictions.
Note that in the competitive or mixed cooperative-competitive scenarios, the goal of each local model is generally to assist policy learning of only one individual agent, thus in those scenarios the local models would aim at minimizing the individual accumulative errors instead of the global summation of model errors. Consequently, in those scenarios, the model rollout process can be defined as a _Markov Game_ (Shapley, 1953), where the reward function can be defined separately for each local model. Since the major focus of this work is the fully cooperative scenarios, we leave the above discussion as a possible motivation for future work.
Similar to the optimization of the policy, the objective of model learning can be written as \(\arg\max_{\hat{P}}J^{\mathbf{\pi}}\big{(}\hat{P}\big{)}\) where \(J^{\mathbf{\pi}}\big{(}\hat{P}\big{)}:=\mathbb{E}_{\mathbf{\pi},\hat{P}}[\sum_{t}\gamma^{t}R_{m}(s_{m_{t}},\mathbf{a}_{m_{t}})]\). Due to this duality between the learning of the policies and the models, we name this overall model-based MARL method Models as AGents (MAG). Specifically, during model learning, the local models first generate samples by actively interacting with the current joint policy (now viewed as the background environment), and then optimize the expected return \(J^{\mathbf{\pi}}\big{(}\hat{P}\big{)}\) accordingly.
### Practical Implementation
To give a practical solution to the _model MMDP_, we describe the implementation of MAG in this subsection.
**The Overall Algorithm** Algorithm 1 gives the overall algorithm design of MAG. In each outer loop, the current joint policy is applied in the real environment to collect an episode of real-world data, which is then added to the environment dataset \(\mathcal{D}_{e}\) (Line 3). Then, the local models are pre-trained by the traditional one-step prediction loss \(\sum_{i=1}^{N}\|\hat{o}^{i^{\prime}}-o^{i^{\prime}}\|+\|\hat{R}^{i}-R\|\), where \(\hat{o}^{i^{\prime}},\hat{R}^{i}\sim\hat{P}^{i}(\cdot,\cdot|\mathbf{o},\mathbf{a})\) and each transition \((\mathbf{o},\mathbf{a},R,\mathbf{o}^{\prime})\) is sampled from the environment dataset (Line 4). Since the reward function \(R\) is generally not available in practice, each local model is also trained to predict the global reward given \((\mathbf{o},\mathbf{a})\). It is also worth noting that we do not directly use the above pre-trained local models to obtain the predictions during model rollout, but instead optimize the multi-step predictions of local models via a planning process. In Line 5, MAG trains the \(\hat{R}_{m}\) network to approximate \(R_{m}\), since by definition the model-reward \(R_{m}\) involves the true environment dynamics and thus cannot be directly computed. The approximation of \(R_{m}\) will be detailed later on. Lines 6-15 give the model rollout process where \(M\) parallelized trajectories of length \(k\) are generated based on different initial observations sampled from \(\mathcal{D}_{e}\). For each rollout step, before predicting the next observation, the local models first treat the current joint policy as the "dynamics" and then perform \(H\)-step (\(H\leq k\)) planning to obtain the best predictions for the current step (Lines 10-12). This is the core of MAG and will be detailed later. Finally, the pseudo samples generated by the model are added to the model dataset, which is then used for policy learning. Specifically, we adopt PPO (Schulman et al., 2017) as the underlying policy optimization method and use global information that has been processed by the Attention block as the input of the critic.
**Approximating \(R_{m}\).** We approximate \(R_{m}\) by training a neural network \(\hat{R}_{m}\) which takes transitions \((\mathbf{o},\mathbf{a},R,\mathbf{o}^{\prime})\) sampled from the environment dataset as inputs, and the model prediction errors on these transitions as labels. The prediction error of an environment transition is computed via \(\sum_{i=1}^{N}\|\hat{o}^{i^{\prime}}-o^{i^{\prime}}\|+\|\hat{R}^{i}-R\|\), where \(\hat{o}^{i^{\prime}}\) and \(\hat{R}^{i}\) are sampled from \(\hat{P}^{i}(\cdot,\cdot|\mathbf{o},\mathbf{a})\). Intuitively, \(\hat{R}_{m}\) can be seen as an indicator that informs the models where their "weaknesses" lie. Additionally, since Dreamer V2 utilizes a VAE (Kingma and Welling, 2013) and learns the dynamics in a latent space, the actual loss of the dynamics consists of a reconstruction loss of the auto-encoder and a KL divergence loss that aims to minimize the distance between the prior and the posterior of the latent state. Consequently, computing the model errors in Dreamer V2 can be interpreted as computing the prediction errors in the latent space, and thus is equivalent in principle to computing \(\sum_{i=1}^{N}\|\hat{o}^{i^{\prime}}-o^{i^{\prime}}\|+\|\hat{R}^{i}-R\|\).
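As a concrete illustration, a minimal PyTorch-style sketch of one training step for this error predictor is given below. The interfaces assumed here (each local model returning `(next_obs, reward)` samples, and `r_m_net` consuming observation-action pairs) are placeholders of ours, not identifiers from the released code:

```python
import torch
import torch.nn.functional as F

def train_r_m_step(r_m_net, local_models, batch, optimizer):
    """One gradient step for the model-error predictor.

    batch: (obs, act, rew, next_obs) sampled from the environment
    dataset; obs and next_obs have shape [B, N, obs_dim].
    """
    obs, act, rew, next_obs = batch
    with torch.no_grad():
        # Label: summed per-agent prediction error on real transitions.
        err = 0.0
        for i, model in enumerate(local_models):
            pred_obs, pred_rew = model(obs, act)  # local model i's samples
            err = err + (pred_obs - next_obs[:, i]).norm(dim=-1) \
                      + (pred_rew - rew).abs().squeeze(-1)
    loss = F.mse_loss(r_m_net(obs, act), err)  # regress onto the error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

During planning, the negative of this predicted error then plays the role of the model-reward \(R_{m}\).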
**Planning to Predict.** Since in our problem formulation the "dynamics" of the model rollout process (i.e., the current joint policy \(\mathbf{\pi}\)) is accessible, one of the simplest yet effective approaches to learning the models is Model Predictive Control (MPC) [1], which utilizes the dynamics to plan and optimize for a sequence of actions. Given the state \(s_{m_{t}}\) at step \(t\), the MPC controller first optimizes the sequence of actions \(\mathbf{a}_{m_{t:t+H}}\) over a finite horizon \(H\), and then employs the first action of the optimal action sequence, i.e., \(\mathbf{a}_{m_{H,t}}:=\operatorname*{arg\,max}_{\mathbf{a}_{m_{t:t+H}}}\mathbb{E}_{\hat{P},\mathbf{\pi}}\sum_{t^{\prime}=t}^{t+H-1}R_{m}(s_{m_{t^{\prime}}},\mathbf{a}_{m_{t^{\prime}}})\). Computing the exact \(\operatorname*{arg\,max}_{\mathbf{a}_{m_{t:t+H}}}\) requires a complete search in a space of dimension \(|A_{m}|^{N\cdot H}\), which is impractical in most scenarios. Thus, as specified in Lines 10-12 of Algorithm 1, we adopt the random-sampling shooting method [14], which generates \(L\) random action sequences, executes them respectively, and chooses the one with the highest return predicted by the dynamics. Essentially, this planning process is a simulation of the interactions between the local models and the current joint policy, according to which each local model chooses the best prediction that approximately minimizes the global model error in concert with the other local predictions, thus achieving coordination between the local models.
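A minimal sketch of this random-sampling shooting step (Lines 10-12) follows; the model and policy interfaces, and the convention that the model-reward is the negative predicted error, are our own assumptions:

```python
import torch

@torch.no_grad()
def shoot_and_select(local_models, joint_policy, r_m_net, obs, H=3, L=10):
    """Simulate L random H-step interactions between the local models and
    the fixed joint policy; keep the first-step predictions of the
    trajectory with the highest accumulated model-reward."""
    best_score, best_first = -float("inf"), None
    for _ in range(L):
        o, score, first = obs, 0.0, None
        for t in range(H):
            a = joint_policy.sample(o)                  # policy acts as "dynamics"
            preds = [m.sample(o, a) for m in local_models]
            score = score - r_m_net(o, a).sum().item()  # reward = -predicted error
            if t == 0:
                first = preds
            o = torch.stack([p[0] for p in preds], dim=1)  # predicted next obs
        if score > best_score:
            best_score, best_first = score, first
    return best_first  # used as this rollout step's predictions
```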
## 4 Experiments
In this section, we present an empirical study of MAG on the challenging StarCraft II benchmark (SMAC) (Samvelyan et al., 2019). In the first subsection, we provide the overall comparison between MAG and several baselines. Then, we provide a quantitative analysis of the multi-step prediction loss to verify the effectiveness of our algorithm design in model learning. In the last subsection, we conduct ablation studies to show how the choices of the planning horizon (i.e., \(H\) in Algorithm 1) and the number of random shooting trajectories (i.e., \(L\) in Algorithm 1) affect the overall performance.
### Comparative Evaluation
**Baselines.** We compare MAG with a model-based baseline and several model-free baselines. The model-based baseline is MAMBA (Egorov and Shpilman, 2022), a recently proposed multi-agent MBRL method that achieves state-of-the-art sample efficiency in several SMAC tasks. The model-free baselines include 1) Attention-PPO, the model-free counterpart of both MAG and MAMBA which equips
Figure 2: Comparisons against baselines on SMAC. Solid curves represent the mean of runs over 5 different random seeds, and shaded regions correspond to standard deviation among these runs. X axis denotes the number of steps taken in the real environment and Y axis denotes the win rate.
PPO (Schulman et al., 2017) with centralized attention-critics and communication during execution; 2) G2A (Liu et al., 2020), which adopts a two-stage attention architecture to realize communication between agents; and 3) CommNet (Sukhbaatar et al., 2016), which applies LSTM (Hochreiter and Schmidhuber, 1997) to learn continuous communication protocols for partially observable environments. In addition, it is worth noting that MAG is essentially a flexible plug-in component which can be employed by most model-based methods to improve the learning of the model. In our comparisons, we plug the model learning process of MAG into MAMBA.
**Environments.** The methods are evaluated on 8 maps of SMAC, ranging from _Easy_ maps (2s_vs_1sc, 2s3z, 3s_vs_3z) and _Hard_ maps (3s_vs_4z, 3s_vs_5z, 2c_vs_64zg) to _Super Hard_ maps (corridor, 3s5z_vs_3s6z).
**Implementation Details.** The implementation of MAG is built on top of MAMBA.2 For more details of the hyperparameter settings, please refer to Appendix B.
Footnote 2: Code available at [https://github.com/ZifanWu/MAG](https://github.com/ZifanWu/MAG).
**Results.** The overall results shown in Figure 2 demonstrate that MAG consistently outperforms all the baselines in the low-data regime. The comparison between MAG and MAMBA verifies the effectiveness of optimizing the multi-step prediction errors that are induced by the interactions between local models. Besides, note that except for Attention-PPO on 2c_vs_64zg, all model-free baselines fail to even achieve a non-zero win rate in such low-data regimes, showing the significant improvement in sample efficiency resulting from using a world model.
### Model Error Analysis
Based on the theoretical result presented in Section 3.1, the core idea of MAG is to reverse the roles played by the local models and the current joint policy, thus treating the models as decision-makers interacting with each other and aiming at minimizing the global accumulative model error. To validate the effectiveness of this algorithmic design, we empirically study the accumulative prediction error on the 2c_vs_64zg map. Since the real dynamics is unavailable during training, the error is approximated by a neural network trained on the environment dataset, i.e., \(\hat{R}_{m}\).
The result in Figure 3 demonstrates that as the model rollout trajectories grow longer, the accumulative model errors of MAMBA become significantly larger than those of MAG, which not only validates the effectiveness of MAG in reducing the accumulative model errors, but also provides solid support for our theoretical result derived in Section 3, i.e., the method inducing less accumulative error is likely to achieve better performance. Besides, we can also observe that in the first two steps the model errors induced by MAG are slightly larger than the errors of the baseline. This further agrees with the intuition mentioned in Section 3 by showing that MAG is able to trade the one-step greedy model error for the accumulative error by considering the long-term effect of the immediate prediction.
### Ablation Studies
According to the descriptions in Algorithm 1, apart from its easy implementation, another advantage of utilizing MPC to optimize the accumulative model-reward is that it introduces only a small number of extra hyperparameters. Specifically, there are mainly two extra hyperparameters that need to be tuned, i.e., the planning horizon \(H\) and the number of random shooting trajectories \(L\). The ablation results for these two hyperparameters are shown in Figure 4 and Figure 5, respectively, which indicate that longer planning horizons and more random trajectories consistently induce better performance. Since increasing \(H\) and \(L\) leads to a rapid growth in computational complexity and memory cost, the finally adopted settings of the two hyperparameters, which are detailed in Appendix B, can be regarded as a compromise between this practical limit and the performance.
## 5 Related Works
The research on MBRL can be roughly divided into two lines: model usage and model learning. This work focuses on model learning and adopts the most common model usage, that is, generating pseudo samples to enrich the data buffer, so as to reduce the interaction with the environment and accelerate policy learning [12, 13, 14, 15, 16, 17]. Most previous works in MBRL train the model simply by minimizing each one-step prediction error for transitions available in the environment dataset [12, 13, 15]. However, in the multi-agent setting, the dimension of the joint observation-action space grows rapidly w.r.t. the number of agents, making it impractical to learn a global model for such complex environments [13, 14, 15]. Thus, a common approach is to train a local model for each agent which takes partial observations as input and predicts relevant information for the agent's policy [13, 14]. To this end, MAMBA (Egorov and Shpilman, 2022) proposes to extract relevant information for each local model from the global information via the Attention
Figure 3: The difference of the accumulative model errors between MAMBA and MAG (the accumulative errors of MAMBA minus the accumulative errors of MAG), on the 2c_vs_64zg map.
mechanism (Vaswani et al., 2017), so as to avoid modelling the whole complicated dynamics and accelerate the model learning. Although the Attention mechanism is effective in extracting information for different local models, the fusion of local information during multi-step model rollout may lead to the propagation of prediction errors between different local models, as discussed in Section 1. To address this issue, we reformulate the model rollout process as the _model MMDP_, where the current joint policy is fixed to serve as the background environment and the local models are reversely regarded as decision-makers aiming at minimizing the global accumulative model error.
In the single-agent setting, some works have attempted to learn the model by treating the model rollout process as a sequential decision-making problem. Shang et al. (2019) propose an environment reconstruction method which models the influence of the hidden confounder on the environment by treating the platform, the user and the confounder as three agents interacting with each other. They focus on the offline setting (i.e., RL-based recommendation) and simultaneously train the model and the policy using a multi-agent imitation learning method. Xu et al. (2020) treat the model as a dual agent and analyze the error bounds of the model. They propose to train the model using imitation learning methods. Chen et al. (2022) also consider multi-step model errors, yet they mainly focus on handling counterfactual data queried by adversarial policies. Note that both (Xu et al., 2020) and (Chen et al., 2022) focus solely on model learning in the single-agent setting and do not combine with the policy learning phase.
There are also works considering multi-step prediction loss in the single-agent setting (Nagabandi et al., 2020; Luo et al., 2019). The essential difference between their multi-step loss and ours is that their loss is computed over the trajectories sampled from the environment dataset (collected by previous policies), while MAG minimizes the multi-step loss on the trajectories generated by active interactions between the local models as well as the current joint policy. From the theoretical perspective, the model error term in Theorem 1 is defined by the expectation over the current joint policy and the current local models, thus computing the multi-step loss on the trajectories generated by these current policy and current models can better approximate the lower bound, which guarantees better policy improvement.
## 6 Conclusion and Future Work
In this work, we first study how the prediction errors of agent-wise local models affect the performance lower bound, which necessitates the consideration of the interactions between models during multi-step model rollout. Based on this theoretical result, we reformulate the model rollout process as the _model MMDP_ by treating the local models as multi-step decision-makers and the current policies as the background environment. We then propose a multi-agent model learning framework, i.e., MAG, to maximize the accumulative global "model-reward" defined in the _model MMDP_ by considering the interactions between local models. We provide a practical implementation of MAG to optimize the above objective using model predictive control. Empirically, we show that MAG outperforms both model-based and model-free baselines on several challenging tasks in the StarCraft II benchmark, and the quantitative analysis of the model error further validates the effectiveness of our algorithmic design. For future work, we plan to study the problem of learning local models in the competitive or mixed cooperative-competitive scenarios, which can be seen as learning in a _Markov Game_.
Figure 4: Ablation study of the planning horizon \(H\).
Figure 5: Ablation study of the number of random shooting trajectories \(L\).
## 7 Acknowledgments
We gratefully acknowledge the support from the National Natural Science Foundation of China (No.62076259), the Fundamental and Applicational Research Funds of Guangdong province (No.2023A1515012946), and the Fundamental Research Funds for the Central Universities-Sun Yat-sen University.
|
2309.13561 | Cordyceps@LT-EDI: Patching Language-Specific Homophobia/Transphobia
Classifiers with a Multilingual Understanding | Detecting transphobia, homophobia, and various other forms of hate speech is
difficult. Signals can vary depending on factors such as language, culture,
geographical region, and the particular online platform. Here, we present a
joint multilingual (M-L) and language-specific (L-S) approach to homophobia and
transphobic hate speech detection (HSD). M-L models are needed to catch words,
phrases, and concepts that are less common or missing in a particular language
and subsequently overlooked by L-S models. Nonetheless, L-S models are better
situated to understand the cultural and linguistic context of the users who
typically write in a particular language. Here we construct a simple and
successful way to merge the M-L and L-S approaches through simple weight
interpolation in such a way that is interpretable and data-driven. We
demonstrate our system on task A of the 'Shared Task on Homophobia/Transphobia
Detection in social media comments' dataset for homophobia and transphobic HSD.
Our system achieves the best results in three of five languages and achieves a
0.997 macro average F1-score on Malayalam texts. | Dean Ninalga | 2023-09-24T06:37:54Z | http://arxiv.org/abs/2309.13561v1 | # Cordyceps@LT-EDI: Patching Language-Specific Homophobia/Transphobia Classifiers with a Multilingual Understanding
###### Abstract
Detecting transphobia, homophobia, and various other forms of hate speech is difficult. Signals can vary depending on factors such as language, culture, geographical region, and the particular online platform. Here, we present a joint multilingual (M-L) and language-specific (L-S) approach to homophobia and transphobic hate speech detection (HSD). M-L models are needed to catch words, phrases, and concepts that are less common or missing in a particular language and subsequently overlooked by L-S models. Nonetheless, L-S models are better situated to understand the cultural and linguistic context of the users who typically write in a particular language. Here we construct a simple and successful way to merge the M-L and L-S approaches through simple weight interpolation in such a way that is interpretable and data-driven. We demonstrate our system on task A of the _Shared Task on Homophobia/Transphobia Detection in social media comments_ dataset for homophobia and transphobic HSD. Our system achieves the best results in three of five languages and achieves a 0.997 macro average F1-score on Malayalam texts.
## 1 Introduction
In general, the US is seeing an increase in institutionalized transphobia in the form of bans on gender-affirming care and the exclusion of transgender youth from several sports (Kline et al., 2023). Moreover, studies show that individuals who experience institutionalized transphobia in the US report more psychological distress and more instances of suicidal ideation (Price et al., 2023). The codification of anti-trans laws likely emboldens those with transphobic beliefs to spread anti-trans rhetoric in online spaces. Berger et al. (2022) recently presented results showing that LGBTQ youth often rely on social media for improved mental health outcomes and as a source of social connection that helps close mental health disparities. Therefore, appropriate content moderation on social media platforms stands to benefit from accurate NLP systems that can identify homophobia, transphobia, and other forms of hate speech.
Good knowledge of hate speech in a particular language may not always transfer to other languages, yet many common phrases and sayings are expressed across languages. Namely, purveyors of hate speech often do not openly make hateful comments but instead rely on equally vicious code phrases, or _dogwhistles_, to evade existing content moderation systems (Henderson and McCready, 2017; Magu et al., 2017). Knowledge of the hidden meanings of these encoded sayings can create powerful tools for improving online moderation (Mendelsohn et al., 2023). These phrases can easily transcend the regions of their origin, spreading across online communities without detection in vulnerable communities. Hence,
knowledge of dogwhistles in their current form will make content moderation systems more robust to these signals as they appear in different languages in new online spaces.
Textual databases built for hate speech analysis are predominantly in English, which creates language-based performance disparities (Jahan and Oussalah, 2021; Poletto et al., 2020; Aluru et al., 2020). As Wang et al. (2020) suggested, in M-L models languages compete for model resources, potentially resulting in worse performance for low-resource languages. This performance bias possibly arises because many M-L datasets used for pretraining popular language models consist mostly of English samples, often by a wide margin (Barbieri et al., 2021; Xue et al., 2020; Ri et al., 2021). Consequently, there is a general disparity in performance when comparing English-only and M-L HSD models (Rottger et al., 2022).
Nozza et al. (2020) push for more pre-trained models in non-English languages, as they will (naturally) be best for downstream tasks in the same language domain they are trained in. However, pre-training techniques typically require large datasets to guarantee good downstream performance. Given the relative lack of language-specific data for HSD, more indirect and creative approaches are required to alleviate the performance gap between English and non-English tasks.
For our present purposes, we are presented with multiple target languages and tasked to detect levels of homophobia and transphobia for each specified language using an automated system. We introduce Language-PAINT to jointly model M-L and L-S knowledge that incorporates recent work on weight interpolation.
In summary, our main contributions are the following:
* We publicize a language-based weight interpolation approach as the next step in advancing HSD research.
* We provide a demonstration of our framework on task A of the _Shared Task on Homophobia/Transphobia Detection in social media comments_(Chakravarthi et al., 2022).
* We provide preliminary evidence suggesting that our framework is robust to label distribution shifts.
## 2 Related Work
### Language Transfer in Hate Speech Detection
Several techniques from recent years have worked on closing the performance disparity between majority and minority languages in HSD. Namely, several attempts directly translate low-resource languages into high-resource ones (Pamungkas and Patti, 2019; Ibrohim and Budi, 2019). Pelicon et al. (2021) presents a data-based approach that first trains a M-L model for HSD, similar to our training scheme's initial step. Pelicon et al. (2021) use a percentage of L-S data to finetune their model where the percentage is chosen empirically. Choudhury et al. (2017) delay training with code-mixed data, opting to first train with mono-lingual samples using the two languages used in the code-mixed data. The popular IndicNLP (Kunchukuttan et al., 2020) uses bilingual word embeddings for translation and transliteration, typically between English and a target low-resource language. Biradar et al. (2021) subsequently attempt to incorporate IndicNLP's (Kunchukuttan et al., 2020) embeddings for code-mixed HSD.
### Weight Interpolation
In this paper, we adopt the interpolation strategy of _Weight-space ensembles for fine-tuning
(WiSE-FT) (Wortsman et al., 2021). In particular, we base our framework on a subsequent variation called PAINT (Ilharco et al., 2022), constructed to incorporate the input robustness of a zero-shot model into finetuned models across diverse tasks. Formally, given a single task \(t\), PAINT takes the weights of the _zero-shot_ model \(\theta_{z}\) and of a finetuned model \(\theta_{f}\) and performs the interpolation:
\[\theta^{t}=\alpha\theta_{z}+(1-\alpha)\theta_{f}\]
with \(\alpha\in[0,1]\). In addition to the specific experiments performed by (Ilharco et al., 2022), recent work shows that averaging two (or more) language models has the potential to leverage knowledge contained in each (Gueta et al., 2023; Don-Yehiya et al., 2022; Choshen et al., 2022). However, no prior work has studied weight space ensembling based on language to the best of our knowledge.
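In implementation terms, this interpolation is an element-wise blend of two state dicts from architecturally identical models; a minimal PyTorch-compatible sketch:

```python
def interpolate(theta_z, theta_f, alpha):
    """Return alpha * theta_z + (1 - alpha) * theta_f for two
    state dicts with identical keys and tensor shapes."""
    assert theta_z.keys() == theta_f.keys()
    return {k: alpha * theta_z[k] + (1 - alpha) * theta_f[k]
            for k in theta_z}
```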
## 3 Methodology
Here, we use Bernice (DeLucia et al., 2022), a language model trained exclusively on Twitter1 data that is known to be performant on HSD across multiple languages. Indeed, many studies rely on Twitter to construct datasets of code-mixed samples for various HSD approaches (Bhat et al., 2018; Bansal et al., 2020; Farooqi et al., 2021; Choudhury et al., 2017), which, in aggregate, motivates our choice of language model.
Footnote 1: [https://twitter.com](https://twitter.com)
### Language-PAINT
Given \(k\) distinct groups of (possibly code-mixed) languages, we first train an M-L model on a dataset that includes all the languages. We continue training until saturation on a validation set, where we take the average F1 score across languages. Next, we create \(k\) additional L-S models, one for each language, each initialized with the weights of the M-L model. Finally, we perform linear interpolation between the weights of the M-L model and each of the \(k\) L-S models. The resulting \(k\) models are used for inference on each language.
In mathematical terms, Language-PAINT takes the weights of the trained L-S model \(\theta^{i}_{ls}\) and the weights of the M-L model \(\theta_{ml}\) and performs the following interpolation:
\[\theta^{i}=\alpha\theta^{i}_{ls}+(1-\alpha)\theta_{ml}.\]
where \(\theta^{i}\) is used to create predictions for the respective language \(i=1,\ldots,k\) in the test set. In practice, we select \(\alpha\) from a discrete set
Figure 1: Left: Average selected value for \(\alpha\) (thick black line) averaged over five runs for each language. Right: Average validation F1 score as a function of \(\alpha\) reported for each language, averaged over five runs.
\(\alpha\in\{0,0.1,0.2,...,1\}\) and select based on the resulting model's F1 performance on a held-out validation set.
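A sketch of this per-language grid search is given below, assuming a generic `evaluate_f1` helper (our placeholder) that scores a model on the held-out validation set:

```python
def select_alpha(model, theta_ls, theta_ml, val_data, evaluate_f1):
    """Grid-search alpha for one language on a held-out validation set."""
    best_alpha, best_f1 = 0.0, -1.0
    for alpha in [i / 10 for i in range(11)]:  # {0, 0.1, ..., 1}
        theta = {k: alpha * theta_ls[k] + (1 - alpha) * theta_ml[k]
                 for k in theta_ls}
        model.load_state_dict(theta)
        f1 = evaluate_f1(model, val_data)
        if f1 > best_f1:
            best_alpha, best_f1 = alpha, f1
    return best_alpha, best_f1
```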
### Ensembling
Our final prediction on the test sets is the ensembled output of five models trained on five stratified folds. To create these folds, we first combined the original training and development sets. Next, we divided the combined dataset into five folds using 80-20 train-validation splits, ensuring that we maintain the label distribution across each fold. We then trained a fresh model on each training and validation fold using the methodology described above. For final inference, we sum the output probabilities of the five models, selecting the class with the maximum probability as the final prediction.
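A minimal sketch of this probability-sum ensemble over the five fold models (the Hugging Face-style `.logits` interface is an assumption on our part):

```python
import torch

@torch.no_grad()
def ensemble_predict(fold_models, **inputs):
    """Sum the softmax outputs of the fold models; argmax is the label."""
    total = None
    for model in fold_models:
        probs = torch.softmax(model(**inputs).logits, dim=-1)
        total = probs if total is None else total + probs
    return total.argmax(dim=-1)
```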
### Data Cleaning
To preserve as much textual information as possible, we apply minimal additional cleaning steps. Namely, we only remove a sample if it appears in both the training and development data. In total, we removed 1695 duplicate samples, of which 54% are in Tamil and 41% are in Malayalam.
## 4 Experiments and Results
### Experimental Setup
Here, we perform experiments comparing the L-S, M-L, and LangPAINT approaches. For our first experiment, we combine the training and development sets into a single dataset. We train five models, re-sampling a random 80-20 train-validation split for each run, and report the average results on the test set. For our second experiment, we combine the training, development, and test sets into a single dataset, train ten models re-sampling a random 80-10-10 train-validation-test split for each run, and report the average of the results on each test split. For each of our two experiments, we use the _weighted_ F1 score to evaluate performance. All experiments were run on a single Tesla V4 GPU, and we provide the training hyperparameters in Table 1.
## 5 Results
The results of our experiments are given in Table 2. We can see that for most languages the L-S approach tends to perform best, with the exception of Malayalam. This is reflected in our final leaderboard results, where we used an ensemble method (see Section 3.2) that achieves a 0.997 macro average F1-score on Malayalam texts. Additionally, for this first experiment, we report the selected values of \(\alpha\) and the validation score as a function of \(\alpha\) in Figure 1.
For our second experiment, our results (see Table 2) are much more in favor of our method. Perhaps the considerably worse performance of the L-S and M-L models is due to the high label-distribution shift between the re-sampled train and test splits. Nonetheless, LangPAINT appears to be robust to this shift and is still able to maintain good performance, with the only exception being the Spanish language.
## 6 Conclusion
In this paper, we introduce LangPAINT. LangPAINT is a weight space ensembling strategy (Wortsman et al., 2021) repurposed to jointly model the multi-lingual and language-specific
\begin{table}
\begin{tabular}{c||c}
\hline \hline
**Parameter** & **Value** \\
\hline
Batch Size & 16 \\
Learning Rate & 1e-5 \\
Optimizer & Adam \\
Loss & cross-entropy \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Training Hyper-parameters
signals of homophobia and transphobia. Our experiments suggest that our method is competitive with the language-expert models and has the potential to be very robust to label distribution shifts. On task A of the _Shared Task on Homophobia/Transphobia Detection in social media comments_ (Chakravarthi et al., 2022), our system achieves the best results in three of five languages and a 0.997 macro average F1-score on Malayalam, a low-resource language.
|
2309.03799 | FisheyePP4AV: A privacy-preserving method for autonomous vehicles on
fisheye camera images | In many parts of the world, the use of vast amounts of data collected on
public roadways for autonomous driving has increased. In order to detect and
anonymize pedestrian faces and nearby car license plates in actual road-driving
scenarios, there is an urgent need for effective solutions. As more data is
collected, privacy concerns regarding it increase, including but not limited to
pedestrian faces and surrounding vehicle license plates. Normal and fisheye
cameras are the two common camera types that are typically mounted on
collection vehicles. With complex camera distortion models, fisheye camera
images were deformed in contrast to regular images. It causes computer vision
tasks to perform poorly when using numerous deep learning models. In this work,
we pay particular attention to protecting privacy while yet adhering to several
laws for fisheye camera photos taken by driverless vehicles. First, we suggest
a framework for extracting face and plate identification knowledge from several
teacher models. Our second suggestion is to transform both the image and the
label from a regular image to fisheye-like data using a varied and realistic
fisheye transformation. Finally, we run a test using the open-source PP4AV
dataset. The experimental findings demonstrated that our model outperformed
baseline methods when trained on data from autonomous vehicles, even when the
data were softly labeled. The implementation code is available at our github:
https://github.com/khaclinh/FisheyePP4AV. | Linh Trinh, Bach Ha, Tu Tran | 2023-09-07T15:51:31Z | http://arxiv.org/abs/2309.03799v1 | # FisheyePP4AV: A privacy-preserving method for autonomous vehicles on fisheye camera images
###### Abstract
In many parts of the world, the use of vast amounts of data collected on public roadways for autonomous driving has increased. In order to detect and anonymize pedestrian faces and nearby car license plates in actual road-driving scenarios, there is an urgent need for effective solutions. As more data is collected, privacy concerns regarding it increase, including but not limited to pedestrian faces and surrounding vehicle license plates. Normal and fisheye cameras are the two common camera types that are typically mounted on collection vehicles. With complex camera distortion models, fisheye camera images were deformed in contrast to regular images. It causes computer vision tasks to perform poorly when using numerous deep learning models. In this work, we pay particular attention to protecting privacy while yet adhering to several laws for fisheye camera photos taken by driverless vehicles. First, we suggest a framework for extracting face and plate identification knowledge from several teacher models. Our second suggestion is to transform both the image and the label from a regular image to fisheye-like data using a varied and realistic fisheye transformation. Finally, we run a test using the open-source PP4AV dataset. The experimental findings demonstrated that our model outperformed baseline methods when trained on data from autonomous vehicles, even when the data were softly labeled. The implementation code is available at our github: [https://github.com/khaclinh/FisheyePP4AV](https://github.com/khaclinh/FisheyePP4AV).
Autonomous vehicle, privacy preserving, fisheye, face, license plate, distillation
+
Footnote †: dagger}\) Equally contributed.
\({}^{\text{\textcircled{S}}}\) Corresponding author.
## I Motivation
Data privacy protection for autonomous vehicles is turning into a serious issue that requires attention. Companies and research teams have started gathering large amounts of data as machine learning is employed more and more in autonomous driving for development and validation. Since 2018 [7], Waymo has accumulated 5 million miles. In 2020 alone [7], Cruise collected more than 770,000 miles. With more data being collected comes more accountability for data privacy. For instance, laws such as the European GDPR [1], California CCPA [2], Chinese CSL [3], or Japanese APPI [4] must be followed when collecting data on public highways. According to these regulations, participants' personal identity information must be protected and deleted upon request. Numerous commercial tools that de-identify collected data, usually by obscuring camera images, have been released in response to these regulations. Faces and license plates are anonymized by Brighter AI1, Facebook Mapillary2, or UAI Anonymizer [5]. Celantur3 goes even further by masking people's faces, license plates, bodies, and even entire automobiles.
Footnote 1: [https://brighter.ai/video-redaction-in-automotive/](https://brighter.ai/video-redaction-in-automotive/)
Footnote 2: [https://www.mapillary.com/geospatial](https://www.mapillary.com/geospatial)
Footnote 3: [https://www.celantur.com/](https://www.celantur.com/)
To the best of our knowledge, [6] is the first open benchmarking dataset for evaluating privacy-preserving models for autonomous driving. This dataset includes 3,447 driving images with faces and license plates, covering both fisheye and regular camera images. The authors also provided a baseline model and a thorough comparison with several pretrained models in order to show the limitations of those models in the autonomous driving domain. In contrast to regular photos, fisheye camera images are rarely utilized for training and testing privacy-preserving models. Since the majority of pretrained models were trained on ordinary photos, they typically perform well only on data from a similar domain. Because these models perform poorly on fisheye images, a dedicated technique is needed to help them adapt to fisheye camera data.
In this article, we provide a new method for face and license plate detection in fisheye camera images used for autonomous driving. To address the low performance on fisheye camera images, we developed a collection of varied and realistic distortion methods to transform the data from normal to fisheye-like. To be more precise, we employ four distortion models developed by [7, 8, 9] to produce diverse fisheye-like training data. To overcome the lack of ground truth for training, we extend the baseline model put forward by [6] and present an enhanced training framework that distills knowledge from multiple teachers.
In summary, the main contributions of this work are in 3 folds:
* We propose our framework for training a model for face and license plate anonymization via model distillation from several teachers.
* We propose a fisheye transformation to convert both the image and the pseudo label supplied by the teacher models into fisheye-like data for training the student model. For improved adaptation, this fisheye transformation includes a variety of realistic distortion types.
* We train our anonymization model for self-driving cars. Although our model was trained without any manually annotated dataset, the experimental results indicate that it outperforms the baseline model from [6].
The remainder of this paper is organized as follows: we present our method in Section II. In Section III, we present our experiments and the results showing the performance of the proposed method. Finally, Section IV concludes the paper.
## II Methods
Our framework is illustrated in Figure 1. The main parts of the framework are the multiple teacher models, the PP4AV preprocessing, the fisheye transformation, and our student model. Due to the lack of ground truth, we use multiple models trained on other tasks to teach our model how to detect faces and license plates. This is done through pseudo label generation. After generating pseudo labels for a training batch, we use the pseudo label preprocessing from PP4AV to aggregate the pseudo labels and confidence scores from the various teacher models into a single set. This step leverages multiple models to produce high-quality, high-confidence pseudo labels that are used for training in the next step. The fisheye transformation then turns both the images and their pseudo labels into fisheye-like data. Finally, the resulting fisheye-like data are used to train our model.
### _Teacher and student models_.
Similar to the PP4AV baseline model, we select UAI Anonymizer, YOLO5Face [10], and RetinaFace [11] as the three teacher models for face detection, and UAI Anonymizer as the teacher model for license plate detection.
For the student model, we keep the same modified YOLOX [12] architecture presented in PP4AV [6]. Three changes are made: (1) the Focus layer is swapped out for a stem block structure; (2) the SPP block is modified to use a smaller kernel; and (3) a P6 output block with a stride of 64 is added.
### _Fisheye transformation \(\Psi\)_
We define the fisheye image transformation \(\Psi\) as a function of a set of distortion transformations, i.e., \(\Psi=\mathcal{F}(\phi_{1},\phi_{2},...,\phi_{n})\), where \(\phi_{i}\) is the \(i\)th distortion transformation, which maps normal data to fisheye-like data, and \(\mathcal{F}\) is an aggregation function used to aggregate the results of these distortion transformations. \(\mathcal{F}\) can be a discrete or continuous function. In our work, for simplicity, we set up \(\mathcal{F}\) as a random selection from the set of functions \(\phi\). In more detail, given normal data \(x\), the output of the fisheye transformation is:
\[x^{\prime}=\Psi(x)=\mathcal{F}_{\phi_{1},\phi_{2},...,\phi_{n}}(x)=\phi_{i}(x) \tag{1}\]
where \(i\) indexes the randomly selected distortion function.
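In code, this aggregation-by-random-selection is straightforward; a minimal sketch, where each \(\phi_{i}\) is assumed to map an (image, labels) pair to its distorted counterpart:

```python
import random

def make_psi(distortions):
    """Build Psi as a uniform random draw from the distortion set."""
    def psi(image, labels):
        phi = random.choice(distortions)  # pick phi_i at random
        return phi(image, labels)
    return psi

# e.g. psi = make_psi([circular, rectangular, radial, tangential])
```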
In this work, we apply four distortion functions \(\phi\) to transform normal data into fisheye-like data of the kind typically produced by autonomous vehicle cameras. The four functions are the circular [9], rectangular [7], radial [8], and tangential [8] transformations. The center of the square data is at \((0,0)\), and the four corners are at \((\pm 1,\pm 1)\). Let \((x,y)\) denote the normalized coordinates of the input. The normalized image coordinates are obtained from the pixel coordinates by translating to the optical center and dividing by the focal length in pixels. The deformed points are denoted by \((x_{d},y_{d})\).
**Circular transformation.** The conversion of a normal patch to a circular patch is expressed in the equations below. The resulting circular patch is referenced by the output coordinates \((x^{\prime},y^{\prime})\).
\[\left(\begin{array}{c}x^{\prime}\\ y^{\prime}\end{array}\right)=\left(\begin{array}{c}x\cdot\sqrt{1-\frac{y^{2 }}{2}}\\ y\cdot\sqrt{1-\frac{x^{2}}{2}}\end{array}\right) \tag{2}\]
The following equation further squeezes the circular image towards the perimeter. Here \(r=\sqrt{(x^{\prime})^{2}+(y^{\prime})^{2}}\) is the radial distance from the center of the circular patch.
\[\left(\begin{array}{c}x_{d}\\ y_{d}\end{array}\right)=\left(\begin{array}{c}x^{\prime}\cdot e^{-\frac{r^{2 }}{4}}\\ y^{\prime}\cdot e^{-\frac{r^{2}}{4}}\end{array}\right) \tag{3}\]
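A NumPy sketch of this per-point circular mapping, operating on normalized coordinates in \([-1,1]\):

```python
import numpy as np

def circular_distort(x, y):
    """Square-to-disc mapping (Eq. 2) followed by the radial squeeze (Eq. 3)."""
    xp = x * np.sqrt(1.0 - y**2 / 2.0)
    yp = y * np.sqrt(1.0 - x**2 / 2.0)
    r2 = xp**2 + yp**2
    s = np.exp(-r2 / 4.0)          # e^{-r^2/4} factor of Eq. (3)
    return xp * s, yp * s
```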
**Rectangular transformation.** The rectangular transformation function is given by:
\[\left(\begin{array}{c}x_{d}\\ y_{d}\end{array}\right)=r_{f}\left(\begin{array}{c}\sin\gamma\\ \cos\gamma\end{array}\right) \tag{4}\]
where \(r_{f}\) is determined by:
\[r=f\tan(\frac{r_{f}}{f}) \tag{5}\]
and \(\gamma\) is defined as:
\[\gamma=\arctan\frac{x}{y} \tag{6}\]
where \(r=\sqrt{x^{2}+y^{2}}\) and \(f\) is the focal length.
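Reading Eqs. (4)-(6) as the radial remapping \(x_{d}=r_{f}\sin\gamma\), \(y_{d}=r_{f}\cos\gamma\) (an interpretation on our part) and inverting Eq. (5) to obtain \(r_{f}=f\arctan(r/f)\), a sketch using the focal length quoted in Sec. III is:

```python
import numpy as np

def rectangular_distort(x, y, f=250.0):
    """Rectangular fisheye mapping of Eqs. (4)-(6)."""
    r = np.sqrt(x**2 + y**2)
    rf = f * np.arctan(r / f)   # inverse of r = f tan(r_f / f)
    gamma = np.arctan2(x, y)    # Eq. (6): gamma = arctan(x / y)
    return rf * np.sin(gamma), rf * np.cos(gamma)
```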
**Radial transformation.** The radial transformation function is given by:
\[\left(\begin{array}{c}x_{d}\\ y_{d}\end{array}\right)=\left(\begin{array}{c}x\left(1+k_{1}\cdot r^{2}+k_ {2}\cdot r^{4}+k_{3}\cdot r^{6}\right)\\ y\left(1+k_{1}\cdot r^{2}+k_{2}\cdot r^{4}+k_{3}\cdot r^{6}\right)\end{array}\right) \tag{7}\]
where \(r=\sqrt{x^{2}+y^{2}}\), and \(k_{1},k_{2},k_{3}\) are radial distortion coefficients of the lens.
**Tangential transformation.** The tangential transformation function is given by:
\[\left(\begin{array}{c}x_{d}\\ y_{d}\end{array}\right)=\left(\begin{array}{c}x+\left[2\cdot p_{1}\cdot x \cdot y+p_{2}\cdot(r^{2}+2\cdot x^{2})\right]\\ y+\left[p_{1}\cdot(r^{2}+2\cdot y^{2})+2\cdot p_{2}\cdot x\cdot y\right]\end{array}\right) \tag{8}\]
where \(p_{1}\) and \(p_{2}\) are the tangential distortion coefficients of the lens.
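Eqs. (7)-(8) are the standard Brown-Conrady distortion terms; a direct NumPy sketch using the coefficient values quoted in Sec. III:

```python
import numpy as np

def radial_distort(x, y, k1=0.2, k2=0.1, k3=0.05):
    """Radial distortion of Eq. (7)."""
    r2 = x**2 + y**2
    s = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * s, y * s

def tangential_distort(x, y, p1=0.2, p2=0.1):
    """Tangential distortion of Eq. (8)."""
    r2 = x**2 + y**2
    xd = x + (2.0 * p1 * x * y + p2 * (r2 + 2.0 * x**2))
    yd = y + (p1 * (r2 + 2.0 * y**2) + 2.0 * p2 * x * y)
    return xd, yd
```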
### _Loss function_
Similar to the PP4AV baseline model [6], the loss function is:
\[\mathcal{L}=\lambda\cdot\mathcal{L}_{iou}+\mathcal{L}_{cls}^{fl}+\mathcal{L}_{obj}^{fl}+\gamma\cdot\mathcal{L}_{KL} \tag{9}\]
where \(\mathcal{L}_{cls}^{fl}\) and \(\mathcal{L}_{obj}^{fl}\) are the focal losses for classification and objectness, respectively, \(\gamma\) is the weight factor for the KL divergence loss \(\mathcal{L}_{KL}\), and \(\lambda\) is the weight factor for the IoU loss \(\mathcal{L}_{iou}\).
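As a sketch of how this objective can be assembled (the weight values and the KL formulation over aggregated teacher soft labels are placeholders of ours, not the trained configuration):

```python
import torch.nn.functional as F

def total_loss(l_iou, l_cls_fl, l_obj_fl, student_logits, teacher_probs,
               lam=5.0, gamma=1.0):
    """Combine Eq. (9): IoU, focal classification/objectness losses, and a
    KL term distilling the teachers' aggregated soft labels."""
    l_kl = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    teacher_probs, reduction="batchmean")
    return lam * l_iou + l_cls_fl + l_obj_fl + gamma * l_kl
```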
## III Experiments
### _Settings_
In this section, we present our experiments and the performance of our model.
**Datasets.** We build our training dataset from openly available datasets for autonomous driving. Although these datasets cover a wide range of contexts, they lack face and license plate annotations, which is a disadvantage for our goal. Since we focus on public datasets for self-driving vehicles, we ignore all general-purpose public datasets, as they are unrelated to the driving scenario. Another issue is that no face or license plate annotations can be found in any of the available datasets for self-driving automobiles. In our approach, rather than annotating the data ourselves, we leverage pretrained models (which we subsequently use as teacher models) to teach our model through their predictions, as discussed above. The training and validation sets used in this experiment are summarized in Table I. In total, 62,927 images from six public datasets are combined for training, while 10,250 images are used for validation. We use the fisheye data and its annotations from [6] for evaluation. This evaluation set consists of 244 fisheye camera images, originally provided by WoodScape [19], that are well annotated with both faces and license plates.
**Pseudo label transformation.** Each label was altered to the new coordinate system for pseudo label processing. Four corner points and four edge midpoints were chosen for each bounding box, totaling eight points. These eight points were mapped to the coordinate system of the fisheye image using the same transformation function as the associated image. In the coordinate system of the fisheye image, the new points constitute a polygon. These additional eight points were used to find the smallest axis-aligned bounding rectangle, which was then saved as the new bounding-box label for the fisheye image.
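A compact sketch of this eight-point label mapping, applicable to any of the point-wise distortion functions above:

```python
import numpy as np

def transform_bbox(bbox, distort_fn):
    """Warp a box through the image's fisheye transform and return the
    smallest axis-aligned rectangle enclosing the eight warped points."""
    x0, y0, x1, y1 = bbox
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    pts = np.array([(x0, y0), (x1, y0), (x1, y1), (x0, y1),   # corners
                    (xm, y0), (x1, ym), (xm, y1), (x0, ym)])  # midpoints
    xd, yd = distort_fn(pts[:, 0], pts[:, 1])
    return float(xd.min()), float(yd.min()), float(xd.max()), float(yd.max())
```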
**Experiment setting.** For evaluation metrics, we use the standard AP50 and AR50 metrics, commonly used in object detection to measure average precision and average recall at IoU = 0.5. For the fisheye transformation, we set \(p_{1}=0.2\) and \(p_{2}=0.1\) for the tangential distortion; \(k_{1}=0.2\), \(k_{2}=0.1\), \(k_{3}=0.05\), and \(k_{4}=0.05\) for the radial distortion; and a focal length of 250 for the rectangular distortion. As a preprocessing step, data augmentation is used to increase the robustness of training. In particular, each of the following augmentations is applied with a 50% probability: horizontal flips, brightness adjustment (0.2), saturation adjustment (0.2), contrast adjustment (0.2), hue jitter (0.1), mosaic, rotation, and shear. We train our model
\begin{table}
\begin{tabular}{|l|c|c|c|}
\hline
Dataset & Resolution & Train & Val \\
\hline
Cityscapes [13] & 2,048\(\times\)1,024 & 2,921 & 488 \\
\hline
BDD100K [14] & 1,280\(\times\)720 & 41,568 & 7,370 \\
\hline
Comma2K19 [15] & 1,164\(\times\)874 & 6,358 & 1,414 \\
\hline
Bosch [16] & 2,464\(\times\)2,056 & 3,500 & 750 \\
\hline
Leddar PixSet [17] & 1,440\(\times\)1,080 & 1,062 & 228 \\
\hline
KITTI [18] & 1,240\(\times\)376 & 7,518 & 0 \\
\hline
 & **Total** & 62,927 & 10,250 \\
\hline
\end{tabular}
\end{table}
TABLE I: The overview and number of images in the training and validation sets of the model.
Fig. 1: Illustration of our framework. Due to the lack of ground truth, we leverage multiple models trained on other tasks for face and plate detection as teacher models, which are used to distill knowledge into our model via pseudo label generation. The fisheye transformation \(\Psi\) is used to transform pseudo labels and images into fisheye-like pseudo labels and images.
with the following hyperparameter settings: batch size of 32, images resized to 640\(\times\)640, learning rate of 0.0001, and the SGD optimizer. All experiments were conducted on an NVIDIA DGX A100 server with 4 GPUs.
### _Quantitative results_
Table II compares our model with PP4AV [6] on AP50 and AR50 for face and plate objects individually. The results show that our model outperforms the baseline PP4AV on both AP50 and AR50. For face detection, our model improves AP50 and AR50 by 1.89% and 1.21%, respectively. For license plate detection, the improvement is minor at AP50, with a 0.24% increase, and reaches a 1.94% increase at AR50. These promising results show that our method adapts effectively to a specific domain such as fisheye camera data in autonomous vehicles.
### _Qualitative analysis_
For qualitative analysis, we randomly select some images from the test set. Figure 2 compares our model to the baseline model from [6] and to the ground truth. Because the first image is highly distorted, the model of [6] fails to detect almost all of the plates. Our model, which was trained on fisheye-like data, detects the majority of the clearly visible deformed plates. Compared to the ground truth, our model misses only one plate that is far away and not clearly visible. In the right image, [6] misses a human face on the boundary of the fisheye image. When a human face is positioned on the boundary of a significantly distorted image, as in this sample, the face's shape is also strongly warped. Our model detects it correctly, in agreement with the ground truth. The results on the sampled data show that a model trained only on normal images may not recognize distorted objects in fisheye images. This demonstrates the need to adapt and train models on similar types of data, such as fisheye-like images, for better performance on fisheye camera images in autonomous driving.
## IV Conclusions
In this paper, we present a method for data anonymization on fisheye camera images for autonomous driving that complies with regulations such as the EU GDPR, CCPA, and CSL. Due to the lack of ground truth for training, we propose a framework that leverages knowledge from multiple teacher models, trained on other face and license plate detection tasks, by distilling this information into our model via autonomous vehicle data. Furthermore, we propose a fisheye transformation that converts normal data, i.e., an image and its label, into fisheye-like data by applying diverse and realistic distortion functions commonly applicable to autonomous vehicle data. The experimental results on the fisheye test set of PP4AV show that our model significantly improves performance compared to the baseline model. This promising result shows that our proposed method adapts efficiently to object detection on fisheye camera images. In future work, we will consider extending our framework to other computer vision tasks on fisheye camera data.
## References
* [1] "General data protection regulation (gdpr)," [https://gdpr-info.eu/](https://gdpr-info.eu/).
* [2] "California consumer privacy act," [https://www.oag.ca.gov/sites/all/files/agweb/pdfs/privacy/oal-sub-final-text-of-regs.pdf](https://www.oag.ca.gov/sites/all/files/agweb/pdfs/privacy/oal-sub-final-text-of-regs.pdf).
* [3] "China cybersecurity law (csl)," [http://www.cac.gov.cn/2016-11/07/c_1119867116.htm](http://www.cac.gov.cn/2016-11/07/c_1119867116.htm).
* [4] "Act on the protection of personal information - appi," [https://www.ppc.go.jp/files/pdf/APPL_english.pdf](https://www.ppc.go.jp/files/pdf/APPL_english.pdf).
* [5] "Understand ai anonymizer," [https://github.com/understand-ai/anonymizer](https://github.com/understand-ai/anonymizer), accessed: 2022-07-12.
* [6] L. Trinh, P. Pham, H. Trinh, N. Bach, D. Nguyen, G. Nguyen, and H. Nguyen, "PP4AV: A benchmarking dataset for privacy-preserving autonomous driving," in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_, January 2023, pp. 1206-1215.
* [7] J. Kannala and S. Brandt, "A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses," _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 28, no. 8, pp. 1335-1340, 2006.
* [8] J. Heikkila and O. Silven, "A four-step camera calibration procedure with implicit image correction," in _Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition_, 1997, pp. 1106-1112.
* [9] J. Fu, I. V. Bajic, and R. G. Vaughan, "Datasets for face and object detection in fisheye images," _Data in Brief_, vol. 27, p. 104752, 2019.
* [10] D. Qi, W. Tan, Q. Yao, and J. Liu, "YOLO5Face: Why reinventing a face detector," _arXiv preprint arXiv:2105.12931_, 2021.
* [11] J. Deng, J. Guo, E. Ververas, I. Kotsia, and S. Zafeiriou, "Retinaface: Single-shot multi-level face localisation in the wild," _Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition_, 2020.
* [12] Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, "Yolox: Exceeding yolo series in 2021," _arXiv preprint arXiv:2107.08430_, 2021.
* [13] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The cityscapes dataset for semantic urban scene understanding," _Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition_, vol. 2016-December, 2016.
* [14] F. Yu, H. Chen, X. Wang, W. Xian, Y. Chen, F. Liu, V. Madhavan, and T. Darrell, "Bdd100k: A diverse driving dataset for heterogeneous multitask learning," _Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition_, 2020.
* [15] H. Schafer, E. Santana, A. Haden, and R. Biasini, "A commute in data: The comma2k19 dataset," _arXiv:1812.05752_, 2018.
* 2019 International Conference on Computer Vision Workshop, ICCVW 2019_, 2019.
* [17] J. L. Deziel, P. Merriaux, F. Tremblay, D. Lessard, D. Plourde, J. Stanguennee, P. Goulet, and P. Olivier, "Pixset : An opportunity for 3d computer vision to go beyond point clouds with a full-waveform lidar dataset," _IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC_, vol. 2021-September, 2021.
* [18] "Vision meets robotics: The kitti dataset," _International Journal of Robotics Research_, vol. 32, 2013.
* [19] S. Yogamani, C. Witt, H. Rashed, S. Nayak, S. Mansoor, P. Varley, X. Perrotton, D. Odea, P. Perez, C. Hughes, J. Horgan, G. Sistu, S. Chennupati, M. Uricar, S. Milz, M. Simon, and K. Amende, "Woodscape: A multi-task, multi-camera fisheye dataset for autonomous driving," _Proceedings of the IEEE International Conference on Computer Vision_, vol. 2019-October, 2019.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{} & \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Metrics} \\
\cline{3-4}
 & & AP50 & AR50 \\
\hline \hline
\multirow{2}{*}{Face} & PP4AV [6] & 59.2\% & 63.92\% \\
\cline{2-4}
 & **Our** & **61.09\%** & **65.13\%** \\
\hline \hline
\multirow{2}{*}{Plate} & PP4AV [6] & 49.53\% & 58.17\% \\
\cline{2-4}
 & **Our** & **49.77\%** & **60.11\%** \\
\hline
\end{tabular}
\end{table}
TABLE II: Comparison of performance on the fisheye test set of PP4AV [6] in terms of Average Precision (AP) and Average Recall (AR) scores. |
2309.03941 | The two-mode puzzle: Confronting self-interacting neutrinos with the
full shape of the galaxy power spectrum | A cosmological scenario in which the onset of neutrino free streaming in the
early Universe is delayed until close to the epoch of matter-radiation equality
has been shown to provide a good fit to some cosmic microwave background (CMB)
data, while being somewhat disfavored by Planck CMB polarization data. To
clarify this situation, we investigate in this paper CMB-independent
constraints on this scenario from the Full Shape of the galaxy power spectrum.
Although this scenario predicts significant changes to the linear matter power
spectrum, we find that it can provide a good fit to the galaxy power spectrum
data. Interestingly, we show that the data display a modest preference for a
delayed onset of neutrino free streaming over the standard model of cosmology,
which is driven by the galaxy power spectrum data on mildly non-linear scales.
This conclusion is supported by both profile likelihood and Bayesian
exploration analyses, showing robustness of the results. Compared to the
standard cosmological paradigm, this scenario predicts a significant
suppression of structure on subgalactic scales. While our analysis relies on
the simplest cosmological representation of neutrino self-interactions, we
argue that this persistent - and somehow consistent - picture in which neutrino
free streaming is delayed motivates the exploration of particle models capable
of reconciling all CMB, large-scale structure, and laboratory data. | David Camarena, Francis-Yan Cyr-Racine, John Houghteling | 2023-09-07T18:00:01Z | http://arxiv.org/abs/2309.03941v2 | # The two-mode puzzle:
###### Abstract
A cosmological scenario in which the onset of neutrino free streaming in the early Universe is delayed until close to the epoch of matter-radiation equality has been shown to provide a good fit to some cosmic microwave background (CMB) data, while being somewhat disfavored by Planck CMB polarization data. To clarify this situation, we investigate in this paper CMB-independent constraints on this scenario from the Full Shape of the galaxy power spectrum. Although this scenario predicts significant changes to the linear matter power spectrum, we find that it can provide a good fit to the galaxy power spectrum data. Interestingly, we show that the data display a modest preference for a delayed onset of neutrino free streaming over the standard model of cosmology, which is driven by the galaxy power spectrum data on mildly non-linear scales. This conclusion is supported by both profile likelihood and Bayesian exploration analyses, showing robustness of the results. Compared to the standard cosmological paradigm, this scenario predicts a significant suppression of structure on subgalactic scales. While our analysis relies on the simplest cosmological representation of neutrino self-interactions, we argue that this persistent -- and somehow consistent -- picture in which neutrino free streaming is delayed motivates the exploration of particle models capable of reconciling all CMB, large-scale structure, and laboratory data.
## I Introduction
Although neutrinos are the least understood particles in the Standard Model (SM), they play a crucial role in the evolution of the Universe. Since they couple gravitationally to everything else, their presence impacts cosmological observables on a broad range of scales, leading to observational features that can be used to constrain some of their hitherto unknown properties. For instance, one can use cosmological data to provide important -- and competitive -- constraints on the sum of the masses of neutrinos [see e.g. Refs. 1; 2; 3].
Besides their mass, cosmological observables can also be used to study new interactions in the neutrino sector. In the SM picture, neutrinos decouple and begin to free-stream when the temperature of the cosmic plasma drops to \(\sim 1.5\) MeV. Since they are still gravitationally coupled, the free-streaming neutrinos effectively tug on the baryon-photon fluid, modifying the evolution of the cosmological perturbations. Such modifications, which appear as a phase shift and a suppression of the amplitude of the cosmic microwave background (CMB) power spectra [4; 5], have been used to constrain the nature of the free streaming of neutrinos [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] through the \(c_{\rm eff}\) and \(c_{\rm vis}\) parametrization [17].
Nonetheless, the presence of new physics in the neutrino sector can significantly alter the onset of neutrino free streaming and, therefore, leave particular imprints in the Universe that cannot simply be modeled by the \(c_{\rm eff}\) and \(c_{\rm vis}\) fluid approximation, calling for a more realistic physical representation of the neutrino decoupling [18; 19]. Thus, in the last few years, several models with non-standard neutrinos have been considered to study the cosmological consequences of altering the free streaming of neutrinos [see, for instance, Refs. 18-71].
Interestingly, the analyses of self-interacting neutrino models have revealed that some CMB data allow for a significant delay in the onset of the free streaming of neutrinos [18; 19; 23; 24; 28; 35; 40; 41; 50; 52; 61; 70; 71] and agree with two divergent pictures of the Universe: i) a paradigm where neutrinos moderately interact (\({\rm MI}_{\nu}\)), cosmologically resembling the SM neutrinos, and ii) a cosmological picture where neutrinos strongly interact among themselves (\({\rm SI}_{\nu}\)). On the other hand, Planck CMB polarization data [1] seems to disfavor the simplest representation of the \({\rm SI}_{\nu}\) mode [50; 51; 52; 61], which contrasts with data from the Atacama Cosmology Telescope (ACT) [74], which tends to favor the \({\rm SI}_{\nu}\) mode [70; 71]. Moreover, analyses of the phase of the CMB peaks [75] and of the baryon acoustic oscillations (BAO) [76; 77] have shown consistency with the expected phase shift from SM free-streaming neutrinos, further complicating the picture.
The existence of the \({\rm SI}_{\nu}\), which was first reported almost a decade ago [18; 23] and has persisted in various comprehensive analyses, including those using the most recent cosmological data [70; 71], has so far been entirely driven by CMB data, with little information about the large-scale structure (LSS) of the Universe included in these analyses (beyond the BAO geometric distances and CMB lensing).
This is significant as, due to a difference in the amplitude, \(A_{\rm s}\), and tilt, \(n_{\rm s}\), of the primordial curvature power spectrum favored by the \({\rm SI}_{\nu}\), this alternate cosmological scenario predicts conspicuous changes to the linear matter power spectrum [40] that could significantly impact the LSS of the Universe.
In this work, we investigate how a delayed onset of neutrino free streaming produced by the presence of novel self-interactions impacts the LSS of the Universe. Using the so-called Full Shape of the galaxy power spectrum [78; 79] along with Big Bang Nucleosynthesis (BBN) data [80; 81] and an effective four-fermion interaction to model self-interacting neutrinos, we show below, for the first time, that the large-scale distribution of galaxies displays a modest preference for new interactions in the neutrino sector. Crucially, this preference points to the same interaction strength favored by some CMB data, indicating that the evidence for new neutrino interactions is not likely driven by fortuitous noise features in these data sets. Since we focus here on the impact of neutrino interactions on LSS, we keep our analyses CMB agnostic. We will explore in an upcoming work whether self-interacting neutrinos create a statistically consistent scenario for both CMB and LSS data at the same time [82] (see also Ref. [83] for a similar recent analysis).
This paper is organized as follows. In Sec. II, we present the phenomenological neutrino interaction model used here and discuss its cosmological implications at linear scales. We then discuss the imprints that self-interacting neutrinos leave on the galaxy power spectrum in Sec. III. The data and methodology used in this paper are presented in Sec. IV, while our results and discussion are shown in Sec. V. Finally, we conclude in Sec. VI.
## II Phenomenological model of self-interacting neutrinos
Novel neutrino self-interactions beyond the SM delay the onset of their free streaming, hence suppressing the only source of anisotropic stress at early times. This suppression, in turn, impacts the evolution of the gravitational potentials, leaving significant modifications on both the evolution of photon and matter fluctuations [4; 5]. From the cosmological point of view, a delayed onset of neutrino free streaming can be phenomenologically embodied by the simplest representation of self-interacting neutrinos: an effective four-fermion interaction characterized by a dimensionful Fermi-like constant \(G_{\rm eff}\) coupling universally to all neutrino flavors. This leads to an interaction rate of the form
\[\Gamma_{\nu}\equiv aG_{\rm eff}^{2}T_{\nu}^{5}\,, \tag{1}\]
with \(T_{\nu}\) being the background temperature of neutrinos and \(a\) the scale factor describing the expansion of the Universe.
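As a quick, self-contained illustration of Eq. (1), one can estimate the neutrino decoupling temperature by equating the physical interaction rate \(G_{\rm eff}^{2}T_{\nu}^{5}\) to the Hubble rate in a radiation-dominated Universe. The Python sketch below uses standard textbook values for the Planck mass and \(g_{*}\); these numbers are assumptions of the example, not inputs of the analysis presented here.

```python
import numpy as np

# Rough decoupling estimate: solve G_eff^2 T^5 = H(T), with
# H(T) ~= 1.66 sqrt(g_*) T^2 / M_pl during radiation domination
# (natural units, temperatures in MeV).
M_PL = 1.22e22    # Planck mass in MeV (textbook value)
G_STAR = 10.75    # relativistic degrees of freedom near T ~ 1 MeV

def T_decoupling(G_eff):
    """Temperature (MeV) at which the interaction rate equals H."""
    return (1.66 * np.sqrt(G_STAR) / (G_eff**2 * M_PL)) ** (1.0 / 3.0)

# The weak-interaction value G_eff ~ 1.17e-11 MeV^-2 recovers the
# standard decoupling temperature of ~1.5 MeV quoted above.
print(T_decoupling(1.166e-11))   # ~1.5 MeV
# A much larger coupling delays decoupling to far lower temperatures:
print(T_decoupling(10**-1.5))    # below a keV (g_* is inexact there)
```

Larger values of \(G_{\rm eff}\) thus push the onset of free streaming to much later times, which is the effect exploited throughout this work.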
As shown in Ref. [40], this simple representation can indeed serve as a proxy to study the changes that a delayed onset of neutrino free streaming produces on cosmological observables. However, we stress that Eq. (1), which can be thought of as arising from neutrinos universally interacting via a massive mediator (see e.g. Ref. [84] for a review), is unlikely to correspond to a realistic configuration of self-interacting neutrinos. Indeed, when taken at face value, results from the study of supernovae [85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103] (see also Refs. [99; 100]); BBN [101; 102; 103]; IceCube experiments [104; 105; 106; 107; 108; 109; 110; 111]; particle colliders [107; 108; 109; 110; 111]; and decay kinematics of mesons, leptons, tritium, and gauge bosons [110; 111; 112; 113; 114; 115] exclude the flavor-universal parameter space of \(G_{\rm eff}\) capable of modifying the evolution of perturbations. Additionally, we note that in this simple flavor-independent framework, the \({\rm SI}_{\nu}\) mode is significantly disfavored by the Planck polarization data [50; 51; 52; 61]. Taken together, these constraints indicate that more complex flavor-dependent interactions are very likely required to realize a viable model. Nonetheless, since current data on the large-scale distribution of galaxies do not yet have the same constraining power as the CMB and are thus unlikely to be sensitive to the minute details of the interactions, we perform our analysis here using the flavor-universal rate given in Eq. (1). The use of this model also has the advantage of allowing for a direct comparison with previous results, including those obtained from the analysis of data from the ACT [70].
Besides using \(G_{\rm eff}\) to control the onset of neutrino free streaming, we also consider the effective number of relativistic species, \(N_{\rm eff}\), as a free parameter of the model. For the sake of simplicity, we fix the total mass of neutrinos to \(\Sigma m_{\nu}=0.06\) eV and assume a single massive neutrino containing all the mass instead of several degenerate massive neutrinos. Although \(\Sigma m_{\nu}\) has a crucial role in the analysis of CMB data, we note that current LSS data only weakly constrain this parameter [116; 117]. Therefore, including neutrino masses in the analysis will only increase the uncertainties of our final results. Yet, as shown in appendix A, the assumption of a fixed mass does not affect our conclusions. We stress that \(G_{\rm eff}\) is given in units of \({\rm MeV}^{-2}\), and that the usual Fermi constant corresponds to the value \(G_{\rm eff}\sim{\cal O}(10^{-11})\ {\rm MeV}^{-2}\). Unless otherwise stated, we assume here that models lying in the range \(-5.5\leq\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\leq-2.5\) belong to the \({\rm MI}_{\nu}\) regime, and cosmologies following \(-2.5<\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\leq 0.5\) are associated with the \({\rm SI}_{\nu}\) regime.
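For bookkeeping, the regime boundaries quoted above can be encoded in a small helper; this is a trivial sketch whose function name and structure are illustrative only.

```python
# Regime boundaries for log10(G_eff / MeV^-2) as quoted in the text.
MI_RANGE = (-5.5, -2.5)   # moderately interacting (MI_nu)
SI_RANGE = (-2.5, 0.5)    # strongly interacting (SI_nu)

def regime(log10_Geff):
    """Classify a coupling value into the MI_nu or SI_nu regime."""
    if MI_RANGE[0] <= log10_Geff <= MI_RANGE[1]:
        return "MI_nu"
    if SI_RANGE[0] < log10_Geff <= SI_RANGE[1]:
        return "SI_nu"
    return "outside prior range"

print(regime(-4.0))   # MI_nu
print(regime(-1.3))   # SI_nu
```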
To offer a fair comparison (in terms of degrees of freedom) between our case of study and the typical picture of neutrinos starting to free stream around \(T\sim 1.5\) MeV, here, we use the \(\Lambda{\rm CDM}+N_{\rm eff}\) model (with two massless and one massive neutrino with \(\Sigma m_{\nu}=0.06\) eV) to represent the standard paradigm. When necessary, we use the subscript (superscript) \(I_{\nu}\) to denote self-interacting neutrino quantities, while simply using \(\Lambda{\rm CDM}\) to denote quantities related to the \(\Lambda{\rm CDM}+N_{\rm eff}\) model.
### Collision term and Boltzmann equations
As is usually done for ultra-relativistic particles [118], we expand the scalar neutrino temperature fluctuations with wavenumber \(\mathbf{k}\), proper momentum \(\mathbf{p}\), and conformal time \(\tau\) in terms of Legendre polynomials \(P_{\ell}\) as
\[\frac{\delta T_{\nu}}{T_{\nu}}(\mathbf{k},\mathbf{p},\tau)=\frac{1}{4}\sum_{ \ell=0}^{\infty}(-i)^{\ell}(2\ell+1)\nu_{\ell}(k,p,\tau)P_{\ell}(\mu), \tag{2}\]
where \(p=|\mathbf{p}|\), \(k=|\mathbf{k}|\), and \(\mu\) is the cosine of the angle between \(\mathbf{k}\) and \(\mathbf{p}\). Using the self-interaction rate given in Eq. (1) and the above decomposition, we can compute the collision term entering in the right-hand side of the Boltzmann equations to later derive the set of equations that will describe the evolution of cosmological perturbations in the presence of self-interacting neutrinos. Under the thermal approximation, the collision term at first order for the \(\nu\nu\to\nu\nu\) process is given by [40]
\[C_{\nu}\left[\mathbf{p}\right] =\frac{G_{\mathrm{eff}}^{2}T_{\nu}^{6}}{4}\frac{\partial\ln f_{ \nu}^{(0)}}{\partial\ln p}\sum_{\ell=0}^{\infty}(-i)^{\ell}(2\ell+1)\nu_{\ell} P_{\ell}(\mu) \tag{3}\] \[\times\left[A\left(\frac{p}{T_{\nu}}\right)+B_{\ell}\left(\frac{ p}{T_{\nu}}\right)-2D_{\ell}\left(\frac{p}{T_{\nu}}\right)\right]\,,\]
where \(f_{\nu}^{(0)}\) is the background (Fermi-Dirac) neutrino distribution function, and \(A(x)\), \(B_{\ell}(x)\), and \(D_{\ell}(x)\) are functions related to the different integral terms in the collision term [see App. C and D in Ref. 40, for a detailed derivation of the collision term].
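Decompositions such as Eq. (2) are inverted numerically using the orthogonality of Legendre polynomials. The sketch below uses the generic normalization \(f_{\ell}=\tfrac{2\ell+1}{2}\int_{-1}^{1}f(\mu)P_{\ell}(\mu)\,{\rm d}\mu\), which differs from the phase convention of Eq. (2) by \(\ell\)-dependent factors; it is meant only to illustrate the numerical operation.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Project an angular function f(mu) onto Legendre multipoles with
# Gauss-Legendre quadrature (exact here for low-degree polynomials).
mu, w = leggauss(32)   # nodes and weights on [-1, 1]

def multipole(f, ell):
    P_ell = Legendre.basis(ell)(mu)
    return 0.5 * (2 * ell + 1) * np.sum(w * f(mu) * P_ell)

# Example: f(mu) = 1 + mu has only ell = 0 and ell = 1 components.
f = lambda m: 1.0 + m
print([round(multipole(f, ell), 6) for ell in range(4)])  # ~[1, 1, 0, 0]
```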
Adopting the conformal Newtonian gauge, and using the collision term defined above, we derive the Boltzmann equations for massive neutrinos
\[\frac{\partial\nu_{\ell}}{\partial\tau} =-\frac{kq}{\epsilon}\left(\frac{\ell+1}{2\ell+1}\nu_{\ell+1}- \frac{\ell}{2\ell+1}\nu_{\ell-1}\right)+4\left[\frac{\partial\phi}{\partial \tau}\delta_{\ell 0}\right. \tag{4}\] \[+\frac{k}{3}\frac{\epsilon}{q}\psi\delta_{\ell 1}\bigg{]}-\frac{ \Gamma_{\nu}}{f_{\nu}^{(0)}}\left(\frac{T_{\nu,0}}{q}\right)\left[A\left( \frac{q}{T_{\nu,0}}\right)\right.\] \[\left.+B_{\ell}\left(\frac{q}{T_{\nu,0}}\right)-2D_{\ell}\left( \frac{q}{T_{\nu,0}}\right)\right]\nu_{\ell}\,,\]
where \(\phi\) and \(\psi\) are the scalar perturbations of the conformal Newtonian gauge, \(\nu_{\ell}\) is the perturbation variable expanded in Legendre polynomials as in Eq. (2), \(T_{\nu,0}\) is the current temperature of neutrinos, \(q=ap\) is the comoving momentum, and \(\epsilon=\sqrt{q^{2}+a^{2}m_{\nu}^{2}}\) with \(m_{\nu}\) being the mass of the neutrino species.
Analogously, the Boltzmann equations for massless neutrinos can be derived by setting \(\epsilon=q\) and averaging Eq. (4) over momentum with \(f_{\nu}^{(0)}\),
\[\frac{\partial F_{\ell}}{\partial\tau} =-k\left(\frac{\ell+1}{2\ell+1}F_{\ell+1}-\frac{\ell}{2\ell+1}F_{\ell-1}\right)+4\left[\frac{\partial\phi}{\partial\tau}\delta_{\ell 0}+\frac{k}{3}\psi\delta_{\ell 1}\right]-\alpha_{\ell}\Gamma_{\nu}F_{\ell}\,, \tag{5}\]
where \(F_{\ell}\) is the perturbation variable for massless neutrinos as defined in Ref. [118] and the collision term is characterized by:
\[\alpha_{\ell}=\frac{120}{7\pi^{4}}\int_{0}^{\infty}\mathrm{d}x\,x^{2}\left[A( x)+B_{\ell}(x)-2D_{\ell}(x)\right]\,. \tag{7}\]
We highlight that Eqs. (4) and (5) satisfy energy and momentum conservation, since the collision kernels obey \(A+B_{\ell}-2D_{\ell}=0\) for \(\ell=\{0,1\}\), implying \(\alpha_{\ell}=0\) for these multipoles.
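Numerically, Eq. (7) is a weighted momentum integral over the collision kernels. The sketch below shows its structure only; the kernel functions are placeholders standing in for the expressions derived in Ref. [40], which are not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

def alpha_ell(A, B_ell, D_ell, ell):
    """Eq. (7): alpha_ell = 120/(7 pi^4) * int x^2 (A + B_ell - 2 D_ell) dx.

    A, B_ell, D_ell are user-supplied callables (placeholders here);
    the true kernels are derived in App. C and D of Ref. [40]."""
    integrand = lambda x: x**2 * (A(x) + B_ell(x, ell) - 2.0 * D_ell(x, ell))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return 120.0 / (7.0 * np.pi**4) * val

# For the true kernels, energy and momentum conservation guarantee
# alpha_0 = alpha_1 = 0, a useful cross-check of any implementation.
```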
We implemented Eqs. (4) and (5) in the cosmological code CLASS-PT[119; 120], which uses Eulerian perturbation theory to compute the galaxy power spectrum at mildly non-linear scales. To avoid the stiffness of the Boltzmann equations at early times, when the mean free path of self-interacting neutrinos is much smaller than the Hubble horizon, we use the so-called tight-coupling approximation [121]. More precisely, we use the tight-coupling approximation at times where \(\Gamma_{\nu}>10^{3}\mathcal{H}\), with \(\mathcal{H}\) being the Hubble rate in conformal time\({}^{1}\).
Footnote 1: Our modified version of CLASS-PT, as well as a more detailed description of our numerical implementation, is available at [https://github.com/davidcato/class-interacting-neutrinos-PT](https://github.com/davidcato/class-interacting-neutrinos-PT).
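The switching criterion itself is straightforward to implement once the background rates are tabulated; a minimal sketch (array names are illustrative):

```python
import numpy as np

def tight_coupling_mask(Gamma_nu, H_conf, threshold=1e3):
    """True where the tight-coupling approximation is used,
    i.e. where Gamma_nu > threshold * conformal Hubble rate.
    Gamma_nu and H_conf are arrays on the same conformal-time grid."""
    return np.asarray(Gamma_nu) > threshold * np.asarray(H_conf)
```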
### Cosmological implications of a delayed free streaming
In this Section, we briefly review how a delay in the free streaming of neutrinos impacts the cosmological observables, namely, the CMB and linear matter power spectrum. We refer the reader to Ref. [40] for a thorough discussion of how the assumption of Eq. (1) impacts the evolution of cosmological perturbations. All models discussed in this section use the same values of the cosmological parameters, except for \(G_{\rm eff}\) (and, where indicated, \(A_{\rm s}\) and \(n_{\rm s}\)), whose values are given in the corresponding labels.
The changes that self-interacting neutrinos produce in the CMB power spectrum can be explained in terms of the gravitational pull produced by free-streaming radiation species. In the standard paradigm, after their decoupling, neutrinos travel supersonically across the Universe, gravitationally pulling the photon-baryon wave toward larger scales [4]. This gravitational tug felt by the photon-baryon wave results in a phase shift toward smaller \(\ell\) -- larger scales -- and a reduction of the amplitude of the CMB power spectra [4; 5; 18]. Contrastingly, a delay in the free streaming of neutrinos boosts the amplitude of the CMB power spectra and leads to a phase shift toward larger \(\ell\) -- smaller scales. This behavior also manifests as a small reduction of the sound horizon scale of photons, which marginally helps to accommodate larger values of \(H_{0}\) in the CMB power spectrum [see Ref. 52, for instance].
On the other hand, the changes that self-interacting neutrinos imprint on the evolution of dark matter fluctuations, and consequently, on the matter power spectrum, are better understood by examining the gravitational potentials \(\psi\) and \(\phi\) in Newtonian gauge. Delaying the onset of the neutrino free streaming suppresses the anisotropic stress of the Universe, altering the evolution of the gravitational potentials by setting \(\psi=\phi\) until the onset of neutrino free streaming. Effectively, this suppression increases the initial value of the gravitational potential \(\psi\) and enhances its oscillatory envelope at horizon entry [40]. Depending on the scale, the interplay of these effects results in either a faster or slower decay of the gravitational potential \(\psi\) in comparison to the \(\Lambda\)CDM model. This feature gives rise to scale-dependent behavior of the dark matter perturbations, which can be distinguished by observing three different kinds of Fourier modes: \(k_{\rm h}^{\rm tc}\), a mode entering the horizon while neutrinos are still tightly coupled; \(k_{\rm h}^{\rm fs}\), a mode entering the horizon when neutrinos start to free stream; and \(k_{\rm h}\), a mode that crosses the horizon well after the onset of the neutrino free streaming.
Dark matter perturbations entering the horizon while neutrinos are still tightly coupled, \(k_{\rm h}^{\rm tc}\), will undergo an initial enhancement in amplitude at horizon entry due to an increase in the initial value of the gravitational potential \(\psi\). However, the absence of anisotropic stress will also amplify the oscillatory envelope of \(\psi\), leading to a slower decay of the gravitational potential and, consequently, resulting in a net damping of the amplitude of the dark matter perturbations compared to the \(\Lambda\)CDM picture. On the other hand, modes entering the horizon when the free streaming begins, \(k_{\rm h}^{\rm fs}\), will be influenced by the change in the initial conditions and the faster decay of \(\psi\), implying an enhancement in the dark matter perturbation amplitude [40]. Conversely, modes entering the horizon well after the beginning of neutrino free streaming, \(k_{\rm h}\), will remain unaltered compared with the standard cosmological picture.
Figure 1 shows the typical order of magnitude of the aforementioned modes as functions of the coupling strength \(G_{\rm eff}\). An extreme delay to the onset of free streaming produced by strongly self-interacting neutrinos, for instance, \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\sim 0\), will lead to substantial changes in the evolution of perturbations on linear scales \(k_{\rm h}^{\rm fs}\sim 0.05\;h/{\rm Mpc}\) (in this paper, we always refer to scales being linear or non-linear at \(z=0\)). Meanwhile, a less extreme \({\rm SI}_{\nu}\) model with \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\sim-2\) will mainly modify the evolution of perturbations in the (mildly) non-linear regime \(k_{\rm h}^{\rm fs}\sim 0.5\;h/{\rm Mpc}\). Furthermore, cosmologies well inside the \({\rm MI}_{\nu}\) regime, for instance \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\sim-4\), will induce changes at highly non-linear scales.
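The characteristic modes above follow from the horizon-crossing condition \(k=aH\) evaluated at the relevant epoch. Given background arrays from any Boltzmann solver, a minimal sketch reads (variable names are illustrative):

```python
import numpy as np

def k_horizon(tau_star, tau, a, H):
    """Comoving mode entering the horizon at conformal time tau_star,
    k_h(tau_star) = a(tau_star) * H(tau_star), by interpolation over
    tabulated background arrays (tau must be increasing)."""
    return np.interp(tau_star, tau, np.asarray(a) * np.asarray(H))

# k_h^fs follows by evaluating at the time where Gamma_nu drops
# below the Hubble rate; k_h^tc at any earlier, tightly coupled time.
```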
Figure 2 shows the matter linear power spectrum for the \(\Lambda\)CDM + \(N_{\rm eff}\) model and the self-interacting neutrino scenario with different values of \(G_{\rm eff}\); the gray band illustrates the Fourier modes probed through the full shape of the galaxy power spectrum data. We note that a delay of the free streaming induced by \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\approx-2.5\) (light salmon line) suppresses the linear power spectrum by \(\gtrsim 10\%\) at scales \(k_{\rm h}^{\rm tc}\sim 20\;h/{\rm Mpc}\) while enhancing modes around \(k_{\rm h}^{\rm fs}\sim 0.5\,h/{\rm Mpc}\) by roughly the same amount. The power spectrum for modes crossing the horizon well after the onset of free streaming, here \(k_{\rm h}\lesssim 0.01\;h/{\rm Mpc}\), remains unaltered compared to the \(\Lambda\)CDM + \(N_{\rm eff}\) model. Due to a reduction of the radiation energy density, scenarios in which the free streaming of neutrinos starts after the matter-radiation equality experience a lower enhancement in the power spectrum at scales \(k_{\rm h}^{\rm fs}\). This is illustrated by \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\approx 0.5\) (solid light blue line), where one can note that the power spectrum increases by only \(\sim 8.5\%\) on scales \(k_{\rm h}^{\rm fs}\approx 0.03\;h/{\rm Mpc}\). Analogously to the CMB, self-interacting neutrinos induce a phase shift in the matter power spectrum around the typical BAO scale, i.e., \(k\approx 0.1\;h/{\rm Mpc}\). This shift is particularly appreciable for models that delay the onset of free-streaming neutrinos until close to recombination; see, for instance, the case of \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\approx 0.5\) (solid and dashed light blue lines).
We recall that the \({\rm SI}_{\nu}\) mode offers a good fit to the CMB data at the cost of reducing the amplitude, \(A_{\rm s}\), and tilt, \(n_{\rm s}\), of the primordial scalar power spectrum [19; 40; 50; 67]. The dashed blue lines in Fig. 2 show that a decrease in \(A_{\rm s}\) and \(n_{\rm s}\) not only produces a red-tilted power spectrum but also softens the bump expected at scales \(k_{\rm h}^{\rm fs}\). We note that the \({\rm SI}_{\nu}\)-like mode (dashed dark blue line) produces substantial changes across the linear and non-linear scales -- several of those scales will be accessible through the multipoles of the galaxy power spectrum (gray band).
Figure 1: Modes crossing the horizon during the neutrino tight-coupling era, \(k_{\rm h}^{\rm tc}\), the onset of free streaming, \(k_{\rm h}^{\rm fs}\), and well after the self-decoupling, \(k_{\rm h}\), as functions of the self-interaction coupling \(G_{\rm eff}\). The gray gradient represents the transition between the linear and non-linear scales at redshift 0. As described in the main text, self-interacting neutrinos lead to substantial changes in the evolution of dark matter perturbations at modes \(k_{\rm h}^{\rm fs}\) and \(k_{\rm h}^{\rm tc}\). For a strongly interacting neutrino model with \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\sim-1.5\), these modes will belong to the (mildly) non-linear realm today.
## III Full shape of the galaxy power spectrum
Depending on the value of the coupling strength, self-interacting neutrinos leave particular imprints at linear and/or (mildly) non-linear scales today. As shown in Fig. 1, the most relevant cosmological cases mainly generate changes at scales where linear perturbation theory breaks down, i.e., modes with \(k\gtrsim 0.1\,h/\)Mpc. However, given that there is no straightforward mapping between the linear and (mildly) non-linear power spectrum, it is difficult to gauge _a priori_ the impact that delaying the onset of neutrino free streaming will have on the multipoles of the galaxy power spectrum.
In this Section, we explore how deferring the free streaming of neutrinos impacts the multipoles of the galaxy power spectrum. As stated before, we used a modified version of the publicly available CLASS-PT code [119; 120], which relies on Eulerian perturbation theory and makes use of the Einstein-de Sitter (EdS) convolution kernels [122] to compute the galaxy power spectrum and its multipoles at one-loop order. Given that one-loop redshift-space perturbation theory is expected to break down for modes beyond \(k_{\rm max}\approx 0.25\,h/\)Mpc [78; 79; 123], we conservatively adopt \(k_{\rm max}=0.2\,h/\)Mpc for our main analysis, although complementary analyses exploring the impact of \(k_{\rm max}\) will also be presented. Before examining the imprints that interacting neutrinos leave in the galaxy power spectrum, we qualitatively argue that the EdS kernels can be used even in the presence of self-interacting neutrinos as long as the onset of free streaming occurs before the matter-dominated era.
Due to their supersonic velocity, massive neutrinos that become non-relativistic in the matter-dominated era do not cluster on scales smaller than the so-called free-streaming scale, which is characterized by the wavenumber \(k_{\rm NR}\approx 0.018\,\Omega_{m,0}^{1/2}\,(m_{\nu}/1\,{\rm eV})^{1/2}\,h/\)Mpc [124]. This produces a scale-dependent growth rate that leads to a suppression of the linear power spectrum at modes \(k>k_{\rm NR}\). This picture is expected to remain unchanged if the onset of the free streaming occurs before the matter-dominated era. Indeed, in such scenarios, the changes produced in the gravitational field \(\psi\) at horizon crossing only modify the initial shape of the transfer function and do not introduce any additional features that alter the evolution of the perturbations in the non-linear regime. Since most of the \(G_{\rm eff}\) parameter space considered here delays the free streaming of neutrinos until close to the matter-dominated era, we argue that the mildly non-linear power spectrum can be computed using the EdS kernels. We also note that this is consistent with the fact that the self-interactions considered here do not modify the free-streaming scale to a very good approximation.
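For concreteness, the quoted free-streaming wavenumber can be evaluated directly; the values \(\Omega_{m,0}=0.31\) and \(\Sigma m_{\nu}=0.06\) eV below are representative assumptions of the example.

```python
import numpy as np

def k_NR(Omega_m0, m_nu_eV):
    """Free-streaming wavenumber (h/Mpc) from the expression of
    Ref. [124]: k_NR ~ 0.018 * Omega_m0^(1/2) * (m_nu/1 eV)^(1/2)."""
    return 0.018 * np.sqrt(Omega_m0) * np.sqrt(m_nu_eV)

print(k_NR(0.31, 0.06))   # ~2.5e-3 h/Mpc, well below the k_max used here
```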
The top panels of Fig. 3 show the monopole (left) and quadrupole (right) of the galaxy power spectrum for the \(\Lambda{\rm CDM}+N_{\rm eff}\) model and the self-interacting neutrino model with different values of \(G_{\rm eff}\). The ratios between the corresponding multipoles are shown in the bottom panels. All models shown here assume the same values for the galaxy power spectrum nuisance parameters. Concerning moderately self-interacting neutrinos, we note that models with a universal coupling of \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})=-4.5\) (red line) give results indistinguishable from the standard picture, this being true both for the monopole and the quadrupole. Meanwhile, models following \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})=-2.5\) produce a sizable enhancement of the monopole, without significantly deviating from the quadrupole predicted by the
Figure 2: Linear matter power spectrum (top panel) for the \(\Lambda{\rm CDM}+N_{\rm eff}\) cosmology and the self-interacting neutrino model with different values of \(G_{\rm eff}\). The ratios of the latter with the \(\Lambda{\rm CDM}+N_{\rm eff}\) model are shown in the bottom panel. Dashed and solid lines of the same color use the same value of \(G_{\rm eff}\) but lower values of \(A_{\rm s}\) and \(n_{\rm s}\) as specified in the labels. A delay in the free streaming of neutrinos driven by \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})=-1.5\) (solid dark blue line) suppresses and enhances the power spectrum at \(k_{\rm h}^{\rm tc}\sim 10\;h/\)Mpc and \(k_{\rm h}^{\rm fs}\sim 0.2\;h/\)Mpc, respectively. A decrease in \(A_{\rm s}\) and \(n_{\rm s}\) (dashed blue lines), however, smooths out the bump observed around \(k_{\rm h}^{\rm fs}\) and leads to a red-tilted power spectrum. The dashed dark blue line approximately corresponds to the \({\rm SI}_{\nu}\) mode. The gray band represents the range of scales probed by the galaxy power spectrum data.
\(\Lambda\text{CDM}+N_{\text{eff}}\) cosmology. Thus, similar to the case of CMB analysis, we expect the MI\({}_{\nu}\) regime and the standard cosmological model to provide a similar fit to the data.
Additionally, Fig. 3 shows that strongly interacting neutrinos (solid blue lines) significantly modify the different multipoles of the galaxy power spectrum. For instance, models with \(\log_{10}(G_{\text{eff}}/\text{MeV}^{-2})=-1.5\) increase the amplitude of \(P_{0}\) by \(\sim\)10% on scales \(k\gtrsim 0.08h/\text{Mpc}\) while also departing from the quadrupole of the standard case as \(k\) increases. Nonetheless, we note that a decrease in \(A_{\text{s}}\) and \(n_{\text{s}}\) reduces the offset between the \(\text{SI}_{\nu}\) regime and the \(\Lambda\text{CDM}+N_{\text{eff}}\) model. This illustrates that the typical \(\text{SI}_{\nu}\) mode found in CMB fits (dashed dark blue line) can potentially also offer a good fit to the galaxy power spectrum data.
## IV Data and Methodology
### Full shape power spectrum and BAO
We use the dataset from the twelfth data release of the Baryon Oscillation Spectroscopic Survey [125, 126, 127] and its corresponding window-free galaxy power spectrum [128, 129] to constrain the presence of new interactions in the neutrino sector.
Figure 3: Impact of \(G_{\text{eff}}\) on the monopole (left) and quadrupole (right) of the galaxy power spectrum. The dashed and solid lines of the same color represent models with the same value of \(G_{\text{eff}}\) but lower values of \(A_{\text{s}}\) and \(n_{\text{s}}\). Models well inside the MI\({}_{\nu}\) regime yield results indistinguishable from the \(\Lambda\text{CDM}+N_{\text{eff}}\) case; sizable deviations are, however, produced by models in the \(\text{SI}_{\nu}\) regime. Interestingly, a decrease in \(A_{\text{s}}\) and \(n_{\text{s}}\) can potentially compensate for the impact of \(G_{\text{eff}}\) (dashed dark blue line). This illustrates that the so-called \(\text{SI}_{\nu}\) mode could offer a good fit to the current LSS data. The data displayed here correspond to the subset in the NGC at \(z_{\text{eff}}=0.61\) (see Sec. IV).
The galaxies from BOSS DR12 are distributed across four different subsets, which correspond to two redshift slices, \(0.2<z<0.5\) from the LOWZ sample (\(z_{\rm eff}=0.38\)) and \(0.5<z<0.75\) from the CMASS sample (\(z_{\rm eff}=0.61\)), and two sky cuts in the north and south Galactic cap (NGC and SGC, respectively). The galaxy power spectrum data are given for each of these subsets.
We constrain a potential delay in the onset of neutrino free streaming by analyzing the multipoles of the galaxy power spectrum \(P_{\ell}(k,z)\) (\(\ell=0,2,4\)) [129; 130], along with the \(Q_{0}(k,z)\) estimator [131]. This estimator, closely related to the real space power spectrum, is obtained using a linear combination of the first few power spectrum multipoles. As discussed in the previous Section, redshift-space perturbation theory breaks down for wavenumbers larger than \(k_{\rm max}\approx 0.25\;h/{\rm Mpc}\). Thus, our main analysis conservatively uses the multipoles in the wavenumber range from \(k_{\rm min}=0.01\;h/{\rm Mpc}\) to \(k_{\rm max}=0.2\;h/{\rm Mpc}\). Since real-space perturbation theory can be safely applied to smaller scales [131], we consider measurements of the \(Q_{0}\) metric in the range from \(k_{\rm min}=0.2\;h/{\rm Mpc}\) to \(k_{\rm max}=0.4\;h/{\rm Mpc}\). In both cases, we use a bin width of \(\Delta k=0.005\;h/{\rm Mpc}\). Furthermore, we also use the reconstructed power spectrum, which provides constraints on the so-called Alcock-Paczynski (AP) parameters [132].
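For reference, the \(Q_{0}\) statistic is commonly built from the first few multipoles as \(Q_{0}(k)=P_{0}(k)-\tfrac{1}{2}P_{2}(k)+\tfrac{3}{8}P_{4}(k)\) [131]; a one-line implementation:

```python
import numpy as np

def Q0(P0, P2, P4):
    """Q0 estimator from the monopole, quadrupole, and hexadecapole."""
    return np.asarray(P0) - 0.5 * np.asarray(P2) + 0.375 * np.asarray(P4)
```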
We analyze this data using the BOSS likelihood presented in Ref. [129], which analytically marginalizes over the nuisance parameters that enter linearly in the power spectrum, i.e., the counterterms (monopole \(c_{0}\), quadrupole \(c_{2}\), hexadecapole \(c_{4}\), and fingers-of-God \(\tilde{c}\)), the third-order galaxy bias \(b_{\Gamma_{3}}\), and the stochastic contributions (\(P_{\rm shot}\), \(a_{0}\), and \(a_{1}\)). The covariance matrix used for this likelihood has been computed using MultiDark-Patchy 2048 simulations [133; 134].
### Big Bang Nucleosynthesis
Complementary to LSS data, we use BBN data to effectively constrain the baryon density parameter, \(\omega_{b}\), and the effective number of relativistic species, \(N_{\rm eff}\). In particular, we follow the implementation presented in Ref. [135], which uses an interpolation table that depends on \(\omega_{\rm b}\) and \(N_{\rm eff}\) (extracted from the PArthENoPe code [136]), along with a measurement of the nuclear rate \(d(p,\gamma)^{3}{\rm He}\)[137], to theoretically predict the primordial abundance of helium, \(Y_{\rm He}\), and deuterium, \(y_{\rm DP}\). We constrain the theoretically predicted values of the primordial abundance of helium and deuterium using the measurements presented by Aver _et al._[81] and Cooke _et al._[80], respectively.
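Schematically, this BBN likelihood interpolates the predicted abundances over a \((\omega_{\rm b},N_{\rm eff})\) table and compares them to the measurements with independent Gaussians. The sketch below uses placeholder arrays in place of the PArthENoPe-based table and the measured values.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder grids and tables standing in for the PArthENoPe output.
omega_b_grid = np.linspace(0.017, 0.027, 51)
Neff_grid = np.linspace(1.0, 6.0, 51)
YHe_table = np.zeros((51, 51))   # predicted helium abundance (placeholder)
yDP_table = np.zeros((51, 51))   # predicted deuterium abundance (placeholder)

YHe_pred = RegularGridInterpolator((omega_b_grid, Neff_grid), YHe_table)
yDP_pred = RegularGridInterpolator((omega_b_grid, Neff_grid), yDP_table)

def bbn_chi2(omega_b, Neff, YHe_obs, YHe_err, yDP_obs, yDP_err):
    """Gaussian chi^2 against the measured primordial abundances."""
    pt = np.array([[omega_b, Neff]])
    return (((YHe_pred(pt)[0] - YHe_obs) / YHe_err) ** 2
            + ((yDP_pred(pt)[0] - yDP_obs) / yDP_err) ** 2)
```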
One might worry that neutrino self-interactions could impact BBN in a way that makes using the above prior inconsistent. However, neutrino self-interaction does not alter, in general, the standard electroweak decoupling of neutrinos from the rest of the Standard Model plasma. This is supported by the analysis presented in Ref. [138], which shows that the \({\rm SI}_{\nu}\) scenario has only a very slight impact on the predicted primordial abundances of helium and deuterium. Thus, theoretical predictions from standard BBN can be safely used to constrain the self-interacting neutrino cosmology.
### Scanning the Parameter Space
Since the delay in the onset of neutrino free streaming is described by a single parameter, our model is specified by seven cosmological parameters: the self-interaction strength, \(G_{\rm eff}\), and the six usual \(\Lambda\)CDM\(+N_{\rm eff}\) parameters\({}^{2}\). These latter parameters encompass the baryon density \(\omega_{\rm b}\), the cold dark matter density \(\omega_{\rm cdm}\), the Hubble constant \(H_{0}\), the effective number of relativistic species \(N_{\rm eff}\), and the amplitude and tilt of the primordial power spectrum, \(A_{\rm s}\) and \(n_{\rm s}\), respectively. In addition to this, the likelihood of the galaxy power spectrum includes three non-marginalized nuisance parameters for each subsample of the BOSS DR12; hence, twelve parameters are added to the parameter space. These parameters are the linear, \(b_{1}\), quadratic, \(b_{2}\), and tidal, \(b_{\mathcal{G}_{2}}\), galaxy biases.
Footnote 2: Since the LSS data we use are insensitive to reionization, we assume a fixed value for the optical depth \(\tau_{\rm reio}=0.05\).
We employ two different schemes to explore this nineteen-dimensional parameter space: a profile likelihood and a Metropolis-Hastings sampling. The frequentist approach, provided by the profile likelihood, will allow us to identify possible volume effects, better understand the goodness of fit of the model, and obtain some useful examples to illustrate the physical predictions of different regions of the parameter space. The Bayesian exploration through a Metropolis-Hastings algorithm, on the other hand, will provide a broader understanding of the full parameter space.
We profile the likelihood at twenty-four different values of the self-interaction coupling constant, linearly spaced in the range \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})=[-5.5,0.5]\). We find the best fit to the data for each particular value of \(G_{\rm eff}\) by minimizing the likelihoods with the Derivative-Free Optimizer for Least-Squares package [139]. The main result of this analysis is a set of points that discretely sketch the profile likelihood. Additionally, we present a continuous representation of the profile likelihood, obtained by applying a cubic spline to the \(\chi^{2}_{\rm min}\) points found with the minimization algorithm.
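The continuous profile is a simple post-processing step; a minimal sketch, with placeholder \(\chi^{2}_{\rm min}\) values standing in for the minimizer output:

```python
import numpy as np
from scipy.interpolate import CubicSpline

log10_Geff = np.linspace(-5.5, 0.5, 24)   # fixed grid of couplings
chi2_min = np.zeros(24)                   # placeholder minimizer output

profile = CubicSpline(log10_Geff, chi2_min)   # continuous representation
grid = np.linspace(-5.5, 0.5, 600)
best = grid[np.argmin(profile(grid))]         # location of the minimum
```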
Complementary to this, and in order to unveil possible correlations between \(G_{\rm eff}\) and other cosmological parameters, we sample the parameter space using a Metropolis-Hastings algorithm. More specifically, we use the montepython code [140; 141]. To avoid possible deficiencies in the sampling, we start the exploration with a sufficiently wide proposal distribution on \(G_{\rm eff}\). We evaluate the convergence of our sampling by demanding \(R-1\sim{\cal O}(10^{-3})\), where \(R\) is the Gelman-Rubin diagnostic parameter [142].
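For completeness, one common variant of the Gelman-Rubin statistic for a single parameter reads as follows; the exact estimator used internally by montepython may differ in details.

```python
import numpy as np

def gelman_rubin(chains):
    """R statistic for m chains of length n (shape (m, n)): compares
    the between-chain variance B to the mean within-chain variance W."""
    chains = np.asarray(chains)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()
    B = n * chains.mean(axis=1).var(ddof=1)
    V = (n - 1) / n * W + B / n
    return np.sqrt(V / W)

# Convergence criterion used here: gelman_rubin(samples) - 1 ~ O(1e-3).
```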
## V Results and discussion
As mentioned in Sec. II, for simplicity, our main analyses assume a fixed value of the sum of neutrino masses \(\Sigma m_{\nu}=0.06\,{\rm eV}\). This is in concordance with the fact that current galaxy power spectrum data poorly constrain this parameter [116; 117]. Nevertheless, for completeness, in App. A we demonstrate that our main results do not depend on this assumption. To avoid clutter, hereafter, we denote the combination of the galaxy power spectrum data (\(P_{\ell}+Q_{0}+{\rm AP}\)) as FS.
### Profile likelihood
We profile the likelihood considering both FS data alone and the combination of FS+BBN data. We quantify the goodness-of-fit of the self-interacting neutrino model using \(\Delta\chi^{2}\equiv\chi^{2}_{\rm min,\,I_{\nu}}-\chi^{2}_{\rm min,\,ACDM}\). The results of the profile likelihood analyses are presented in Fig. 4 and Tab. 1.
As anticipated in Sec. III, Fig. 4 shows that models well inside the \({\rm MI}_{\nu}\) regime provide a fit to the data similar to \(\Lambda\)CDM. Indeed, the analysis of the FS+BBN data (blue line and points) shows that models with \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\lesssim-3.5\) lead to a negligible \(\Delta\chi^{2}\), while a slight decrease in \(\Delta\chi^{2}\) can be attained if we solely consider FS data (red line and points). The profile likelihood analysis thus reveals that, when compared with the \(\Lambda\)CDM\(+N_{\rm eff}\) model, moderately self-interacting neutrinos offer at most a marginally better fit to the LSS data.
On the other hand, Fig. 4 shows that the FS data display a mild preference for the \({\rm SI}_{\nu}\) mode. Concretely, we observe that models with \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\approx-1.3\) offer a better fit to the data than \(\Lambda{\rm CDM}+N_{\rm eff}\). The goodness of fit of the \({\rm SI}_{\nu}\) mode results in \(\Delta\chi^{2}=-3.18\) for the analysis of FS data (red line and points) and \(\Delta\chi^{2}=-2.47\) for the analysis of FS + BBN data (blue line and points). Regardless of the constraints imposed by the BBN data, the \({\rm SI}_{\nu}\) mode appears to provide a slightly better fit to the galaxy power spectrum data than the standard cosmological model.
To better understand the structure of the likelihood surface as \(G_{\rm eff}\) is varied, we illustrate in Figs. 5 and 6 the galaxy and linear matter power spectra, respectively, for some of the \(\Delta\chi^{2}\) extrema obtained through the above profile likelihood analysis. For the better-fitting models,
\begin{table}
\begin{tabular}{l c c c} Parameter & \(\Lambda{\rm CDM}+N_{\rm eff}\) & \({\rm MI}_{\nu}\) mode & \({\rm SI}_{\nu}\) mode \\ \hline \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\) & - & \(-4.98\) & \(-1.33\) \\ \(N_{\rm eff}\) & 2.94 & 2.93 & 2.95 \\ \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & 68.51 & 68.21 & 68.19 \\ \(\omega_{\rm b}\) & 0.0226 & 0.0225 & 0.0225 \\ \(\omega_{\rm cdm}\) & 0.129 & 0.128 & 0.126 \\ \(\ln(10^{10}A_{\rm s})\) & 2.85 & 2.81 & 2.74 \\ \(n_{\rm s}\) & 0.902 & 0.906 & 0.85 \\ \(\sigma_{8}\) & 0.762 & 0.743 & 0.720 \\ \(\chi^{2}_{\rm min}\) & 767.65 & 767.44 & 765.18 \\ \end{tabular}
\end{table}
Table 1: Best-fits to the FS+BBN data of the \(\Lambda\)CDM \(+N_{\rm eff}\) model and two cases of the self-interacting neutrino model that represent the \({\rm MI}_{\nu}\) and \({\rm SI}_{\nu}\) modes. While the \({\rm MI}_{\nu}\) mode and the standard model offer a similar fit to the data, the \({\rm SI}_{\nu}\) mode leads to a lower \(\chi^{2}\) by decreasing \(A_{\rm s}\) and \(n_{\rm s}\).
we choose \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})=\{-4.98,-1.33\}\) to illustrate the MI\({}_{\nu}\) and SI\({}_{\nu}\) modes, respectively. On the other hand, we choose \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})=\{-2.37,-0.28\}\) to represent regions of the parameter space that offer a markedly worse fit than the standard cosmological model, see Fig. 4. The corresponding best-fits to the FS+BBN data for the MI\({}_{\nu}\) and SI\({}_{\nu}\) modes and the \(\Lambda{\rm CDM}+N_{\rm eff}\) model are displayed in Tab. 1.
In concordance with the discussion presented in Sec. III, Fig. 5 shows that the MI\({}_{\nu}\) and SI\({}_{\nu}\) modes (solid dark orange and dark purple lines, respectively) only slightly deviate from the best-fit of the standard cosmological model. In the case of the MI\({}_{\nu}\) mode, this behavior is explained by the fact that models with \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\lesssim-3.5\) mostly change the power spectrum at \(k\gtrsim 0.5\;h/{\rm Mpc}\), i.e., at (non-linear) scales currently inaccessible by our modeling and observations. In contrast, we observe that the SI\({}_{\nu}\) mode attains a good fit to the data by decreasing the values of \(A_{\rm s}\) and \(n_{\rm s}\), see Tab. 1. Notably, this anticorrelation between the SI\({}_{\nu}\) regime and the primordial power spectrum parameters, \(A_{\rm s}\) and \(n_{\rm s}\), has also been observed in the CMB data [18].
Fig. 6 shows that, even when the predictions for the multipoles of the galaxy power spectrum are similar, the underlying linear matter power spectra for the MI\({}_{\nu}\) and SI\({}_{\nu}\) modes are significantly different. Indeed, owing to
Figure 5: Monopole (left) and quadrupole (right) of the galaxy power spectrum for some of the \(\Delta\chi^{2}\) extrema obtained through the profile likelihood analysis of FS+BBN data, both for the self-interacting neutrino cosmology and the \(\Lambda{\rm CDM}+N_{\rm eff}\) model. We use \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})=\{-2.37,-0.28\}\) to illustrate regions of the parameter space disfavored by the data, while \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})=\{-4.98,-1.33\}\) are used to represent the MI\({}_{\nu}\) and SI\({}_{\nu}\) modes, respectively. The purple solid line shows that strongly self-interacting neutrinos following \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\approx-1.3\) can offer a good fit to the galaxy power spectrum data. The data displayed here correspond to the subset in the NGC at \(z_{\rm eff}=0.61\) (see Sec. IV).
the decrease in \(A_{\rm s}\) and \(n_{\rm s}\), the SI\({}_{\nu}\) mode predicts a \(\gtrsim 30\%\) (\(\gtrsim 40\%\)) suppression of the power spectrum at galactic (sub-galactic) scales while exhibiting a barely visible bump that peaks around \(k\approx 0.1\,h/{\rm Mpc}\). This model also displays an increase in power at very large scales. Remarkably, this general structure of the SI\({}_{\nu}\) matter power spectrum matches that found in Ref. [40] using CMB data only. On the other hand, the MI\({}_{\nu}\) mode features a bump that peaks well inside the non-linear scales and a modest and nearly constant suppression of the power spectrum for modes \(k\lesssim 1\,h/{\rm Mpc}\).
### Cosmological constraints
We scan the parameter space of the self-interacting neutrinos and \(\Lambda{\rm CDM}+N_{\rm eff}\) models using the Metropolis-Hastings algorithm implemented in montepython[140, 141]. We perform the exploration imposing a flat prior on the self-coupling strength \(\log_{10}(G_{\rm eff})=[-5.5,0.5]\) and the effective number of relativistic species \(N_{\rm eff}=[2.013,5.513]\). The other cosmological parameters are set to follow improper flat priors. Furthermore, we impose priors on the nuisance parameters of the BOSS likelihood following Ref. [129]. To illustrate the role of each data set in constraining the delay in the free streaming of neutrinos, we perform several analyses considering different combinations of the data. Our results are shown in Figs. 7, 8 and 9 and in Tab. 2. Unless otherwise stated, we conservatively assume \(k_{\rm max}=0.20\,h/{\rm Mpc}\).
#### iv.2.1 The role of the linear and (mildly) non-linear scales
Panels in the upper triangular portion of Fig. 7 show the constraints obtained from the analysis of BBN + \(P_{\ell}\) data when different values for \(k_{\rm max}\) are assumed. We note that data merely considering modes belonging to the linear scale, i.e. \(k_{\rm max}=0.1\,h/{\rm Mpc}\), do not constrain the self-coupling constant \(G_{\rm eff}\) (gray contours and lines). However, the situation significantly changes if we include modes associated with the mildly non-linear scales, that is, if we adopt \(k_{\rm max}=0.20\,h/{\rm Mpc}\) (blue contours and lines). In such a case, we not only observe an improvement in the constraints of all the cosmological parameters in general but also a net decrease of the posterior for models belonging to the MI\({}_{\nu}\) regime. Marginal improvements to the latter result are obtained if we consider \(k_{\rm max}=0.25\,h/{\rm Mpc}\) (dashed black contours and lines). It is important to note that, regardless of the value of \(k_{\rm max}\), we discern the existence of a non-trivial correlation between \(A_{\rm s}\) and \(n_{\rm s}\) and the strongly interacting neutrinos.
#### iv.2.2 The effects of the \(Q_{0}\) estimator and the AP test
We show in the panels of the lower triangular portion of Fig. 7 the results of the analyses when considering the \(Q_{0}\) and AP data. We observe that the inclusion of the \(Q_{0}\) estimator data (light orange contours and lines) marginally improves the results of the BBN + \(P_{\ell}\) analysis (blue contours and lines). This is not surprising since, although the \(Q_{0}\) estimator allows us to probe smaller scales, corresponding to wavenumbers up to \(k=0.4\,h/{\rm Mpc}\), its current BOSS DR12-based estimate is shot noise-dominated [131].
On the other hand, the inclusion of the AP data (red contours and lines) increases the probability density distribution of the self-coupling constant around the MI\({}_{\nu}\) regime and slightly decreases the likelihood of models with \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\gtrsim-0.5\). Nonetheless, the AP
Figure 6: Linear matter power spectra for some of the \(\Delta\chi^{2}\) extrema obtained through the profile likelihood analysis of FS+BBN data, both for the self-interacting neutrino cosmology and the \(\Lambda{\rm CDM}+N_{\rm eff}\) model. \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})=\{-4.98,-1.33\}\) are chosen to illustrate the MI\({}_{\nu}\) and SI\({}_{\nu}\) modes, respectively, while \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})=\{-2.37,-0.28\}\) are used to represent regions of the parameter space disfavored by the data. We note that the SI\({}_{\nu}\) mode found in the FS+BBN data (dark purple solid line) predicts conspicuous changes in the linear matter power spectrum, including a large suppression of the latter at galactic and sub-galactic scales.
data do not disfavor the \(\mathrm{SI}_{\nu}\) mode, which still offers a slightly better fit to the data than the \(\mathrm{MI}_{\nu}\) mode.
We present the constraints obtained from the analysis of \(\mathrm{BBN}+P_{\ell}+Q_{0}+\mathrm{AP}\) data for both the \(\Lambda\mathrm{CDM}+N_{\mathrm{eff}}\) and \(I_{\nu}\) models in Tab. 2. Additionally, we provide the constraints derived for the \(\mathrm{MI}_{\nu}\) and \(\mathrm{SI}_{\nu}\) modes. Such constraints are obtained by splitting the sampling into two subsets, one corresponding to \(\log_{10}(G_{\mathrm{eff}}/\mathrm{MeV}^{-2})\leq-2.5\) and the other to the complementary range. We argue that this mode separation scheme is enough to provide an insight into the parameter space of the mildly and strongly interacting neutrino cosmologies. As in the case of the profile likelihood analysis, the fourth column in Tab. 2 shows that the \(\mathrm{SI}_{\nu}\) mode leads to lower values of the primordial power
Figure 7: Marginalized constraints, at 68% and 95% confidence levels, on the cosmological parameters of the self-interacting neutrino model when considering different combinations of data. The upper triangular portion highlights the role of the linear and mildly non-linear scales in the task of constraining a delay in the onset of the free streaming of neutrinos, while the lower triangular portion emphasizes the contribution of the \(Q_{0}\) and AP data.
spectrum parameters: \(A_{\rm s}\) and \(n_{\rm s}\).
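The mode-separation scheme described above amounts to masking the posterior samples at the \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})=-2.5\) boundary; a minimal sketch (dictionary keys are illustrative):

```python
import numpy as np

def split_regimes(samples, cut=-2.5):
    """Split MCMC samples (dict of equal-length arrays) into the
    MI_nu (log10 Geff <= cut) and SI_nu (log10 Geff > cut) subsets."""
    mi = samples["log10_Geff"] <= cut
    return ({k: v[mi] for k, v in samples.items()},
            {k: v[~mi] for k, v in samples.items()})

def interval_68(x):
    """Median and asymmetric 68% interval, as quoted in Tab. 2."""
    lo, med, hi = np.percentile(x, [16, 50, 84])
    return med, med - lo, hi - med
```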
We emphasize that our results and previous analyses of CMB data not only reveal the presence of the \({\rm SI}_{\nu}\) but also agree that there exists an anticorrelation between \(G_{\rm eff}\) and the amplitude and tilt of the primordial power spectrum, \(A_{s}\) and \(n_{s}\), respectively. We explicitly illustrate this in Fig. 8, where we compare the underlying constraints obtained from the analysis of BBN + \(P_{\ell}\) + \(Q_{0}\) + AP data with the ones obtained in Ref. [40] from the analysis of the CMB TT + lens + BAO data. This comparison hints that it is possible to create a self-consistent picture for strongly self-interacting neutrinos, implying that cosmological data could allow a cosmological scenario in which neutrino free streaming is delayed until close to the matter-radiation equality epoch.
#### iv.2.3 Pondering the cosmological tensions
Finally, we assess the matter of cosmological tensions. As stated before, owing to correlations with \(A_{\rm s}\) and \(n_{\rm s}\), the \({\rm SI}_{\nu}\) mode leads to a power spectrum that is significantly suppressed at small scales, see Fig. 6. Tab. 2 shows that, when compared to the \(\Lambda{\rm CDM}+N_{\rm eff}\) cosmology, the \({\rm SI}_{\nu}\) mode produces a \(\sim 3\%\) decrease in \(\sigma_{8}\), the root mean square of the matter fluctuations in spheres of radius \(8\;h^{-1}\,{\rm Mpc}\). Table 2 also shows that, regardless of the model, BOSS data consistently yield a lower value of \(\sigma_{8}\) when compared to the CMB constraints.
Furthermore, from Tab. 2, one can note that the \(\Lambda{\rm CDM}+N_{\rm eff}\) model and the self-interacting neutrino cosmologies produce indistinguishable values for \(H_{0}\). This similarity arises because, in both scenarios, the \(N_{\rm eff}\) value is tightly constrained by the primordial abundance of helium \(Y_{\rm He}\), resulting in nearly identical sizes of the baryon-photon sound horizon; the change induced by \(G_{\rm eff}\) in the \({\rm SI}_{\nu}\) case is subdominant. To better illustrate this point, we perform an additional analysis considering BBN observations without the \(Y_{\rm He}\) data. The results of this analysis are shown in Fig. 9.
Fig. 9 offers a direct comparison between the \(I_{\nu}\) and the \(\Lambda{\rm CDM}+N_{\rm eff}\) models when considering BBN data with (solid lines and contours) and without (dashed lines and contours) observations of \(Y_{\rm He}\). We immediately observe that removing the helium abundance constraint frees the values of \(N_{\rm eff}\) and \(H_{0}\) in both models, leaving them largely unconstrained by the FS data. This is the result of a well-known geometric degeneracy between the baryon-photon sound horizon (which can be adjusted by changing \(N_{\rm eff}\)) and the angular diameter distance (which scales as \(H_{0}^{-1}\)) [143]. This highlights the importance of BBN [144], and more generally, of our assumptions about the physics of the early Universe, to the value of the Hubble constant inferred from FS data.
\begin{table}
\begin{tabular}{l c c c} Parameter & \(\Lambda{\rm CDM}+N_{\rm eff}\) & \({\rm MI}_{\nu}\) regime & \({\rm SI}_{\nu}\) regime \\ \hline \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\) & - & \(-4.07^{+0.77}_{-1.1}\) & \(-1.30^{+0.47}_{-0.37}\) \\ \(10^{2}\omega_{b}\) & \(2.259\pm 0.063\) & \(2.256\pm 0.065\) & \(2.257\pm 0.064\) \\ \(\omega_{cdm}\) & \(0.134^{+0.011}_{-0.014}\) & \(0.134^{+0.011}_{-0.015}\) & \(0.135^{+0.010}_{-0.014}\) \\ \(\ln\left(10^{10}A_{s}\right)\) & \(2.73\pm 0.16\) & \(2.73^{+0.15}_{-0.18}\) & \(2.64\pm 0.16\) \\ \(n_{s}\) & \(0.882\pm 0.069\) & \(0.881\pm 0.071\) & \(0.813\pm 0.072\) \\ \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(68.9^{+1.8}_{-2.0}\) & \(68.9\pm 1.9\) & \(69.0\pm 1.9\) \\ \(N_{\rm eff}\) & \(2.98^{+0.25}_{-0.29}\) & \(2.97\pm 0.28\) & \(2.98\pm 0.27\) \\ \(\sigma_{8}\) & \(0.725^{+0.044}_{-0.050}\) & \(0.730^{+0.042}_{-0.053}\) & \(0.702\pm 0.051\) \\ \end{tabular}
\end{table}
Table 2: 68% confidence level intervals for the cosmological parameters obtained from the analysis of the FS+BBN data for the different cosmologies considered here. Constraints on the mild and strong regimes are obtained by splitting the sampling into two subsets, one following \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\leq-2.5\) and the other the opposite, respectively. We highlight that \({\rm SI}_{\nu}\) cosmologies lead to lower values of \(A_{\rm s}\) and \(n_{\rm s}\).
Figure 8: Comparison between the marginalized constraints, at 68% and 95% confidence levels, on different parameters of the \(I_{\nu}\) model obtained from our main analysis and one of the analyses presented in Ref. [40].
Thus, FS data by themselves cannot weigh in on whether interacting neutrinos may play a role in the current discrepancy between different measurements of the Hubble constant. We note, however, that the posterior distribution of \(G_{\rm eff}\) is nearly independent of whether a BBN prior is assumed or not.
## VI Conclusions
Several analyses have pointed out that some cosmological data show a preference for a cosmological scenario in which the free streaming of neutrinos is delayed until close to the epoch of matter-radiation equality. Produced by yet-unknown strong self-interactions in the neutrino sector, this nonstandard scenario generates important changes in the evolution of cosmological perturbations at linear and nonlinear scales that could impact the LSS of the Universe. Here, we have investigated if LSS data are sensitive to these changes. We adopted the simplest cosmological representation for self-interacting neutrinos and later analyzed the Full Shape of the galaxy power spectrum, and BBN data, within this context. Remarkably, our analysis unveils the presence of the \({\rm SI}_{\nu}\) mode in the galaxy power spectrum data and adds a new chapter to the tale of the two modes.
Indeed, we have found that self-interacting neutrinos with \(\log_{10}(G_{\rm eff}/{\rm MeV}^{-2})\approx-1.3\) provide a good fit to the galaxy power spectrum data, regardless of the presence or absence of BBN priors. The goodness of fit of such a scenario has been quantified to be \(\Delta\chi^{2}\approx-2.5\) (\(\Delta\chi^{2}\approx-3\)) when BBN priors are (not) taken into account in the analysis, thus displaying a modest preference for strongly interacting scenarios over the \(\Lambda{\rm CDM}+N_{\rm eff}\) and \({\rm MI}_{\nu}\) models. Moreover, we have shown that this modest preference for the \({\rm SI}_{\nu}\) mode is driven by the data on mildly non-linear scales.
Compared to the \(\Lambda{\rm CDM}+N_{\rm eff}\) model, the \({\rm SI}_{\nu}\) mode found in the galaxy power spectrum data displays a significant matter clustering suppression on small scales. Such a suppression, which is shown to be greater than 40% at sub-galactic scales, i.e., \(k\gtrsim 10\,h/{\rm Mpc}\), is driven by the underlying decrease of \(A_{\rm s}\) and \(n_{\rm s}\) that strongly self-interacting neutrinos prefer. Importantly, the same predicted suppression of small-scale power also appears in previous analyses that rely on the CMB observations [see Ref. [40], for instance]. This lack of small-scale power, although not as dramatic as in e.g. warm dark matter models [145; 146; 147], could be probed via substructure lensing (see e.g. Refs. [148; 149; 150; 151; 152; 153; 154]) or observations of the Milky Way satellites (see e.g. Refs. [155; 156; 157; 158]).
We conclude that our results, which are consistent across both profile likelihood and Bayesian exploration analyses, not only expose the presence of the persistent \({\rm SI}_{\nu}\) mode in the galaxy power spectrum data but also suggest that cosmological data can potentially accommodate a self-consistent cosmological scenario in which the onset of the free streaming of neutrinos is delayed until close to the matter-radiation equality epoch. Although this finding does not pose an immediate issue for the \(\Lambda{\rm CDM}\) model (the statistical preference being mild), our analysis deepens the riddle around the two-mode puzzle, as we now have two different kinds of cosmological data (CMB and galaxy clustering) showing some preference for the \({\rm SI}_{\nu}\). In line with this, we would like to bring back attention to one of the conclusions presented in Ref. [40]: while we typically explore new physics by proposing mild deformations of the \(\Lambda{\rm CDM}\) model, it is crucial to bear in mind that radically different scenarios could provide a good fit to the cosmological observables. Thus, our results motivate the thorough exploration of neutrino interaction models capable of reconciling all CMB and LSS data in the \({\rm SI}_{\nu}\) regime, including CMB polarization data from Planck [1]. Since polarization data are particularly sensitive to the anisotropic stress history of the Universe, they are naturally better probes of the flavor structure of the neutrino interactions. The fact that such data disfavor the simplest universal model considered here indicates that a more complex (and realistic) neutrino interaction model that includes a strong flavor dependence might be preferred. We leave to future work the study of a model capable of accommodating all CMB and LSS data, while not running afoul of other laboratory constraints.
It is also interesting to comment on how our results connect to previous free-streaming phase shift analyses showing consistency between the SM predictions and both CMB [75] and BAO [76; 77] data. These works use a one-parameter family of templates calibrated to \(\Lambda{\rm CDM}\) to measure the neutrino-induced phase shift from the data, phrasing their results in terms of the effective number of neutrino species, \(N_{\rm eff}\). By construction, such templates can only capture scenarios in which the free-streaming radiation fraction is constant in the era after BBN but prior to the epoch of recombination. The time-varying free-streaming fraction caused by the late neutrino decoupling we studied here leads to a phase shift structure of the CMB and BAO peaks that is distinct from that captured by the templates used so far, leaving them unable to directly capture the \({\rm SI}_{\nu}\) signal. In principle, phase-shift templates capable of capturing this time-dependent free-streaming fraction could be built, and the possible presence of the \({\rm SI}_{\nu}\) could be studied by isolating its impact on the phase of CMB and BAO peaks. We leave such an analysis to future works.
Finally, now that we have established the existence of the \({\rm SI}_{\nu}\) mode in two independent cosmological data sets, we can discard the possibility that its existence is caused by an accidental feature in the CMB sky. The apparent consistency between some CMB data and the large-scale distribution of galaxies indicates that the \({\rm SI}_{\nu}\), whatever its microscopic origin is, is an actual physical feature present in the data. While we have explored this feature here using the language of self-interacting neutrinos, it is also possible that our results are hinting at the existence of a yet-to-be-discovered early-Universe
phenomenon that is not related at all to new physics in the neutrino sector. Our results highlight the need for considering a broader range of phenomenologies deep in the radiation-dominated epoch that could be consistent with current cosmological observations.
###### Acknowledgements.
We thank Vera Gluscevic, Adam He, and Daniel Green for useful comments on an initial version of this manuscript. This work was supported by the National Science Foundation (NSF) under grant AST-2008696 and the REU site grant PHY-1659618. D. C. and F.-Y. C.-R. would also like to thank the Robert E. Young Origins of the Universe Chair fund for its generous support. We also would like to thank the UNM Center
Figure 9: Marginalized constraints, at 68% and 95% confidence levels, on select parameters of the \(\Lambda\mathrm{CDM}+N_{\mathrm{eff}}\) and \(I_{\nu}\) models obtained from the analysis of FS+BBN data with and without the inclusion of the primordial helium abundance, \(Y_{\mathrm{He}}\).
for Advanced Research Computing, supported in part by the NSF, for providing the research computing resources used in this work.
## Appendix A The impact of Neutrino mass
Our main results rely on the assumption of a fixed value for the sum of neutrino masses, more precisely, \(\Sigma m_{\nu}=0.06\) eV. However, to illustrate that this assumption does not bias our results and conclusions, we have carried out an additional analysis of the FS+BBN data treating \(\Sigma m_{\nu}\) as a free parameter. The results of this complementary analysis are shown in Fig. 10.
We note that the assumption of a fixed value for the sum of neutrino masses does not significantly affect the constraints on \(G_{\rm eff}\), \(N_{\rm eff}\), or the derived cosmological parameters of interest, \(H_{0}\) and \(\sigma_{8}\). Nonetheless, this assumption leads to slightly smaller values of the amplitude, \(A_{\rm s}\), and tilt, \(n_{\rm s}\), of the primordial power spectrum.
|
2309.08814 | URA*: Uncertainty-aware Path Planning using Image-based Aerial-to-Ground
Traversability Estimation for Off-road Environments | A major challenge with off-road autonomous navigation is the lack of maps or
road markings that can be used to plan a path for autonomous robots. Classical
path planning methods mostly assume a perfectly known environment without
accounting for the inherent perception and sensing uncertainty from detecting
terrain and obstacles in off-road environments. Recent work in computer vision
and deep neural networks has advanced the capability of terrain traversability
segmentation from raw images; however, the feasibility of using these noisy
segmentation maps for navigation and path planning has not been adequately
explored. To address this problem, this research proposes an uncertainty-aware
path planning method, URA* using aerial images for autonomous navigation in
off-road environments. An ensemble convolutional neural network (CNN) model is
first used to perform pixel-level traversability estimation from aerial images
of the region of interest. The traversability predictions are represented as a
grid of traversal probability values. An uncertainty-aware planner is then
applied to compute the best path from a start point to a goal point given these
noisy traversal probability estimates. The proposed planner also incorporates
replanning techniques to allow rapid replanning during online robot operation.
The proposed method is evaluated on the Massachusetts Road Dataset, the
DeepGlobe dataset, as well as a dataset of aerial images from off-road proving
grounds at Mississippi State University. Results show that the proposed image
segmentation and planning methods outperform conventional planning algorithms
in terms of the quality and feasibility of the initial path, as well as the
quality of replanned paths. | Charles Moore, Shaswata Mitra, Nisha Pillai, Marc Moore, Sudip Mittal, Cindy Bethel, Jingdao Chen | 2023-09-15T23:52:45Z | http://arxiv.org/abs/2309.08814v1 | URA*: Uncertainty-aware Path Planning using Image-based Aerial-to-Ground Traversability Estimation for Off-road Environments
###### Abstract
A major challenge with off-road autonomous navigation is the lack of maps or road markings that can be used to plan a path for autonomous robots. Classical path planning methods mostly assume a perfectly known environment without accounting for the inherent perception and sensing uncertainty from detecting terrain and obstacles in off-road environments. Recent work in computer vision and deep neural networks has advanced the capability of terrain traversability segmentation from raw images; however, the feasibility of using these noisy segmentation maps for navigation and path planning has not been adequately explored. To address this problem, this research proposes an uncertainty-aware path planning method, URA* using aerial images for autonomous navigation in off-road environments. An ensemble convolutional neural network (CNN) model is first used to perform pixel-level traversability estimation from aerial images of the region of interest. The traversability predictions are represented as a grid of traversal probability values. An uncertainty-aware planner is then applied to compute the best path from a start point to a goal point given these noisy traversal probability estimates. The proposed planner also incorporates replanning techniques to allow rapid replanning during online robot operation. The proposed method is evaluated on the Massachusetts Road Dataset, the DeepGlobe dataset, as well as a dataset of aerial images from off-road proving grounds at Mississippi State University. Results show that the proposed image segmentation and planning methods outperform conventional planning algorithms in terms of the quality and feasibility of the initial path, as well as the quality of replanned paths.
## I Introduction
A key step in navigating ground robots in unmapped, off-road environments is performing traversability estimation. The concept of traversability estimation refers to interpreting the geometry and appearance of the region of interest to determine whether a vehicle could drive through it safely depending on its capabilities [1][2]. In structured urban environments with clear road markings, local traversability estimation using sensors from a ground vehicle's perspective [3][4] is usually sufficient for navigation. In contrast, in unstructured off-road environments such as dense forests or mountainous regions where the robot's field of view is limited, aerial-to-ground traversability estimation is advantageous because it enables path planning from a global perspective [5][6][7].
Major advances in computer vision and deep neural networks (DNNs) have enabled work in traversability estimation from aerial images in the form of road segmentation [8][9] or terrain segmentation [10]. However, these works only consider the traversability prediction task without addressing the path planning task, which needs to account for errors and uncertainty in the perception model output. Another line of work has proposed ad-hoc modifications to conventional path planners such as Rapidly-exploring Random Trees (RRT) [11] and A* [12] by adding terrain and slip penalization terms to make the planner more risk-aware [13][14]. However, in these studies, such penalization terms are usually hand-engineered from prior knowledge instead of using a traversability measure that can be directly obtained from sensor data. Thus, research gaps remain in consolidating robotic path planning algorithms with recent advances in learning-based traversability estimation techniques.
This research proposes an uncertainty-aware path planning algorithm using aerial traversability estimation for off-road environments. An ensemble convolutional neural network (CNN) model is first used to perform segmentation of aerial images and output a traversal probability value at the pixel level. Given the noisy traversal probability estimates, an uncertainty-aware path planning algorithm is proposed to predict the best global path for a ground robot to travel from its start location to the goal location. A probabilistic replanning technique that combines information from noisy aerial-to-ground traversability estimates with accurate ground-level traversability measurements is applied so that the ground robot is able to rapidly scan and re-plan suitable paths during physical operation. Code and datasets are made publicly available.
## II Literature Review
Classical path planning algorithms in robotics mostly rely on static maps, which assume that information about which areas are traversable and which are not is available in advance in the form of pre-built maps and does not change over time. Classical path planning may be further classified into sampling-based algorithms [15] and search-based algorithms [16]. Search-based algorithms include the popular Dijkstra's algorithm [17], A* algorithm [12], and state lattice algorithms [18]. Variants of search-based path planning include the weighted A* approach [19], which is faster and uses less memory, but is not optimal. Alternatively, the Anytime Repairing A* (ARA*) algorithm [20] provides a sub-optimal solution in a short period of time and continues to try to find an optimal solution within a specified time period. More recent approaches have also used deep reinforcement learning [21] or Bayesian optimization [22] to tune the hyperparameters of the planner. On the other hand, sampling-based approaches such as RRT use random samples drawn from traversable areas of the search space, which allow planning to be carried out in non-convex and high-dimensional spaces [11][23]. Overall, classical path planning algorithms may work well in structured environments but fail to address the problem of unstructured off-road environments with complex terrain and uncertain traversability [24].
To make an informed decision regarding the desired path during autonomous navigation, it is essential to make use of real-time semantic information derived from sensors that are observing the surrounding environment [25]. Recent advances in computer vision [26] and the release of big datasets [27][28] have facilitated research into path planning methods that can reason directly from sensor inputs [29][30]. In the domain of ground images, [31] proposes a neural network for predicting lane geometry and estimating a topology-preserving road network using a forward-looking camera. Similarly, [32] uses neural networks to improve the boundary quality for road segmentation. In the domain of aerial images, road semantic segmentation can also be carried out using convolutional neural networks (CNNs) [9][33], or graph neural networks [34][35] on UAV images or satellite images to provide traversability information to a ground vehicle. Most of these existing works only focus on improving the traversability prediction task without implementing a path-planning solution, which can account for errors and uncertainty in the perception model output.
Another key research gap that we plan to address in this paper is path planning in off-road environments. Urban roads are characterized by features such as curbs, buildings, traffic signs, road markings, and guard rails that can simplify the perception and planning problem [36, 37], whereas rural roads lack clear boundaries and intersections are complex and heterogeneous [38, 39]. The vast majority of autonomous driving systems have been trained using either urban or suburban datasets (e.g. KITTI [27] and Cityscapes [28]) without consideration for rural environments. Some datasets such as Robot Unstructured Ground Driving (RUGD) [40], OFF-Road Freespace Detection (ORFD) [41], DeepScene [42], and Center for Advanced Vehicular Systems Traversability (CaT) [2, 43] do involve off-road environments but only evaluate perception tasks such as semantic segmentation and free-space detection and not planning tasks. In contrast, this paper will introduce a new aerial image dataset for off-road environments and use it as a benchmark for path planning.
## III Methodology
### _Problem definition_
Given an aerial image, \(\mathbf{I}\) of a region of interest, a start position, \(\mathbf{s}\), and goal position, \(\mathbf{g}\), in image coordinates: compute the _best_ path for a ground robot to travel from \(\mathbf{s}\) to \(\mathbf{g}\) using only information in \(\mathbf{I}\). The _best_ path is evaluated based on (i) quality, i.e. how short the total path length is for the computed path, and (ii) feasibility, i.e. how much of the computed path is actually traversable. \(\mathbf{I}\) is assumed to be captured with an image plane parallel to the ground plane so that pixel-wise distances are roughly proportional to real-world physical distances.
### _Traversable area segmentation from aerial images_
The segmentation step aims to take an aerial observation of a scene, pass it through a semantic information extractor in the form of a DNN, and finally predict the traversability of different regions in the scene at the individual pixel level [44]. The output of the segmentation network for a given aerial image will be the traversal probability distribution matrix for that particular image. For image segmentation, we utilize an ensemble of DNN methods to predict the traversal probabilities. Initially, the neural networks were pre-trained on the classification task using the ImageNet [45] dataset containing over 14 million images. Next, we fine-tuned the networks on the semantic segmentation task using specific aerial image datasets to enable the networks to perform traversability estimation from aerial images.

Fig. 1: Proposed pipeline for uncertainty-aware perception and planning from aerial observations
#### Iii-B1 Network architecture
In this research, we developed an ensemble model utilizing output segmentation heads from U-Net [46] and DeepLabV3+ [47] built on a ResNet-50 [48] encoder and pre-trained on ImageNet [45]. Upsampling layers from U-Net and atrous convolution layers from DeepLabV3+ are both common strategies in image semantic segmentation to process multi-scale contextual information in image data. The segmentation heads are trained to predict binary traversability (either traversable or not traversable for each pixel) using the Dice loss function [49]. During inference, the output of the final softmax layer is used to extract a traversability map over the entire image. The middle dotted-line block in Figure 1 shows the proposed network architecture for segmenting traversable areas from aerial images.
In our empirical studies, we found that having a high recall rate for traversable terrain is important for successfully generating paths from the start position to the goal position. This is because if the ratio of regions predicted to be traversable compared to the regions predicted to be non-traversable is too low, the path planner may terminate prematurely before finding a traversable connection between the start position and the goal position. Thus, the proposed network architecture uses a max-pooling layer to combine predictions from an ensemble of segmentation heads. The output of the pooling layer has the highest probability of traversability among the input model predictions for each pixel. In our experiments, we found that the pooling function is effective in achieving generally higher recall rates for traversable terrain (refer to Table I in the Results section). Theoretically, the outputs of more than two segmentation networks may be pooled together in the ensemble model; however, in our experiments, we found that pooling together two segmentation networks gave adequate performance.
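As an illustration, a minimal PyTorch-style sketch of such a max-pooled ensemble could look as follows. The member modules `unet` and `deeplab` stand in for any two segmentation heads producing two-class logits; all names here are ours rather than from the authors' released code.

```
import torch
import torch.nn as nn


class MaxPoolEnsemble(nn.Module):
    """Per-pixel maximum of the member models' traversability probabilities."""

    def __init__(self, unet: nn.Module, deeplab: nn.Module):
        super().__init__()
        self.members = nn.ModuleList([unet, deeplab])

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Each member maps (B, 3, H, W) images to (B, 2, H, W) logits; a
        # softmax over the class dimension gives per-pixel probabilities,
        # and channel 1 is taken as the "traversable" class here.
        probs = [torch.softmax(m(image), dim=1)[:, 1] for m in self.members]
        # The element-wise maximum keeps the most optimistic estimate per
        # pixel, which is what raises recall for traversable terrain.
        return torch.maximum(probs[0], probs[1])
```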
#### Iii-B2 Aerial image datasets
In this research, we make use of the Massachusetts Road Dataset (MRD) [50] and the DeepGlobe dataset (DGD) [51], which are both datasets of satellite images with a mix of urban and off-road environments. MRD contained 1108 training, 49 testing, and 14 validation images, all of 1500×1500 resolution and with corresponding ground truth labels. DGD contained 6226 training, 1101 testing, and 1243 validation images, all of 1024×1024 resolution but with only the training set having ground truth labels. We resized MRD and DGD images to a standard resolution of 1536×1536 pixels to maintain consistency. Although these datasets are not directly applicable to the targeted domain of off-road environments, we used them for testing and comparison since these datasets are publicly released and have a large number of annotations readily available. For validation of the approach in the domain of off-road environments, we collected and annotated our own dataset of off-road aerial images obtained from the Center for Advanced Vehicular Systems (CAVS) proving grounds at Mississippi State University [52] (hereafter referred to as the CAVS dataset). The proving grounds is a 55-acre test facility featuring 12 rugged off-road trails filled with naturally occurring obstacles and terrain features such as rocks, tall grasses, wet lowlands, and wooded or obscured trails. For our CAVS dataset, we manually labeled a total of 403 images and split them into training, test, and validation sets of 332, 38, and 33 images respectively.
In addition, we applied data augmentation to generate sufficiently diverse samples for training. We pre-processed the images with random crop, horizontal flip, vertical flip, and random rotation at 0.75 probability for all the datasets.
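A sketch of such a pipeline using torchvision transforms is shown below; the crop size and rotation range are illustrative choices, and in a real segmentation setting the same random parameters must be applied to the image and its label mask (e.g., via the functional transform API).

```
from torchvision import transforms

# Each augmentation fires with probability 0.75, as described above; the
# 90-degree rotation range and 1024-pixel crop size are illustrative values.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.75),
    transforms.RandomVerticalFlip(p=0.75),
    transforms.RandomApply([transforms.RandomRotation(degrees=90)], p=0.75),
    transforms.RandomApply([transforms.RandomCrop(size=1024)], p=0.75),
    transforms.Resize((1536, 1536)),  # restore the standard resolution
])
```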
#### Iii-B3 Hyperparameters
The networks were trained for a total of 15 epochs with a batch size of 16. The Adam [53] optimizer was used due to its faster convergence and fewer hyperparameter requirements. Softmax was used as the activation function for the segmentation prediction layer. These hyperparameter settings follow widely used standard training procedures and have been previously applied on the MRD [54] and DeepGlobe datasets [55]. Note that separate models were trained for each dataset.
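For concreteness, a minimal soft Dice loss and the optimizer setup named above could look as follows; `model` is assumed to be the ensemble from the earlier sketch, and the smoothing constant `eps` is an illustrative numerical-stability term.

```
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """Soft Dice loss: 1 - 2|X∩Y| / (|X| + |Y|), on probability maps."""
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Assuming `model` is the ensemble defined earlier; Adam with default
# settings, trained for 15 epochs at batch size 16 as described above.
optimizer = torch.optim.Adam(model.parameters())
```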
### _Uncertainty-aware path planning_
In this subsection, we introduce an Uncertainty-aware A* (URA*) approach to generate suitable paths with respect to uncertainty in unknown environments. In traditional A*-based approaches [12][20][56], the environment is first discretized into states, and searches over the state space are carried out using edge costs as the optimality criterion. However, this assumes a perfectly known environment where the traversability and cost of every state are known in advance. In this research, we take anytime-replanning techniques from ARA* [20] and extend them to uncertain environments by incorporating predictions from a semantic segmentation network to generate robust paths that take into account the traversal probability of each state. We utilize this URA* algorithm in conjunction with D*-lite [57] to extend to the replanning problem, with an algorithm we call Uncertainty-aware D*-lite (URD*), described in the next subsection.
The traversability matrix obtained from the segmentation network is divided into a grid where each grid cell stores the traversal probability of that region. In this study, we resample the traversability matrix to a grid of 600×600 cells to speed up the computation. The path-planning algorithm will generate a sequence of cells to traverse from the start cell to the goal cell. A denser grid can be used to generate finer paths, at the cost of incurring higher computational time.
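One possible way to perform this resampling, assuming SciPy is available, is sketched below; bilinear interpolation (`order=1`) is our illustrative choice.

```
import numpy as np
from scipy.ndimage import zoom

def to_planning_grid(trav: np.ndarray, size: int = 600) -> np.ndarray:
    """Resample an (H, W) traversal-probability matrix to (size, size)."""
    factors = (size / trav.shape[0], size / trav.shape[1])
    # Clip because interpolation can slightly overshoot the [0, 1] range.
    return np.clip(zoom(trav, factors, order=1), 0.0, 1.0)
```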
```
Input: Traversability Model Predictions M, State s
Output: f-value
return g(s) + ε · (dist(s, goal) − α · M(s))
```
**Algorithm 1** URA* f-value
Algorithm 1 shows the f-value calculation of URA*, which determines the priority of which state to expand next. Similar to weighted-A* and ARA*, an \(\epsilon\) parameter is used to weight the heuristic vs. the g-value. The heuristic value for a state consists of the distance from the current state to the goal state subtracted by the traversal probability times a constant multiplier \(\alpha\). This places a higher preference on nodes that
have a higher probability of being free space and are also closer to the goal.
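A direct Python transcription of Algorithm 1 is given below, assuming `M` is the traversal-probability grid indexed by `(row, col)` states, `g` a dictionary of g-values, and a Euclidean distance heuristic; the function and variable names are ours.

```
import math

def dist(a, b):
    """Euclidean distance between two grid states."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def ura_f_value(s, goal, g, M, eps, alpha):
    # The heuristic is the distance to the goal discounted by the traversal
    # probability, so likely-free states near the goal are expanded first.
    return g[s] + eps * (dist(s, goal) - alpha * M[s])
```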
```
Input: Traversability Model Predictions M, s_start, s_goal
Output: Path from s_start to s_goal
g(s_start) = 0; g(s_goal) = ∞
OPEN = CLOSED = INCONS = ∅
Insert s_start into OPEN with URA_f_value(s_start)
ImprovePath()
while ε > 1 do
    Decrease ε
    Move states from INCONS into OPEN
    Update all priorities in OPEN according to URA_f_value(s)
    CLOSED = ∅
    ImprovePath()
```
**Algorithm 2** URA*
Algorithm 2 shows the main loop of URA*. Similar to ARA*, this involves running weighted A* multiple times with \(\epsilon\) gradually lowering each time. The \(ImprovePath()\) function is borrowed from ARA* and recomputes the shortest path within a given \(\epsilon\) while reusing search efforts from the previous executions. In \(ImprovePath()\), the cost of visiting a node is calculated as \(1-M(s)\); the higher the predicted traversability probability, the lower the cost of a state.
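To make the search concrete, a simplified, self-contained version of a single \(ImprovePath()\) pass is sketched below as a plain weighted A* over a 4-connected grid with the visiting cost \(1-M(s)\); it omits the INCONS bookkeeping and search-tree reuse of the full anytime algorithm, which repeatedly calls such a pass with decreasing ε.

```
import heapq
import math

def improve_path(M, start, goal, eps, alpha=1.0):
    """One weighted-A*-style pass over the probability grid M (2-D array)."""
    def h(s):  # goal-distance heuristic discounted by traversal probability
        return math.hypot(s[0] - goal[0], s[1] - goal[1]) - alpha * M[s]

    g = {start: 0.0}
    parent = {start: None}
    open_heap = [(g[start] + eps * h(start), start)]
    closed = set()
    while open_heap:
        _, s = heapq.heappop(open_heap)
        if s == goal:  # reconstruct the path by walking parents backwards
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        if s in closed:
            continue
        closed.add(s)
        for n in ((s[0]+1, s[1]), (s[0]-1, s[1]), (s[0], s[1]+1), (s[0], s[1]-1)):
            if not (0 <= n[0] < M.shape[0] and 0 <= n[1] < M.shape[1]):
                continue
            new_g = g[s] + (1.0 - M[n])  # visiting cost from the text
            if new_g < g.get(n, float("inf")):
                g[n] = new_g
                parent[n] = s
                heapq.heappush(open_heap, (new_g + eps * h(n), n))
    return None  # only reached if start and goal are disconnected
```

Because every cell carries a finite soft cost rather than a hard obstacle flag, such a pass can always connect the start to the goal on a full grid, which is the property exploited in the tree-initialization step below.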
### _Uncertainty-aware path replanning_
In this section, we introduce URD*, a probabilistic replanning technique that combines information from noisy aerial-to-ground traversability estimates with accurate ground-level traversability measurements. This algorithm is applied so that the ground robot is able to rapidly scan and re-plan suitable paths during physical operation. In order to effectively update the agent's representation of its surroundings during traversal, we simulate LiDAR scans by using Bresenham's algorithm [58] to trace the field of view of the robot as it moves through the environment and updates its internal representation of the traversable areas.
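For reference, a standard integer Bresenham line tracer of the kind referenced above is sketched here; it yields every grid cell a simulated sensor ray passes through between the robot's cell and a cell on its field-of-view boundary.

```
def bresenham(x0, y0, x1, y1):
    """All-quadrant Bresenham: list of grid cells from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:   # step in x
            err += dy
            x0 += sx
        if e2 <= dx:   # step in y
            err += dx
            y0 += sy
    return cells
```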
```
InitializeEnvironment()
Initialize Tree with URA*
s_current = s_last = s_start
while s_current ≠ s_goal do
    ComputeShortestPath(s_current, s_goal)
    if g(s_current) = ∞ then return fail
    s_current = argmin_{s' ∈ Succ(s_current)} ( c(s_current, s') + g(s') )
    Move to s_current
    UpdateEnvironment()
    if any edge costs changed then
        Update vertices with D*-lite procedure and URD* heuristic
```
**Algorithm 3** URD*
#### Iii-D1 Tree Initialization
Using Algorithm 2, the search tree initialization step is performed with URA*. Since URA* uses traversability prediction values as pseudo-costs, the initial search process is guaranteed to always find a path from the start state to the goal state.
```
Input: Traversability Model Predictions M, s_start, s_current
Output: Heuristic value of s
d_x = |x_start − x_current|
d_y = |y_start − y_current|
u(s) = dist(s_start, s_goal)
H = u(s) · (γ · min(d_x, d_y) + |d_x − d_y|)
return min(H, dist(s_start, s_current))
```
**Algorithm 4** URD* heuristic
#### Iii-D2 Replanning
In Algorithm 3, we adopt similar procedures to D*-lite [57] to replan paths to the goal, starting from the initial URA* search tree. Each time the simulated robot moves to a new state, \(s_{current}\), the traversability costs of the environment are updated by scanning a fixed radius around the robot and assigning the true traversability (i.e. the ground truth labels in the MRD, DeepGlobe, and CAVS datasets). This is implemented in the function \(UpdateEnvironment()\). Then, if the edge costs have changed, the vertices are updated according to the D*-lite procedure in combination with our new URD* heuristic.
#### Iii-D3 Improved Heuristic
In Algorithm 4, we determine the best node to expand during the replanning process by establishing a custom heuristic. We place higher importance on nodes that have a high traversal probability score as generated from the segmentation model and are closer to the goal. This heuristic is similar to the heuristic presented in [59], and we utilize a similar calculation method with a \(u(s)\) term in the heuristic \(H\), where \(\gamma\) is a constant multiplier. We weight this term so as to bias the algorithm away from the customized heuristic and toward a simpler Euclidean distance heuristic as the number of replans increases, since frequent replanning indicates that the heuristic is overestimating.
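A direct transcription of Algorithm 4 into Python reads as follows; as in the pseudocode, γ is the constant multiplier and `dist` is Euclidean distance.

```
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def urd_heuristic(s_start, s_current, s_goal, gamma):
    d_x = abs(s_start[0] - s_current[0])
    d_y = abs(s_start[1] - s_current[1])
    u = dist(s_start, s_goal)
    # Weighted diagonal-distance term, biased by gamma as described above.
    H = u * (gamma * min(d_x, d_y) + abs(d_x - d_y))
    # Never exceed the plain Euclidean-distance fallback.
    return min(H, dist(s_start, s_current))
```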
#### Iii-D4 Tree Resetting and Heuristic Scaling
In order to prevent the algorithm from being trapped in deadlock situations, we reset the search tree and plan a new path to the goal whenever \(s_{current}\) does not update after a few iterations. We also scale the \(\gamma\) term after each replan, to resort to the Euclidean distance term in the event that the segmentation model is highly inaccurate.
## IV Results
### _Performance analysis of traversability segmentation_
Table I shows the segmentation performance analysis of UNet, DeepLabV3+, and our ensemble model on the MRD, DeepGlobe, and CAVS datasets. The traversability predictions were compared pixel-by-pixel to the ground truth annotations. The evaluation metrics used are Dice Loss, standard deviation (SD) of Dice Loss, Intersection-over-Union (IoU), and recall rate, averaged over all images for
each dataset. Bold numbers indicate the best-performing model for each metric.
Results show that the proposed ensemble model for traversability segmentation achieved the lowest standard deviation in Dice Loss and the highest IoU for two out of the three datasets. More importantly, the proposed model achieved the highest recall rate for all datasets, demonstrating the benefit of an ensemble approach for maximizing the rate of finding traversable regions from aerial images. Still, the recall rates remain low at 50-60% across the three datasets. In the next section, we will present our results of uncertainty-aware reasoning and path planning to overcome these noisy segmentation results.
### _Performance analysis of path planning_
To evaluate the performance of URA* for computing the initial path, we compare the algorithm with RRT* and A*, which are popular algorithms for path planning. For A* and RRT*, a confidence threshold of 50% was used as the cutoff threshold for converting the segmentation network predictions to a binary traversability map. We also considered an alternate version of A*, which we term A**, where we lower the confidence threshold from 50% to 30% to give it a better chance of obtaining an initial solution. RRT* uses a step size of 5, a search radius of 50, and 10000 iterations. In the path planning experiments, we manually fix the start and end points for each aerial image.

Fig. 2: Traversability segmentation results for aerial images from different datasets. From top to bottom, the rows represent images from the Massachusetts Road Dataset, DeepGlobe dataset, and CAVS dataset. From left to right, the columns represent the (i) original image, (ii) predicted segmentation mask (PSM) from U-Net, (iii) PSM from DeepLabV3+, (iv) PSM from our ensemble model, and (v) ground truth segmentation mask.

Fig. 3: Path planning results for aerial images from different datasets. From top to bottom, the rows represent images from the Massachusetts Road Dataset, DeepGlobe dataset, and CAVS dataset. From left to right, the columns represent the (i) input aerial image, (ii) A* path, (iii) RRT* path, (iv) proposed URA* path, and (v) proposed URD* replanned path. Red/blue dots indicate start/goal points whereas green lines indicate the planned path. A path is not plotted if the algorithm fails to find a path between the start and goal points.
The results of path planning for generating the initial path are shown in Table II. These results are obtained from 49 images in the MRD test set, 29 images in the DeepGlobe validation set (since ground truth labels for the DeepGlobe test set have not been released), and 38 images in the CAVS test set. We use the normalized path length, average path accuracy, and success rate as evaluation metrics. The path length reflects the _quality_ of the planned path whereas the path accuracy reflects the _feasibility_ of the planned path. The normalized path length is calculated by dividing the computed path length in pixels by the straight-line distance from start to goal in pixels. The path accuracy is calculated by comparing the pixels of the computed path with the ground truth traversability labels to determine the percentage of the computed path that lies in traversable regions. That is, the higher the path accuracy, the more likely the initial path is to be feasible for a robot. Finally, the success rate is calculated as the percentage of aerial images for which the path planning algorithm is able to generate an initial path without returning failure. Note that in cases where an algorithm is not successful in producing a complete path from the start to the goal (given an input image), we use the maximum cost computed among all algorithms as a nominal value to penalize these failure cases.
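A sketch of the two path metrics defined above, assuming a path given as a list of `(row, col)` cells and a boolean ground-truth traversability mask:

```
import math
import numpy as np

def normalized_path_length(path, start, goal):
    """Total polyline length divided by the straight start-goal distance."""
    step = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    length = sum(step(a, b) for a, b in zip(path[:-1], path[1:]))
    return length / step(start, goal)

def path_accuracy(path, gt: np.ndarray) -> float:
    """Fraction of path cells that lie in truly traversable regions."""
    return sum(1 for cell in path if gt[cell]) / len(path)
```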
Results in Table II show that the proposed URA* algorithm significantly outperforms baseline algorithms on normalized path length, path accuracy, and success rate, but expands significantly more nodes than A* to find a feasible solution. By integrating the traversability probabilities into the planning process, URA* is able to generate higher-quality and more feasible paths. In addition, URA* is always successful in returning a solution. In contrast to A* or RRT*, which treat the input map as having binary traversability and may terminate prematurely if there are insufficient areas predicted to be traversable, URA* is designed to always be able to obtain a path from the start to the goal by treating traversability as a continuous value.
The results of path planning for generating replanned paths for online operations are shown in Table III. Rapidly-replanning A* (RRA*) [60] and D*-lite [57] are used as baseline algorithms. Results show that the proposed URD* algorithm performs the best in terms of shortest path length and fewest nodes expanded in two out of three datasets considered. For the CAVS dataset, URD* performs slightly worse compared to RRA* because the dataset contains scenes with fewer twists and intersections and thus, the advantage of uncertainty-aware replanning was not as significant compared to the MRD and DeepGlobe datasets.
Figure 3 shows a visual comparison of the paths generated by the proposed algorithm overlaid on the predicted traversability maps. Results show that A* and RRT* mostly fail or take suboptimal paths due to the noisy traversability maps whereas URA* is able to generate reasonable paths and URD* can improve those paths after replanning.
## V Conclusions
In conclusion, this research demonstrated an uncertainty-aware path planning algorithm to compute the best path through a region with unknown traversability where only aerial images are available. In future work, we will investigate the possibility of using real-time traversability observations to update the segmentation network model to generate more accurate traversability estimations for replanning purposes. We will also conduct experiments with off-road vehicles to benchmark the effectiveness of this form of aerial-to-ground traversability estimation and planning in real-world conditions.
## Acknowledgment
The work reported herein was supported by the National Science Foundation (NSF) (Award #IIS-2153101). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. |
2309.06187 | The Three Hundred: $M_{sub}-V_{circ}$ relation | In this study, we investigate a recent finding based on strong lensing
observations, which suggests that the sub-halos observed in clusters exhibit
greater compactness compared to those predicted by $\Lambda$CDM simulations. To
address this discrepancy, we performed a comparative analysis by comparing the
cumulative mass function of sub-halos and the
$M_{\text{sub}}$-$V_{\text{circ}}$ relation between observed clusters and 324
simulated clusters from The Three Hundred project, focusing on re-simulations
using GADGET-X and GIZMO-SIMBA baryonic models. The sub-halos' cumulative mass
function of the GIZMO-SIMBA simulated clusters agrees with observations, while
the GADGET-X simulations exhibit discrepancies in the lower sub-halo mass range
possibly due to its strong SuperNova feedback. Both GADGET-X and GIZMO-SIMBA
simulations demonstrate a redshift evolution of the sub-halo mass function and
the $V_{max}$ function, with slightly fewer sub-halos observed at lower
redshifts. Neither the GADGET-X nor GIZMO-SIMBA(albeit a little closer)
simulated clusters' predictions for the $M_{\text{sub}}$-$V_{\text{circ}}$
relation align with the observational result. Further investigations on the
correlation between sub-halo/halo properties and the discrepancy in the
$M_{\text{sub}}$-$V_{\text{circ}}$ relation reveals that the sub-halo's half
mass radius and galaxy stellar age, the baryon fraction and sub-halo distance
from the cluster's centre, as well as the halo relaxation state play important
roles on this relation. Nevertheless, we think it is still challenging in
accurately reproducing the observed $M_{\text{sub}}$-$V_{\text{circ}}$ relation
in our current hydrodynamic cluster simulation under the standard $\Lambda$CDM
cosmology. | Atulit Srivastava, Weiguang Cui, Massimo Meneghetti, Romeel Dave, Alexander Knebe, Antonio Ragagnin, Carlo Giocoli, Francesco Calura, Giulia Despali, Lauro Moscardini, Gustavo Yepes | 2023-09-12T12:53:46Z | http://arxiv.org/abs/2309.06187v1 | # The Three Hundred: \(M_{sub}-V_{circ}\) relation
###### Abstract
In this study, we investigate a recent finding based on strong lensing observations, which suggests that the sub-halos observed in clusters exhibit greater compactness compared to those predicted by \(\Lambda\)CDM simulations. To address this discrepancy, we performed a comparative analysis by comparing the cumulative mass function of sub-halos and the \(M_{\rm sub}\)-\(V_{\rm circ}\) relation between observed clusters and 324 simulated clusters from The Three Hundred project, focusing on re-simulations using Gadget-X and Gizmo-Simba baryonic models. The sub-halos' cumulative mass function of the Gizmo-Simba simulated clusters agrees with observations, while the Gadget-X simulations exhibit discrepancies in the lower sub-halo mass range, possibly due to their strong supernova feedback. Both Gadget-X and Gizmo-Simba simulations demonstrate a redshift evolution of the sub-halo mass function and the \(V_{max}\) function, with slightly fewer sub-halos observed at lower redshifts. Neither the Gadget-X nor the Gizmo-Simba (albeit a little closer) simulated clusters' predictions for the \(M_{\rm sub}\)-\(V_{\rm circ}\) relation align with the observational result. Further investigations on the correlation between sub-halo/halo properties and the discrepancy in the \(M_{\rm sub}\)-\(V_{\rm circ}\) relation reveal that the sub-halo's half mass radius and galaxy stellar age, the baryon fraction and sub-halo distance from the cluster's centre, as well as the halo relaxation state play important roles on this relation. Nevertheless, we think it is still challenging to accurately reproduce the observed \(M_{\rm sub}\)-\(V_{\rm circ}\) relation in our current hydrodynamic cluster simulation under the standard \(\Lambda\)CDM cosmology.
keywords: gravitational lensing - galaxy clusters - galaxies - dark matter
## 1 Introduction
Cold dark matter (CDM) plays an essential role in the formation and evolution of galaxies and galaxy clusters. It can be detected solely through its gravitational effects, such as the bending of the light from background galaxies. Galaxy clusters are gravitationally bound systems with masses around \(10^{14}\) to \(10^{15}\) solar masses, and dark matter makes up approximately 80 per cent of their mass. Gravity drives the process of structure formation, with haloes assembling hierarchically over time. Galaxy cluster haloes, in particular, are among the late-forming structures (White and Frenk, 1991; Tormen, 1998; Giocoli et al., 2007). Inside galaxy clusters, hundreds to thousands of smaller structures reside in local potential minima (Springel et al., 2001; Giocoli et al., 2010); these inner structures are known as sub-halos. Investigating and understanding these sub-halos will help us to understand galaxy cluster formation in detail.
The paper from Meneghetti et al. (2020) (hereafter M20) studied the gravitational lensing properties of both the cluster halos and sub-halos of the cluster samples observed in the Cluster Lensing and Supernova Survey with Hubble (CLASH; Postman et al., 2012) and Hubble Frontier Fields (Lotz et al., 2017) and compared them to hydro-simulated galaxy clusters. In their study, M20 discovered that the Galaxy-Galaxy Strong Lensing (GGSL) probability from simulation, reconstructed by means of the lensing tool of Bergamini et al. (2019), is significantly lower compared to the observed clusters. This finding indicates that the observed clusters have a much higher GGSL probability than those from hydrodynamic simulations under the \(\Lambda\)CDM cosmology. To support their argument, M20 used the maximum
circular velocities, \(V_{\rm circ}\), of sub-halos within galaxy clusters as a metric to assess the degree of compactness, as it directly reflects the sub-halo potential for producing the strong lensing events. This \(V_{\rm circ}\) is associated with the 1D-velocity dispersion \(\sigma_{0}\) by \(V_{\rm circ}=\sqrt{2}\sigma_{0}\), which is selected as one of the parameters in the lens modelling analysis of M20. They noticed that the sub-halos in observed clusters have higher \(V_{\rm circ}\) values when compared to sub-halo samples from mass-matched clusters in the cosmological hydrodynamic simulations by Planelles et al. (2014). These findings suggest that the galaxies in these observed clusters are more efficient in lensing background sources and potentially more concentrated. The discrepancy between simulation and observation results may arise from limitations in the simulation's resolution or the presence of systematics. It has been suggested that the simulation output is sensitive to mass resolution and tidal disruption (van den Bosch et al., 2018; Green et al., 2021), which could potentially impact sub-halo properties. However, Meneghetti et al. (2022) (see also Ragagnini et al., 2022) found that the resolution does not affect the GGSL probability, which, however, seems sensitive to the galaxy formation model implemented in the simulations. Nevertheless, it is still difficult to simultaneously reproduce galaxies' stellar mass function and internal structure. Another possible explanation is that this issue arises from an inaccurate understanding of the nature of dark matter within the \(\Lambda\)CDM paradigm, which may necessitate the exploration of alternative models such as self-interacting dark matter (SIDM) models (Yang and Yu, 2021; Bhattacharyya et al., 2022) and cold and sterile neutrino (SN) dark matter models (Despali et al., 2020).
Footnote 1: We will use \(V_{\rm circ}\) to denote the maximum circular velocities throughout this paper.
Using the simulated galaxy clusters from the Hydrangea/C-EAGLE cosmological hydrodynamic simulations, Bahe (2021) did a similar comparison to the observed clusters, as in M20. Only one simulated cluster from Hydrangea at redshift \(z=0.4\) matches closely with the mass range of the observed sample presented in M20, with mass \(M_{\rm 200c}>5\times 10^{14}h^{-1}M_{\odot}\). They claimed that sub-halos in this highly resolved simulation match well with the observations (see also another study by Robertson 2021 for the resolution impact on the simulation-generated lensing signal). In their study, Bahe (2021) determined that \(V_{\rm circ}\) is higher by a factor of 2 in Hydrangea and is consistent with the observed trend. This increase in the offset of the maximum circular velocity was attributed to the inclusion of baryons in the simulations. The comparison made by Bahe (2021) between simulations with and without baryonic matter (i.e. dark matter only) revealed that sub-halos with a higher fraction of baryonic matter exhibited higher \(V_{\rm circ}\), implying that dense stellar cores capable of withstanding tidal stripping play a major role in explaining the observed high lensing signals (Armitage et al., 2019, also see Bahe et al., 2019; Joshi et al., 2019). Additionally, Bahe (2021) also checked the result from the Illustris-TNG300 simulation (Marinacci et al. (2018); Naiman et al. (2018); Nelson et al. (2019); Pillepich et al. (2018); Springel et al. (2018)), and argued that both Illustris-TNG300 and Hydrangea simulations predicted high \(V_{\rm circ}\) values for massive sub-halos located in the vicinity of the cluster centre. Thus, Bahe (2021) concluded that there is no evidence of a significant disagreement between the observed sub-halo concentrations and predictions from the CDM model.
Footnote 2: \(M_{\rm 200c}\) represents the mass within a radius denoted as \(R_{\rm 200c}\), measured from the center of a galaxy cluster’s potential, where this radius signifies the region with an average density that is 200 times the critical density of the Universe.
On the contrary, Ragagnini et al. (2022) examined the effect of various numerical setups (such as resolution and softening length) and the AGN feedback scheme on the interior structure of cluster sub-halos using six simulated zoomed-in regions of Dianoga, and they found contrasting results with respect to Bahe (2021). Their results suggested that regardless of the numerical configuration used, the sub-halos of simulated clusters were unable to reproduce the observed \(M_{\rm sub}-V_{\rm circ}\) (\(M_{\rm sub}\), sub-halo mass) relation from Bergamini et al. (2019). This failure to reproduce the scaling relation was particularly evident for sub-halo masses \(M_{\rm sub}<10^{11}h^{-1}M_{\odot}\), which corresponds to the mass range of interest for galaxy-galaxy strong lensing (GGSL) events. The simulated sub-halos exhibited \(V_{\rm circ}\) values approximately 30% smaller compared to the observed scaling relation presented by Bergamini et al. (2019). This was also observed for the Hydrangea simulations discussed in Bahe (2021). The scaling relationship between \(M_{\rm sub}\) (mass of sub-halos) and \(V_{\rm circ}\) (circular velocity), as derived from simulations, shows good agreement with observations in the high mass range (\(M_{\rm sub}>4\times 10^{11}h^{-1}M_{\odot}\)). However, concerns have been raised by Ragagnini et al. (2022) regarding the simulations' tendency to produce high stellar masses for sub-halos within this mass range. This discrepancy in stellar mass could potentially be a key factor contributing to the observed agreement in the \(M_{\rm sub}-V_{\rm circ}\) relation for the high mass range, and it may be associated with the Hydrangea simulations examined by Bahe (2021). Although the simulations can reproduce the correct scaling relationship between \(M_{\rm sub}\) and \(V_{\rm circ}\) by adjusting the AGN feedback strength, the resulting galaxies exhibit unrealistic properties, such as having larger stellar masses compared to observed galaxies. As demonstrated by Ragone-Figueroa et al. (2018), both the Hydrangea and IllustrisTNG simulations show excessively large stellar masses in the brightest cluster galaxies. Therefore, it is important to emphasize that both Meneghetti et al. (2022) and Ragagnini et al. (2022) clearly stated that simulations are unable to simultaneously reconcile with the observed \(M_{\rm sub}-V_{\rm circ}\) relationship in the two sub-halo mass regimes.
We would like to point out that all these previous studies are limited by the number of cluster samples, which cannot support statistically solid conclusions or correlation studies. In a recent letter, Meneghetti et al. (in prep.) performed a ray-tracing analysis of 324 galaxy clusters from the The Three Hundred and found that the Gizmo-Simba version run developed denser stellar cores and boosted the galaxy-galaxy strong lensing probability by a factor of \(\sim 3\) relative to its Gadget-X counterpart. In this companion paper, we also use the simulated galaxy clusters from the The Three Hundred project, as detailed in Cui et al. (2018, 2022), to compare with the observed \(M_{\rm sub}-V_{\rm circ}\) relation reported in M20. Although our simulated clusters have a slightly lower mass resolution than Planelles et al. (2014) and about 100 times lower than the Hydrangea simulated clusters, they have a significant advantage in terms of a large sample size, a relatively wide mass range and, importantly, two different baryon models. Our sample includes approximately 10 times more simulated clusters than the Hydrangea sample used in Bahe (2021), and roughly 15 times more than the Dianoga simulation used in Ragagnini et al. (2022). These advantages allow us to statistically investigate and understand the discrepancy.
Footnote 3: [https://www.the300-project.org](https://www.the300-project.org)
The paper is structured as follows: In Section 2, we provide an introduction to the The Three Hundred galaxy cluster simulation with the Amiga Halo Finder (AHF) halo catalogue which was used to identify host-halos and their corresponding sub-halos. We will also explain our methodology for selecting the samples of host halos
and their sub-halos. In Section 3, we compare the sub-halo mass distributions of the three reference clusters of M20 with the predictions from the simulations and examine how they evolve with redshift. In Section 4, we present the cumulative sub-halo \(V_{\rm circ}\) function for the simulations. In Section 5, we compare the observed \(M_{\rm sub}-V_{\rm circ}\) relation reported in Bergamini et al. (2019) with the one generated from the data set of simulated clusters in the The Three Hundred project. We also examine the influence of sub-halo and host-halo properties on the \(M_{\rm sub}-V_{\rm circ}\) relationship. Finally, in Section 6 we summarise our results.
## 2 Simulations
The The Three Hundred project, introduced in Cui et al. (2018), consists of an ensemble of 324 galaxy clusters, selected as the mass-complete sample of the largest virial-mass halos (\(M_{\rm vir}\gtrsim 8\times 10^{14}\,h^{-1}\,M_{\odot}\)) at \(z=0\) from the dark-matter-only MultiDark simulation (MDPL2, Klypin et al., 2016). The MDPL2 simulation employs periodic boundary conditions in a cubic box with a side of \(1\,h^{-1}\)Gpc and contains \(3840^{3}\) dark matter particles, each with a mass of \(1.5\times 10^{9}\,h^{-1}\,M_{\odot}\). This dark matter-only simulation adopts cosmological parameters (\(\Omega_{M}=0.307,\Omega_{B}=0.048,\Omega_{\Lambda}=0.693,\,h=0.678,\sigma_{8}=0.823,n_{s}=0.96\)) based on the Planck observations from Planck Collaboration et al. (2016). Each selected cluster is placed at the centre of the re-simulated box inside a high-resolution spherical region with a radius of \(15h^{-1}\)Mpc. The regions are filled with gas and dark matter particles (with \(m_{\rm DM}=1.27\times 10^{9}\,h^{-1}\,M_{\odot}\) and \(m_{\rm gas}=2.36\times 10^{8}\,h^{-1}\,M_{\odot}\)) based on the original dark matter distribution, in accordance with the cosmological baryon fraction \(\Omega_{B}=0.048\). Beyond the \(15\,h^{-1}\)Mpc range, the outer region is populated with low-resolution mass particles, which reproduces the large-scale tidal effects of the original MDPL2 simulation in a computationally efficient way. Subsequently, the 324 selected regions undergo re-simulation using different baryonic models and codes, namely Gadget-X (Rasia et al., 2015) and Gizmo-Simba (Dave et al., 2019; Cui et al., 2022). For each simulated cluster in the The Three Hundred project using Gadget-X and Gizmo-Simba, we have 128 snapshot files corresponding to redshifts ranging from \(z=17\) to \(z=0\).
The details regarding the Gadget-X and Gizmo-Simba codes used for the re-simulation of clusters are as follows:
* **Gadget-X**: It is an updated, modified version of Gadget3 code (Murante et al., 2010; Rasia et al., 2015; Planelles et al., 2017) in which the evolution of dark matter is followed by the gravity solver of the Gadget3 Tree-PM code, an updated version of Gadget2 code (Springel, 2005). It incorporates an improved SPH scheme (Beck et al., 2016) with artificial thermal diffusion, time-dependent artificial viscosity, high-order Wendland C4 interpolating kernel, and wake-up scheme. The technique described in Wiersma et al. (2009) is used to compute gas cooling for an optically thin gas with consideration of the contribution of metals. Additionally, a uniform ultraviolet (UV) background is incorporated by adopting the approach outlined in Haardt and Madau (1995). Star formation in this work follows the approach described in Tornatore et al. (2007) and adopts the star formation algorithm presented by Springel and Hernquist (2003). This algorithm treats gas particles as multiphase, contributing to a self-regulating interstellar medium when their densities rise over a particular threshold. The star formation rate is determined solely by the gas density in this model. Stellar feedback, specifically supernova feedback, is implemented as a kinetic energy-driven scheme, following the prescription in Springel and Hernquist (2003). Each star particle is treated as a single stellar population (SSP), and the evolution of each SSP is modelled following Chabrier (2003) stellar evolution prescriptions. Metals from Type Ia and Type II supernovae, as well as from asymptotic giant branch phases, are taken into account in the simulation, with the code following the evolution of 16 chemical species. The growth of black holes and the implementation of AGN feedback in Gadget-X are based on the refined model presented in Steinborn et al. (2015). In this model, super-massive black holes grow via Eddington-limited Bondi-Hoyle-like gas accretion, with a distinction made between hot and cold components.
* **Gizmo-Simba**: It is based on the GIZMO cosmological hydrodynamical code (Hopkins, 2015) with its mesh-less finite mass scheme and utilises the galaxy formation input physics of the state-of-the-art Simba simulation (Dave et al., 2019). The baryon model was re-calibrated because The Three Hundred initial conditions have a lower resolution than the original Simba simulation, and the two simulations had different objectives (a cosmological run for Simba and galaxy clusters for The Three Hundred). The GRACKLE-3.1 library (Smith et al., 2017) is utilised to implement the processes of radiative cooling, photon heating, and gas ionization. The spatially-uniform ultraviolet background model (Haardt and Madau, 2012) and the self-shielding prescription, based on the approach by Rahmati et al. (2013), are employed. Additionally, an \(H_{2}\)-based star formation model from Mufasa (Dave et al., 2016) is included. The star formation-driven galactic winds are implemented based on a decoupled two-phase model. This model is also based on Mufasa, but with an additional mass loading factor derived from Angles-Alcazar et al. (2017). The chemical enrichment model tracks eleven elements with metals enriched from supernovae Type Ia and Type II and asymptotic giant branch stars. The black hole accretion description is based on two models: a torque-limited accretion model for cold gas (Angles-Alcazar et al., 2015, 2017) and a hot gas accretion model based on Bondi (1952). It incorporates three AGN feedback modes: a kinetic subgrid model for both 'radiative mode' and 'jet mode' with bi-polar ejections, and a kinetic X-ray feedback model following Choi et al. (2012). A more extensive discussion about the baryon model can be found in Cui et al. (2022); Dave et al. (2016, 2019).
Apart from the disparities in their models, it is essential to recognise that the two codes differ in their objectives when comparing simulation outputs to observations. The Gadget-X simulation is tuned to accurately reproduce the gas properties and relations seen in observations, such as the temperature-mass (\(T-M\)) and integrated Sunyaev-Zeldovich decrement vs. mass (\(Y-M\)) relations (e.g., Li et al., 2020; Sayers et al., 2023; Li et al., 2023). On the other hand, the Gizmo-Simba simulation is calibrated to reproduce galaxy stellar properties, including the total stellar fraction, satellite stellar mass function, and Brightest Cluster Galaxy (BCG) halo mass functions (see Zhang et al., 2022; Cui, 2022; Ferragamo et al., 2023, for comparisons between the two simulations). Since the introduction of The Three Hundred project in Cui et al. (2018), several studies have used these data for many different projects, such as Haggar et al. (2020); Ansarifard et al. (2020); de Andres et al. (2022). We refer the readers to these papers for more details about the project.
### The Halo and Sub-halo Catalogues
The simulation data is analyzed using the AHF (Amiga Halo Finder) open-source software (Knollmann and Knebe, 2009) to generate halo/sub-halo catalogues. AHF identifies structures hierarchically within cosmological simulations. It detects and locates spherical over-density peaks in the density field of the simulation, consistently considering dark matter, stars, and gas particles. The physical properties of all identified halos are determined based on the gravitationally bound particles. Halo positions are determined based on the over-density peak and the radius \(R_{200c}\). Additionally, sub-structures, referred to as sub-halos, are identified using the same process. Sub-halos are smaller gravitationally bound entities located within the radius \(R_{200c}\) of a larger central structure termed the host halo.
AHF searches for connected overdensity regions within the radius \(R_{200c}\) of the main halo. These regions are considered potential sub-halos. For each potential sub-halo, AHF determines whether the particles within the overdensity region are gravitationally bound to the main halo. This involves analyzing and comparing the particles' velocities with the local escape velocity obtained using the spherical potential approximation. If the overdensity region is found to be gravitationally bound to the main halo, it is confirmed as a sub-halo. In the following subsection, we will describe the selection procedure of our host halos and their associated sub-halos used in our study.
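As an illustration of the boundness criterion (and not of AHF's actual iterative implementation), under the spherical approximation a particle at radius \(r\) is bound if its speed relative to the halo bulk motion is below the local escape velocity, \(v_{\rm esc}(r)=\sqrt{2GM(<r)/r}\). A minimal numpy sketch follows, assuming consistent units of kpc, km/s, and \(M_{\odot}\), and strictly positive radii.

```
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def bound_mask(r, speed, mass):
    """r: particle radii from the halo centre (all > 0); speed: |v - v_bulk|;
    mass: particle masses. Returns a boolean bound/unbound flag per particle."""
    order = np.argsort(r)
    m_enc = np.cumsum(mass[order])                # enclosed mass M(<r)
    v_esc = np.sqrt(2.0 * G * m_enc / r[order])   # local escape velocity
    bound = np.empty(r.shape, dtype=bool)
    bound[order] = speed[order] < v_esc
    return bound
```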
### Host-halo and Sub-halo Sample Selection
We selected the sample from each simulated cluster region (for both Gadget-X and Gizmo-Simba in The Three Hundred dataset) focusing on three particular redshifts: \(z=0.394\), \(z=0.194\), and \(z=0\). The redshift \(z=0.394\) is primarily selected to enable a close comparison to the observed galaxy clusters in M20, which have redshifts in the range \(0.2<z<0.6\) with the median \(z=0.39\). The two additional redshifts are for the purpose of evolution studies. Host halos with \(M_{200c}>6.5\times 10^{14}h^{-1}M_{\odot}\) are selected in each simulation region, ensuring that the uncontaminated mass fraction of the high-resolution particles is greater than 0.98. This mass cut is chosen to cover the observed cluster mass range; note that the three cluster masses in M20 are: \(1.59\pm 0.36\) (MACS J1206.2-0847), \(1.04\pm 0.22\) (MACS J0416.1-0403) and \(2.03\pm 0.67\) (Abell S1063) \(\times 10^{15}\) M\({}_{\odot}\) (see Table 1 in Bergamini et al., 2019).
Footnote 4: The fraction is not 100 per cent, for AHF takes BH particles as low-resolution particles. However, changing this fraction does not affect our results.
For each host halo identified at three different redshifts in The Three Hundred project's simulation runs, we further made selections of sub-halos with two scenarios given below:
* Sub-halos that have \(M_{\rm sub}>1\times 10^{10}h^{-1}M_{\odot}\) and are located within a projected distance of less than \(0.15R_{200c}\) (where \(R_{200c}\) represents the radius of the host halo) from the host-halo centre in the simulation's \(XY\) plane, i.e., \(R_{\rm 2D}<0.15R_{200c}\).
* Sub-halos that have \(M_{\rm sub}>1\times 10^{10}h^{-1}M_{\odot}\) and are physically located at a distance less than \(0.15R_{200c}\) from their host-halo centre, i.e., \(R_{\rm 3D}<0.15R_{200c}\).
For the investigation of the cumulative sub-halo mass function and the \(M_{\rm sub}-V_{\rm circ}\) relation, we considered the sub-halo mass cut mentioned above. However, it is important to note that, in the correlation studies, we applied a significantly higher sub-halo mass cut of \(M_{\rm sub}>1.27\times 10^{11}h^{-1}M_{\odot}\) to the dataset. This was done to mitigate potential resolution-related issues that could affect the sub-halo properties. Similarly, we also eliminated any contaminated sub-halos with a low-resolution particle mass fraction greater than 2 per cent. Sub-halos without any stars are also excluded from our analysis. Moreover, in Gadget-X, we found some sub-halos with an unusually high stellar mass fraction (\(f_{*}\) approximately above 0.8) but very low dark matter content close to the host halo's centre (approximately \(R_{\rm 3D}<0.05R_{200c}\)), and subsequently excluded them from our analysis. However, this issue was not observed in the sub-halos sampled from the Gizmo-Simba runs.
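As a sketch of these cuts, with illustrative column names for an AHF-like catalogue stored as arrays of per-sub-halo quantities, the selection could be expressed as:

```
import numpy as np

def select_subhalos(cat, r200c, use_2d=True, m_min=1e10):
    """Apply the sample cuts described above; `cat` maps column names to
    numpy arrays (masses in h^-1 Msun, distances in the same units as r200c)."""
    d = cat["r2d"] if use_2d else cat["r3d"]
    keep = (cat["m_sub"] > m_min) & (d < 0.15 * r200c)
    keep &= cat["m_star"] > 0.0        # discard sub-halos without stars
    keep &= cat["f_lowres"] <= 0.02    # discard contaminated sub-halos
    return np.flatnonzero(keep)
```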
The general information regarding our chosen sample is presented in Tables 1 for Gadget-X and 2 for Gizmo-Simba. Table 1 presents information on the selected host halos with the higher mass cut, including the number (\(N_{\rm host}\), column 2), the median mass \(M_{200c}\) (column 3) of host halos, the median number of sub-halos within \(R_{\rm 2D}<0.15R_{200c}\) of each host halo (column 4) and \(R_{\rm 3D}<0.15R_{200c}\) (column 6) with their median masses in column 5 and 6 respectively. Additionally, the table also provides the total number of selected sub-halos for \(R_{\rm 2D}<0.15R_{200c}\) (column 7) and \(R_{\rm 3D}<0.15R_{200c}\) (column 8) for Gadget-X simulated clusters. The different rows show these quantities at the three different redshifts. Similarly, Table 2 reports information for the selected host-halos and sub-halos for simulated clusters at three different redshifts for the Gizmo-Simba run.
Based on this dataset of simulated clusters' host-halos and sub-halos from The Three Hundred dataset, we will commence our investigation to examine whether significant offsets exist between the observations of M20 and the simulations in the context of strong gravitational lensing.
## 3 Sub-halo mass function
We begin our analysis by comparing the cumulative sub-halo mass functions predicted by The Three Hundred clusters to the ones derived from the lens models of the three reference clusters, MACSJ0416, MACSJ1206, and AS1063, in M20. We calculate the sub-halo mass function for each cluster to determine the median cumulative sub-halo mass function at the specified redshifts for the Gadget-X and Gizmo-Simba simulations, using the available sub-halo information associated with each cluster. Next, we bin the sub-halos by their mass, \(M_{\rm sub}\), into logarithmic mass bins and calculate the median value of \(N(>M_{\rm sub})\) in each bin. This process yields the median cumulative sub-halo mass function for the simulated clusters at the respective redshifts. Additionally, we calculate the 16th and 84th percentiles of \(N(>M_{\rm sub})\) in each logarithmic mass bin as the associated uncertainty.
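A minimal sketch of this median-and-percentile construction, assuming each cluster's sub-halo masses have already been loaded into NumPy arrays (the data-loading step is omitted):

```python
import numpy as np

def cumulative_shmf(msub_per_cluster, mthresholds):
    """Median cumulative sub-halo mass function N(>M) over a set of clusters,
    with the 16th/84th percentiles as the associated uncertainty."""
    counts = np.array([[np.sum(m > mt) for mt in mthresholds]
                       for m in msub_per_cluster])
    med = np.median(counts, axis=0)
    lo, hi = np.percentile(counts, [16, 84], axis=0)
    return med, lo, hi

# usage with fake data, purely illustrative
rng = np.random.default_rng(0)
clusters = [10 ** rng.uniform(10, 13, size=30) for _ in range(90)]
mthresholds = np.logspace(10, 13, 20)
med, lo, hi = cumulative_shmf(clusters, mthresholds)
```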
Fig. 1 depicts the median cumulative sub-halo mass function for \(R_{\rm 2D}<0.15R_{200c}\) (left) and \(R_{\rm 3D}<0.15R_{200c}\) (right), for both Gadget-X and Gizmo-Simba, at three redshifts, \(z=0.394\), \(z=0.194\), and \(z=0\). The grey shaded region in Fig. 1 (left and right) represents the 16th-84th percentile range for the clusters at redshift \(z=0.394\) for Gadget-X and Gizmo-Simba. Whether using the projected 2D distance or the actual physical 3D distance between sub-halos and the host-halo centre, the cumulative sub-halo mass function follows a power-law trend when fitted analytically with a power-law function, as previously demonstrated in Giocoli et al. (2008). The cumulative sub-halo mass function of the Gizmo-Simba simulation displays a cleaner power-law trend, with a power index close to 1, than that of Gadget-X. Upon comparing the results with the observed sub-halo mass functions from M20 obtained through a strong lensing model (represented by black curves with different line styles in Figure 1), we observe consistency between the observations of MACSJ0416 and MACSJ1206 and the results from the Gizmo-Simba simulated clusters within \(R_{\rm 2D}<0.15R_{\rm 200c}\). For the Gadget-X simulated clusters, we find that the sub-halo mass function (\(R_{\rm 2D}<0.15R_{200c}\)) has better agreement with the observational results for sub-halo masses greater than \(\sim 8\times 10^{10}h^{-1}M_{\odot}\). Regarding the deficit at the low-mass end, the Gadget-X baryon model has a stronger resolution dependence5, because its sub-halo mass function is closer to a power law if we do not apply the stellar mass constraint \(M_{*}>0\) (see also Contreras-Santos et al. 2023, who found many dark sub-halos in Gadget-X). In the scenario where sub-halos are situated within a radial distance of \(R_{\rm 3D}<0.15R_{\rm 200c}\), both the Gadget-X and Gizmo-Simba simulations exhibit a lower median cumulative sub-halo mass function compared to the observed results. This discrepancy arises because the observational data inherently capture a 2D projection of the sub-halo distribution, while the condition \(R_{\rm 3D}<0.15R_{\rm 200c}\) in the simulations considers the complete 3D spatial distribution of sub-halos. The projection effect increases the sub-halo numbers by a factor of \(\sim 2.5\), regardless of the sub-halo masses. Note that we only considered sub-halos within \(R_{\rm 200c}\) of the host halo for the projection. We verified that applying this constraint (i.e., \(R_{\rm 3D}<R_{\rm 200c}\)) underestimates the sub-halo mass function for the case of \(R_{\rm 2D}<0.15R_{\rm 200c}\) by only approximately \(2.21\) per cent compared to the much larger radial constraint of \(R_{\rm 3D}<2.5R_{\rm 200c}\). Although the whole volume in the projection case is about five times larger than in the 3D
| \(z\) | \(N_{\rm host}\) | Median \(M_{200c}\) \([h^{-1}M_{\odot}]\) | Median \(N_{\rm 2D}^{\rm sub}\) | Median \(M_{\rm sub}^{\rm 2D}\) \([h^{-1}M_{\odot}]\) | Median \(N_{\rm 3D}^{\rm sub}\) | Median \(M_{\rm sub}^{\rm 3D}\) \([h^{-1}M_{\odot}]\) | Total \(N_{\rm 2D}^{\rm sub}\) | Total \(N_{\rm 3D}^{\rm sub}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.394 | 90 | \(7.97\times 10^{14}\) | 10 | \(2.62\times 10^{11}\) | 3 | \(2.78\times 10^{11}\) | 895 | 310 |
| 0.194 | 180 | \(8.17\times 10^{14}\) | 9 | \(2.70\times 10^{11}\) | 3 | \(2.84\times 10^{11}\) | 1719 | 576 |
| 0 | 321 | \(8.46\times 10^{14}\) | 7 | \(2.57\times 10^{11}\) | 2 | \(2.49\times 10^{11}\) | 2631 | 875 |

Table 1: Host halo and sub-halo samples obtained from the Gadget-X simulated clusters. The meaning of each column is indicated in the header (see Section 2.2 for further details). The information pertains to sub-halos located at distances less than \(0.15R_{\rm 200c}\). A sub-halo mass threshold of \(M_{\rm sub}>1.27\times 10^{11}h^{-1}M_{\odot}\) is used to compute the statistics.
| \(z\) | \(N_{\rm host}\) | Median \(M_{200c}\) \([h^{-1}M_{\odot}]\) | Median \(N_{\rm 2D}^{\rm sub}\) | Median \(M_{\rm sub}^{\rm 2D}\) \([h^{-1}M_{\odot}]\) | Median \(N_{\rm 3D}^{\rm sub}\) | Median \(M_{\rm sub}^{\rm 3D}\) \([h^{-1}M_{\odot}]\) | Total \(N_{\rm 2D}^{\rm sub}\) | Total \(N_{\rm 3D}^{\rm sub}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.394 | 82 | \(8.04\times 10^{14}\) | 15 | \(2.42\times 10^{11}\) | 6 | \(2.46\times 10^{11}\) | 1264 | 488 |
| 0.194 | 169 | \(8.14\times 10^{14}\) | 14 | \(2.35\times 10^{11}\) | 5 | \(2.33\times 10^{11}\) | 2373 | 966 |
| 0 | 302 | \(8.32\times 10^{14}\) | 12 | \(2.30\times 10^{11}\) | 5 | \(2.21\times 10^{11}\) | 3922 | 1578 |

Table 2: Similar to Table 1, but for Gizmo-Simba.
Figure 1: 2D projected (left panel) and 3D (right panel) cumulative sub-halo mass functions. The dotted lines represent the Gizmo-Simba results, while dash-dot lines show the median cumulative sub-halo mass functions from Gadget-X. The shaded areas show the \(16^{th}-84^{th}\) percentiles from all clusters at \(z=0.394\). The sub-halo mass functions of the Gadget-X and Gizmo-Simba simulations are displayed for three redshifts: \(z=0.394\) (red), \(z=0.194\) (blue), and \(z=0\) (green). The projected results in the left panel use all sub-halos located within a projected 2D distance of \(0.15R_{\rm 200c}\), i.e., \(R_{\rm 2D}<0.15R_{\rm 200c}\). The right panel illustrates the results using only sub-halos situated within a physical 3D distance of \(0.15R_{\rm 200c}\), i.e., \(R_{\rm 3D}<0.15R_{\rm 200c}\). In both panels, the observed sub-halo mass functions from the three reference clusters in M20 are presented with black curves with different line styles; see the legend for details.
case, there are far fewer galaxies/sub-halos at large radii (see Li et al., 2020, 2023, for example). Therefore, using a slightly larger projection distance does not affect this result much. To perform this volume comparison, we directly compared the volume of a sphere of radius \(0.15\ R_{\rm 200c}\) with the volume of a cylinder of radius \(0.15\ R_{\rm 200c}\) and height \(R_{\rm 200c}\). Lastly, there is a weak redshift evolution of the sub-halo mass functions in all the simulation samples (see also Giocoli et al., 2008, 2010), which we detail in the following subsection.
### The redshift evolution of the sub-halo mass function
It is expected that more massive clusters host more sub-halos. As shown in Tables 1 and 2, the median halo mass slightly increases as redshift drops for both Gadget-X and Gizmo-Simba. One would therefore expect a higher sub-halo mass function at \(z=0\) than at \(z=0.394\). However, the results in Figure 1, for both Gadget-X and Gizmo-Simba and within both \(R_{\rm 2D}\) and \(R_{\rm 3D}\), show the opposite evolution, i.e., a lower sub-halo mass function (fewer sub-halos) at \(z=0\) compared to \(z=0.394\). We suspect this could be due to the different halo mass distributions at these redshifts. Therefore, we investigate this further in this subsection.
To examine the redshift evolution of the sub-halo mass function, we only present the projected results for the 2D case, which include more sub-halos; we note, however, that the \(R_{\rm 3D}\) results are similar to the \(R_{\rm 2D}\) ones. The sub-halo mass distribution for all Gadget-X and Gizmo-Simba simulated clusters at \(z=0.394\) is shown in the left panels (a and b) of Fig. 2, with the line colour coded by cluster mass, as indicated by the colour bar to the right. To show the residual redshift evolution of the sub-halo mass function, we first performed a normalisation step by dividing each host halo's cumulative sub-halo mass function by its own halo mass. This normalisation eliminates any host halo mass dependence from the cumulative sub-halo mass distribution. We then calculate the median sub-halo mass function by grouping the normalised sub-halo masses in logarithmic mass bins and computing the median \(N(>M_{\rm sub})/M_{H}\) in each bin. The right panels of Fig. 2 show the redshift evolution of the normalised sub-halo mass distribution predicted by both Gadget-X and Gizmo-Simba. The plot clearly illustrates the evolution of the sub-halo mass function as the redshift decreases from \(z=0.394\) to \(z=0\): within a given parent halo, a greater number of sub-halos is expected at earlier times, when they are dynamically young and less concentrated. This inference about the redshift evolution of the sub-halo mass function aligns with the findings of Gao et al. (2004) and Gao et al. (2011). However, the evolution we observe in our hydrodynamical simulations is milder than in these earlier dark-matter-only studies. Interestingly, the same evolutionary trend for hydrodynamical simulations was also reported in Ragagnin et al. (2019), Ragagnin et al. (2022), and Despali & Vegetti (2017). This observation is further supported by the Median \(N_{\rm 2D}^{\rm sub}\) and Median \(N_{\rm 3D}^{\rm sub}\) columns in Tables 1 and 2, respectively. Moreover, upon examining Table 2 for Gizmo-Simba, we observe a general decreasing trend in the median sub-halo mass (check \(M_{\rm sub}^{\rm 2D}\) and \(M_{\rm sub}^{\rm 3D}\) from redshift \(z=0.394\) to \(z=0\)). We have also verified that the same evolutionary trend of the sub-halo mass function is observed when considering a snapshot at redshift higher than \(z=0.394\). Moreover, the evolution of \(R_{\rm 200c}\) at fixed halo mass between \(z=0\) and an arbitrary redshift \(z\) is governed by \(\frac{R_{\rm 200c}(z=0)}{R_{\rm 200c}(z)}=(H(z)/H_{0})^{2/3}\). This ratio reflects how the enclosed volume slightly increases at low redshift as a result of the decreasing critical density, \(\rho_{c}(z)\propto\Omega_{m}(1+z)^{3}+\Omega_{\Lambda}\). This pseudo-evolution of the halo mass can only partly explain the decrease of the satellite number within \(R_{\rm 200c}\) at \(z=0\), because the redshift evolution in Figure 2 is much larger than this density change.
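The size of this pseudo-evolution is easy to quantify. The sketch below evaluates \(R_{200c}(z=0)/R_{200c}(z)\) for a flat \(\Lambda\)CDM background; the cosmological parameters are illustrative Planck-like values, assumed here rather than taken from the simulation configuration.

```python
import numpy as np

Om, OL = 0.307, 0.693  # assumed flat LCDM parameters

def E(z):
    """H(z)/H0 for a flat LCDM universe."""
    return np.sqrt(Om * (1 + z) ** 3 + OL)

def r200c_ratio(z):
    """R200c(z=0)/R200c(z) at fixed halo mass: since rho_crit ~ E(z)^2
    and M200c ~ rho_crit * R200c^3, R200c scales as E(z)^(-2/3)."""
    return E(z) ** (2.0 / 3.0)

print(r200c_ratio(0.394))  # ~1.15: R200c grows ~15% between z=0.394 and z=0
```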
## 4 Cumulative sub-halo \(V_{\rm circ}\) function
In this section, we calculate and compare the cumulative sub-halo \(V_{\rm circ}\) function for both the Gadget-X and Gizmo-Simba simulations at three different redshifts: \(z=0.394\), \(z=0.194\), and \(z=0\). To estimate each sub-halo's \(V_{\rm circ}\) in both simulations, we used the output profiles generated by AHF (Knollmann & Knebe, 2009). The AHF output files contain radial profiles of various halo/sub-halo properties such as mass, density, rotation curve, escape velocity, etc. Here, we only use the rotation curve of each sub-halo to estimate \(V_{\rm circ}\) at the three redshifts. The circular velocity \(V_{\rm circ}\) of a sub-halo is determined as the maximum of its rotation curve at radii greater than the convergence radius of Power et al. (2003), inside which two-body relaxation dominates. The rotation curve of each halo/sub-halo is calculated inclusively, considering both baryon and dark matter particles in the AHF profile file. We have verified that this value is compatible with the one listed in the AHF halo properties.
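In code, this amounts to taking the maximum of the rotation curve outside the convergence radius. A minimal sketch, assuming the AHF profile has already been parsed into arrays of radius and enclosed mass (the actual profile-file format is not reproduced here):

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def vmax_from_profile(r_kpc, m_enclosed, r_conv):
    """Maximum circular velocity from a cumulative mass profile (baryons plus
    dark matter), ignoring radii inside the Power et al. (2003) convergence
    radius r_conv where two-body relaxation dominates."""
    vcirc = np.sqrt(G * m_enclosed / r_kpc)
    return vcirc[r_kpc > r_conv].max()
```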
We calculate the sub-halo \(V_{\rm circ}\) function for each host cluster to determine the median cumulative sub-halo \(V_{\rm circ}\) function at the specified redshifts for both simulations. To this end, we interpolated the individual sub-halo \(V_{\rm circ}\) functions of each host halo at given \(V_{\rm circ}\) values and then calculated the median value of \(N(>V_{\rm circ})\) over all the interpolated profiles. This yields the median cumulative sub-halo \(V_{\rm circ}\) function for the simulated clusters at the respective redshifts.
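A sketch of this interpolation-and-median step, assuming the \(V_{\rm circ}\) values of each cluster's sub-halos are available as arrays:

```python
import numpy as np

def median_vcirc_function(vcirc_per_cluster, vgrid):
    """Interpolate each cluster's cumulative N(>Vcirc) onto a common grid of
    Vcirc values and return the median curve."""
    curves = []
    for v in vcirc_per_cluster:
        v_desc = np.sort(v)[::-1]                  # descending Vcirc
        n_above = np.arange(1, len(v_desc) + 1)    # N(>=Vcirc) at each value
        # np.interp requires increasing x, so feed the arrays reversed
        curves.append(np.interp(vgrid, v_desc[::-1], n_above[::-1],
                                left=float(len(v_desc)), right=0.0))
    return np.median(np.array(curves), axis=0)
```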
Figure 3 illustrates the median cumulative sub-halo \(V_{\rm circ}\) function for both Gadget-X and Gizmo-Simba simulations at the three redshifts \(z=0.394\), \(z=0.194\), and \(z=0\). The left panel shows the function for \(R_{\rm 2D}<0.15R_{\rm 200c}\), while the right panel shows it for \(R_{\rm 3D}<0.15R_{\rm 200c}\). In Figure 3, the shaded grey region represents the 16th-84th percentile range for clusters at redshift \(z=0.394\) in both simulations. The \(V_{\rm circ}\) functions for Gizmo-Simba are higher than those for Gadget-X for both \(R_{\rm 2D}<0.15R_{\rm 200c}\) and \(R_{\rm 3D}<0.15R_{\rm 200c}\). Once more, we notice that the projection effect leads to an approximately twofold increase in the sub-halo count in both simulations. Additionally, we notice a subtle redshift evolution in the cumulative sub-halo \(V_{\rm circ}\) function for both Gadget-X and Gizmo-Simba, which agrees with the result for the sub-halo mass function given the positive correlation between \(V_{\rm circ}\) and \(M_{\rm sub}\), discussed further below.
## 5 \(M_{\rm sub}-V_{\rm circ}\) relation
In this section, we investigate the discrepancy between the concentration of sub-halos in The Three Hundred simulations and the lensing results of M20. Following M20, Ragagnin et al. (2022) and Bahe (2021), we employ the \(M_{\rm sub}\)-\(V_{\rm circ}\) relation as a metric to infer the concentration of sub-halos within the clusters. The sample of selected sub-halos for both Gadget-X and Gizmo-Simba remains unchanged. To derive the \(M_{\rm sub}\)-\(V_{\rm circ}\) relation, we first divide the sub-halo masses into logarithmic mass bins and then calculate the median \(V_{\rm circ}\) in each bin. The relation is then obtained by plotting the bin-centre values of \(M_{\rm sub}\) against the corresponding median values of \(V_{\rm circ}\). This procedure was repeated for both Gadget-X and Gizmo-Simba at the three redshifts considered in our study.
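The binning procedure reduces to a few lines; the sketch below assumes matched arrays of sub-halo masses and circular velocities:

```python
import numpy as np

def msub_vcirc_relation(msub, vcirc, nbins=10):
    """Median Vcirc in logarithmic bins of sub-halo mass, plotted against the
    geometric bin centres to form the M_sub - V_circ relation."""
    edges = np.logspace(np.log10(msub.min()), np.log10(msub.max()), nbins + 1)
    centres = np.sqrt(edges[:-1] * edges[1:])
    idx = np.clip(np.digitize(msub, edges) - 1, 0, nbins - 1)
    med_v = np.array([np.median(vcirc[idx == i]) if np.any(idx == i)
                      else np.nan for i in range(nbins)])
    return centres, med_v
```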
In Figure 4, we present the \(M_{\rm sub}\)-\(V_{\rm circ}\) relation for the sub-halos from the Gadget-X and Gizmo-Simba simulated clusters and compare it with the relation of the observed clusters derived by M20. The left panel of Figure 4 presents the projected results, while the right panel shows the 3D case. We use different colours to show the \(M_{\rm sub}\)-\(V_{\rm circ}\) relations (for both Gadget-X and Gizmo-Simba) at the three redshifts \(z=0.394\), \(z=0.194\) and \(z=0\). When sub-halos follow the distance constraint \(R_{\rm 2D}<0.15R_{\rm 200c}\), both Gadget-X and Gizmo-Simba simulated clusters exhibit consistently lower \(V_{\rm circ}\) values than the fitting line from the observations. Sub-halos physically located at the periphery (i.e. \(R_{\rm 3D}\approx R_{\rm 200c}\)) that enter the 2D selection \(R_{\rm 2D}<0.15R_{\rm 200c}\) cause the simulated \(M_{\rm sub}\)-\(V_{\rm circ}\) relation to shift downward compared to the observed relation. Gizmo-Simba, though, shows slightly higher \(V_{\rm circ}\) than Gadget-X. Furthermore, Gizmo-Simba tends to show a weak redshift evolution, with a higher \(V_{\rm circ}\) at \(z=0.394\) than at \(z=0\), while no redshift evolution is present in Gadget-X. The same conclusions are reached for the case \(R_{\rm 3D}<0.15R_{\rm 200c}\), albeit both simulations move closer to the observational fitting line, in agreement with M20 and our later correlation studies. Even after considering only the sub-halos of the 10 most massive host halos, the discrepancy between the observed and simulated \(V_{\rm circ}\) values persists. We do not see significant differences between different sub-halo masses regarding the distances to the fitting line, although the shaded regions appear larger, thus closer to the fitting line, at higher sub-halo masses. We also emphasise that the disparity between the \(M_{\rm sub}\)-\(V_{\rm circ}\) relations of Gadget-X and Gizmo-Simba is primarily limited to the lower sub-halo mass range; for sub-halos with \(M_{\rm sub}\gtrsim 10^{12}h^{-1}\,{\rm M}_{\odot}\), they exhibit a notable degree of agreement. As noted above, sub-halos in the lower mass range can be unresolved. Based on our analysis of the sampled data, we have not identified any significant deviations or trends in the \(M_{\rm sub}\)-\(V_{\rm circ}\) relation that can be directly attributed to unresolved sub-halos in the lower sub-halo mass range. Nevertheless, we acknowledge that further investigations with higher-resolution simulations are necessary to gain a more complete insight into how
Figure 2: Panels (a) and (b, left) illustrate the unnormalised sub-halo mass functions at \(z=0.394\), demonstrating their dependence on halo mass. The right panels of (a) and (b) display the corresponding normalised cumulative sub-halo mass functions at \(z=0.394\), together with the normalised sub-halo mass functions at \(z=0.194\) and \(z=0\). Further details regarding these plots are provided in sub-captions (a) and (b).
Figure 4: The relationship between sub-halo mass (\(M_{\rm sub}\)) and maximum circular velocity (\(V_{\rm circ}\)) for the 2D projected sub-halos in the left panel and for the 3D ones in the right panel. The black solid line is the observed fitting relation from M20, shown in both panels for reference. The \(M_{\rm sub}\)-\(V_{\rm circ}\) relations of the two simulations are distinguished by distinct line styles, with dash-dot representing Gadget-X and dotted representing Gizmo-Simba.
unresolved sub-halos in this mass range might affect the relation. We also observe that the simulated \(M_{\rm sub}\)-\(V_{\rm circ}\) relation for sub-halos with masses \(M_{\rm sub}\lesssim 10^{11}h^{-1}\,M_{\odot}\), which is the most crucial mass range for GGSL events (Ragagnin et al., 2022), differs consistently from the observations. Conversely, the simulated \(M_{\rm sub}\)-\(V_{\rm circ}\) relation for massive sub-halos, \(M_{\rm sub}>4\times 10^{11}h^{-1}\,M_{\odot}\), moves closer to the observed relation when the baryon parameters are varied, as also noted in Bahe (2021). However, it is worth noting that this range of sub-halo masses is notably higher than that probed by the observations, as highlighted in Ragagnin et al. (2022). Though we have a much larger sample and observe that the lines come closer to the observation line at the most massive sub-halo masses, the discrepancy at the low sub-halo mass end remains unsolved. Note that resolution does not significantly impact the \(M_{\rm sub}\)-\(V_{\rm circ}\) relation, given the checks of Ragagnin et al. (2022) and Bahe (2021) and our examination of the high-resolution The Three Hundred clusters. Therefore, it is still unclear whether this discrepancy can be solved by varying the baryon models; the difference between Gizmo-Simba and Gadget-X suggests that this may be the case. In the following section, we investigate the influence of sub-halo properties on the \(M_{\rm sub}\)-\(V_{\rm circ}\) relation, examining how these properties relate to the difference between the \(V_{\rm circ}\) obtained from the simulation and the one derived from the observed fitting relation. This difference serves as a measure of the goodness of fit to the \(M_{\rm sub}\)-\(V_{\rm circ}\) relation.
### The effects of sub-halo properties on the \(M_{\rm sub}-V_{\rm circ}\) relation
While Gizmo-Simba appears to be somewhat closer to the observed fitting line than Gadget-X, the deviation from the observational results remains substantial. This is particularly pronounced for the projected data, which hold greater importance in the observational context. It is interesting that different baryon models indeed give slightly different results, which suggests that the discrepancy might be cured by better calibrating the baryon models. Therefore, in order to understand the impact of sub-halo properties on the \(M_{\rm sub}-V_{\rm circ}\) relation, we perform a Spearman correlation analysis between the different physical properties of the sub-halos and the residual, for all sub-halos in the \(R_{2D}<0.15R_{200c}\) case. The Spearman test converts the data into ranks and then calculates the correlation between the ranks of the two variables. This analysis not only provides more statistics, but also presents a consistent comparison with the observational result. The residual \(ds\) is calculated for each sub-halo as the distance between the \(V_{\rm circ}\) value obtained from the fitting line at its sub-halo mass, \(V_{\rm circ}^{\rm fit}\), and the one derived from the simulations, \(V_{\rm circ}^{\rm sim}\), normalised to the fitted value:
\[ds=\frac{V_{\rm circ}^{\rm sim}-V_{\rm circ}^{\rm fit}}{V_{\rm circ}^{\rm fit}}. \tag{1}\]
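Evaluating Eq. (1) and the rank correlation is then direct. In the sketch below, `m20_fit` stands in for the observed M20 power-law fitting relation; its coefficients are illustrative assumptions, not the published values:

```python
import numpy as np
from scipy.stats import spearmanr

def m20_fit(msub):
    """Placeholder power law V_circ^fit(M_sub); coefficients are illustrative."""
    return 100.0 * (msub / 1e11) ** 0.3  # km/s

def residual_ds(vcirc_sim, msub):
    """Normalised residual of Eq. (1)."""
    vfit = m20_fit(msub)
    return (vcirc_sim - vfit) / vfit

# Spearman coefficient between a sub-halo property x and ds, as in Fig. 5:
# rho, pvalue = spearmanr(x, residual_ds(vcirc_sim, msub))
```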
Note that we only use sub-halos with \(M_{\rm sub}>1.27\times 10^{11}h^{-1}\,M_{\odot}\) to calculate these correlation coefficients. This is due to the potential influence of simulation resolution on certain sub-halo properties, as sub-halos below this mass roughly consist of fewer than 100 dark matter particles. Apart from identifying halos and their corresponding sub-halos, AHF (Knollmann & Knebe, 2009) also provides many physical properties associated with them. Here, we investigate the quantities expected to have the largest effects on the \(M_{\rm sub}-V_{\rm circ}\) relation. The sub-halo properties analysed with the Spearman correlation test include the Bullock spin parameter, a measure of the spin of the sub-halo based on Bullock et al. (2001), and the Peebles spin parameter, another measure of the sub-halo's spin based on a different definition by Peebles (1969). The dimensionless spin parameter of Peebles (1969) is calculated as \(\lambda=|J|\sqrt{|E|}/(GM^{5/2})\), where \(E\) represents the total energy, \(J\) denotes the angular momentum, and \(M\) stands for the mass of the sub-halo or halo. However, estimating this quantity poses challenges, as it requires determining the total energy \(E\) from simulations and observations. The difficulty arises from the necessity to compute the gravitational potential energy, which, in turn, relies on accurate information about the mass distribution. To overcome this problem, an alternative dimensionless spin parameter was proposed by Bullock et al. (2001). It is calculated as \(\lambda^{\prime}=|J|/(\sqrt{2}MVR)\), where \(|J|\) denotes the angular momentum, \(M\) is the mass of the sub-halo, \(R\) is the virial radius, and \(V\) is the virial circular velocity given by \(V=\sqrt{GM/R}\). The measurements of \(J\), \(M\), and \(V\) are all confined to the virial radius \(R\). This makes this spin definition especially attractive, since it depends solely on the material within \(R\), enabling its calculation for individual components; the radial distribution of the spin is then straightforward to obtain.
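The two spin definitions translate into the following sketch, assuming internally consistent units (masses in \(M_{\odot}\), radii in kpc, velocities in km/s, angular momenta in \(M_{\odot}\,\)kpc km/s):

```python
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def spin_peebles(J, E_tot, M):
    """Peebles (1969): lambda = |J| sqrt(|E|) / (G M^(5/2)); needs the total
    energy E_tot, which requires the gravitational potential energy."""
    return J * np.sqrt(np.abs(E_tot)) / (G * M ** 2.5)

def spin_bullock(J, M, R):
    """Bullock et al. (2001): lambda' = |J| / (sqrt(2) M V R), with
    V = sqrt(G M / R); only the material within R enters."""
    V = np.sqrt(G * M / R)
    return J / (np.sqrt(2.0) * M * V * R)
```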
Furthermore, the analysis takes into account the baryonic mass fraction (\(f_{b}\)), which represents the proportion of baryonic matter (ordinary matter, i.e. gas and stellar content) within the sub-halo. The centre-of-mass offset parameter (COM offset), the distance between the centre of mass of the sub-halo and its density peak, is also considered; it is commonly used as an indicator of the object's dynamical state (see Cui et al., 2017, for example).
In addition, we calculate some galaxy and sub-halo properties that may be directly linked to \(ds\) but are not provided by AHF. The properties included in the analysis are the physical distance between the host halo and the sub-halo (\(R_{\rm 3D}\)), the galaxy's half-stellar-mass radius, the galaxy's stellar age (the mass-weighted mean age of all star particles within the half-stellar-mass radius), the sub-halo half-mass radius, and the galaxy/sub-halo concentrations. As it is very difficult to determine the density profiles of these sub-halos, and therefore to estimate their concentration directly, it is common to use the ratio of two radii, \(R_{80}\) and \(R_{20}\), as a concentration indicator. Here, \(R_{80}\) marks the radius enclosing 80 per cent of the total (stellar) mass of the sub-halo (galaxy). With a similar definition for \(R_{20}\), one expects a more concentrated density profile to have a higher ratio \(R_{80}/R_{20}\).
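This \(R_{80}/R_{20}\) proxy can be computed directly from the particles belonging to a sub-halo, as sketched below (the particle radii and masses are assumed inputs):

```python
import numpy as np

def concentration_r80_r20(r, mass):
    """Ratio of the radii enclosing 80 and 20 per cent of the total mass;
    pass stellar particles for the galaxy, all particles for the sub-halo."""
    order = np.argsort(r)
    r_sorted = r[order]
    frac = np.cumsum(mass[order]) / mass.sum()
    r20 = r_sorted[np.searchsorted(frac, 0.2)]
    r80 = r_sorted[np.searchsorted(frac, 0.8)]
    return r80 / r20
```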
The correlation between the physical properties of the sub-halos (for both Gadget-X and Gizmo-Simba) and the residual \(ds\) is depicted in Fig. 5. It is clear that the two simulations generally agree on the (anti-)correlations between \(ds\) and the sub-halo/galaxy properties. Namely, the higher the spin, COM offset, galaxy/sub-halo half-mass radius and galaxy concentration, and the further the sub-halo from the cluster centre, the larger the distance from the fitted \(M_{\rm sub}-V_{\rm circ}\) relation. We display the Spearman correlation coefficient only for the Bullock spin parameter, which is more robust; the Peebles spin parameter shows a similar trend with a comparable correlation coefficient. At the same time, an older galaxy age (earlier formation) and a higher sub-halo baryonic mass fraction bring the simulated sub-halo \(V_{\rm circ}\) closer to the observed relation. We also examined the correlation trend for the stellar mass fraction, which exhibits a positive correlation with the residual \(ds\) of a magnitude close to that of the baryon fraction. This is not surprising, given that simulated satellites have virtually no gas; the two fractions are therefore expected to be nearly identical. It is worth noting that the most significant sub-halo properties are the galaxy stellar age, the \(R_{3D}\) distance, the sub-halo half-mass radius, and the baryon fraction. The Spearman correlation trends between \(ds\)
Figure 5: The Spearman correlation coefficient between the physical properties of sub-halos and the residuals \(ds\). The residual \(ds\) is computed as the distance from the sub-halo's circular velocity obtained from the simulation to the one predicted by the observed relation. To distinguish between the two simulations, we use red bar plots for the results of Gadget-X and blue bar plots for the results of Gizmo-Simba. The values of the bar plot are the Spearman correlation coefficients between the sub-halo residual \(ds\) and various sub-halo properties. Corr(X,\(ds\)) defines the Spearman correlation coefficient between the physical property \(X\) of sub-halos and the residual \(ds\). This parameter is obtained by rank-ordering the sub-halo property \(X\) and the residual \(ds\), and then calculating the Pearson coefficient based on this rank-order list. The value of this parameter falls between -1 and 1. For the Spearman correlation studies, we chose sub-halos that meet the following criteria: their mass \(M_{\rm sub}>1.27\times 10^{11}h^{-1}\) M\({}_{\odot}\), and their 2D projected distance \(R_{\rm 2D}<0.15R_{\rm 200c}\). The Spearman correlation coefficients are accompanied by superscripts and subscripts denoting upper 84% and lower 16% uncertainties, respectively.
and \(R_{3D}\), as well as between \(ds\) and \(f_{b}\), obtained from our analysis have also been reported in M20 and Bahe (2021), respectively. The positive correlation between \(ds\) and galaxy age suggests that earlier galaxy formation in simulations would provide better agreement, which is also consistent with the recent JWST observations of very high-redshift galaxies (see Naidu et al., 2022; Finkelstein et al., 2022, for example). It is also interesting to note that in the Gizmo-Simba simulation, both the baryon fraction (\(f_{b}\)) and the galaxy age are more strongly positively correlated with \(ds\) than in Gadget-X. The negative correlation between \(ds\) and the sub-halo half-mass radius is easy to understand: the larger the radius, the puffier the sub-halo and, therefore, the lower its \(V_{\rm circ}\). Naively, we would also expect the sub-halo half-mass radius to be anti-correlated with the sub-halo concentration, yet both correlate with \(ds\) with the same (negative) sign. We suspect this is caused by mixing different sub-halo masses: the anti-correlation at fixed sub-halo mass is diluted when all sub-halos are plotted together. Therefore, the expected correlation between sub-halo half-mass radius and concentration is not apparent here6. The same applies to the galaxy concentration parameter; this simple definition of concentration may not serve our purpose well here. We also emphasise that this correlation study is most relevant for sub-halos with \(M_{\rm sub}>1.27\times 10^{11}h^{-1}\,{\rm M}_{\odot}\), as we applied this mass threshold to mitigate resolution-related effects on the physical properties of sub-halos. Hence, no definitive conclusions can be drawn regarding the influence of sub-halo properties on the \(M_{\rm sub}-V_{\rm circ}\) relation for sub-halos below this mass threshold.
In addition to examining the correlations with \(ds\), which highlight the effects of individual sub-halo properties, we also compare the distributions of sub-halo properties between Gadget-X and Gizmo-Simba in Figure 6. The distributions of galaxy/sub-halo properties are presented as 1D probability density functions for both simulations. Through these comparisons, we aim to further understand the model differences between the two simulations and how
Figure 6: The probability density functions (PDFs) of the baryonic mass fraction, galaxy concentration, galaxy half-stellar-mass radius, galaxy stellar age, sub-halo concentration and sub-halo half-mass radius, from top left to bottom right, for both Gadget-X (red dash-dot steps) and Gizmo-Simba (blue solid steps). The distributions are presented on either linear or logarithmic scales, depending on their range. The dotted vertical lines in each plot correspond to the median values of the distributions. For the comparison of sub-halo properties between the two simulations, we selected sub-halos that meet the following criteria: their mass, \(M_{\rm sub}>1.27\times 10^{11}h^{-1}\,{\rm M}_{\odot}\), and their 2D projected distance, \(R_{\rm 2D}<0.15R_{\rm 200c}\).
they impact the \(M_{\rm sub}-V_{\rm circ}\) relation. In Figure 6, only six key sub-halo properties are shown.
First, the sub-halos in the Gadget-X simulated clusters contain a marginally higher amount of baryonic content than those in Gizmo-Simba; note that the distribution difference is larger when low-mass sub-halos are included. The positive correlation illustrated in Figure 5 indicates that as the baryon fraction increases, the simulated \(M_{\rm sub}\)-\(V_{\rm circ}\) relation aligns more closely with the observed fitting relation. The explanation is that tidal stripping preferentially removes dark matter while the stellar mass is largely preserved, producing sub-halos with increased baryonic content and shifting the \(M_{\rm sub}-V_{\rm circ}\) relation towards higher \(V_{\rm circ}\) values (Bahe, 2021; see also Armitage et al., 2019; Bahe et al., 2019; Joshi et al., 2019). However, this result seems to contradict the conclusion that Gizmo-Simba is closer to the fitting line than Gadget-X while their baryon fractions are very similar. We suspect the baryon fraction is only a sufficient condition, not a necessary one, to raise \(V_{\rm circ}\). Similarly to the baryon fraction, the galaxy age distributions of Gadget-X and Gizmo-Simba are also very similar, with a slight excess of young galaxies in Gadget-X. The similarity of these two sub-halo properties between the simulations indicates that differences in other quantities are the key to explaining the differences in the \(M_{\rm sub}-V_{\rm circ}\) relation. These are the sub-halo/galaxy half-mass radii and concentrations: Gizmo-Simba clearly has smaller half-mass radii, and thus higher galaxy and sub-halo concentrations, than Gadget-X. This agrees with Meneghetti et al. (in prep.), who found that the GGSL signal is also higher in Gizmo-Simba than in Gadget-X, albeit still a few times lower than observed. To boost \(V_{\rm circ}\), as well as the GGSL signal, we need even more compact sub-halos/galaxies. To achieve that goal, we suspect that an even earlier galaxy formation may bring the simulations closer to the observations.
### The impact of global cluster properties on the \(M_{\rm sub}-V_{\rm circ}\) relation
The next step in our analysis is to investigate the influence of the global properties of the host halo on the \(M_{\rm sub}-V_{\rm circ}\) relation. This investigation provides hints on whether the clusters selected in the observations are biased. To determine any potential impact, we perform a similar Spearman correlation study between the physical properties of the host halos and the global residual \(\overline{ds}\). Here, the global residual \(\overline{ds}\) of each host halo is computed by averaging the \(ds\) of all its sub-halos, as measured in the previous section.
In Figure 7, we show the coefficients between \(\overline{ds}\) and four selected cluster properties: cNFW, the Bullock spin parameter, the COM offset and the total baryon fraction. The analysis considers the dimensionless concentration parameter (cNFW) of the Navarro-Frenk-White profile (Navarro, 1996), which characterises the concentration of the halo's density profile. The concentration parameter is typically determined by fitting a Navarro-Frenk-White profile to the halo density; it describes how the density of the halo changes with radial distance from its centre. Here, we simply use the concentration parameter provided by AHF, calculated following the approach of Prada et al. (2012), which utilises the maximum circular velocity (\(V_{\rm max}\)) and the circular velocity at the virial radius, defined in terms of the halo's virial mass and radius. All the other halo properties were introduced in the previous section.
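For an NFW profile, the velocity ratio fixes the concentration through \((V_{\rm max}/V_{\rm vir})^{2}=0.216\,c/f(c)\) with \(f(c)=\ln(1+c)-c/(1+c)\), which can be inverted numerically. The sketch below follows this spirit; AHF's exact implementation of the Prada et al. (2012) approach may differ in detail.

```python
import numpy as np
from scipy.optimize import brentq

def cnfw_from_velocities(vmax, vvir):
    """Invert (Vmax/Vvir)^2 = 0.216 c / f(c) for the NFW concentration c.
    Valid when Vmax/Vvir is noticeably above 1 (the ratio reaches its
    minimum of ~1 near c ~ 2.16)."""
    ratio2 = (vmax / vvir) ** 2
    f = lambda c: np.log(1.0 + c) - c / (1.0 + c)
    g = lambda c: 0.216 * c / f(c) - ratio2
    return brentq(g, 2.2, 1000.0)

print(cnfw_from_velocities(120.0, 100.0))  # ratio 1.2 -> c of roughly 10
```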
Besides the cNFW parameter, the two simulations show similar correlations with \(\overline{ds}\). For cNFW, Gadget-X suggests that concentrated halos tend to give a lower \(M_{\rm sub}-V_{\rm circ}\) relation, while Gizmo-Simba suggests the opposite; however, neither shows a strong relation with \(\overline{ds}\). Both the Bullock and Peebles spin parameters correlate negatively with \(\overline{ds}\), indicating that slow-rotating halos tend to be closer to the observed fitting line; we report the more robust Bullock spin parameter in Figure 7. Again, the correlation is not very strong. The highest coefficient is found for the COM offset, which suggests that relaxed halos tend to agree better with the observations. This can be understood as follows: relaxed clusters tend to form earlier (see Mostoghiu et al., 2019, for example, for the relations of the cluster dynamical state with halo formation time and concentration), so their sub-halos have had more time to be stripped; only the core regions remain, which have shorter half-mass radii and higher \(V_{\rm circ}\). However, it is worth noting that one cluster in M20, MACSJ0416, appears to be unrelaxed, which seems to contradict this prediction. We argue, though, that the majority of the sample in M20 (see also Meneghetti et al., 2022) is relaxed, while the simulation sample is more balanced (see De Luca et al., 2021; Zhang et al., 2022).
The positive correlation between \(\overline{ds}\) and the halo baryon fraction is in agreement with the correlation result for sub-halos, as a higher halo baryon fraction is expected to connect with a higher sub-halo baryon fraction. However, the causal direction is unclear: either baryon-rich halos merged into the host halo and brought in more baryons, or the host halo is baryon-rich and its sub-halos can retain their baryons longer. It is natural to think that a higher halo baryon fraction would induce stronger ram pressure, with potentially stronger tidal forces, thus leading to a lower baryon fraction in the sub-halos. However, it has recently been shown that the gas in infalling halos is easily stripped (Haggar et al., 2020), even before they reach the virial radius of the cluster, and the same can happen to infalling groups (Haggar et al., 2023). The baryon fraction of the satellite galaxies is therefore dominated by stars. On the other hand, the stellar component is more concentrated than the dark matter and thus less easily stripped (see Contreras-Santos 2023 in prep.). Therefore, the two high baryon fractions are actually consistent, because stronger stripping will preferentially remove dark matter particles and result in a higher sub-halo baryon fraction.
## 6 Conclusions
The study by M20 examined the gravitational lensing properties of galaxy clusters and their sub-halos, revealing a significant discrepancy between observed clusters and hydrodynamic simulations within the \(\Lambda\)CDM cosmology. Notably, observed clusters exhibited a much higher probability of Galaxy-Galaxy Strong Lensing (GGSL) than simulated clusters. Moreover, they utilized the maximum circular velocities (\(V_{\rm circ}\)) of sub-halos as a metric to assess compactness, finding that sub-halos in observed clusters had higher \(V_{\rm circ}\) values than those in mass-matched clusters from simulations. This suggests that galaxies in observed clusters are more efficient at lensing background sources and are more compact than those in the simulations. In this study, we thoroughly investigated the discrepancy between the simulations and observations discussed in M20.
In our study, we used simulated clusters from The Three Hundred project (Cui et al., 2018, 2022) with masses \(M_{200c}>6.5\times 10^{14}h^{-1}M_{\odot}\). We aimed to compare these simulated clusters with the observations of the three primary reference clusters of M20, which have a median redshift of \(z=0.39\). We selected a sample of 90 host clusters from the Gadget-X simulation and 82 host clusters from the Gizmo-Simba simulation at redshift \(z=0.394\) to allow a fair comparison with the observations of M20. We then expanded our analysis by including host clusters at two additional redshifts, \(z=0.194\) and \(z=0\), for evolutionary studies. The numbers of selected clusters at \(z=0.194\) are 180 for Gadget-X and 169 for Gizmo-Simba; at \(z=0\), Gadget-X and Gizmo-Simba provide 321 and 302 clusters, respectively. Further details about the selected clusters and their sub-halos can be found in Tables 1 and 2. In our analysis, we found the following:
* The cumulative sub-halo mass function shows an overall consistency between the MACSJ0416 and MACSJ1206 clusters from M20 and the Gizmo-Simba simulation with \(R_{\rm 2D}<0.15R_{\rm 200c}\) (Figure 1, left). However, for Gadget-X, agreement between observation and simulation is only found at higher sub-halo masses. The discrepancy at the low-mass end is attributed to a stronger resolution dependence in the baryon model of Gadget-X. The 2D vs 3D comparison of the sub-halo mass function (Figure 1) highlights the substantial impact of projection effects, revealing a two-fold increase in sub-halo numbers in 2D compared to 3D.
* The redshift evolution study of the cumulative sub-halo mass function reveals that while the median halo mass increases with decreasing redshift, the number of sub-halos within massive clusters decreases toward the present time. The analysis of the normalised sub-halo mass function shows a clear redshift evolution, with a greater number of sub-halos expected at earlier times, when they are less concentrated within their host halos. The sub-halo mass function for both Gadget-X and Gizmo-Simba at \(z=0\) is lower (fewer sub-halos) than at \(z=0.394\), indicating a decrease in the number of sub-halos within host halos over time (Figure 2).
* Both the Gadget-X and Gizmo-Simba simulations consistently show lower circular velocities \(V_{\rm circ}\) for sub-halos compared to the fitting line obtained from observations when following the distance constraint \(R_{\rm 2D}<0.15R_{\rm 200c}\). However, Gizmo-Simba exhibits slightly higher \(V_{\rm circ}\) values than Gadget-X. Furthermore, Gizmo-Simba shows a weak redshift evolution, with higher \(V_{\rm circ}\) at \(z=0.394\) than at \(z=0\), unlike Gadget-X.
* The \(M_{\rm sub}\)-\(V_{\rm circ}\) relation for sub-halos with masses \(M_{\rm sub}<10^{11}h^{-1}M_{\odot}\) shows a noticeable difference between observations and simulations. This discrepancy is particularly relevant in the context of GGSL. On the other hand, for massive sub-halos with \(M_{\rm sub}>4\times 10^{11}h^{-1}M_{\odot}\), the simulations lie a little closer to the observed fitting relation, albeit not in perfect agreement, and the significance of the discrepancy decreases due to the limited number of observed sub-halos in this mass range; there, the observed fitting relation of M20 is an extrapolation. The contrasting results obtained from the Gadget-X and Gizmo-Simba simulations indicate the potential to address this issue by fine-tuning the baryon models used in the simulations. However, as shown by Meneghetti et al. (2022), this fine-tuning is difficult to achieve on the mass scales relevant for GGSL without creating inconsistencies with observations at higher masses. For example, simulations with high star formation efficiency and/or lower energy feedback from AGNs produce an excess of galaxies with masses \(\gtrsim 10^{12}\ M_{\odot}\) compared to observations. Meneghetti et al. (in prep.) noted that the Gizmo-Simba simulations exhibit this problem.
* The Spearman correlation analysis between sub-halo/galaxy properties and the residual \(ds\) reveals that both simulations agree on the correlations and anti-correlations between \(ds\) and various sub-halo/galaxy properties (Figure 5). The sub-halo properties that most notably impact the residual \(ds\) are the galaxy stellar age,
Figure 7: Similar to Figure 5, but for the correlation between the cluster properties and global residuals (\(\overline{ds}\)). This correlation can be used to infer the impact of cluster/host-halo physical properties on the \(M_{\rm sub}-V_{\rm circ}\) relationship. Once again, we selected sub-halos from the host clusters that met the following criteria: their mass, \(M_{\rm sub}>1.27\times 10^{11}h^{-1}M_{\odot}\), and their 2D projected distance, \(R_{\rm 2D}<0.15R_{\rm 200c}\).
distance from the cluster's centre (\(R_{3D}\)), sub-halo half-mass radius, and baryon fraction. The Spearman correlation values indicate that a larger sub-halo half-mass radius and a greater distance from the cluster centre are associated with a more significant deviation from the observed \(M_{\rm sub}-V_{\rm circ}\) relation. On the other hand, an older galaxy stellar age (earlier formation) and a higher sub-halo baryonic mass fraction tend to bring the simulated sub-halo \(V_{\rm circ}\) closer to the observed relation.
* Upon comparing the sub-halo properties of Gadget-X and Gizmo-Simba, it is evident that Gadget-X exhibits slightly higher baryonic content in its simulated clusters' sub-halos (Figure 6). Additionally, the distribution of galaxy ages is highly comparable between the two simulations, with a slightly higher proportion of young galaxies in Gadget-X (Figure 6). From the Spearman correlation analysis of sub-halo properties, we would anticipate Gadget-X to be closer to the observational fitting line than Gizmo-Simba; however, we observe the opposite. Specifically, the size and concentration of sub-halos/galaxies are identified as the crucial factors contributing to the differences in the \(M_{\rm sub}-V_{\rm circ}\) relation, with Gizmo-Simba having smaller sizes and higher concentrations than Gadget-X. The differences in sub-halo properties imply that producing even more compact sub-halos/galaxies, possibly through earlier galaxy formation, may improve the agreement between models and observational data.
* The investigation of global host halo properties in relation to the \(M_{\rm sub}\)-\(V_{\rm circ}\) relation reveals that relaxed halos exhibit the strongest alignment with observations (negative correlation between the COM offset and \(\overline{ds}\)). A modest negative correlation between the spin parameters and \(\overline{ds}\) indicates a tendency for slow-rotating halos to be closer to the observed fitting line, albeit with a weak correlation. Additionally, a positive correlation is observed between \(\overline{ds}\) and the halo baryon fraction, suggesting a connection to the baryon fraction of sub-halos (Figure 7).
In conclusion, our analysis of galaxy clusters simulated using both Gadget-X and Gizmo-Simba in the The Three Hundred project reveals a discrepancy when comparing them to observations of M20. Our findings suggest that some contemporary simulations struggle to faithfully replicate the observed abundance and compactness of sub-halos. This disparity may arise from limitations in baryonic modeling, systematic challenges within our simulation approaches, uncertainties in observational data and their modeling, or potentially, limitations inherent to the \(\Lambda\)CDM framework.
It is necessary to note that the comparison done in this paper is based on the AHF halo catalogue instead of the SUBFIND catalogue used in previous studies. We refer to Onions et al. (2012) and Castro et al. (2023) for detailed comparisons and discussions of different sub-halo finders. Both AHF and SUBFIND apply unbinding processes to remove the particles that are not gravitationally bound to the sub-halo; this is inconsistent with the sub-halo mass measured in observations, apart from the projection effect. Nevertheless, using an observation-like sub-halo mass would only increase the discrepancy between simulation and observation, as it would increase the sub-halo masses and shift the simulated \(M_{\rm sub}\)-\(V_{\rm circ}\) relation towards the right, i.e. away from the observed fitting line.
## Acknowledgements
The authors would like to express their sincere gratitude to Frazer Pearce and Elena Rasia for the insightful discussion and helpful comments, which significantly improved the analytical aspect of this work. WC is supported by the STFC AGP Grant ST/V000594/1 and the Atraccion de Talento Contract no. 2020-T1/TIC-19882 granted by the Comunidad de Madrid in Spain. He also thanks the Ministerio de Ciencia e Innovacion (Spain) for financial support under Project grant PID2021-122603NB-C21 and ERC: HORIZON-TMA-MSCA-SE for supporting the LACEGAL-III project with grant number 101086388. Carlo Giocoli thanks the support from INAF theory Grant 2022: Illuminating Dark Matter using Weak Lensing by Cluster Satellites.
The high-resolution simulations were performed at the MareNostrum Supercomputer of the BSC-CNS through The Red Espanola de Supercomputacion grants (AECT-2022-3-0027, AECT-2023-1-0013), and at the DIAL - DiRAC machines at the University of Leicester through the RAC15 grant: Seedcorn/ACTP317
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author. The simulation data is provided by the300 collaboration which can also be accessed upon request on their website.
|
2310.00495 | Characterization of hydrogenated amorphous silicon sensors on polyimide
flexible substrate | Hydrogenated amorphous silicon (a-Si:H) is a material having an intrinsically
high radiation hardness that can be deposited on flexible substrates like
Polyimide. For these properties a-Si:H can be used for the production of
flexible sensors. a-Si:H sensors can be successfully utilized in dosimetry,
beam monitoring for particle physics (x-ray, electron, gamma-ray and proton
detection) and radiotherapy, radiation flux measurement for space applications
(study of solar energetic particles and stellar events) and neutron flux
measurements. In this paper we have studied the dosimetric x-ray response of
n-i-p diodes deposited on Polyimide. We measured the linearity of the
photocurrent response to x-rays versus dose-rate from which we have extracted
the dosimetric x-ray sensitivity at various bias voltages. In particular, low
bias voltage operation has been studied to assess the energy efficiency of
this kind of sensor. A measurement of the stability of the x-ray response versus time
is shown. The effect of detector annealing has been studied. Operation
under bending at various bending radii is also shown. | M. Menichelli, L. Antognini, S. Aziz, A. Bashiri, M. Bizzarri, L. Calcagnile, M. Caprai, D. Caputo, A. P. Caricato, R. Catalano, D. Chilà, G. A. P. Cirrone, T. Croci, G. Cuttone, G. De Cesare, S. Dunand, M. Fabi, L. Frontini, C. Grimani, M. Ionica, K. Kanxheri, M. Large, V. Liberali, N. Lovecchio, M. Martino, G. Maruccio, G. Mazza, A. G. Monteduro, A. Morozzi, F. Moscatelli, A. Nascetti, S. Pallotta, A. Papi, D. Passeri, M. Pedio, M. Petasecca, G. Petringa, F. Peverini, L. Piccolo, P. Placidi, G. Quarta, S. Rizzato, G. Rossi, F. Sabbatini, L. Servoli, A. Stabile, C. Talamonti, J. E. Thomet, L. Tosti, M. Villani, R. J. Wheadon, N. Wyrsch, N. Zema | 2023-09-30T21:29:06Z | http://arxiv.org/abs/2310.00495v1 | # Characterization of hydrogenated amorphous silicon sensors on polyimide flexible substrate
###### Abstract
Hydrogenated amorphous silicon (a-Si:H) is a material having an intrinsically high radiation hardness that can be deposited on flexible substrates like Polyimide. For these properties, a-Si:H can be used for the production of flexible sensors. a-Si:H sensors can be successfully utilized in dosimetry, beam monitoring for particle physics (x-ray, electron, gamma-ray and proton detection) and radiotherapy, radiation flux measurement for space applications (study of solar energetic particles and stellar events) and neutron flux measurements. In this paper we have studied the dosimetric x-ray response of n-i-p diodes deposited on Polyimide. We measured the linearity of the photocurrent response to x-rays versus dose rate, from which we have extracted the dosimetric x-ray sensitivity at various bias voltages. In particular, low bias voltage operation has been studied to assess the energy efficiency of this kind of sensor. A measurement of the stability of the x-ray response versus time is shown. The effect of detector annealing has been studied. Operation under bending at various bending radii is also shown.
Hydrogenated Silicon detectors, Radiation Hardness, Flexible detectors.
## I Introduction
Hydrogenated amorphous silicon (a-Si:H) is a disordered semiconductor that can be deposited by plasma-enhanced chemical vapor deposition (PECVD) from a mixture of Silane (SiH\({}_{4}\)) and hydrogen at typical temperatures of 180-250 \({}^{\circ}\)C. Plasma excitation for the material used to fabricate the sensors described in this paper is performed using VHF at 70 MHz [1]. Due to the low deposition temperature of a-Si:H, it can be easily deposited on flexible materials like Polyimide (PI). The disordered nature of a-Si:H also implies the presence of dangling bonds. In pure amorphous silicon, these dangling bonds lead to a highly defective material. However, the passivation process by hydrogenation reduces the density of defects by several orders of magnitude and also increases the bandgap. This makes a-Si:H a viable material for radiation detector fabrication, for solar cell production and also for the development of electronic devices [2,3].
Another relevant feature of a-Si:H is its excellent radiation resistance. Irradiation tests with photons [4,5], protons [6] and, recently, neutrons [7] have been performed on a-Si:H solar cells and detectors. The results of the photon and proton tests are summarized in [8].
The HASPIDE project [9] is devoted to the development of a-Si:H sensors deposited on PI, having either an n-i-p diode structure or a charge selective contact device structure [10]. The
main applications foreseen for these sensors include beam monitoring for high-energy physics and for clinical beams, and TRansmission Detectors (TRDs) both for electron beams in radiotherapy and for proton accelerators in hadron therapy. Additional interesting fields of application include x-ray beam dose profiling for medical and industrial applications [11], detectors for the monitoring of solar flare events in space missions [12], and neutron detection for industrial, nuclear safeguard and homeland security applications.
In this paper we report the x-ray response of 5 mm x 5 mm and 2 mm x 2 mm n-i-p diodes, both having a thickness of 2.5 \(\upmu\)m. Both sizes of diodes have been characterized in terms of leakage current versus bias voltage. The photocurrent at various dose rates has also been studied in order to extract the radiation sensitivity of the devices at various bias voltages, including very low voltages (0-1 V). Preliminary measurements of annealing effects and of the long-term (about 6 h) stability of the response have been performed, and the operation of bent detectors has also been tested.
## II X-ray response of n-i-p diodes on PI
The radiation sensors for the HASPIDE project have two different configurations: n-i-p diodes and charge selective contact (CSC) devices. N-i-p diodes are formed by a thin (e.g. tens of nm) layer of p-doped a-Si:H, a thicker (1-10 \(\upmu\)m) layer of intrinsic (undoped) a-Si:H and a thin layer of n-type doped a-Si:H. Charge selective contact devices [10] are based on a three-layer structure featuring a thin layer of a metal-oxide with a small activation energy (such as TiO\({}_{2}\)), a thick layer of intrinsic a-Si:H, and a thin layer of a metal-oxide with a large activation energy (such as MoO\({}_{\rm x}\) or WO\({}_{\rm x}\)). In this paper tests on n-i-p diodes are shown. The detailed structure of this device is shown in Fig. 1. On top of a 25 \(\upmu\)m thick PI substrate an Aluminum layer is deposited via sputtering (90 nm thickness). In order to avoid diffusion of Aluminum into the a-Si:H, a layer of 5 nm of Chromium is deposited over the Aluminum using the same technique. On top of this metal stack a layer of n-doped a-Si:H is deposited (ca. 20 nm thickness) via PECVD from a mixture of SiH\({}_{4}\), H\({}_{2}\) and PH\({}_{3}\). To create the n-i-p structure, a 2.5 \(\upmu\)m layer of intrinsic a-Si:H is deposited on top of the n-doped layer (via PECVD). On top of this layer a patterned deposition of p-doped a-Si:H is performed (PECVD of a mixture of SiH\({}_{4}\), H\({}_{2}\) and B\({}_{2}\)H\({}_{6}\)). On the p-doped pattern a deposition of Indium Tin Oxide (ITO) is performed via sputtering; a top view of the resulting detector is shown in Fig. 2.
The x-ray setup used for the measurements described in this paper is shown in Fig. 3. The irradiated sample includes five 2 mm x 2 mm and one 5 mm x 5 mm n-i-p diodes and is glued and bonded (using a copper-based conductive glue) to a PCB frame. This is connected to an interface board linked to a Keithley 2400 SMU (Source Measuring Unit), which is used for biasing the sensor and measuring the output current with a resolution of about 1 pA. The sensor is exposed to x-rays generated by a 10 W x-ray tube from Newton Scientific operating at 50 kV maximum voltage and 200 \(\upmu\)A maximum current [13].
Fig. 2: Picture from the top of the detector array before packaging. One 5 mm x 5 mm device and five 2 mm x 2 mm devices are deposited on PI. The light green areas are the ITO contacts, the grey area is the intrinsic a-Si:H, and the grey area below the green-yellow area is the Cr+Al back contact.
Fig. 1: Layout of the HASPIDE n-i-p diode prototype.
Fig. 4 displays the measurements of dark current at room temperature versus bias voltage for a 2 mm x 2 mm (small diode) and for a 5 mm x 5 mm (large diode) sensor. The power absorbed by the detector at 1 V is 10 pW (large diode) and 1 pW (small diode), while at 10 V bias it is below 10 nW for the large diode and 1 nW for the small diode. The ratio between the leakage currents of the large and the small diode is approximately equal to the ratio of the sensor areas.
In order to measure the dosimetric sensitivity, the detectors have been irradiated with x-rays using a tube voltage of 40 kV and tube currents in the range from about 20 to 200 \(\upmu\)A. The dose rate of the emitted radiation in this setup was measured according to the procedure described in [7]. The large and the small sensors were irradiated in the dose rate range from 0.36 to 3.11 cGy/s and the photocurrents were measured at different values of the detector bias. After subtraction of the leakage current, the photocurrent has been plotted versus the x-ray tube emitted dose rate at various bias voltages; the results are shown in Fig. 5 for the large diode and in Fig. 6 for one small diode. From these figures it is possible to infer the very good linearity of the detector responses in the measured range.
Fig. 5: Net photocurrent versus incident x-ray dose rate for the large (5 mm x 5 mm) device at various bias voltages.
Fig. 6: Net photocurrent versus incident x-ray dose rate for the small (2 mm x 2 mm) device at various bias voltages.
The dosimetric sensitivities and linear regression coefficients have been extracted from the slopes of the linear fits; the calculated quantities are shown in Table I.
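As an illustration of how such slopes can be obtained, a least-squares fit of net photocurrent versus dose rate is sketched below; the numerical values are placeholders, not the measured data of Table I.

```python
# Illustrative sensitivity extraction via a linear fit; the dose rates
# (cGy/s) and net photocurrents (nA) below are placeholder values, not
# the measured data.
import numpy as np

dose_rate = np.array([0.36, 1.0, 1.8, 2.5, 3.11])    # cGy/s
photocurrent = np.array([0.9, 2.5, 4.5, 6.2, 7.8])   # nA

slope, intercept = np.polyfit(dose_rate, photocurrent, 1)
r = np.corrcoef(dose_rate, photocurrent)[0, 1]

# nA per (cGy/s) is charge per dose, i.e. the sensitivity in nC/cGy
print(f"sensitivity = {slope:.2f} nC/cGy, r^2 = {r**2:.4f}")
```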
Fig. 4: Leakage current at room temperature versus bias voltage for the 2 mm x 2 mm device (small diode) and for the 5 mm x 5 mm device (large diode).
Fig. 3: Setup for x-ray testing. The detector shown in Fig. 2 is connected to the SMU through a PCB interface frame. The picture also shows the x-ray tube collimator. The entire setup is enclosed in a climatic chamber for temperature stabilization.
From these measurements it is possible to determine the power consumption under irradiation. For the small diode at 1 V bias, the absorbed power ranges from 54 pW at 0.36 cGy/s to 432 pW at 3.11 cGy/s, while at 8 V it ranges from 2.74 nW to 14.76 nW. For the large diode the photocurrent at 0 V bias has also been measured, with negligible power consumption, while at 8 V the power consumption of the detector ranges from 10.24 nW to 77.36 nW. These data demonstrate the very low power consumption of these sensors.
## III Long-term stability of n-i-p devices measured with x-rays
A longer-term test of the x-ray response of these devices has also been performed: a 5 mm x 5 mm device has been irradiated for 2.1 x 10\({}^{4}\) s at 40 kV tube voltage with a dose rate of 0.4 cGy/s. Fig. 7 shows the raw time profile of the collected photocurrent (red data points); the slow rise of the current in the stabilization phase may be due to thermal effects. For this reason a background dark current, evaluated by linear extrapolation between the dark current measured before and after irradiation, has been subtracted from the raw photocurrent. The result of this correction is shown with the green data points. After the application of this algorithm, we are able to compensate for the thermal effect of the x-ray irradiation.
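A sketch of this correction is given below, assuming arrays of sample times and raw currents together with a known beam-on window; the variable names are illustrative.

```python
# Sketch of the drift correction: the dark current measured before and
# after irradiation is linearly interpolated across the beam-on window
# and subtracted from the raw signal. Variable names are illustrative.
import numpy as np

def net_photocurrent(t, i_raw, t_on, t_off):
    """t, i_raw: sample times and raw currents; (t_on, t_off): beam window."""
    pre, post = t < t_on, t > t_off
    t_ref = np.array([t[pre].mean(), t[post].mean()])
    i_ref = np.array([i_raw[pre].mean(), i_raw[post].mean()])
    baseline = np.interp(t, t_ref, i_ref)   # straight line between the two sides
    return i_raw - baseline
```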
Fig. 7: Raw photocurrent of a 5 mm x 5 mm n-i-p device biased at 8 V and irradiated at a rate of 0.4 cGy/s versus time (red data points), and photocurrent corrected for the increase of background leakage current due to thermal effects (green data points).
## IV Operation of the sensor during bending
In order to test the operation of the device when bent, a bending test has been performed on the sensor using a 615 nm optical laser. The test setup is shown in Fig. 8. The detector is glued on a flexible Polyimide PCB support; the support is fixed on a jig equipped with two jaws which, approaching each other, bend the support and therefore the sensor. A camera observes the support from the side and, using appropriate software (ImageJ [14]), a circle is superimposed on the image of the bent support to calculate the radius of curvature of the bending (Fig. 9). By illuminating the sensor with the laser, the photocurrent has been measured to check for relevant changes due to bending. The laser is mounted on a movable support in order to keep the same distance from the sensor, correcting for the small beam divergence. The measurement started from a flat configuration of the support, and the value measured there was used as the reference for the photocurrent measurements taken during bending. The curvature was then increased to a bending radius of about 8 mm and then decreased back to a flat position. The results for the relative photocurrent versus curvature radius are shown in Fig. 10a. The points on the blue line were taken during the bending radius decrease and the points on the brown line during the bending radius increase, in order to check for degradation or hysteresis; Fig. 10b shows the percentage deviation from the initial photocurrent.
Fig. 8: The setup for the bending test. The sample is glued on a kapton PCB shielded by aluminum. The support is mounted on a jig with two jaws to change the bending radius. The camera and the laser diode (on the top) are also shown.
Fig. 9: The bent sensor on the jig and the curvature radius measurement.
From the measurements we can see that, except for the point at 8 mm bending radius, where the photocurrent is 95% of the flat response, the deviation from the flat response is below 3%, and there is only a small difference between the response during curvature radius decrease (compression) and the response during curvature radius increase (relaxation).
## V Annealing studies
During radiation damage tests with neutrons [7], especially at \(10^{16}\,\rm n_{eq}/cm^{2}\) for n-i-p devices, a very large recovery effect due to annealing has been observed. After the annealing, the irradiated components improved their characteristics not only with respect to the post-irradiation phase, but also with respect to the performance measured before irradiation. For this reason we tested non-irradiated components before and after annealing. The annealing was performed in two phases: a) 12 hours of baking at 100 \({}^{\circ}\)C and b) 24 hours (overall) of baking at 100 \({}^{\circ}\)C. Fig. 11 shows the photocurrent versus time at an irradiation of 2.456 mGy/s, where the time response at the various stages of the annealing test can be seen. Fig. 12 shows the photocurrent versus x-ray dose rate for the component after the first and the second phase of the annealing test. From this graph we notice a large increase in the dosimetric sensitivity (from 1.8 to 18.0 nC/cGy) after the first phase, while after the second phase a small decrease of the response is observed (from 18.0 to 17.4 nC/cGy).
From this test we can infer that annealing can be beneficial not only for irradiated components but also for non-irradiated ones. Although the optimal duration of this annealing is still under optimization, these results suggest it will be below 12 hours.
## VI Conclusions
a-Si:H n-i-p devices of two different sizes (2 mm x 2 mm and 5 mm x 5 mm) on PI, with a 2.5 \(\upmu\)m thick intrinsic layer, have been built in the context of the HASPIDE project, which aims at the construction of flexible planar detectors for radiation flux measurements and neutron detection. These devices have been tested for leakage current, dosimetric sensitivity at various bias voltages, long-term stability of the response, flexibility and annealing. The results show a very good linearity in the tested dose rate range, in addition to a good sensitivity, with the leakage current scaling with the area of the devices. The power requirements of the detectors range from tens of pW to tens of nW, depending on sensor size and bias voltage.
Fig. 11: Photocurrent amplitude signal of an a-Si:H sensor before annealing (orange line), after 12 hours of annealing (blue line) and after 24 hours of annealing (purple line).
Fig. 10: Results from the bending measurements. a) Charge response vs. bending radius; measurements on the blue line are taken during compression and measurements on the brown line during relaxation. b) Deviation from the flat response (100%) vs. bending radius.
The bias voltage is related to the dosimetric sensitivity of the device: the greater the required sensitivity, the higher the bias voltage and therefore the higher the expected power consumption. The long-term behavior of the sensor is sufficiently stable, especially if the thermal effect of the dissipated power is properly compensated for. Flexibility under operation is very good: above 1 cm of bending radius, the photocurrent variations are contained within \(\pm\) 3% of the flat-sensor response. Furthermore, we observe a beneficial effect on devices annealed for 12 h at a temperature of 100 \({}^{\circ}\)C.
## Acknowledgements
The HASPIDE project is funded by INFN through the CSN5 and was partially supported by the "Fondazione Cassa di Risparmio di Perugia" RISAI project n. 2019.0245. F. Peverini has a PhD scholarship funded by the PON program. M. J. Large is supported by the Australian Government Research Training Program (AGRTP) scholarship and the Australian Institute of Nuclear Science (AINSE) Post-Graduate Research Award (PGRA). A. Bashiri is sponsored by Najran University, Saudi Arabia. L. Antognini and J. E. Thomet are supported by the Swiss National Science Foundation (grant number 200021_212208/1).
|
2309.14583 | On the dynamic behavior of the network SIR epidemic model | We study a susceptible-infected-recovered (SIR) epidemic model on a network
of $n$ interacting subpopulations. We analyze the transient and asymptotic
behavior of the infection dynamics in each node of the network. In contrast to
the classical scalar epidemic SIR model, where the infection curve is known to
be unimodal (either always decreasing over time, or initially increasing until
reaching a peak and from then on monotonically decreasing and asymptotically
vanishing), we show the possible occurrence of multimodal infection curves in
the network SIR epidemic model with $n\ge2$ subpopulations. We then focus on
the special case of rank-$1$ interaction matrices, modeling subpopulations of
homogeneously mixing individuals with different activity rates, susceptibility
to the disease, and infectivity levels. For this special case, we find $n$
invariants of motion and provide an explicit expression for the limit
equilibrium point. We also determine necessary and sufficient conditions for
stability of the equilibrium points. We then establish an upper bound on the
number of changes of monotonicity of the infection curve at the single node
level and provide sufficient conditions for its multimodality. Finally, we
present some numerical results revealing that, in the case of interaction
matrices with rank larger than $1$, the single nodes' infection curves may
display multiple peaks. | Martina Alutto, Leonardo Cianfanelli, Giacomo Como, Fabio Fagnani | 2023-09-26T00:13:43Z | http://arxiv.org/abs/2309.14583v2 | # Multiple peaks in
###### Abstract
We study a susceptible-infected-recovered (SIR) epidemic model on a network of interacting subpopulations and analyze the dynamical behavior of the fraction of infected agents in each node of the network. In contrast to the classical scalar SIR model, in which the fraction of infected is known to have a unimodal behavior (decreasing over time, or initially increasing until reaching a peak and then decreasing), we show the possible occurrence of new multimodal behaviors in the network SIR model. We focus on the special case of rank-1 interaction matrices, which model subpopulations of homogeneously mixing agents with different interaction levels. We provide an upper bound on the number of changes of monotonicity of the fraction of infected at the single node level and give sufficient conditions under which such multimodal behavior occurs. We then conduct a numerical analysis revealing that, in the case of more general interaction matrices, the dynamics may exhibit complex behaviors with multiple peaks of infection in each node.
Epidemic models, Susceptible-Infected-Recovered model.
## I Introduction
The pandemic emergency of recent years has generated renewed and widespread interest in compartmental epidemic models, which have proven to be effective tools for forecasting virus spreading and for assisting in the design of containment policies such as social distancing and lockdowns.
The simplest and most popular among these models is the SIR epidemic model, introduced almost one century ago [2] and well studied since then [3, 4, 5]. According to it, a population is split into three categories: the _susceptible_ agents, who have not yet been infected and can catch the disease; the _infected_ agents, who are currently carrying the pathogen and can transmit the disease; and the _recovered_ agents, who have healed from the infection and are forever immune. The model assumes that the rate of new infections is proportional to the product of the numbers of susceptible and infected agents due to pairwise interactions, implicitly assuming homogeneous mixing of the population. The crucial index in the SIR model is the so-called _reproduction number_ \(R(t)\), a time-dependent scalar quantity describing the average number of new infections that an infected individual produces. If \(R(t)<1\), then the fraction of infected agents is decreasing at time \(t\). On the other hand, if \(R(t)>1\), then the fraction of infected agents increases. As \(R(t)\) is monotonically decreasing in \(t\) and eventually becomes less than \(1\), an epidemic wave, as modeled by the SIR dynamics, necessarily has a _unimodal_ behavior. Precisely, if \(R(0)<1\), no spread will occur: the number of infected individuals will be decreasing and will approach \(0\) as time gets large. If instead \(R(0)>1\), the curve of infected will increase up to a maximum value (the _peak_) and will then start to decrease. The unimodal behavior of the SIR dynamics has been shown to hold also for more general interaction mechanisms [6] and is at the basis of several control strategies, including some recently proposed in the context of the COVID-19 pandemic, see, e.g., [7] and [8].
The classical SIR model relies on a number of homogeneity assumptions on the population, regarding the mixing, the aptitude to contract the infection, as well as the time needed to recover, which can hardly be met in realistic scenarios. This has motivated the introduction of networked versions of the SIR model [9, 10, 11, 12, 13]. In such models (referred to as _network SIR models_), each node of the network represents a subpopulation of indistinguishable agents and may describe, depending on the application context, a geographical area or a population category (age, life style, etc.). Interactions between agents of different nodes are encoded in a matrix \(A\): given subpopulations \(i\) and \(j\), \(A_{ij}\) is the rate of new infections in node \(i\) due to the presence of infected agents in node \(j\), and may incorporate the peculiar susceptibility of individuals in \(i\), the rate of interactions among members of the two subpopulations, and, possibly, the effect of targeted containment policies. We shall refer to \(A\) as the _interaction matrix_ of the network SIR model.
Several recent papers [14, 15] use calibrated network SIR models to examine the impact of age-targeted mitigation policies for the COVID-19 pandemic, showing how such policies (even with just two age groups) can outperform uniform intervention policies in terms of both mortality rates and economic productivity.
While most of the studies on the network SIR model are empirical, there are two notable exceptions. In [12] the authors discover a novel network reproduction number that is a decreasing function of time converging to \(0\) and that plays a role equivalent to that of the reproduction number in the scalar SIR model. When this reproduction number is below one, a certain aggregated infection index (a linear combination of the numbers of infected agents in the various subpopulations) decreases. When instead this number is above one, this aggregated infection index first increases and, once the reproduction number becomes smaller than \(1\), starts decreasing to \(0\). However, this aggregated infection index is defined through weights that depend on the initial condition and are possibly time varying, which limits its possible applications. In [16] the authors analyze a network SIR model with symmetric rank-\(1\) interaction matrices, which results from assuming that agents have different interaction levels but that there is no homophily in the society (the analysis is then generalized to include homophily). It is shown that the heterogeneity affects the final size of the outbreak: namely, it is possible to reach herd immunity with an aggregate fraction of infected smaller than in the scalar SIR model. To the best of our knowledge, there is no analysis in the literature of the behavior of the curve of infected at the single nodes, which is relevant to understanding the effectiveness of targeted interventions.
This paper gives a novel theoretical contribution to the understanding of the network SIR model with \(n\) nodes in the special case when the interaction matrix has rank \(1\). This is a relevant case previously analyzed in [14], [16]. Our results are threefold. First, we provide \(n\) invariants of motion and a new aggregated infection index with fixed weights depending only on the matrix \(A\), which always exhibits a unimodal behavior as a function of time, analogously to the curve of infected in the scalar case. Second, we carry out a node-level analysis proving that the curve of infected at every node can undergo at most two changes of monotonicity before the reproduction number gets below \(1\), after which it is monotonically decreasing. Third, we exhibit a class of network SIR models with just two nodes where the curve of infected at one of the two nodes is effectively not unimodal: it initially decreases down to a local minimum, then increases up to a peak, and finally decreases to zero.
We are aware that the phenomenon of multiple waves of infection cannot be totally explained by the heterogeneity introduced by the network, and is also largely determined by the adaptive behavior and endogenous response of agents to the epidemic, as well as by the phenomenon of waning immunity [17]. For example, some papers have studied models that take into account how agents adapt their behavior, resulting in a modification of the parameters of the model at the macroscopic level [6], [18], or in relation to the loss of immunity over time [19], [20].
We also acknowledge the many possible extensions of the network SIR model to more than three compartments, to keep track, for instance, of the many forms of infection and possibly of vaccination [21]-[24]. The possibility of extending our results to more complex models is left as future research.
The rest of the paper is organized as follows. In Section II we describe the network SIR model and summarize the results known in the literature. In Section III we state our main result, which characterizes all the possible behaviors that the dynamics may exhibit at the single node level when the interaction matrix has rank \(1\). In Section IV we provide sufficient conditions for the existence of multimodal behaviors. In Section V we illustrate numerical simulations on more general networks. Finally, in Section VI we summarize our work and discuss future research lines.
### Notation
Here we briefly gather some notational conventions adopted throughout the paper. We denote by \(\mathbb{R}\) and \(\mathbb{R}_{+}\) the sets of real and nonnegative real numbers, respectively, while \(\mathbb{R}_{+}^{n\times n}\) indicates the set of real matrices with dimension \(n\times n\) and nonnegative entries. The all-one vector and the all-zero vector are denoted by \(\mathbf{1}\) and \(\mathbf{0}\), respectively, where their size may be deduced from the context. Given a vector \(x\), we let \(x^{T}\) denote its transpose, and \([x]\) indicate the diagonal matrix whose diagonal entries coincide with the entries of \(x\). For an irreducible nonnegative matrix \(A\), we let \(\lambda_{\max}(A)\) and \(v_{\max}(A)\) denote, respectively, the dominant eigenvalue of \(A\) and the corresponding left eigenvector, which is unique and has positive entries due to the Perron-Frobenius theorem. Inequalities between two vectors \(x\) and \(y\) in \(\mathbb{R}^{n}\) are meant to hold entry-wise: \(x\leq y\) means that \(x_{i}\leq y_{i}\) for every \(i\), \(x<y\) means that \(x_{i}<y_{i}\) for every \(i\), whereas \(x\lneq y\) means that \(x_{i}\leq y_{i}\) for every \(i\) and \(x_{j}<y_{j}\) for some \(j\).
## II Network SIR epidemic model
In this section, we introduce the network SIR epidemic model and gather some known results that will prove useful in the sequel.
We shall model networks as finite weighted directed graphs \(\mathcal{G}=(\mathcal{V},\mathcal{E},A)\), where \(\mathcal{V}=\{1,2,\ldots,n\}\) is the set of nodes, \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of directed links, and \(A\) in \(\mathbb{R}_{+}^{n\times n}\) is a nonnegative weight matrix, to be referred as the interaction matrix, with the property that \(A_{ij}>0\) if and only if there exists a link \((i,j)\) in \(\mathcal{E}\) directed from node \(i\) to node \(j\). A network is referred to as strongly connected if its interaction matrix \(A\) is irreducible.
In a network SIR epidemic model, a set of interacting subpopulations \(\mathcal{V}\) are identified with the nodes of a network \(\mathcal{G}\). For every subpopulation \(i\) in \(\mathcal{V}\), the time-varying variables \(x_{i}\), \(y_{i}\), and \(z_{i}\) represent the fractions of susceptible, infected, and recovered individuals, respectively, so that the sum
\[x_{i}+y_{i}+z_{i}=1\]
remains constant in time. The entries \(A_{ij}\) of the interaction matrix represent the product between the infection rate and the contact frequency between agents of subpopulation \(i\) and agents of subpopulation \(j\). Finally, a positive scalar parameter \(\gamma\) models the recovery rate, which is assumed to be homogeneous across the network.
The network SIR epidemic model with interaction matrix \(A\) and recovery rate \(\gamma\) is then the autonomous system of ordinary differential equations
\[\begin{cases}\dot{x}_{i}=-x_{i}\sum_{j}A_{ij}y_{j}\\ \dot{y}_{i}=x_{i}\sum_{j}A_{ij}y_{j}-\gamma y_{i}\\ \dot{z}_{i}=\gamma y_{i}\,,\end{cases} \tag{1}\]
for every \(i=1,\ldots,n\). Notice that the third equation is redundant since \(z_{i}(t)=1-x_{i}(t)-y_{i}(t)\) for every subpopulation \(i\) in \(\mathcal{V}\) and time \(t\geq 0\). The network SIR epidemic model (1) can then be more compactly rewritten in the following vectorial form
\[\dot{x}=-[x]Ay\,,\qquad\dot{y}=[x]Ay-\gamma y\,, \tag{2}\]
where \(x\) and \(y\) in \(\mathbb{R}^{n}_{+}\) denote the vectors of susceptible and infected individuals, respectively, in all the different subpopulations.
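For illustration (this is not part of the original analysis), a minimal numerical integration sketch of (2) is given below, assuming NumPy and SciPy; the interaction matrix, recovery rate, and initial condition are placeholders.

```python
# Minimal numerical integration of the network SIR model (2); the
# interaction matrix, recovery rate and initial condition are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def network_sir(t, s, A, gamma):
    n = len(s) // 2
    x, y = s[:n], s[n:]
    force = A @ y                        # infection pressure sum_j A_ij y_j
    return np.concatenate([-x * force, x * force - gamma * y])

A = np.array([[0.8, 0.3, 0.1],
              [0.3, 0.9, 0.2],
              [0.1, 0.2, 0.7]])
gamma = 0.4
x0 = np.array([0.99, 0.95, 1.00])
y0 = np.array([0.01, 0.05, 0.00])

sol = solve_ivp(network_sir, (0.0, 60.0), np.concatenate([x0, y0]),
                args=(A, gamma), t_eval=np.linspace(0.0, 60.0, 600))
x_t, y_t = sol.y[:3], sol.y[3:]          # trajectories of x(t) and y(t)
```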
The following result gathers some basic properties of the network SIR model.
**Proposition 1**: _Consider the network SIR epidemic model (2) with irreducible interaction matrix \(A\) and recovery rate \(\gamma>0\). Then,_
(i) _the set_ \(\mathcal{S}=\{(x,y)\in[0,1]^{2n}:x+y\leq\mathbf{1}\}\) _is invariant;_
(ii) _the set of equilibrium points in_ \(\mathcal{S}\) _is_ \[\mathcal{S}^{*}=\{(x^{*},\mathbf{0}):\,x^{*}\in[0,1]^{n}\}\,;\]
(iii) _an equilibrium point_ \((x^{*},\mathbf{0})\) _in_ \(\mathcal{S}^{*}\) _is stable if and only if_ \[\lambda_{\max}([x^{*}]A)<\gamma\,.\]
_Moreover, for every initial condition \((x(0),y(0))\) in \(\mathcal{S}\):_
(iv) _for every_ \(i=1,\ldots,n\)_,_ \(x_{i}(t)\) _is non-increasing for_ \(t\geq 0\)_, and_ \(x_{i}(0)>0\) _if and only if_ \(x_{i}(t)>0\) _for every_ \(t\geq 0\)_;_
(v) _if_ \(\mathbf{0}\lneq y(0)\)_, then_ \(y(t)>\mathbf{0}\) _for every_ \(t>0\)_;_
(vi) _there exists_ \(\mathbf{0}\leq x^{*}\leq x(0)\) _such that_ \[\lim_{t\to+\infty}x(t)=x^{*}\,,\qquad\lim_{t\to+\infty}y(t)=\mathbf{0}\,.\]
See [11] and [12].
In the special case when \(n=1\), so that the interaction matrix reduces to a positive scalar value \(A=\beta>0\) representing the contagion rate, the network SIR epidemic model (2) reduces to the classical scalar SIR epidemic model
\[\dot{x}=-\beta xy\,,\qquad\dot{y}=(\beta x-\gamma)y\,. \tag{3}\]
For the scalar SIR epidemic model (3), a more refined analysis is available. In particular, the following fundamental result is known to hold true.
**Proposition 2**: _For the scalar SIR epidemic model (3), with contagion rate \(\beta>0\), recovery rate \(\gamma>0\), and initial condition \((x(0),y(0))\) such that \(0<x(0)\leq 1-y(0)\leq 1\),_
(i) _the quantity_ \(\beta(x+y)-\gamma\log x\) _is an invariant of motion._
_Moreover, if_ \(y(0)>0\)_, then:_
(ii) _if_ \(\beta x(0)\leq\gamma\)_, then_ \(y(t)\) _is strictly decreasing for_ \(t\geq 0\)_;_
(iii) _if_ \(\beta x(0)>\gamma\)_, then there exists a peak time_ \(\hat{t}>0\) _such that_ \(y(t)\) _is strictly increasing for_ \(t\) _in_ \([0,\hat{t}]\) _and strictly decreasing for_ \(t\) _in_ \([\hat{t},+\infty)\)_;_
(iv) _the limit value_ \(x^{*}=\lim_{t\to+\infty}x(t)\) _is the unique solution of the equation_ \[\beta x^{*}-\gamma\log x^{*}=\beta(x(0)+y(0))-\gamma\log x(0)\,, \tag{4}\] _in the interval_ \([0,\gamma/\beta]\)_._
See [25, Chapter 2.4].
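For a given initial condition, the limit value defined by (4) can be computed numerically; the following is a minimal sketch, assuming SciPy's `brentq` and arbitrary illustrative parameter values.

```python
# Numerical solution of (4) for the limit value x* of the scalar SIR model,
# assuming y(0) > 0; parameter values are arbitrary illustrative choices.
import numpy as np
from scipy.optimize import brentq

beta, gamma = 2.0, 1.0
x0, y0 = 0.95, 0.05

C = beta * (x0 + y0) - gamma * np.log(x0)       # the invariant of motion (i)
f = lambda x: beta * x - gamma * np.log(x) - C  # (4) as a root-finding problem

# f -> +infinity as x -> 0+ and f(gamma/beta) < 0 when y(0) > 0,
# so the unique root lies in (0, gamma/beta]
x_star = brentq(f, 1e-12, gamma / beta)
print(x_star)
```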
We now provide a simple example of a network SIR epidemic model with just two nodes where the curve of infected agents at single node level has multiple peaks, for a range of initial conditions.
**Example 1**: _Consider the network SIR epidemic model (1) with \(n=2\) subpopulations, interaction matrix \(A=\mathbf{1}\mathbf{1}^{\prime}\), unitary recovery rate \(\gamma=1\), and initial condition_
\[y_{1}(0)=1-x_{1}(0)=\varepsilon\,,\quad y_{2}(0)=1-x_{2}(0)=0\,, \tag{5}\]
_for some \(\varepsilon>0\) such that_
\[\frac{1-\varepsilon}{2-\varepsilon}(1-\log(2-\varepsilon))>\varepsilon\,. \tag{6}\]
_Notice that a range of such values of \(\varepsilon\) always exists, since the function \(g(\varepsilon)=\frac{1-\varepsilon}{2-\varepsilon}(1-\log(2-\varepsilon))-\varepsilon\) is continuous in the interval \([0,1]\) and such that \(g(0)=\frac{1}{2}(1-\log 2)>0\)._
Observe that with these initial conditions we have
\[\dot{y}_{1}(0)=x_{1}(0)(y_{1}(0)+y_{2}(0))-y_{1}(0)=-\varepsilon^{2}<0\,, \tag{7}\]
which implies that \(y_{1}(t)\) is strictly decreasing for sufficiently small \(t>0\). We will now show that \(y_{1}(t)\) cannot remain decreasing for all values of \(t\geq 0\), but will necessarily become increasing in a certain time range, before eventually starting to decrease again and vanishing as \(t\) grows large.
Towards this goal, first observe that the aggregate variables \(\overline{x}=x_{1}+x_{2}\) and \(\overline{y}=y_{1}+y_{2}\) satisfy an autonomous scalar SIR epidemic model
\[\dot{\overline{x}}=-\overline{x}\,\overline{y}\,,\qquad\dot{\overline{y}}=( \overline{x}-1)\overline{y}\,. \tag{8}\]
Then, since \(\dot{\overline{y}}(0)=(\overline{x}(0)-1)\overline{y}(0)>0\,,\) Proposition 2(iii) implies that there exists a peak time \(\hat{t}>0\) at which \(\dot{\overline{y}}(\hat{t})=0\), i.e., \(\overline{x}(\hat{t})=1\). This, Proposition 2(i) and (5) imply that
\[\overline{y}(\hat{t})=\overline{x}(0)+\overline{y}(0)-\overline{x}(\hat{t})+ \log\frac{\overline{x}(\hat{t})}{\overline{x}(0)}=1-\log(2-\varepsilon)\,. \tag{9}\]
Since \(\dot{x}_{2}=-x_{2}\overline{y}\) and \(x_{2}(0)=1\), we have that
\[x_{2}(\hat{t})=\exp\Big{(}-\int_{0}^{\hat{t}}\overline{y}(t)\mathrm{d}t\Big{)} =\frac{\overline{x}(\hat{t})}{\overline{x}(0)}=\frac{1}{2-\varepsilon}\,, \tag{10}\]
Figure 1: Numerical simulation of the network SIR epidemic model with \(n=2\) nodes with interaction matrix \(A=\mathbf{1}\mathbf{1}^{\prime}\), recovery rate \(\gamma=1\), and initial condition \(y_{1}(0)=1-x_{1}(0)=0.2\) and \(y_{2}(0)=1-x_{2}(0)=0\) satisfying (5)-(6).
where the second equality follows from integrating the first equation in (8) and the last one follows from (5). It then follows from (9) and (10) that
\[\dot{y}_{2}(\hat{t})=x_{2}(\hat{t})\overline{y}(\hat{t})-y_{2}(\hat{t})=\frac{1- \log(2-\varepsilon)}{2-\varepsilon}-y_{2}(\hat{t})\,. \tag{11}\]
Now, assume by contradiction that \(\dot{y}_{1}(t)\leq 0\) for all \(t\geq 0\). In particular, this would imply that \(y_{1}(\hat{t})\leq y_{1}(0)=\varepsilon\), so that
\[y_{2}(\hat{t})=\overline{y}(\hat{t})-y_{1}(\hat{t})\geq 1-\log(2-\varepsilon)- \varepsilon\,.\]
by (9). Recalling that \(\dot{\overline{y}}(\hat{t})=0\), substituting the above in the righthand side of (11), and using (6) we would then get
\[\dot{y}_{1}(\hat{t})=\dot{\overline{y}}(\hat{t})-\dot{y}_{2}(\hat{t})\geq\frac {1-\varepsilon}{2-\varepsilon}\left(1-\log(2-\varepsilon)\right)-\varepsilon> 0\,.\]
It then follows that there must exist some values of \(t\geq 0\) such that \(\dot{y}_{1}(t)>0\). Together with (7) and the fact that \(\lim_{t\to+\infty}y_{1}(t)=0\) by Proposition 1(vi), this implies that \(y_{1}(t)\) has a multimodal behavior.
In fact, the results in Section III will imply that such behavior is necessarily as illustrated in Figure 1: \(y_{1}(t)\) is strictly decreasing on an interval \([0,\tilde{t}_{1}]\) until reaching a positive local minimum at time \(\tilde{t}_{1}>0\), it is then strictly increasing on an interval \([\tilde{t}_{1},\hat{t}_{1}]\) until reaching a second peak at some time \(\hat{t}_{1}>\tilde{t}_{1}\), and it is eventually strictly decreasing for \(t\geq\hat{t}_{1}\).
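This behavior can be verified numerically; the following minimal sketch (ours, assuming SciPy, and taking \(\varepsilon=0.1\), which satisfies (6)) counts the changes of monotonicity of \(y_{1}\).

```python
# Numerical check of Example 1 (not from the original paper): n = 2,
# A = 1 1', gamma = 1, eps = 0.1, which satisfies condition (6).
import numpy as np
from scipy.integrate import solve_ivp

eps, gamma = 0.1, 1.0

def rhs(t, s):
    x1, x2, y1, y2 = s
    ybar = y1 + y2               # A = 1 1': every node sees the same pressure
    return [-x1 * ybar, -x2 * ybar,
            x1 * ybar - gamma * y1, x2 * ybar - gamma * y2]

sol = solve_ivp(rhs, (0.0, 40.0), [1 - eps, 1.0, eps, 0.0],
                t_eval=np.linspace(0.0, 40.0, 4001), rtol=1e-10, atol=1e-12)
y1 = sol.y[2]
y1 = y1[y1 > 1e-6]               # drop the numerically flat tail
changes = np.sum(np.diff(np.sign(np.diff(y1))) != 0)
print("changes of monotonicity of y_1:", changes)   # expected: 2
```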
Somewhat surprisingly, the network SIR epidemic model considered in this example can actually be interpreted as a scalar SIR epidemic model where the population of agents has simply been split into two equally sized subpopulations with distinct initial conditions. Specifically, the first subpopulation contains all the initially infected people, while the second subpopulation consists entirely of susceptible individuals. The parameters are chosen so that, if the two subpopulations were isolated, the first subpopulation would undergo an exponential decrease to a disease-free state. However, because of the presence of the second subpopulation, the infection can further spread and eventually hit back the first subpopulation, which thus suffers a second wave of infection with a second peak (the first one being at time \(0\)).
In the next section we study the network SIR epidemic model with rank-\(1\) interaction matrices and provide results on the changes of monotonicity of the curve of the infected fraction of individuals in each subpopulation. In particular, we show that, for arbitrary rank-\(1\) interaction matrices, the number of such changes of monotonicity can never exceed two.
## III The network SIR model with rank-\(1\) interaction matrices
In this section, we study the network SIR model in the special case when the interaction matrix \(A\) has rank \(1\), as per the following equivalent assumption.
**Assumption 1**: _The interaction matrix \(A\) satisfies_
\[A=ab^{T}\,, \tag{12}\]
_for two vectors \(a>\mathbf{0}\) and \(b>\mathbf{0}\) in \(\mathbb{R}^{n}\)._
**Remark 1**: _Notice that this case encompasses the one studied in [16] where authors impose the extra condition that \(A\) is symmetric._
Let us define the weighted sums of susceptible fraction of individuals
\[\bar{x}=\sum_{j=1}^{n}b_{j}x_{j}\,, \tag{13}\]
and, respectively, of infected fraction of individuals
\[\bar{y}=\sum_{j=1}^{n}b_{j}y_{j}\,, \tag{14}\]
across the network. Notice that, for rank-\(1\) interaction matrices \(A=ab^{T}\), the network SIR epidemic model's equations (2) can be rewritten as
\[\dot{x}_{i}=-a_{i}x_{i}\bar{y}\,, \tag{15a}\] \[\dot{y}_{i}=a_{i}x_{i}\bar{y}-\gamma y_{i}\,, \tag{15b}\]
for every \(i=1,\ldots,n\). Moreover, let
\[\tilde{x}=\sum_{j=1}^{n}a_{j}b_{j}x_{j}\,, \tag{16}\]
and
\[w_{i}=\tilde{x}-\gamma-a_{i}\overline{y}\,, \tag{17}\]
for every \(i=1,\ldots,n\).
### Invariants of motion and unimodality in the aggregate
We have the following technical result.
**Lemma 1**: _Consider the rank-\(1\) network SIR epidemic model (15a)-(15b). Then,_
\[\dot{\overline{x}}=-\overline{y}\tilde{x}\,, \tag{18}\]
_and_
\[\dot{\overline{y}}=\overline{y}\left(\tilde{x}-\gamma\right)\,. \tag{19}\]
_Moreover,_
\[\ddot{y}_{i}=a_{i}x_{i}\overline{y}w_{i}-\gamma\dot{y}_{i}\,. \tag{20}\]
_for every \(i=1,\ldots,n\)._
See Appendix 1.
Our next result generalizes Proposition 2(i) and determines \(n\) invariants of motion for the network SIR epidemic model with rank-\(1\) interaction matrix.
**Proposition 3**: _Consider the rank-\(1\) network SIR epidemic model (15a)-(15b). Then, for every \(i=1,\ldots,n\), the quantity_
\[h_{i}(x,y)=a_{i}(\overline{x}+\overline{y})-\gamma\log x_{i}\,,\]
_is an invariant of motion._
It follows from (18), (19), and (15a) that
\[\frac{\mathrm{d}}{\mathrm{d}t}h_{i}(x(t),y(t)) = a_{i}(\dot{\overline{x}}(t)+\dot{\overline{y}}(t))-\gamma\frac{ \dot{x}_{i}(t)}{x_{i}(t)}\] \[= -a_{i}\gamma\overline{y}(t)+\gamma a_{i}\overline{y}(t)\] \[= 0\,,\]
thus implying that \(h_{i}(x(t),y(t))\) remains constant along the solutions of the network SIR epidemic model (15a)-(15b).
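As a numerical sanity check of Proposition 3, one can verify along a simulated trajectory that each \(h_{i}\) stays constant; below is a minimal sketch, assuming SciPy, with arbitrary illustrative choices of \(a\), \(b\), \(\gamma\), and the initial condition.

```python
# Numerical sanity check of Proposition 3: each h_i should stay constant
# along trajectories of (15a)-(15b). The vectors a, b, the rate gamma and
# the initial condition are arbitrary illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

a = np.array([1.0, 2.0, 0.5])
b = np.array([0.3, 0.5, 0.2])
gamma = 1.0

def rhs(t, s):
    x, y = s[:3], s[3:]
    ybar = b @ y
    return np.concatenate([-a * x * ybar, a * x * ybar - gamma * y])

x0 = np.array([0.95, 0.90, 0.99])
y0 = np.array([0.05, 0.10, 0.01])
sol = solve_ivp(rhs, (0.0, 30.0), np.concatenate([x0, y0]),
                rtol=1e-10, atol=1e-12)

x, y = sol.y[:3], sol.y[3:]
h = a[:, None] * (b @ x + b @ y) - gamma * np.log(x)  # h_i along the trajectory
print(np.ptp(h, axis=1))   # spread of each h_i: ~0 up to solver tolerance
```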
We can now prove the following result, establishing that, for the network SIR model on rank-\(1\) interaction networks,
the weighted sum of the infected fractions of individuals \(\overline{y}\) defined in (14) has a unimodal behavior.
**Proposition 4**: _Consider the rank-\(1\) network SIR epidemic model (15a)-(15b). Then, for every initial condition \((x(0),y(0))\) such that_
\[\mathbf{0}\lneq x(0)\leq\mathbf{1}-y(0)\lneq\mathbf{1}\,, \tag{21}\]
_we have that:_
(i) \(\tilde{x}(t)\) _is strictly decreasing for_ \(t\geq 0\) _and_ \[\lim_{t\to+\infty}\tilde{x}(t)<\gamma\,; \tag{22}\]
(ii) _if_ \[\tilde{x}(0)\leq\gamma\,, \tag{23}\] _then_ \(\overline{y}(t)\) _is strictly decreasing for_ \(t\geq 0\)_;_
(iii) _if_ \[\tilde{x}(0)>\gamma\,, \tag{24}\] _then there exists a peak time_ \(\hat{t}>0\) _such that_ \(\overline{y}(t)\) _is strictly increasing on_ \([0,\hat{t}]\) _and strictly decreasing on_ \([\hat{t},+\infty)\)_._
(i) By taking the derivative of both sides of equation (16) and substituting equation (15a), we get
\[\dot{\tilde{x}}=\sum_{j=1}^{n}a_{j}b_{j}\dot{x}_{j}=-\overline{y}\sum_{j=1}^{ n}a_{j}^{2}b_{j}x_{j}\,. \tag{25}\]
Now, the rightmost inequality in (21) and Proposition 1(v) imply that \(y(t)>0\) for every \(t>0\), whereas the leftmost inequality in (21) and Proposition 1(iv) imply that there exists some \(i\) in \(\{1,\ldots,n\}\) such that \(x_{i}(t)>0\) for every \(t\geq 0\). It then follows from (25) and the assumption that \(a>\mathbf{0}\) and \(b>\mathbf{0}\) that
\[\dot{\tilde{x}}(t)=-\overline{y}(t)\sum_{j=1}^{n}a_{j}^{2}b_{j}x_{j}(t)\leq-a_ {i}^{2}b_{i}x_{i}(t)\overline{y}(t)<0\,,\]
for every \(t>0\), which implies that \(t\mapsto\tilde{x}(t)\) is strictly decreasing for \(t\geq 0\). Now, let
\[\tilde{x}(\infty)=\lim_{t\to+\infty}\tilde{x}(t)\,.\]
Clearly, if \(\tilde{x}(0)\leq\gamma\), then \(\tilde{x}(\infty)<\tilde{x}(0)\leq\gamma\), so that inequality (22) is satisfied. On the other hand, if \(\tilde{x}(0)>\gamma\) and \(\tilde{x}(\infty)\geq\gamma\), then \(\tilde{x}(t)>\gamma\) for every \(t\geq 0\), so that, by equation (19),
\[\dot{\overline{y}}(t)=\overline{y}(t)\left(\tilde{x}(t)-\gamma\right)\geq 0 \,,\qquad\forall t\geq 0\,.\]
The above would then imply that \(t\mapsto\overline{y}(t)\) is nondecreasing for \(t\geq 0\), so that
\[\lim_{t\to+\infty}\overline{y}(t)\geq\overline{y}(0)>0\,,\]
thus contradicting Proposition 1(vi). Therefore, also when \(\tilde{x}(0)>\gamma\) we must have \(\tilde{x}(\infty)<\gamma\), thus completing the proof of point (i) of the claim.
(ii) If \(\tilde{x}(0)\leq\gamma\), by point (i) we have that \(\tilde{x}(t)<\gamma\) for every \(t>0\). Hence, equation (19) implies that
\[\dot{\overline{y}}(t)=\overline{y}(t)\left(\tilde{x}(t)-\gamma\right)<0\qquad \forall t>0\,,\]
thus showing that \(\overline{y}(t)\) is strictly decreasing for \(t\geq 0\).
(iii) If \(\tilde{x}(0)>\gamma\), by point (i), \(\tilde{x}(t)\) is strictly decreasing for \(t\geq 0\) and
\[\lim_{t\to+\infty}\tilde{x}(t)<\gamma\,.\]
Then, there necessarily exists a time \(\hat{t}>0\) such that \(\tilde{x}(t)>\gamma\) for every \(0\leq t<\hat{t}\), \(\tilde{x}(\hat{t})=\gamma\), and \(\tilde{x}(t)<\gamma\) for every \(t>\hat{t}\). It then follows from equation (19) that \(t\mapsto\overline{y}(t)\) is strictly increasing for \(t\) in \([0,\hat{t}]\) and strictly decreasing for \(t\) in \([\hat{t},+\infty)\), thus proving the claim.
**Remark 2**: _The result in [12] suggests a sort of unimodal behavior of the infection curve analogous to the scalar case. In particular, if \(\lambda_{\max}([x(\tau)]A)<\gamma\) for some \(\tau\geq 0\), then the aggregate curve of infected \(v_{\max}(\tau)^{T}y(t)\) is monotonically decreasing to \(0\). Instead, if \(\lambda_{\max}([x(0)]A)>\gamma\), then for small times \(v_{\max}(0)^{T}y(t)\) grows exponentially fast. However, notice that the aggregate index \(v_{\max}(\tau)^{T}y(t)\) explicitly depends on \(x(\tau)\), and it is not clear that \(v_{\max}(0)^{T}y(t)\) is indeed unimodal. In our case study with rank-\(1\) interaction matrices \(A=ab^{T}\), the dominant left eigenvector of \([x]A\) is precisely \(b\), so the aggregation weights are constant and do not depend on the time instant._
### Dynamic behavior of the single populations
Let
\[\hat{t}=\inf\{t\geq 0:\,\tilde{x}(t)\leq\gamma\}\,, \tag{26}\]
and observe that Proposition 4(i) implies that \(\hat{t}<+\infty\). Also, for every \(i=1,\ldots,n\), let
\[\overline{t}_{i}=\inf\{t\!\geq\!0:\,w_{i}(t)\!\leq\!0\}=\inf\{t\!\geq\!0:\, \tilde{x}(t)\!\leq\!\gamma+a_{i}\overline{y}(t)\}\,, \tag{27}\]
and notice that
\[\overline{t}_{i}\leq\hat{t}\,, \tag{28}\]
and \(\overline{t}_{i}\leq\overline{t}_{j}\) if and only if \(a_{j}\leq a_{i}\). Hence, these time instants can be ordered according to the entries of the vector \(a\).
We now present the following technical results that will prove useful in deriving our main result.
**Lemma 2**: _Consider the rank-\(1\) network SIR epidemic model (15a)-(15b) and an initial condition \((x(0),y(0))\) such that \(\mathbf{0}\lneq y(0)\). Then,_
(i) _for every_ \(t\geq 0\)_,_ \[\dot{w}_{i}(t)<-a_{i}\overline{y}(t)\,w_{i}(t)\,;\]
(ii) \(w_{i}(t)\) _is strictly decreasing for_ \(0\leq t\leq\overline{t}_{i}\)_;_
(iii) _for every_ \(t>\overline{t}_{i}\)_,_ \[w_{i}(t)<0\,;\]
(iv) _if_ \(\dot{y}_{i}(t)=0\) _for some_ \(t\geq\overline{t}_{i}\)_, then_ \(t\) _cannot be a local minimum point of_ \(y_{i}(t)\)_._
See Appendix 2.
We can now state our main result, characterizing the dynamic behavior of the fraction of infected individuals in the single populations of the network SIR epidemic model with rank-\(1\) interaction matrix.
**Theorem 1**: _Consider the rank-\(1\) network SIR epidemic model (15a)-(15b). Let \(i\in\{1,\ldots,n\}\) be such that \(y_{i}(0)>0\). Then,_
(i) \(y_{i}(t)\) _admits at most one local minimum time_ \(\tilde{t}_{i}\geq 0\)_._
_Moreover, if such a local minimum time_ \(\tilde{t}_{i}\) _exists:_
(ii) _it satisfies_ \[0\leq\tilde{t}_{i}\leq\overline{t}_{i}\,, \tag{29}\] _with_ \(\tilde{t}_{i}=\overline{t}_{i}=0\) _if and only if_ \(w_{i}(0)\leq 0\) _and_ \(\dot{y}_{i}(0)>0\)_;_
(iii) _it cannot occur after any stationary local maximum point of_ \(y_{i}(t)\)_._
If \(w_{i}(0)\leq 0\) holds true, then \(\overline{t}_{i}=0\) and Lemma 2(iv) implies that no stationary point \(t\geq 0\) of \(y_{i}(t)\) can be a local minimum point. It follows that the only local minimum point of \(y_{i}(t)\) can possibly be \(\tilde{t}_{i}=0\) (which is the case if and only if \(\dot{y}_{i}(0)>0\)). On the other hand, if \(w_{i}(0)>0\), then the interior extremum theorem and Lemma 2(iv) imply that there cannot be any minimum points of \(y_{i}(t)\) in the interval \([\overline{t}_{i},+\infty)\). This proves point (ii).
We are then left with studying local minimum points of \(y_{i}(t)\) in the interval \([0,\overline{t}_{i})\). Let \(s\geq 0\) be a stationary local maximum point of \(y_{i}(t)\), and let \(u\) in \((s,\overline{t}_{i})\) be a (necessarily stationary) local minimum point of \(y_{i}(t)\). Then, we have that
\[\dot{y}_{i}(s)=\dot{y}_{i}(u)=0\,, \tag{30}\]
and
\[\ddot{y}_{i}(s)\leq 0\,,\qquad\ddot{y}_{i}(u)\geq 0\,. \tag{31}\]
Since \(y_{i}(t)>0\) for all \(t\geq 0\), we have that
\[\begin{array}{rcl}0&\geq&\ddot{y}_{i}(s)/y_{i}(s)\\ &=&a_{i}x_{i}(s)\overline{y}(s)w_{i}(s)/y_{i}(s)-\gamma\dot{y}_{i}(s)/y_{i}(s) \\ &=&\gamma w_{i}(s)\\ &>&\gamma w_{i}(u)\\ &=&a_{i}x_{i}(u)\overline{y}(u)w_{i}(u)/y_{i}(u)-\gamma\dot{y}_{i}(u)/y_{i}(u) \\ &=&\ddot{y}_{i}(u)/y_{i}(u)\\ &\geq&0\,,\end{array} \tag{32}\]
where the first and the last inequalities above follow from (31), the first and the last identities follow from (20), the other two identities from (30) and the fact that, by (15b), \(a_{i}x_{i}(t)\bar{y}(t)=\gamma y_{i}(t)\) when \(\dot{y}_{i}(t)=0\), and the strict inequality in the middle holds true because of Lemma 2(ii). As (32) is a contradiction, this shows that a local minimum point \(u<\overline{t}_{i}\) of \(y_{i}(t)\) cannot follow any stationary local maximum point \(s\geq 0\) of \(y_{i}(t)\), thus proving point (iii).
Finally, to prove point (i), assume by contradiction that there exist two distinct local minimum points \(r<u\) of \(y_{i}(t)\) in the interval \([0,\overline{t}_{i}]\). Then, there would necessarily exist a local maximum point \(s\) of \(y_{i}(t)\) in the interval \((r,u)\). But, since \(s>r\geq 0\), such a local maximum point would also be stationary, thus violating point (iii). Therefore, there cannot exist two distinct local minimum points of \(y_{i}(t)\), thus completing the proof of point (i).
**Remark 3**: _Notice that (29) and (28) imply that, if it exists, the local minimum point of \(y_{i}\) cannot occur after the peak of \(\bar{y}\)._
As a consequence of Theorem 1, we get the following result classifying the possible behaviors of the fraction of infected individuals in the single populations of the network SIR epidemic model with rank-\(1\) interaction matrix. This classification is based on the signs of the two quantities
\[\dot{y}_{i}(0)=a_{i}x_{i}(0)\overline{y}(0)-\gamma y_{i}(0)\,,\qquad w_{i}(0)=\tilde{x}(0)-\gamma-a_{i}\overline{y}(0)\,.\]
**Theorem 2**: _Consider the rank-\(1\) network SIR epidemic model (15a)-(15b). Then, for every \(i=1,\ldots,n\),_
(i) _if_ \[\dot{y}_{i}(0)\leq 0 \tag{33}\] _and_ \[w_{i}(0)\leq 0\,, \tag{34}\] _then_ \(y_{i}(t)\) _is strictly decreasing for_ \(t\geq 0\)_;_
(ii) _if_ \[\dot{y}_{i}(0)>0\,, \tag{35}\] _or if_ \[\dot{y}_{i}(0)=0 \tag{36}\] _and_ \[w_{i}(0)>0\,, \tag{37}\] _then there exists a peak time_ \(\hat{t}_{i}>0\) _such that_ \(y_{i}(t)\) _is strictly increasing on_ \([0,\hat{t}_{i}]\) _and strictly decreasing on_ \([\hat{t}_{i},+\infty)\)_;_
(iii) _if_ \[\dot{y}_{i}(0)<0 \tag{38}\] _and_ \[w_{i}(0)>0\,, \tag{39}\] _then either_ \(y_{i}(t)\) _is strictly decreasing for_ \(t\geq 0\)_, or there exist a local minimum time_ \(\tilde{t}_{i}\) _and a peak time_ \(\hat{t}_{i}\) _such that_ \(0<\tilde{t}_{i}<\hat{t}_{i}\) _and_ \(y_{i}(t)\) _is strictly decreasing on_ \([0,\tilde{t}_{i}]\)_, strictly increasing on_ \([\tilde{t}_{i},\hat{t}_{i}]\)_, and strictly decreasing on_ \([\hat{t}_{i},+\infty)\)_._
(i) If (34) holds true, then \(\overline{t}_{i}=0\). On the other hand, (33) and Theorem 1(ii) rule out the possibility that there exists any local minimum point for \(y_{i}(t)\). Therefore, \(y_{i}(t)\) is strictly decreasing for \(t\geq 0\).
(ii) If condition (35) holds true, then Theorem 1(i) implies that \(\tilde{t}_{i}=0\) is the only local minimum point of \(y_{i}(t)\). On the other hand, if equation (36) and condition (37) both hold true, then it follows from (20) that
\[\ddot{y}_{i}(0)=a_{i}x_{i}(0)\overline{y}(0)w_{i}(0)-\gamma\dot{y}_{i}(0)=\gamma y_{i}(0)w_{i}(0)>0\,,\]
(where the second identity follows from the fact that, by (15b), \(a_{i}x_{i}(t)\bar{y}(t)=\gamma y_{i}(t)\) when \(\dot{y}_{i}(t)=0\)), thus implying that also in this case \(\tilde{t}_{i}=0\) is a local minimum point for \(y_{i}(t)\).
Since, by Proposition 1(vi),
\[\lim_{t\to+\infty}y_{i}(t)=0<y_{i}(0)\,, \tag{40}\]
and, by Theorem 1(i), \(y_{i}(t)\) cannot have any other local minimum point besides \(\tilde{t}_{i}=0\), it follows that there exists a peak time \(\hat{t}_{i}>0\) such that \(y_{i}(t)\) is strictly increasing for \(t\) in \([0,\hat{t}_{i}]\) and strictly decreasing for \(t\) in \([\hat{t}_{i},+\infty)\).
(iii) From condition (38), \(0\) is a nonstationary local maximum point of \(y_{i}(t)\). Since, by Theorem 1(i), \(y_{i}(t)\) can have at most one local minimum point, and (40) holds true, it follows that either \(y_{i}(t)\) is strictly decreasing for \(t\geq 0\) (in case there is no local minimum point) or, if a local minimum point \(\tilde{t}_{i}>0\) exists, then there exists also a peak time \(\hat{t}_{i}>\tilde{t}_{i}\) such that \(y_{i}(t)\) is strictly decreasing on \([0,\tilde{t}_{i}]\), strictly increasing on \([\tilde{t}_{i},\hat{t}_{i}]\), and strictly decreasing on \([\hat{t}_{i},+\infty)\).
The previous result provides a classification of the behavior of the fraction of infected individuals in the single populations; a conceptual illustration is given in Figure 2. In particular, observe that if \(\dot{y}_{i}(0)=0\), then the behavior of the single infected fraction depends only on the sign of the quantity \(w_{i}(0)\); thus, in this case, Theorem 2 provides a tight condition.
In Theorem 2, each condition is considered from the perspective of the individual population \(i\). However, note that if (33) holds for all \(i=1,\ldots,n\), which means that the infected curve in each population starts decreasing, then no \(y_{i}\) will have a local minimum point. Indeed, multiplying both sides of (33) by \(b_{i}\) for all \(i=1,\ldots,n\) and summing over all populations yields \(\tilde{x}(0)\leq\gamma\), and therefore \(w_{i}(0)<0\) for all \(i=1,\ldots,n\).
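The sign tests of Theorem 2 are straightforward to transcribe in code; the following is a minimal illustrative sketch, assuming NumPy, that labels each node of a rank-\(1\) model from its initial data.

```python
# Transcription of the sign tests in Theorem 2 (illustrative). Inputs:
# rank-1 data a, b, recovery rate gamma and initial condition (x0, y0).
import numpy as np

def classify_nodes(a, b, gamma, x0, y0):
    ybar0 = b @ y0                       # aggregate infected, eq. (14)
    xtil0 = (a * b) @ x0                 # weighted susceptibles, eq. (16)
    labels = []
    for i in range(len(a)):
        ydot0 = a[i] * x0[i] * ybar0 - gamma * y0[i]
        w0 = xtil0 - gamma - a[i] * ybar0
        if ydot0 > 0 or (ydot0 == 0 and w0 > 0):
            labels.append("single peak (Theorem 2, case ii)")
        elif ydot0 < 0 and w0 > 0:
            labels.append("decreasing, or minimum then peak (case iii)")
        else:                            # ydot0 <= 0 and w0 <= 0
            labels.append("strictly decreasing (case i)")
    return labels
```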
## IV Sufficient conditions for multimodal behaviors
We now consider a particular class of rank-1 interaction matrices in the form
\[A=\beta\mathbf{1}b^{T}\,, \tag{41}\]
with \(\beta>0\) and \(\mathbf{1}^{T}b=1\). This is a special case of the one studied in Section III, where the vector \(a\) has all equal entries and the entries of \(b\) sum up to \(1\).
**Remark 4**: _This model corresponds to a scenario in which all individuals have the same susceptibility to the disease but different capabilities of spreading the disease. For example, individuals wearing medical masks become infected with the same probability but spread the disease differently. Note that a simple case of this class of matrices is \(A=\mathbf{1}\mathbf{1}^{\prime}\), that is studied in Example 1. The network SIR epidemic model with this interaction matrix is of interest for control applications, as analyzed in [14]. Indeed, even if the dynamics at the nodes are homogeneous and thus the infection spreads at the same rate, it may be convenient to divide agents into multiple groups, for example, to study the effects of differentiated control policies, especially in cases whereby the cost of applying a control and epidemic cost for the diffusion of the disease may differ depending on the age of the agents._
We observe that, for rank-1 interaction matrices in the form (41), the dynamics become
\[\dot{x}_{i}=-\beta x_{i}\bar{y},\qquad\dot{y}_{i}=\beta x_{i}\bar{y}-\gamma y_ {i}\,, \tag{42}\]
for every \(i=1,\cdots,n\), and
\[\dot{\overline{x}}=-\beta\overline{x}\,\overline{y},\qquad\dot{\overline{y}}= \overline{y}\left(\beta\overline{x}-\gamma\right)\,, \tag{43}\]
since \(\overline{x}\) and \(\tilde{x}\) differ only by the constant factor \(\beta\). The next result provides sufficient conditions for multimodal behavior of the infection curve at the single node level, and encompasses Example 1. We first need to define the auxiliary functions
\[g_{i}(\varepsilon)=\frac{1-\varepsilon}{1-b_{i}\varepsilon}\left(1-\frac{\gamma}{\beta}+\frac{\gamma}{\beta}\log\frac{\gamma}{\beta(1-b_{i}\varepsilon)}\right)-\varepsilon\,. \tag{44}\]
Notice that
\[g_{i}(0)=1-\frac{\gamma}{\beta}+\frac{\gamma}{\beta}\log\frac{\gamma}{\beta}\,,\qquad g_{i}(1)=-1\,.\]
As a consequence, when \(\gamma/\beta<1\), \(g_{i}\) admits zeroes in \([0,1]\) and we put
\[\overline{\varepsilon}_{i}=\min\left\{\varepsilon\in[0,1]:\,g_{i}( \varepsilon)=0\right\}. \tag{45}\]
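Since \(g_{i}(0)>0\) when \(\gamma/\beta<1\) and \(g_{i}(1)=-1\), the threshold \(\overline{\varepsilon}_{i}\) can be computed numerically by bracketing the first sign change of \(g_{i}\); below is a minimal sketch, assuming SciPy's `brentq`, with arbitrary parameter values.

```python
# Numerical computation of eps_bar_i, the smallest zero of g_i on [0, 1],
# assuming gamma/beta < 1 so that g_i(0) > 0. Parameter values in the
# example call are arbitrary illustrative choices.
import numpy as np
from scipy.optimize import brentq

def g(eps, b_i, beta, gamma):
    r = gamma / beta
    return ((1 - eps) / (1 - b_i * eps)
            * (1 - r + r * np.log(r / (1 - b_i * eps))) - eps)

def eps_bar(b_i, beta, gamma, grid=100_000):
    eps = np.linspace(0.0, 1.0, grid)
    vals = g(eps, b_i, beta, gamma)
    k = int(np.argmax(vals <= 0))        # first grid point past the sign change
    return brentq(g, eps[k - 1], eps[k], args=(b_i, beta, gamma))

print(eps_bar(b_i=0.3, beta=2.0, gamma=1.0))
```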
**Proposition 5**: _Consider the rank \(1\) network SIR epidemic model (15a)-(15b) with \(a=\beta\mathbf{1}\) and \(\mathbf{1}^{T}b=1\). Consider an agent \(i\in\{1,\ldots,n\}\) and an initial condition \((x(0),y(0))\) that satisfy the following conditions:_
\[x(0)+y(0) =\mathbf{1} \tag{46}\] \[\beta x_{i}(0)\bar{y}(0)-\gamma y_{i}(0) <0\] (47) \[\beta\overline{x}(0) >\gamma\] (48) \[0<y_{i}(0) <\overline{\varepsilon}_{i} \tag{49}\]
Then, there exist a local minimum time \(\tilde{t}_{i}\) and a peak time \(\hat{t}_{i}\) such that \(0<\tilde{t}_{i}<\hat{t}_{i}\) and \(y_{i}(t)\) is strictly decreasing on \([0,\tilde{t}_{i}]\), strictly increasing on \([\tilde{t}_{i},\hat{t}_{i}]\), and strictly decreasing on \([\hat{t}_{i},+\infty)\).
From (47) and (42) it follows that \(\dot{y}_{i}(0)<0\), which implies that \(y_{i}(t)\) is strictly decreasing for sufficiently small \(t>0\). On the other hand, (43) and (48) imply that \(\bar{y}(t)\) is strictly increasing for sufficiently small \(t>0\). Since \(\overline{x}\) and \(\overline{y}\) satisfy the scalar autonomous SIR epidemic model (43), this implies that \(\overline{y}(t)\) has a peak at some time \(\hat{t}>0\) and
\[\overline{x}(\hat{t})=\frac{\gamma}{\beta}\,. \tag{50}\]
From Proposition 2(i) we obtain that the aggregate peak of infection is
\[\begin{split}\overline{y}(\hat{t})&=\overline{x}(0)+ \overline{y}(0)-\overline{x}(\hat{t})+\frac{\gamma}{\beta}\log\frac{\overline{x }(\hat{t})}{\overline{x}(0)}\\ &=1-\frac{\gamma}{\beta}+\frac{\gamma}{\beta}\log\frac{\gamma}{ \beta\overline{x}(0)}\,,\end{split} \tag{51}\]
Figure 2: Conceptual illustration of Theorem 2.
where the second equality follows from (46) and (50). Moreover, (42) and (43) imply that
\[\frac{x_{i}(t)}{x_{i}(0)}=\frac{\overline{x}(t)}{\overline{x}(0)}\,, \tag{52}\]
for every \(i=1,\ldots,n,\) and every \(t\). Therefore, using (50) we obtain
\[x_{i}(\hat{t})=\frac{\gamma}{\beta}\frac{x_{i}(0)}{\overline{x}(0)}\,, \tag{53}\]
We now prove that \(y_{i}(t)\) cannot remain decreasing for all \(t>0\). Assume by contradiction that
\[\dot{y}_{i}(t)\leq 0\,,\qquad\forall t\in[0,\hat{t}]\,. \tag{54}\]
In particular, this implies that \(y_{i}(\hat{t})\leq y_{i}(0)\). This, together with (53) and (51), implies that
\[\begin{split} 0&\geq\dot{y}_{i}(\hat{t})\\ &=\beta x_{i}(\hat{t})\bar{y}(\hat{t})-\gamma y_{i}(\hat{t})\\ &=\gamma\frac{x_{i}(0)}{\overline{x}(0)}\left(1-\frac{\gamma}{\beta}+\frac{\gamma}{\beta}\log\frac{\gamma}{\beta\overline{x}(0)}\right)-\gamma y_{i}(\hat{t})\\ &\geq\gamma\left[\frac{x_{i}(0)}{\overline{x}(0)}\left(1-\frac{\gamma}{\beta}+\frac{\gamma}{\beta}\log\frac{\gamma}{\beta\overline{x}(0)}\right)-y_{i}(0)\right]\,.\end{split}\tag{55}\]
Notice now that, because of the assumptions on \(b\), it holds that \(\overline{x}(0)=1-\overline{y}(0)\leq 1-b_{i}y_{i}(0)\). Since the last expression in (55) is decreasing in \(\overline{x}(0)\) and \(x_{i}(0)=1-y_{i}(0)\), we obtain that
\[0\geq\gamma g_{i}(y_{i}(0)) \tag{56}\]
Notice that, by (48), we necessarily have \(\gamma/\beta<1\). This implies that \(g_{i}(0)>0\) and, together with (56), that \(y_{i}(0)\geq\overline{\varepsilon}_{i}\), thus violating (49). This contradiction implies that \(y_{i}(t)\) cannot remain decreasing for all \(t>0\). The thesis then follows from Theorem 2.
**Remark 5**: _Observe that the set of model parameters and initial conditions that satisfy the assumptions of Proposition 5 is nonempty. To prove this, let us consider a network with \(n\) nodes, interaction matrix as defined in (41) with parameters \(\beta>\gamma\) and_
\[b_{1}<\min\left\{\frac{\gamma}{\beta},1-\frac{\gamma}{\beta}\right\}\,, \tag{57}\]
_Fix an initial condition such that \(y_{1}(0)\in(0,\overline{\varepsilon}_{1})\) and \(y_{j}(0)=0\) for every \(j=2,\ldots,n\). Notice that \(\overline{y}(0)=b_{1}y_{1}(0)\). A straightforward check shows that (47) and (48) are automatically satisfied, imposing no further restriction on \(y_{1}(0)\). Therefore, all assumptions of Proposition 5 are satisfied._
**Remark 6**: _Observe that under the assumptions of Proposition 5 we can provide an upper bound on the stationary infection peak of a node \(i\). Let \(\hat{t}_{i}\) be the peak time of node \(i\), so that \(\dot{y}_{i}(\hat{t}_{i})=0\). By (42), this implies_
\[y_{i}(\hat{t}_{i})=\frac{\beta x_{i}(\hat{t}_{i})}{\gamma}\overline{y}(\hat{t} _{i})\,. \tag{58}\]
_Since the aggregate fraction of infected is bounded above by its peak value, i.e., \(\overline{y}(t)\leq\overline{y}(\hat{t})\) for all \(t\geq 0\), and the fraction of susceptibles is monotonically decreasing, we have_
\[\begin{split} y_{i}(\hat{t}_{i})&\leq\frac{\beta x_{i}(0)}{\gamma}\overline{y}(\hat{t})\\ &=\frac{\beta x_{i}(0)}{\gamma}\left(\overline{x}(0)+\overline{y}(0)-\overline{x}(\hat{t})+\frac{\gamma}{\beta}\log\frac{\overline{x}(\hat{t})}{\overline{x}(0)}\right)\\ &=x_{i}(0)\left(\frac{\beta}{\gamma}-1+\log\frac{\gamma}{\beta\overline{x}(0)}\right)\,,\end{split}\]
_where the first equality follows from (51) and the last one from (50) and (46)._
## V Numerical Simulations
In this section we provide numerical simulations of the network SIR model. We start by considering a network with \(n=5\) nodes and a rank-\(1\) interaction matrix. We can observe from Figure 3 that nodes \(1\) and \(3\) exhibit multimodal behaviors with two changes of monotonicity: for each \(i=1,3\), the fraction of infected \(y_{i}(t)\) is strictly decreasing for \(t\) in \([0,\tilde{t}_{i}]\), strictly increasing in \([\tilde{t}_{i},\hat{t}_{i}]\), and strictly decreasing for \(t\) in \([\hat{t}_{i},+\infty)\). The bottom plot shows the aggregate variable \(\overline{y}(t)\), which has a unimodal behavior with a peak at \(\hat{t}\), as proved in Proposition 4. Note also that, as a consequence of Theorem 1(ii) (cf. Remark 3), \(\tilde{t}_{i}\leq\hat{t}\) for every \(i=1,3\), namely the local minimum of the infected curve in each node cannot occur after the aggregate infection peak.
Figure 4 illustrates numerical simulations of a network SIR model with \(n=2\) and a full-rank interaction matrix. This simulation shows that the dynamics may exhibit more complex behaviors than in the rank-\(1\) case. In particular, the bottom plot of Figure 4 shows that, for suitable initial conditions, the curve of infected in node 1 exhibits two stationary local maxima (with the second higher than the first one), with three changes of monotonicity. We remark that for rank-\(1\) interaction matrices this behavior is ruled out by Theorem 1(iii).
In Figure 5 we illustrate a numerical simulation of a network SIR model with \(n=4\) nodes, where the curves of infected at the single node level exhibit three peaks. Moreover, we observe the presence of delays among the peaks in different nodes. Such simulations show that the limitation of the number of peaks to two is a peculiar feature of rank-\(1\) matrices, while for general interaction matrices, even with a limited number of nodes, this limitation no longer holds. It is also important to note that successive infection peaks can be greater than the previous ones, as observed in Figure 6. This phenomenon is particularly interesting for the design of controls, as it exhibits an epidemic process in which, after one peak of infection, a second larger one may occur.
## VI Conclusion
In this paper, we studied the network epidemic SIR model and provided theoretical results on the dynamics for the special case of rank \(1\) interaction matrices, which finds applications in epidemics control, as shown in [14]. We first proved that, in contrast to the scalar SIR model, in the network SIR model the curve of infected individuals in a single node may exhibit multimodal behaviors, and established sufficient conditions for the occurrence of this phenomenon. We then provided a
linear combination of the fraction of infected in each node that exhibits a unimodal behavior, and characterized all the possible behaviors that the dynamics may exhibit at the single node level, showing that the infection curve in a single node can undergo at most two changes of monotonicity. We then conducted a numerical analysis showing that for more general interaction matrices the network SIR model may exhibit more than two peaks at the single node level. Future work aims to include in the model more complex phenomena, such as waning immunity and the endogenous response of agents to the disease, to fully describe the occurrence of multimodal behaviors in epidemic models.
|
2310.00481 | LANCAR: Leveraging Language for Context-Aware Robot Locomotion in
Unstructured Environments | Navigating robots through unstructured terrains is challenging, primarily due
to the dynamic environmental changes. While humans adeptly navigate such
terrains by using context from their observations, creating a similar
context-aware navigation system for robots is difficult. The essence of the
issue lies in the acquisition and interpretation of context information, a task
complicated by the inherent ambiguity of human language. In this work, we
introduce LANCAR, which addresses this issue by combining a context translator
with reinforcement learning (RL) agents for context-aware locomotion. LANCAR
allows robots to comprehend context information through Large Language Models
(LLMs) sourced from human observers and convert this information into
actionable context embeddings. These embeddings, combined with the robot's
sensor data, provide a complete input for the RL agent's policy network. We
provide an extensive evaluation of LANCAR under different levels of context
ambiguity and compare with alternative methods. The experimental results
showcase the superior generalizability and adaptability across different
terrains. Notably, LANCAR shows at least a 7.4% increase in episodic reward
over the best alternatives, highlighting its potential to enhance robotic
navigation in unstructured environments. More details and experiment videos can be found at http://raaslab.org/projects/LLM_Context_Estimation/ | Chak Lam Shek, Xiyang Wu, Wesley A. Suttle, Carl Busart, Erin Zaroukian, Dinesh Manocha, Pratap Tokekar, Amrit Singh Bedi | 2023-09-30T20:26:00Z | http://arxiv.org/abs/2310.00481v3 | # LANCAR: Leveraging Language for Context-Aware Robot Locomotion in Unstructured Environments
###### Abstract
Robotic locomotion is a challenging task, especially in unstructured terrains. In practice, the optimal locomotion policy can be context-dependent by using the contextual information of encountered terrains in decision-making. Humans can interpret the environmental context for robots, but the ambiguity of human language makes it challenging to use in robot locomotion directly. In this paper, we propose a novel approach, LANCAR, that introduces a context translator that works with reinforcement learning (RL) agents for context-aware locomotion. Our formulation allows a robot to interpret the contextual information from environments generated by human observers or Vision-Language Models (VLM) with Large Language Models (LLM) and use this information to generate contextual embeddings. We incorporate the contextual embeddings with the robot's internal environmental observations as the input to the RL agent's decision neural network. We evaluate LANCAR with contextual information in varying ambiguity levels and compare its performance using several alternative approaches. Our experimental results demonstrate that our approach exhibits good generalizability and adaptability across diverse terrains, achieving at least a 10% improvement in episodic reward over baselines. The experiment video can be found at the following link: [https://raaslab.org/projects/LLM_Context_Estimation/](https://raaslab.org/projects/LLM_Context_Estimation/).
## I Introduction
Reinforcement Learning (RL) is prevalent in robotics, impacting manipulation [1], navigation [2], and locomotion [3] tasks. RL agents learn optimal actions by interacting with environments. However, developing unified RL agents that work robustly in diverse conditions is a critical challenge, as optimal policies in different conditions can diverge substantially [4]. Consider the case of computing robust locomotion policies for a legged robot platform that must operate in an unstructured environment with various terrains. A standard RL-based approach does not always lead to good performance in this scenario, as policy parameters trained on a specific terrain may not translate to another [5]. One possible approach could be letting the robot agent model all environmental properties as a part of its state and then learn a policy that works in all conditions. However, this method is not feasible due to sensor limitations in field of view or range. The inability of sensors to detect certain vital factors can degrade performance, _e.g._ a robot might slip on loose soil without detecting its looseness. Policy generalization and adaptation across diverse terrains is still an open problem [6].
To address these issues, many prior works attempt to extract contextual information from environments via graph-like structures [7] or autoencoders [8] to assist decision-making. However, those methods lack the reasoning and inference ability to handle complicated terrains. A natural idea in these scenarios is to cooperate with humans, who can interpret environmental contextual information for robots thanks to their comprehensive sensing and superior reasoning abilities. For instance, humans can correlate wet grassland with high damping upon observation, a connection robots struggle to make. However, the ambiguity of human language prevents this reasoning ability from being directly used by robots [9], as many similar sentences can be interpreted differently. As a result, it is challenging for robots to make use of human-provided contextual information.
The recent success of Large Language Models (LLMs), with their ability to perform chain-of-thought reasoning [10], logical reasoning [11], and common-sense reasoning [12], offers a promising way to address these problems. An interesting line of work is to incorporate LLMs within RL frameworks so that an RL agent can use an LLM to assist its learning process and make it more sample-efficient. Several prior studies have made attempts along this line, such as using LLMs to predict the reward functions necessary for RL [13] or providing control inputs for robots [14]. These approaches, though intriguing, still do not exploit the full potential of the reasoning abilities of LLMs. We believe that LLMs are better suited as intermediaries, serving as interfaces to translate human language into formats that are more accessible to RL agents. This prevents human instructions from dominating and interrupting the decision-making process of the RL agent, which may cause performance degradation.
**Main Contribution:** In this paper, we investigate the possibility of utilizing LLMs to interpret contextual information from environments (through their reasoning ability) to help RL agents perform robot locomotion tasks. Specifically, we study a quadruped robot navigating various terrains with a human observer helping to interpret the context. The context refers to terrain properties that the robot might not directly perceive. Fig. 1 gives an overview of our approach. In Scenario \(1\), a robot traverses various terrains without any contextual information. Given the complexity of the terrains encountered, robots may fail to develop a generalized policy for all terrains. In Scenario \(2\), the robot traverses the same set of diverse terrains but receives contextual information from human observers, like _"You are entering a grassland right after the rain"_ or _"You are walking on a dry rocky road under
the sun"_. Robots use an LLM-based translator to generate the embedding representing contextual information from the human interpretation, resulting in better decision-making as a supplement to robots' own observations.
Our approach leverages LLMs' generality in interpreting human language. By translating language into index or contextual embeddings, this interpretation helps address the ambiguity of human language and enables robots to adapt to diverse terrains with generalized control policies, taking advantage of the collaboration with human observers.
We justify the advantage of our approach, LANCAR, using the _spot-mini-mini_[15] robot simulator. Results demonstrate that LANCAR outperforms the baseline no-context and indexing approaches, achieving at least a 10% performance improvement in most cases. We summarize our main contributions in this work as follows.
* We propose a novel approach, LANCAR, that incorporates LLMs into RL in robot decision-making that enables robots to utilize external contextual information from human observers or VLMs and generate a more robust and generalized RL policy.
* We propose an LLM-based contextual information translator module that interprets _high-level_, ambiguous, human language contextual information of environments into contextual information embedding accessible for RL agents with the reasoning ability of LLMs.
* We evaluate LANCAR and several alternative approaches with three case studies in _low-level_, _high-level_, and VLM-interpreted contextual information. We validate the efficacy of LANCAR in policy generalizability and adaptability across diverse terrains.
## II Related Works
**Navigation in Complex Environments.** Reliable robot locomotion and navigation in a complex environment is a long-lasting challenging task, as robots must learn an adaptive policy across diverse terrain [16]. NAUTS [17] proposes an approach that makes robots adaptive to off-road diverse terrain with a negotiation process among various navigational policies. VINet [18] uses a novel navigation-based labeling scheme for terrain classification and generalization on both known and unknown surfaces. Ada-Nav [2] presents a novel approach that adaptively tunes policy evaluation trajectory lengths with policy entropy and evaluates this approach in both simulated and real-world outdoor environments. Though many approaches listed above reveal good performance in practice, they mainly train and execute their approach on a terrain dataset with a limited number of terrains. Many of them depend on a semantic approach to terrain adaptation, which may constrain their ability to generalize to diverse real-world terrains.
**Human-robot Collaboration.** Human-robot collaboration investigates interaction strategies between humans and robots, especially in unstructured environments [19]. With the emergence of LLM, human-robot collaboration has received a boost as robots can take advantage of human knowledge and common sense reasoning through LLMs. Ren et al. [20] propose an approach that allows robots to seek help from humans with the assistance of LLM. SayTap [21] uses foot contact patterns as the interface between human commands in natural language and a locomotion controller that outputs _low-level_ commands. RE-Move [22] uses human-language instructions to help robots avoid obstacles. LM-Nav [23] uses LLM and VLM in object detection for robots' navigation tasks. These prior works provide insights into the potential in applying LLM in robot control tasks. However, they do not directly address the challenge of policy generalization across contexts, which is the focus of this paper.
**LLMs in Robotics and RL.** Large Language Model (LLM) [24] and Vision-Language Model (VLM) [25] reveal their ability of In-context Learning (ICL) through zero-shot or few-shot examples given within the context [26].
Fig. 1: **Task Description. We consider two robot learning approaches. The first existing approach (**TOP**) is when the robot moves over diverse terrains with a trained policy without any contextual information. Given the complexity of the terrains, robots may face difficulties in developing a generalized policy to address all types of terrains, leading to the failure of its ultimate policy. Our proposed approach (**BOTTOM**) has the robot moving over diverse terrains with our trained policy and contextual information from human observers or visual-language models. Robots convert this interpreted contextual information into embeddings with LLM. With the extra contextual information added to robots’ own observations, robots could develop better policies with a better understanding of the environment.**
This has stimulated progress in vision-and-language navigation [27]. RT-2 [28, 29] allows manipulators to use Internet-scale data from VLMs in their decision-making by treating the action output sequence as another language. Bucker et al. [30, 31] use LLMs to allow human language to improve manipulator trajectories. Mees et al. [32] utilize an LLM to decompose _high-level_ tasks into sub-tasks for the robot to execute. Fu et al. [33] use LLMs as a driving assistant in autonomous driving tasks. For reinforcement learning, prior works have explored using LLMs in determining reward values [13] and policy explainability in human-AI interaction [34]. However, we notice that there is no prior attempt to use an LLM to interpret the environment's observation space and provide this inference to RL agents; our approach explores this idea within our framework.
## III Methodology
### _Problem Formulation_
We model the problem as an extension of a Partially Observable Markov Decision Process (POMDP), specifically as an implicit POMDP [35]. An implicit POMDP is specified by a tuple, \(\langle\mathcal{S},\mathcal{A},\mathcal{O},\Omega,\mathcal{Z},\mathcal{F}, \mathcal{T},\mathcal{R},\gamma\rangle\), where the state space, \(\mathcal{S}=\mathcal{S}_{ex}\cup\mathcal{S}_{im}\), is composed of both explicitly observable states \(\mathcal{S}_{ex}\) and implicitly observable states \(\mathcal{S}_{im}\). The explicitly observable states are those environmental states directly observable from the agent's onboard sensors. The agent's observation space is \(\mathcal{O}\). The observation function is given by \(\Omega:\mathcal{S}_{ex}\to\mathcal{O}\). The implicitly observable states are the contextual information in the environment that cannot be detected directly by the robots but still affect robots' policies. \(\mathcal{Z}\) denotes the embedding of the contextual information from the implicitly observable states \(\mathcal{S}_{im}\), while the mapping function between the two is \(\mathcal{F}:\mathcal{S}_{im}\to\mathcal{Z}\). Nevertheless, the implicitly observable states (_i.e._ contextual information) can still be inferred by robots through reasoning over visual perception or tactile sensing or through human language feedback. In this work, our primary focus is to recover \(\mathcal{S}_{im}\) using contextual information given in natural language.
The action space \(\mathcal{A}\) represents the agent's feasible actions. The transition function \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\to\mathcal{S}\) characterizes the dynamics of the robot within the environment. The reward function \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) quantifies the reward of the agent's actions. \(\gamma\) is the discounted factor. The agent's policy \(\pi\) is given by \(\pi:\mathcal{O}\times\mathcal{Z}\to\Delta(\mathcal{A})\), while \(\Delta(\mathcal{A})\) represents the probability distribution over the action space. We formulate our problem as a finite horizon optimization. The objective is to find an optimal policy \(\pi^{*}\) that maximizes the expected cumulative reward
\[\pi^{*}=\arg\max_{\pi}\ \mathbb{E}_{\{s_{t},a_{t}\}_{t=0}^{H-1}\sim\pi}\left[\sum_{t=0}^{H-1}\gamma^{t}R(s_{t},a_{t})\right] \tag{1}\]
where \(H\) is the length of the episode.
### _Human-Robot Collaboration Framework_
We introduce a human-robot collaboration framework, LANCAR, as depicted in Figure 2. To recover the contextual information from environments, we introduce the LLM-based context translator module in addition to the standard RL agent. When robots traverse in environments with diverse terrains at time \(t\), robots observe the environment's explicitly observable states \(s_{ex}^{t}\), and the human observer or VLM interprets the implicitly observable states \(s_{im}\) (_i.e._ contextual information). Here, we assume that the contextual information is consistent within one episode so that \(s_{im}\) is fixed. The human observer or VLM provides qualitative descriptions or captions of the contextual information to the LLM translator. The LLM translator extracts the environmental properties from the contextual information and generates the contextual embedding \(z\), which is concatenated with the observations \(o_{t}\) as the input for RL agents. RL agents produce the action \(a_{t}\) using their control policies \(\pi\) given the context-aware inputs and execute the action in the environment. This framework is designed to be compatible with various RL methods, offering flexibility in its implementation.
The framework is designed carefully so that human assistance enhances, rather than disrupts, the agent's performance. While it is hypothesized that well-trained agents are better suited to produce a sequence of continuous decisions, direct human control over such well-trained agents may disrupt the decision-making process, potentially leading to degraded performance. On the other hand, human-provided descriptions translated into state estimates over \(\mathcal{S}_{im}\) can serve as valuable assistance, enabling the agent to improve its overall performance.
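A minimal sketch of this context-aware input construction is given below. The observation and action dimensions follow the simulator description in Section IV-A, the linear policy form anticipates the ARS agent of Section III-D, and the context dimension, the helper names and the weight initialisation are our own assumptions:

```python
import numpy as np

obs_dim, ctx_dim, act_dim = 16, 8, 14     # obs/act sizes follow Sec. IV-A; ctx_dim is illustrative
rng = np.random.default_rng(0)
M = rng.normal(scale=0.01, size=(act_dim, obs_dim + ctx_dim))  # linear policy weights

def policy(obs: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Map the concatenated (observation, context embedding) to joint-angle targets."""
    return M @ np.concatenate([obs, z])

o_t = rng.normal(size=obs_dim)                 # onboard sensor observation
z = np.array([1, 0, 0, 0, 0, 0, 0, 1.0])       # e.g. "very low friction, very high damping"
a_t = policy(o_t, z)
print(a_t.shape)                               # (14,) desired joint angles
```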
### _LLM-based Context Translator_
The LLM plays a central role in our framework by utilizing prompts to convert human-interpreted or VLM-perceived contextual information from the environment into contextual embeddings accessible to RL agents. We design our context translator module based on In-context Learning (ICL), which allows us to utilize the reasoning ability of the LLM through zero-shot or few-shot examples. ICL provides us with an interpretable interface to communicate with the LLM, without any training procedure, that imitates the human reasoning and decision-making process [26].
Fig. 2: **Context-Aware Reinforcement Learning Robot Locomotion. Our framework introduces a context translator aside from the standard RL framework. For the environment with diverse terrains, the agent gets the explicitly observable state as the observation, and the human observer (or VLM) perceives the context information as the implicitly observable state. The human observer (or VLM) interprets the contextual information to the LLM translator. The LLM translator extracts the environmental properties from the contextual information and generates the contextual embedding, which is concatenated with the observations as the input for RL agents. RL agents produce the action using their control policies given the context-aware inputs and execute the action in the environment.**
In our framework, we provide descriptive sentences of contextual information to the LLM, along with prompts with examples of inputs and outputs that the LLM may encounter during training.
An example prompt is presented in Figure 3. It consists of several sections. In the first two sections, we provide our task descriptions and a detailed description of each environmental property of interest as contextual information. To map qualitative descriptions of contextual information into embeddings, we provide a set of multiple-choice questions in the last section of the prompt. Each question is related to a single environmental property and the LLM must choose among several pre-defined qualitative descriptive words. The answers are mapped into concatenated one-hot vectors, the contextual embeddings generated by the LLM for the RL agents. For example, if the contextual information describes the terrain with two properties, saying _This terrain has very low friction and very high damping_, then _very low friction_ maps into a one-hot vector \([1,0,0,0]\), _very high damping_ maps into another one-hot vector \([0,0,0,1]\), and the contextual embedding of this terrain is \([1,0,0,0,0,0,0,1]\).
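The answer-to-embedding mapping can be sketched as follows. The qualitative words are those used in the paper, while the property list, the six-level resolution (the worked example above uses four levels, Section IV-B3 uses six) and the helper `build_embedding` are our own illustrative choices:

```python
LEVELS = ["none", "very low", "low", "medium", "high", "very high"]

def one_hot(level: str) -> list[int]:
    v = [0] * len(LEVELS)
    v[LEVELS.index(level)] = 1
    return v

def build_embedding(answers: dict[str, str]) -> list[int]:
    # One one-hot block per terrain property, concatenated in a fixed order.
    props = ["restitution", "friction", "stiffness", "damping"]
    return [bit for p in props for bit in one_hot(answers[p])]

emb = build_embedding({"restitution": "none", "friction": "very low",
                       "stiffness": "medium", "damping": "very high"})
print(emb)  # 4 properties x 6 levels = 24-dimensional embedding
```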
We note that our contextual approach, leveraging human-generated prompts and responses, enables the LLM to effectively bridge the gap between natural language descriptions and actionable state information, a key aspect of our framework's success in recovering contextual information from unobservable states of environments.
### _Reinforcement Learning Agent_
In this paper, we employ Augmented Random Search (ARS) [36] as the reinforcement learning algorithm for the robot control agent. Both ARS and its ancestor approach, BRS, use a finite-difference approach, which approximates the true gradient through derivatives sampled in \(2N\) directions: the policy parameters are perturbed within the range \([-\delta,+\delta]\) to assess the resulting rewards, where \(\delta\) is randomly generated from a normal distribution. Compared with BRS, ARS further improves the performance of RL policies through normalization and by using only the top-performing directions to update the network parameters. In addition, ARS uses a linear policy, instead of a non-linear policy such as a neural network, to simplify the RL algorithm.
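A compressed sketch of one such update is given below, using the hyperparameters reported in Section IV-A (learning rate 0.03, 16 sampled directions, noise amplitude 0.05). Here `rollout` is a placeholder for an episode that returns the total reward under a given policy matrix, and the top-direction rule follows the published ARS algorithm, which may differ in detail from the exact variant used in the paper:

```python
import numpy as np

def ars_step(M, rollout, n_dirs=16, noise=0.05, lr=0.03, top_k=8):
    """One Augmented Random Search update of a linear policy matrix M."""
    deltas = [np.random.randn(*M.shape) for _ in range(n_dirs)]
    r_plus = np.array([rollout(M + noise * d) for d in deltas])
    r_minus = np.array([rollout(M - noise * d) for d in deltas])
    # Keep the directions whose best-case reward is largest.
    order = np.argsort(np.maximum(r_plus, r_minus))[::-1][:top_k]
    sigma = np.concatenate([r_plus[order], r_minus[order]]).std() + 1e-8
    step = sum((r_plus[k] - r_minus[k]) * deltas[k] for k in order)
    return M + (lr / (top_k * sigma)) * step
```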
## IV Empirical Results and Discussion
In the experiments, we aim to answer the following two questions regarding performance and policy generalization. The first question is: _Does external contextual information improve the performance of the agent when the agent is operating in diverse conditions?_ To answer this question, we designed a series of experiments and compared our framework with alternative approaches that include or exclude contextual information. In these experiments, each context is a different terrain. The second question we investigate is: _Is the LLM effective at retrieving contextual information, and thereby at aiding robot locomotion, when the given input is high-level, open-ended, and ambiguous?_ To answer this question, we use _low-level_, precise, and organized human interpretations of contextual information in training but use _high-level_, vague, and unorganized contextual information from actual human observers in evaluation.
We use GPT-4 [37] as our LLM model. As an advanced case study, we also apply VLM to generate image captions of scenes observed by the robot to act as an automated proxy for human language. We consider that the contextual information generated by VLM in this circumstance imitates the actual operation scenario of robots in the real world.
### _Environments_
We use a quadruped robot locomotion simulator, _spot-mini-mini_[15], built in PyBullet [38]. The mission of the robot is to move straight as far as possible within a limited time. We set the time length of an episode to be \(5,000\) steps. The observable state space is given by the touch sensors and includes the values of joint angles, velocities, and torques for all motors. The state is \(16\)-dimensional. The extra context, depending on the training method covered in Section IV-B, is also provided to the agent. The action space is the desired joint angle for each of the fourteen joints. The reward function combines the robot's traveling distance \(d_{x}\) along \(x\), a penalty \(d_{y}\) for deviating from the \(y\) axis, and its instantaneous posture \(p\), and is defined as \(J=d_{x}+0.03d_{y}+10p\). For the ARS agent, the learning rate is \(0.03\). The number of samples for \(\delta\) is \(16\). The noise amplitude applied in the exploration is \(0.05\).
Given the large number of training examples, it is impractical to manually generate contextual information each time. Instead, we use the LLM to automatically generate contextual information from the terrains during training as well.
Fig. 3: **An Example Prompt for LANCAR.** The prompt for LANCAR includes four parts. The first is the _high-level_ task description for the LLM. The second is the details and intuitive examples of terrain properties of interest. The third part is the examples given for in-context learning; the input given here is a _low-level_ context of terrain, and the outputs are determined by answering multiple-choice questions. The last part is the actual input to the LLM, from which the embedding is obtained.
In the training phase, we ask the LLM to generate detailed, _low-level_ qualitative descriptions of pre-defined properties of the terrain. Specifically, we first randomly generate samples with parameter values quantitatively describing the properties of the simulated environment given in Table I. We then design a prompt that provides the sampled parameter values for training environments to the LLM. We describe the value of these properties with qualitative words: _None_, _Very Low_, _Low_, _Medium_, _High_ and _Very High_. Given the true parameter values, we ask the LLM to generate a _low-level_ contextual description. A sample description generated during training is: _This environment has no restitution when collision, low friction, very high stiffness level, and no damping._
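One way to verbalise the sampled parameters, consistent with the qualitative words above, is sketched below; the equal-width bin edges are our assumption, since the paper does not state how parameter values map to words:

```python
import numpy as np

WORDS = ["no", "very low", "low", "medium", "high", "very high"]

def qualitative(value: float, lo: float, hi: float) -> str:
    """Map a sampled parameter value to one of six qualitative words."""
    if value <= lo:
        return WORDS[0]
    edges = np.linspace(lo, hi, len(WORDS))        # assumed equal-width bins
    return WORDS[min(int(np.searchsorted(edges, value)), len(WORDS) - 1)]

friction = float(np.random.uniform(0.0, 1.0))      # sampled as in Table I
print(f"This environment has {qualitative(friction, 0.0, 1.0)} friction.")
```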
In the evaluation phase, we conduct three case study experiments with increasing difficulty levels, i.e., increasing vagueness of the contextual information provided to the LLM to examine the reasoning ability of our approach. Specifically, we evaluate the following three types of contexts (in increasing order of open-endedness):
#### IV-A1 Low-Level Context
The contextual information provided by human observers during evaluation gives detailed qualitative descriptions of environmental properties, the same as those given in the training phase. Three evaluation terrain descriptions we used are provided in Table II.
#### IV-A2 High-Level Context
The contextual description provided by human observers is _high-level_, open-ended, vague, and descriptive of the environmental conditions, rather than the environmental properties. Three evaluation terrain descriptions we used are provided in Table III.
#### IV-A3 VLM-Interpreted Context
Instead of a human providing the description to the LLM, we use a VLM. The VLM first generates captions given an image representing the scene. The contextual description generated by the VLM depicts the visual observations at a _high level_. This caption is then provided to the LLM to generate the embedding. Since the contextual information undergoes a two-stage interpretation before generating the embedding, distortion in understanding the contextual information may contribute to a large deviation between the perceived and actual contextual information, which makes this a much more challenging evaluation scenario. In this paper, we use BLIP [39] as our VLM model to generate the image captions. The image inputs and the generated captions we used are provided in Figure 4. From the image captions, we find that the contextual information captured by the current version of the VLM is often distorted by irrelevant information: for example, the head of the spot robot shown in both Fig. 4(a) and 4(b) is misinterpreted as a _yellow suitcase_ and taken as the subject of the caption, and minor information, like the shadow in Fig. 4(a), is emphasized over terrain information. All these factors raise more challenges for contextual information interpretation in our framework.
### _Approaches Compared_
We conduct a series of experiments on our approach, LANCAR, and some baseline approaches, to evaluate the effect of the usage and design of contextual information embedding strategies. We evaluate the following approaches:
#### IV-B1 No-Context
The RL agent only uses environmental observation in their decision-making. No contextual information is used. The decision does not rely on the LLM output. It will be used as the baseline of the experiment.
#### IV-B2 Indexing
The context is encoded as a one-hot vector. The RL agent labels all terrains encountered during training with a unique index. During evaluation, the output of the LLM is converted into a one-hot vector which is then combined with the environmental observation as input to the RL agent. The one-hot vector labels the \(i\)-th element of the vector as one for the \(i\)-th terrain.
#### IV-B3 LANCAR
This is the approach we propose in this paper. The LLM generates contextual embeddings by interpreting human observers or VLM in the way presented in Section III-C, and the RL agent incorporates contextual embeddings with environmental observation in their decision-making. The contextual embeddings are represented by a vector composed of six one-hot vectors. Each one-hot vector quantifies properties in Table I into six intervals.
### _Results_
#### IV-C1 Case Study: Low-Level Contextual Description
The _low-level_ contextual descriptions provided in evaluation are given in Table II.
| ID | Contextual Information |
| :-: | :-- |
| A | This environment has no restitution when collision, very high friction, and no damping. |
| B | This environment has no restitution when collision, very low friction, and no damping. |
| C | This environment has high restitution when collision, very high friction, and very high damping. |

TABLE II: Low-Level Contextual Information for Case Study
| Property | Value Range |
| :-- | :-: |
| Height-Field | True/False |
| Restitution | \([0,0.2]\) |
| Lateral / Horizontal Friction | \([0,1]\) |
| Rolling Friction | \([2\times 10^{4},1.6\times 10^{5}]\) |
| Stiffness | \([0,1]\) |
| Damping Coefficient | \([0,0.5]\) |

TABLE I: Properties for Training Terrains
Fig. 4: VLM-interpreted Contextual Information for Case Study.
| ID | Contextual Information |
| :-: | :-- |
| D | The spot is walking on a grassland after last night's rain, though it's sunny now. |
| E | The spot is walking on a mountain road covered by ice. It's snowy now. |
| F | The spot is walking on the beach near the sea under the sun. |

TABLE III: High-Level Contextual Information for Case Study
Context A is the normal terrain, Context B has low friction, and Context C has high damping. In general, low friction and high damping are more challenging for RL.
Table IV shows the evaluation results in terms of episodic reward over all three _low-level_ contexts. We find that LANCAR outperforms the other two approaches. The indexing approach has slightly worse performance than LANCAR in general, but its performance is not stable across all terrains, as its episodic reward is close to LANCAR in Context C (high damping), but much worse in Context B (low friction). This indicates its limited adaptation ability over variation in the terrain and contextual information.
#### IV-C2 Case Study: High-Level Contextual Description
Our second case study uses _high-level_ contextual descriptions given in Table III. This is a harder problem than the _low-level_ description as it relies on the reasoning ability of the LLM translator. We denote Context D as the contextual information for _Moist Grassland_, Context E for _Snowy Mountain Road_, and Context F for _Sunny Beach_. All three terrains are more difficult than those given in the _low-level_ context case study, in terms of surface stiffness, damping, and friction.
Table V shows the evaluation results in terms of episodic reward over all three _high-level_ contextual descriptions. We find that LANCAR and indexing have close performance in Context D (_Moist Grassland_) and Context F (_Sunny Beach_), while indexing performs the worst in Context E (_Snowy Mountain Road_), marking its failure in addressing low friction terrains. The No-Context baseline, on the other hand, has a stable performance over all contexts but is consistently worse than LANCAR.
#### IV-C3 Case Study: VLM-Interpreted Contextual Description
The last case study uses the VLM-interpreted contextual information given in Figure 4, where images taken by robots in the real world are presented to generate contextual information. We design this case study to simulate circumstances that robots may encounter in the real world where there is no human to provide contextual information and it must instead be obtained automatically. We denote Context G as _Dry Grassland_ and Context H as _Stiff Road_. Due to the ambiguity of the terrain images acquired in outdoor unstructured environments, image captions generated by the VLM can be greatly distorted and noisy, which places higher demands on the LLM translator to extract key information and reason with incomplete contextual information; we therefore take this case study as the hardest one.
Table VI shows the evaluation results in terms of episodic reward over the VLM-interpreted contexts. We find that LANCAR and indexing have close performance in Context H (_Stiff Road_), but there is a clear gap between their performance in Context G (_Dry Grassland_). We find that the _Stiff Road_ terrain is more like Context A (the normal terrain) given in the _low-level_ context case study, which is low in its difficulty level. As a result, it is acceptable for indexing and LANCAR to have similar performance. Notably, LANCAR has better performance in _Dry Grassland_, indicating its better generalization ability across diverse terrains.
### _Discussion_
Our experiments consistently demonstrated that the context-aware methods (indexing and LANCAR) outperformed the baseline no-context method. Between the two context-aware methods, LANCAR consistently achieved the best results due to its adaptability to the set of diverse scenarios. The indexing method performs nearly as well as LANCAR in most cases, but when it fails, it performs much worse. We suspect this is due to the LLM incorrectly classifying the terrain, which leads to the RL agent applying vastly different policy parameters. This is not an issue with LANCAR, since we do not explicitly identify the context; rather, we represent the context as an embedding. As such, even if the predicted embedding during evaluation is slightly different from the _ground truth_, it has only a marginal effect on the performance of the RL agent.
## V Conclusion
This paper proposes a novel approach that allows human observers and VLMs to interpret the implicit contextual information in the environment and uses an LLM to translate this information into contextual embeddings that can be understood by RL agents. RL agents then concatenate their own observations with these contextual embeddings in their decision-making. Results validate the efficacy of LANCAR in policy generalizability and adaptability across diverse terrains over three case studies using contextual information with different ambiguity levels. In our future work, we intend to develop an advanced approach that incorporates a VLM and an LLM to interpret environmental contextual information by generating more precise captions of the robot's observed images and refining the reasoning over these image captions [40]. In addition, we will investigate a mechanism that makes robots more adaptive when transitioning from one context to another within the same episode. We hope this contributes to a more robust and adaptive strategy in real-world robot locomotion tasks.
|
2309.15689 | Further analysis of cGAN: A system for Generative Deep Learning
Post-processing of Precipitation | The conditional generative adversarial rainfall model "cGAN" developed for
the UK \cite{Harris22} was trained to post-process into an ensemble and
downscale ERA5 rainfall to 1km resolution over three regions of the USA and the
UK. Relative to radar data (stage IV and NIMROD), the quality of the forecast
rainfall distribution was quantified locally at each grid point and between
grid points using the spatial correlation structure. Despite only having
information from a single lower quality analysis, the ensembles of post
processed rainfall produced were found to be competitive with IFS ensemble
forecasts with lead times of between 8 and 16 hours. Comparison to the original
cGAN trained on the UK using the IFS HRES forecast indicates that improved
training forecasts result in improved post-processing.
The cGAN models were additionally applied to the regions that they were not
trained on. Each model performed well in their own region indicating that each
model is somewhat region specific. However the model trained on the Washington
DC, Atlantic coast, region achieved good scores across the USA and was
competitive over the UK. There are more overall rainfall events spread over the
whole region so the improved scores might be simply due to increased data. A
model was therefore trained using data from all four regions which then
outperformed the models trained locally. | Fenwick C. Cooper, Andrew T. T. McRae, Matthew Chantry, Bobby Antonio, Tim N. Palmer | 2023-09-27T14:33:04Z | http://arxiv.org/abs/2309.15689v1 | # Further analysis of cGAN: A system for Generative Deep Learning Post-processing of Precipitation
###### Key Points
The performance of a rainfall post-processing model (cGAN) is examined at three locations across North America and compared to the UK.
cGAN trained on local data was competitive with the IFS ensemble forecast. cGAN trained on all regions had the best performance.
Training on ECMWF IFS forecasts leads to a CRPS that is lower (better) than training on ERA5.
###### Abstract
The conditional generative adversarial rainfall model "cGAN" developed for the UK (Harris et al., 2022) was trained to post-process into an ensemble and downscale ERA5 rainfall to 1km resolution over three regions of the USA and the UK. Relative to radar data (stage IV and NIMROD), the quality of the forecast rainfall distribution was quantified locally at each grid point and between grid points using the spatial correlation structure. Despite only having information from a single lower quality analysis, the ensembles of post processed rainfall produced were found to be competitive with IFS ensemble forecasts with lead times of between 8 and 16 hours. Comparison to the original cGAN trained on the UK using the IFS HRES forecast indicates that improved training forecasts result in improved post-processing.
The cGAN models were additionally applied to the regions that they were not trained on. Each model performed well in their own region indicating that each model is somewhat region specific. However the model trained on the Washington DC, Atlantic coast, region achieved good scores across the USA and was competitive over the UK. There are more overall rainfall events spread over the whole region so the improved scores might be simply due to increased data. A model was therefore trained using data from all four regions which then outperformed the models trained locally.
## 1 Introduction
Given the measurements we have it is impossible to perfectly predict the exact amount of rainfall at some time in the future. Instead we try to predict a distribution of rainfall with ensemble methods. Generative-adversarial networks or GANs (Goodfellow et al., 2014) have been introduced as a method for approximating distributions and have been further developed to incorporate conditioning information (Mirza and Osindero, 2014). Leinonen et al. (2021) conditioned the empirical rainfall distribution on smoothed rainfall radar data using a GAN to "downscale" and reproduce the original un-smoothed data. This work was extended by Harris et al. (2022) to downscale and post-process ECMWF forecast data towards radar data over the UK. Similar work was performed independently by Price and Rasp (2022) at the same time. For a more in depth review of the background literature see Harris et al. (2022).
Since then, Yang et al. (2023) applied a GAN to post-process precipitation forecasts over China, and Leinonen et al. (2023) developed a diffusion model for precipitation nowcasting which shows some advantages in comparison to the equivalent GAN models. Another neural network based approach has been developed to post-process global medium range forecasts of precipitable water (Agrawal et al., 2023). A large neural network model has been developed over the USA (Andrychowicz et al., 2023) that, rather than post-processing, computes the entire forecast of precipitation up to 24 hours ahead, and a similar model has been favourably assessed in an operational setting (Ben-Bouallegue et al., 2023).
The goal of this paper is to further test the model we call cGAN, developed in Harris et al. (2022), to find out where it does well and where it can be improved. In addition to the ECMWF IFS HRES deterministic forecast, the output of the cGAN, trained to correct ERA5 data with respect to higher resolution rainfall radar data, is compared to the 6-18 hour ECMWF IFS ensemble forecast predictions. In addition to looking at the UK, we add three regions of the USA, compare models trained on them separately, and look at how good these models are outside of their training region. We focus on four metrics for different aspects of the quality of the produced rainfall distribution: the Continuous Ranked Probability Score (CRPS) and Root-Mean-Squared Error of the Ensemble Mean (RMSEEM), which measure the quality of the one dimensional conditional distribution at individual points, and the Radially Averaged Log Spectral Distance (RALSD) and Variogram score, both of which measure the quality of the spatial relationship between rainfall points.
## 2 cGAN USA
The model employed here is the conditional generative adversarial network "cGAN" developed in Harris et al. (2022) over the UK region. cGAN is in turn based upon a rainfall downscaling model developed by Leinonen et al. (2021). In Harris et al. (2022), the cGAN model takes ECMWF HRES (ECMWF, 2021) forecasts of multiple atmospheric variables at a \(0.1^{\circ}\) resolution with lead times between 6 and 18 hours, and outputs an ensemble of rainfall predictions of NIMROD (Met Office, 2023) rainfall radar data at the same time with a resolution of \(0.01^{\circ}\) (\(\sim 1\)km). In this study variables from the lower resolution ECMWF ERA5 reanalysis (Hersbach et al., 2020) and satellite determined orography are used as inputs (see table 1) and cGAN is trained to produce an ensemble forecast of the stage IV rainfall product (Du, 2011) linearly interpolated from \(\sim 4\)km to \(\sim 1\)km over three USA regions (figure 1) and NIMROD over the UK.
Training was performed on a single NVIDIA A100 accelerator with reduced numerical precision (Appendix A), taking around 4-5 days to train each cGAN model. As in Harris et al. (2022), the data was split into smaller sub-images of 20 \(\times\) 20 (low-resolution) and 200 \(\times\) 200 (high-resolution) pixels, obtained by randomly sampling patches from the full-sized images.
| Region | Centered on | Latitude | Longitude |
| :-- | :-- | :-- | :-- |
| Atlantic Coast | Washington DC | 34.2\({}^{\circ}\)N - 43.6\({}^{\circ}\)N | 81.72\({}^{\circ}\)W - 72.32\({}^{\circ}\)W |
| Great Plains | Sioux City | 38.84\({}^{\circ}\)N - 48.24\({}^{\circ}\)N | 101.43\({}^{\circ}\)W - 92.03\({}^{\circ}\)W |
| Pacific North-West | Portland | 40.55\({}^{\circ}\)N - 49.95\({}^{\circ}\)N | 125.95\({}^{\circ}\)W - 116.55\({}^{\circ}\)W |
| UK | UK national grid | 49.55\({}^{\circ}\)N - 58.95\({}^{\circ}\)N | 7.45\({}^{\circ}\)W - 1.95\({}^{\circ}\)E |

Table 2: Model regions
In total, for each model, 640,000 samples were taken. For all regions except Portland, the training data was taken from 2016, 2017 and 2018; for Portland, the training data was taken from 2018 and 2019. As reported in Harris et al. (2022), the quality of the trained model does not converge smoothly, so evaluations of the CRPS against 2019 validation data (2020 for Portland) were performed on 33 models selected from the final third of the training run. The model with the lowest CRPS was then chosen as the model for that region. Results reported here were computed by evaluating this model on unseen 2020 test data for all regions except Portland, for which 2021 test data was used.
An additional model was trained using data from all four regions resulting in almost four times the quantity of training data. The model trained on all regions used all of the training, validation and test data, still segregated into training/validation/test sets, as described above.
## 3 Data
### NIMROD
Over the UK we use the NIMROD 1-km data product (Met Office, 2023). A number of radar stations across the UK and Ireland provide data at 5 minute intervals, which is then processed to calibrate the data, correct errors and remove artefacts. Close to the radar stations the spatial resolution is around 1 km, reducing to around 5 km further away. This data is then merged onto a 1 km resolution national grid, which includes regions where the true resolution is worse than 1 km. To process the NIMROD data for cGAN, 5 minute snapshots of rainfall rates over a sub-region are averaged over 1 hour periods and linearly interpolated to a \(0.01^{\circ}\) longitude-latitude grid. The procedure is identical to that employed by Harris et al. (2022).
### Stage IV
NCEP/EMC Stage IV Data is a gridded rainfall data set over the USA at \(\sim 4\) km resolution and hourly time intervals. It is derived using a combination of rain gauges and rainfall radar by the 12 River Forecast Centers in the continental USA, who use differing algorithms and apply local manual quality control. It is then mosaicked together by NCEP (Nelson et al., 2016). An eastern portion of the Stage IV data is operationally assimilated into IFS (Lopez, 2011).

Figure 1: The regions within which cGAN was applied, see also table 2. 940 \(\times\) 940 longitude-latitude grid points at \(0.01^{\circ}\) resolution. Ranges start and end at the grid point centres.
### Terrain and Land-sea mask
Elevation data in the USA is derived from the 30 arc-second (\(\sim 1\) km at the equator) GMTED2010 data set (Danielson & Gesch, 2010). In each of the 3 USA regions it is interpolated, using the nearest neighbour, onto a \(0.01^{\circ}\) longitude-latitude grid. The land-sea mask is derived from the 10 m resolution ESA WorldCover 2020 dataset (Zanaga et al., 2021). Each of the elevation grid points corresponds to a subset of nearest-neighbour WorldCover grid points. The land-sea mask is the fraction of these points that are permanent water bodies, including rivers, lakes and ocean. Elevation data for the UK is unchanged with respect to the model used by Harris et al. (2022). It consists of a \(\sim 1.25\) km resolution elevation dataset developed for very high resolution versions of IFS and the land-sea mask used for the operational HRES forecast.
### ERA5
For practical convenience, in this study we have substituted the ERA5 reanalysis (Hersbach et al., 2020) for the short range IFS forecast used for training cGAN in Harris et al. (2022). Both systems use the same family of data assimilation algorithms to obtain their fields; however, ERA5 uses an older version with a lower resolution than the operational system. ERA5 might, however, benefit from more data that didn't make it into the operational model in time. In the operational IFS forecast it has been suggested that initial rainfall predictions take some time to "balance", which is part of the motivation for using forecasts out to 1 day instead of the initial condition. We don't know if this is also a problem with ERA5.
We would therefore expect that training using ERA5 would lead to broadly the same conclusions as if we trained with the IFS forecast; however, in an operational setting, it might be preferable to use the best model available.
## 4 Scores
To compare against the IFS ensemble forecast we focus on three metrics used in Harris et al. (2022), namely the Root-Mean-Squared Error of the Ensemble Mean (RMSEEM), the Continuous Ranked Probability Score (CRPS) (Gneiting & Raftery, 2007) and the Radially Averaged Log Spectral Distance (RALSD), and add a fourth, the variogram score (Scheuerer & Hamill, 2015).
The Root-Mean-Squared Error of the Ensemble Mean (RMSEEM) has the advantage that it is simple and quantifies the ability of an ensemble forecast to represent the mean of the distribution. In contrast, the Root-Mean-Squared error of an individual forecast is particularly problematic for rainfall. This is because rainfall events can be quite local and intense. If a rainfall event is correctly forecast, but in slightly the wrong place, the RMS error of an individual forecast might be higher than not forecasting rainfall at all. It is therefore not included.
The Continuous Ranked Probability Score (CRPS) (Gneiting & Raftery, 2007) quantifies the quality of the entire forecast distribution independently at each grid point, but completely ignores the relation between grid points. It is a _proper score_, meaning that it is minimised when the forecast has the same distribution as the measurements.
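For reference, the standard ensemble estimator of the CRPS at a single point is \(\frac{1}{m}\sum_{k}\left|x^{(k)}-y\right|-\frac{1}{2m^{2}}\sum_{k,l}\left|x^{(k)}-x^{(l)}\right|\), sketched below. The "fair" variant, which divides the second sum by \(m(m-1)\), removes the ensemble-size bias mentioned in Section 5; the paper does not state which estimator it uses, so the `fair` flag is our addition:

```python
import numpy as np

def crps_ensemble(ens: np.ndarray, obs: float, fair: bool = False) -> float:
    """Empirical CRPS of an m-member ensemble against one observation."""
    m = ens.size
    term1 = np.abs(ens - obs).mean()
    pair = np.abs(ens[:, None] - ens[None, :]).sum()
    term2 = pair / (m * (m - 1)) if fair else pair / m**2
    return float(term1 - 0.5 * term2)

rng = np.random.default_rng(0)
print(crps_ensemble(rng.gamma(2.0, 1.5, size=50), obs=2.3))
```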
Both the RMSEEM and CRPS can be used to assess the quality of the forecast at each point separately. Neither address the spatial structure of a forecast. The energy score (Gneiting and Raftery, 2007) is a multi-dimensional generalisation of the CRPS (Hersbach, 2000). However it lacks statistical power to distinguish covariances (Pinson and Tastu, 2013) and therefore the spatial structure of forecasts. The fractions skill score (Roberts, 2008; Roberts and Lean, 2008) is a popular score for spatial verification. However it can be difficult to interpret (Appendix B) specifically in regions or times of low rainfall (Mittermaier, 2021). Instead we focus on the RALSD because it was used in Harris et al. (2022) and the variogram score because it was developed specifically to address these problems.
### The Radially Averaged Log Spectral Distance (RALSD)
The Radially Averaged Log Spectral Distance (RALSD) is a score used by Harris et al. (2022) to measure the quality of the spatial relationship between forecast locations:
\[\text{RALSD}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(10\log_{10}\overline{P}_{ \text{radar},i}-10\log_{10}\overline{P}_{\text{model},i}\right)^{2}} \tag{1}\]
Here \(\overline{P}_{\text{model},i}\) and \(\overline{P}_{\text{radar},i}\) are the radially averaged power spectra of the respective model and radar data and \(N\) is the number of points in the spectra after radial averaging. In Harris et al. (2022), \(\overline{P}_{\text{model},i}\) and \(\overline{P}_{\text{radar},i}\) were computed for single images and then the RALSD for each image was averaged over all ensemble members and sample dates. Here we find \(\overline{P}_{\text{model},i}\) and \(\overline{P}_{\text{radar},i}\) by averaging radially and over our data set, including ensemble members and different rainfall dates, before computing the RALSD. In our case some regions are irregular, most notably the Portland region, see e.g. figure 3. In order to compute \(\overline{P}_{\text{radar},i}\), values are set to zero in the masked region within the bounding rectangle. This will introduce some inaccuracy in this score, so the same inaccuracy is introduced into the forecasts for the calculation of \(\overline{P}_{\text{model},i}\) by also setting the equivalent regions to zero. The power spectrum is the discrete Fourier transform of the discrete covariance function, which leads us to the variogram score, which can be computed without this inaccuracy.
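A minimal sketch of this computation for a single pair of square fields is given below (recall that in this paper the spectra are averaged over ensemble members and dates, and masked regions are set to zero, before the distance is taken):

```python
import numpy as np

def radial_power_spectrum(field: np.ndarray) -> np.ndarray:
    """Radially averaged 2-D power spectrum of a square field."""
    p = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    ny, nx = p.shape
    yy, xx = np.indices(p.shape)
    r = np.hypot(yy - ny // 2, xx - nx // 2).astype(int)
    return np.bincount(r.ravel(), weights=p.ravel()) / np.bincount(r.ravel())

def ralsd(radar: np.ndarray, model: np.ndarray) -> float:
    pr, pm = radial_power_spectrum(radar), radial_power_spectrum(model)
    n = min(pr.size, pm.size)
    d = 10.0 * np.log10(pr[:n]) - 10.0 * np.log10(pm[:n])   # as in Eq. (1)
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(1)
print(ralsd(rng.random((64, 64)), rng.random((64, 64))))
```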
### The variogram score
The variogram score (Scheuerer and Hamill, 2015) is designed to assess the forecast representation of the relation between variables. It is a proper score because it is minimised when the forecast distribution equals the true distribution. It is not strictly proper because there are other distributions that can match the score of the true distribution. The variogram score measures the difference between variograms, which are closely related to the covariance and correlation. The variogram may be defined as
\[2\gamma_{ij}\left(X_{i},X_{j}\right)=\text{Var}\left[X_{i}-X_{j}\right] \tag{2}\] \[=\text{E}\left[\left(X_{i}-X_{j}\right)^{2}\right]-\text{E}\left[X_{i}-X_{j}\right]^{2} \tag{3}\] \[=\text{Var}\left[X_{i}\right]+\text{Var}\left[X_{j}\right]-2\text{Cov}\left[X_{i},X_{j}\right]. \tag{4}\]
Here \(X_{i}\) and \(X_{j}\) represent a random process at two locations \(i\) and \(j\). The variogram score for a forecast is defined as
\[S(\mathbf{y},\mathbf{X})=\sum_{i=1}^{d-1}\sum_{j=i+1}^{d}w_{ij}\left(\left|y_{ i}-y_{j}\right|^{p}-E_{F}\left[\left|X_{i}-X_{j}\right|^{p}\right]\right)^{2} \tag{5}\]
where \(d\) is the number of pixels in each image, \(y_{i}\) denotes a measurement at pixel \(i\) and \(E_{F}\left[\left|X_{i}-X_{j}\right|^{p}\right]\) denotes the expectation of the difference between the forecast \(\mathbf{X}\) at pixels \(i\) and \(j\). When the forecast distribution is in the form of an \(m\) member ensemble \(\mathbf{x}^{(1)},\mathbf{x}^{(2)},\ldots,\mathbf{x}^{(m)}\)
\[E_{F}\left[\left|X_{i}-X_{j}\right|^{p}\right]\approx\frac{1}{m}\sum_{k=1}^{m} \left|x_{i}^{(k)}-x_{j}^{(k)}\right|^{p},\qquad i,j=1,2,\ldots,d.\]
\(p\) is a parameter, for which we choose \(p=0.5\); this value appears to be "good" for multivariate normal distributions (Scheuerer & Hamill, 2015). \(w_{ij}\) is a selection of user-defined weights that correspond to how important each pair of points is. In the estimate of the expectation values, the difference between two points often increases with distance because their correlation decreases. Down-weighting pairs that are expected to have relatively weak correlations can therefore benefit the signal-to-noise ratio. We therefore choose
\[w_{ij}=\left\{\begin{array}{ll}\exp(-kD(i,j))&D(i,j)\leq D_{\max}\\ 0&\text{otherwise.}\end{array}\right.\]
where \(D\left(i,j\right)\) is a function that returns the distance in pixels between points \(i\) and \(j\), so that more distant relationships are considered to be less important. \(D_{\max}\) is a cut-off distance in pixels and \(k\) is a decay constant that sets the decay in \(w_{ij}\) to approximate the decay in rainfall spatial auto-correlation with distance in pixels.
The double sum in equation (5) makes computation of the variogram score expensive over large images. When the region is regular it is possible to speed this up using the Fast Fourier Transform. However, in our case the regions are irregular. We therefore compute the variogram score over low resolution (\(94\times 94\) pixels, \(\sim 10\) km resolution) spatial averages of the rainfall. This choice can be justified by the fact that the spatial auto-correlation changes relatively slowly over high resolution (\(\sim 1\) km) pixels. We choose \(D_{\max}=5\) low resolution pixels (\(\sim 50\) km), corresponding to when the rainfall spatial auto-correlation falls to \(\sim 0.4\), and \(k=0.175\) (low resolution pixels)\({}^{-1}\).
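A direct, unoptimised sketch of equation (5) on the coarse grid, with the exponential weights and the values of \(p\), \(D_{\max}\) and \(k\) chosen above, is:

```python
import numpy as np

def variogram_score(y, ens, p=0.5, d_max=5.0, k=0.175):
    """Eq. (5) with w_ij = exp(-k D(i,j)) for D(i,j) <= d_max, else 0."""
    H, W = y.shape
    coords = np.array([(i, j) for i in range(H) for j in range(W)], dtype=float)
    yv = y.ravel()
    ev = ens.reshape(ens.shape[0], -1)            # (members, pixels)
    score = 0.0
    for a in range(len(coords) - 1):
        d = np.hypot(*(coords[a + 1:] - coords[a]).T)
        m = d <= d_max
        if not m.any():
            continue
        w = np.exp(-k * d[m])
        obs_term = np.abs(yv[a] - yv[a + 1:][m]) ** p
        ens_term = (np.abs(ev[:, a, None] - ev[:, a + 1:][:, m]) ** p).mean(axis=0)
        score += float((w * (obs_term - ens_term) ** 2).sum())
    return score

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 1.0, size=(10, 10))          # coarse-grid rainfall averages
ens = rng.gamma(2.0, 1.0, size=(50, 10, 10))      # 50-member ensemble
print(variogram_score(obs, ens))
```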
## 5 Results
All results are computed from 256 randomly chosen dates and times from the test data year. We use 50 ensemble members, in contrast to the 100 used in Harris et al. (2022), because we compare to the IFS ensemble, which also has 50 members. Scores such as the CRPS have biases that are a function of ensemble size. To fairly compare the CRPS of cGAN and interpolated IFS we therefore need to use the same ensemble size.
Figure 2 provides an example of cGAN inputs and outputs for a particular time. The Stage IV data (radar + gauges) is what we use as the "truth" and is not seen by cGAN. The inputs to cGAN come from ERA5 and other variables, see table 1. In figure 2 it is clear that the ERA5 rainfall is in a different location to Stage IV and looks blurry due to its lower resolution. The cGAN ensemble mean prediction has a similar intensity to the ERA5 rainfall and is in a different location which, although it approximately covers the region of Stage IV rainfall, is not exactly the same place. Each ensemble member prediction, along the bottom row, displays an intensity of rainfall much more similar to Stage IV, although very low rainfall still seems to be over-predicted. Note that the somewhat horizontal bands of rainfall, present in Stage IV but not present in the ERA5 rainfall, are represented in the cGAN predictions, showing the ability of cGAN to predict the spatial structure of rainfall patterns.
### CRPS and RMSEEM scores
The scores in table 2 indicate that with respect to the CRPS, each model performs well relative to the IFS ensemble, in its own region. That is, from ERA5 data cGAN creates a rainfall ensemble that is competitive with the IFS ensemble at short lead times. The model that was trained on all regions performed even better.
Examining how well models trained in one region predict another indicates how well generic systematic biases and uncertainty in the ERA5 rainfall model are corrected in cGAN. The Sioux City model for example, which is the model trained using data exclusively from the Sioux region, had a low CRPS in the Sioux region. However, it appears to be particularly poor at forecasting elsewhere. The Washington model, in contrast, appears to give a good CRPS everywhere, including in the Sioux City region. We
have a few hypotheses for why the Washington model is doing so well. Firstly, it is possible that the Washington region has all of the ingredients necessary to fit cGAN to generic weather. Much like the UK, there is variable elevation, including areas of ocean. The UK model also does relatively well on all regions, though not as well as IFS or the Washington model. There is also a lot of variability over both the Washington and UK regions, in contrast to the Sioux and Portland regions, where there are long periods or areas of dry weather. So it might also be that the Washington model effectively has more data to train on. The scores of the Washington model motivated the production of the all region model, which equaled or outperformed the other models.
The model "cGAN UK HRES" refers to the original model trained in Harris et al. (2022) which used IFS HRES forecasts as its input instead of ERA5. We used this model, without any re-training, with ERA5 data as its input instead, which might be expected to break it. However, it outperforms the "cGAN UK" model that is identical except that it was trained using ERA5. The number in brackets in table 2 is the score reported by Harris et al. (2022) using HRES inputs. This suggests that using a better dynamical forecast model results in improved cGAN predictions, in both training and evaluation.
The "IFS ensemble" model represents the output of the entire 50 member ensemble, linearly interpolated to the 1km grid, effectively the simplest downscaling method. The mean-absolute-error of IFS ensemble member 2 illustrates the improvements to the CRPS in this context by moving from a deterministic to an ensemble forecast. The higher resolution HRES deterministic forecast outperforms ensemble member 2, however it still does not do better than the full ensemble by this metric. Linearly interpolating ERA5, results in similar CRPS to the HRES forecast in our context. ERA5 being the input data, these numbers show how much improvement cGAN is able to make.
Figure 2: Three example cGAN ensemble members in the Atlantic coast region centred on Washington DC on 14th January 2020 at 15:00 UTC. The grey regions in the radar plot correspond to no Stage IV data. Although cGAN makes predictions in the grey regions, these are not included when evaluating scores.
A known problem with rainfall forecasts is that they tend to produce light drizzle rather more often than in reality and under-represent extreme rainfall events. To address this in the simplest way, the ERA5 rainfall was scaled to reproduce the Stage IV and NIMROD rainfall distributions. A given ERA5 rainfall quantity has a corresponding NIMROD rainfall quantity at the same point in its cumulative distribution function (CDF). This mapping was performed for each 1 km pixel separately: although the per-pixel CDF is less certain than one estimated from all locations at once, it varies considerably in space, for example due to altitude. The resulting CRPS is improved for all regions except the UK, but not enough to fully explain, or outperform, cGAN.
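A minimal sketch of this per-pixel quantile (CDF) mapping is below. The training-period arrays, quantile count and function names are illustrative assumptions, not the exact implementation used here.

```python
import numpy as np

def fit_quantile_map(era5_train, radar_train, n_q=200):
    """Empirical per-pixel quantiles from (t, ny, nx) training series."""
    qs = np.linspace(0.0, 1.0, n_q)
    return np.quantile(era5_train, qs, axis=0), np.quantile(radar_train, qs, axis=0)

def apply_quantile_map(field, era5_q, radar_q):
    """Map each ERA5 pixel value to the radar value at the same CDF point."""
    out = np.empty_like(field)
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            # np.interp expects non-decreasing abscissae, which quantiles are.
            out[i, j] = np.interp(field[i, j], era5_q[:, i, j], radar_q[:, i, j])
    return out
```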
Predicting zeros everywhere actually does quite well on the CRPS, outperforming many of the deterministic forecasts. We are not sure why this is, since the issue of correctly predicting rainfall in slightly the wrong place is not necessarily present in the CRPS. It might be due to the extreme nature of rainfall not being represented fully in the forecasts, or to spatial biases in the rainfall distribution. The IFS ensemble and all cGAN models outperform predicting zero rainfall.
The CRPS reported in table 2 is the mean of the CRPS computed at each \(\sim 1\) km grid cell, which is plotted in the left column of figure 3. Comparison to the average rainfall (right column) indicates that the CRPS is dominated by where it usually rains.
The performance of cGAN relative to the IFS ensemble is plotted in the central column. In the Pacific north-west region (top, centred on Portland) cGAN has particular problems on the coast, near the edge of the domain. There are no ocean grid points represented at all, and this might help explain the problem. However, in the hilly north of Scotland in the UK region (bottom) a similar problem occurs. This is somewhat offset by cGAN having lower CRPS scores over the neighbouring ocean, indicating potential regions for improvement in IFS. Other than that, cGAN is better in some regions and the IFS ensemble in others, with no clear pattern. The particularly high rainfall region off the Atlantic coast (third from top, centred on Washington DC) is represented better by IFS and has a large contribution to the area mean CRPS, but the adjacent
\begin{table}
\begin{tabular}{l|c|c c c c} \hline \hline & & \multicolumn{4}{c}{Data} \\ Model & Metric & Portland & Sioux & Washington & UK \\ \hline cGAN Portland & CRPS & 0.068 & 0.106 & 0.278 & 0.231 \\ cGAN Sioux & CRPS & 0.069 & 0.060 & 0.137 & 0.120 \\ cGAN Washington & CRPS & 0.064 & **0.057** & 0.113 & 0.101 \\ cGAN UK & CRPS & 0.083 & 0.080 & 0.139 & 0.097 \\ cGAN UK HRES & CRPS & 0.073 & 0.060 & 0.122 & **0.096** (0.086) \\ cGAN all regions & CRPS & **0.060** & **0.058** & **0.109** & 0.098 \\ IFS Ensemble & CRPS & 0.070 & 0.060 & 0.123 & 0.098 \\ IFS Ens. member 2 & MAE & 0.102 & 0.095 & 0.188 & 0.144 \\ IFS HRES & MAE & 0.101 & 0.088 & 0.185 & 0.141 \\ ERA5 & MAE & 0.096 & 0.086 & 0.181 & 0.155 \\ ERA5 PDF mapped & MAE & 0.086 & 0.088 & 0.175 & 0.162 \\ Zeros & MAE & 0.097 & 0.071 & 0.147 & 0.131 \\ \hline \hline \end{tabular}
\end{table}
Table 2: CRPS (lower is better) of cGAN trained on and generating an ensemble of rainfall forecasts using ERA5 data in all cases except cGAN UK HRES that was trained on and uses HRES data. In a deterministic forecast, each ensemble member is identical and the resulting CRPS is equal to the mean-absolute-error (MAE) which is also included. A description of the deterministic MAE models is in section 5.1.
region of ocean is represented better by cGAN. Further investigation is required to quantify whether this is due to a small number of weather events.
Rainfall has a particularly extreme distribution, with large regions of no rainfall at all and a high number of heavy rainfall events in the tail of the distribution. As shown in figure 3, the CRPS is dominated by the high rainfall regions. The distribution of the logarithm of rainfall, where it occurs, is far less extreme. For example, if the tail of the rainfall distribution follows a power law, then the tail of the distribution of the logarithm of rainfall will decay exponentially. In order to examine the quality of the rainfall distribution away from its extremes, we plot the CRPS of \(\log{(r+0.01\text{ mm/h})}\), where \(r\) stands for rainfall (figure 4). The reason for adding \(0.01\text{ mm/h}\) is to avoid having to deal with periods of zero rainfall while spreading the large number of low rainfall events across the distribution. Comparing the first columns of figures 3 and 4 shows that the spatial CRPS(\(\log(r+0.01)\)) field is much smoother in space and is not entirely dominated by the rainfall quantity. In particular, the uniformity in the Sioux region suggests much less of a dependence upon individual weather events. Comparison of the Portland region to the elevation (third column of figure 4) shows the dependence of the CRPS(\(\log(r+0.01)\)) upon elevation, which is also somewhat related to rainfall quantity. However, the elevation in the Sioux and Washington regions appears to play much less of a role. In the UK, the CRPS(\(\log(r+0.01)\)) still has a similar pattern to the rainfall quantity.
Comparison of CRPS(\(\log(r+0.01)\)) between cGAN and the IFS ensemble is plotted in the middle column of figure 4. In the Portland region cGAN appears once again to have difficulty along the coastline and at high altitude, whilst getting improved results at lower elevations. In the region of Sioux city, cGAN and the IFS ensemble are pretty indistinguishable, while in the region centred on Washington DC cGAN has improved the CRPS(\(\log(r+0.01)\)) almost everywhere. In both of these regions there seems to be little dependence of CRPS(\(\log(r+0.01)\)) upon elevation. Over the UK, cGAN does poorly in some regions of the west coast, as with the CRPS, and IFS is better over the east coast and south east England. Overall the averages of these images correspond well to the respective averages in table 2.
The root-mean-squared-error of the ensemble mean for each model is presented in table 3. The numbers here tell much the same story as the CRPS scores in table 2, indicating that this aspect of the rainfall distribution is similarly represented.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & \multicolumn{3}{c}{Data} & \\ Model & Portland & Sioux & Washington & UK \\ \hline cGAN Portland & **0.341** & 0.614 & 1.074 & 0.698 \\ cGAN Sioux & 0.379 & 0.601 & 0.878 & 0.518 \\ cGAN Washington & 0.362 & **0.581** & 0.828 & 0.447 \\ cGAN UK & 0.416 & 0.622 & 0.879 & **0.429** \\ cGAN UK HRES & 0.405 & 0.593 & 0.853 & 0.459 \\ cGAN All regions & 0.349 & 0.582 & **0.816** & 0.437 \\ IFS Ensemble & 0.385 & 0.681 & 0.921 & 0.453 \\ ERA5 (RMSE) & 0.349 & 0.625 & 0.881 & 0.472 \\ Zeros (RMSE) & 0.483 & 0.634 & 0.941 & 0.509 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Root-Mean-Squared Error of the Ensemble Mean (RMSEEM, lower is better) of cGAN trained on and generating an ensemble of rainfall forecasts using ERA5 data in all cases except cGAN UK HRES that was trained on and uses HRES data for forecasts.
Figure 3: **Left:** CRPS (lower is better) at each grid point of the 50 member cGAN ensemble trained in each region separately. **Middle:** Difference between the CRPS of cGAN and IFS. Blue indicates that cGAN has the lower (better) CRPS, red indicates that IFS has the lower CRPS. **Right:** Average rainfall in each region. **Top to bottom**: Pacific north-west, Great plains, Atlantic coast, UK.
Figure 4: **Left:** CRPS(log(\(r\)+0.01)) at each grid point of the 50 member cGAN ensemble trained in each region separately. \(r\) represents rainfall in mm/h. **Middle:** Difference between the CRPS(log(r+0.01)) of cGAN and IFS. Blue indicates that cGAN has the lower (better) CRPS(log(r+0.01)), red indicates that IFS has the lower CRPS(log(r+0.01)). **Right:** Elevation in each region. **Top to bottom**: Pacific north-west, Great plains, Atlantic coast, UK.
### Spatial variability
The Radially Averaged Log Spectral Distance (RALSD) for each of the models is presented in table 4. With the exception of the Sioux model, each model appears to optimise these scores for its own region. In contrast to the CRPS, although the Washington model is good in the USA, it does less well over the UK, and vice-versa for the UK and UK HRES models. cGAN trained on all regions does not have particularly impressive scores by this metric and does not reproduce the power spectrum as well as the local models. A similar story applies to the corresponding variogram scores in table 5, although the cGAN UK HRES and cGAN all region models appear to do much better.
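For context, here is a sketch of the two ingredients of the RALSD: a radially averaged power spectrum and a root-mean-square log spectral distance in decibels. The paper's equation (1) is not reproduced in this excerpt, so the exact normalisation below is an assumption.

```python
import numpy as np

def radial_power_spectrum(field):
    """Radially averaged power spectrum of a square 2-D field."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    n = field.shape[0]
    y, x = np.indices(power.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)  # integer radius bins
    return (np.bincount(r.ravel(), weights=power.ravel())
            / np.maximum(np.bincount(r.ravel()), 1))

def ralsd(p_radar, p_model):
    """RMS log spectral distance (dB) between mean radar and model spectra."""
    d = 10.0 * (np.log10(p_radar) - np.log10(p_model))
    return np.sqrt(np.mean(d ** 2))
```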
For the deterministic forecasts, IFS HRES scored similarly to IFS ensemble member 2 and, surprisingly, worse than the lower resolution ERA5. The cGAN HRES was the best model over the UK; however, it performed badly in the USA, considerably worse than IFS ensemble member 2. It is not clear why this should be the case. Further investigation is required to determine if it was due to potential outlier ensemble members or some other cause. The variogram scores of \(\log(r+0.01)\) (not shown) tell much the same story, with the exception that over the UK the IFS ensemble falls below the cGAN UK models (which did better) and has a similar score to the cGAN all region model.
Like the CRPS, the variogram score in table 5 is the average of the variogram scores at each grid location. Plotting these maps (not shown) indicates that like the CRPS, the variogram score is high in the regions of high rainfall. There were also some isolated places, for example in the south east of Ireland, where the variogram score was high. Inspection of the NIMROD data in this region and the Stage IV data in others revealed a number of radar artefacts, indicating that the variogram score is sensitive to them.
The radially averaged power spectra used to compute the RALSD are plotted in figure 5. At low frequencies the curves are proportionally close together on the log scale. For all regions, as the frequency increases the interpolated IFS forecast models fall below the measurement data in order of their resolution. This classical resolution-dependent behaviour can be understood to reflect the ability of a fluid model to represent different scales. Smoothing the rainfall data achieves similar, though not identical, curves. Starting from the ERA5 model, in each region cGAN has managed to generate the high frequency variability necessary to correct these curves, although the power spectrum in the Sioux city region was not corrected as well as in the other regions. Note that
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & \multicolumn{4}{c}{Data} & \\ Model & Portland & Sioux & Washington & UK \\ \hline cGAN Portland & 1.607 & 5.524 & 2.333 & 4.792 \\ cGAN Sioux & **1.144** & 6.886 & **1.026** & 3.935 \\ cGAN Washington & 4.659 & 3.508 & 1.427 & 12.047 \\ cGAN UK & 4.665 & 10.357 & 7.066 & 1.473 \\ cGAN UK HRES & 11.321 & 5.595 & 4.873 & **0.792** \\ cGAN All regions & 4.202 & **1.377** & 1.528 & 5.223 \\ IFS Ensemble & 6.896 & 7.621 & 8.164 & 15.042 \\ IFS Ens. member 2 & 5.407 & 6.173 & 7.285 & 14.836 \\ IFS HRES & 3.445 & 4.775 & 5.672 & 13.373 \\ ERA5 & 5.860 & 9.012 & 7.775 & 15.031 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Radially Averaged Log Spectral Distance (RALSD, lower is better) of cGAN and the linearly interpolated IFS as in table 3. For more insight see figure 5.
otherwise "good" models, such as cGAN HRES and cGAN all regions, performed poorly with this metric when compared to the local models.
### Orography resolution
In these experiments cGAN over the USA was trained using \(\sim 1\) km orography and land-sea mask derived from the GMTED2010 and ESA WorldCover 2020 data sets. For the UK we, and Harris et al. (2022), use the lower resolution orography and land-sea mask field used with the IFS. To see the impact, we also trained the ERA5 cGAN UK model with different orography fields. The results, summarised in table 6, indicate that using the low resolution IFS ensemble field or GMTED2010 has only a marginal, if any, impact upon the CRPS, RMSEEM and variogram score with respect to using the higher resolution field. However, the RALSD was degraded. This degradation might nevertheless be within the random variation seen between training runs; compare to table 7 below.
### Model variation
To understand the uncertainty in the model training, which is a somewhat random process, the models in each region are trained again from scratch. They are then evaluated using the same dates and times used in the first training-evaluation. The resulting scores (table 7) give some indication of the range of uncertainty in the training procedure. Using the same dates (within each region) reduces the variability in the scores, making them easier to compare. The CRPS, RMSEEM and variogram scores appear quite robust over the two training runs. The same cannot be said for the RALSD, despite the RALSD being quite robust across independent sample dates using a single model.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Orography & CRPS & RMSEEM & RALSD & Variogram \\ \hline GMTED2010 & 0.0970 & 0.459 & 1.736 & 2.357 \\ IFS high resolution & 0.0972 & 0.434 & 1.473 & 2.358 \\ IFS operational & 0.0970 & 0.432 & 2.197 & 2.336 \\ \hline \hline \end{tabular}
\end{table}
Table 6: cGAN trained over the UK on ERA5 with orography and the land-sea mask from three different sources. The highest resolution orography is GMTED2010 and the lowest is that from the operational IFS.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & \multicolumn{4}{c}{Data} & \\ Model & Portland & Sioux & Washington & UK \\ \hline cGAN Portland & 1.479 & 2.377 & 5.791 & 4.928 \\ cGAN Sioux & 1.511 & 2.283 & 4.313 & 3.119 \\ cGAN Washington & 1.360 & 1.805 & 3.762 & 2.618 \\ cGAN UK & 2.027 & 3.318 & 4.675 & 2.358 \\ cGAN UK HRES & 1.588 & **1.761** & 3.602 & 2.197 \\ cGAN All regions & **1.255** & 1.921 & **3.396** & 2.321 \\ IFS Ensemble & 2.665 & 2.752 & 5.772 & **2.046** \\ IFS Ens. member 2 & 1.714 & 2.240 & 4.343 & 2.673 \\ IFS HRES & 1.786 & 2.168 & 4.417 & 2.579 \\ ERA5 & 1.581 & 1.979 & 3.883 & 2.615 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Variogram scores (lower is better) of cGAN and the linearly interpolated IFS as in table 3.
Figure 5: The radially averaged power spectra for models over each of the four regions. In contrast to Harris et al. (2022), these curves are averages over all sample dates. They represent \(\overline{P}_{\mathrm{radar},i}\) and \(\overline{P}_{\mathrm{model},i}\) used to calculate the RALSD in equation 1. Compare to table 4.
To measure the true uncertainty in the scores, the evaluation could be computed multiple times using random dates within the test year. However, we found that this introduces unacceptably large weather-induced variability. The scores also depend on the precise selection of region and are only used to compare models, so reducing variation is the priority.
Training models is the most computationally intensive part of this work. The cost of additional training runs to more accurately quantify the training uncertainty is currently prohibitive. Unfortunately, with only two data points on the training axis, there is nothing to be gained by employing any statistical techniques.
## 6 Discussion and conclusions
Application of the cGAN model developed in Harris et al. (2022) to three additional regions over the USA leads to broadly similar results to those reported for the UK. cGAN has been shown here to be capable of post-processing low resolution ERA5 data into an ensemble competitive with the IFS ensemble and IFS HRES forecast at short lead times, as measured by the CRPS. In addition, cGAN was able to "downscale" the ERA5 rainfall data to \(\sim 1\) km resolution and correct the high spatial frequency variability towards that of the measured rainfall, outperforming the IFS ensemble and IFS HRES forecast over the USA.
In general, models trained upon a particular region did well at predicting the rainfall in that region, but less well elsewhere. This could be, for example, because of dynamics or conditions specific to each region, or possibly a lack of training data. Over the two to three years of training data, there were long periods where large areas remained dry, particularly in the regions centred on Portland and Sioux City, where the local cGAN models performed less well. In an attempt to account for this lack of data, another cGAN model was trained using all of the training data from all four regions. Unlike the other models, this model might find it harder to take advantage of dynamics specific to the local region. In addition, the character of the radar data is different in each region: \(\sim 4\) km for Stage IV and \(\sim 1\) km for NIMROD, with each region in the USA and UK using different processing algorithms and calibration, although the radar hardware is similar. Despite these disadvantages, the cGAN model using training data from all regions equaled or outperformed the local models everywhere.
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline Region & CRPS & RMSEEM & RALSD & RALSD & RALSD & Variogram \\ & & & & first 128 & last 128 & \\ \hline Portland 1 & 0.068 & 0.341 & 1.607 & 1.800 & 1.441 & 1.479 \\ Portland 2 & 0.064 & 0.334 & 2.055 & 2.363 & 1.701 & 1.300 \\ \hline Sioux 1 & 0.059 & 0.601 & 6.886 & 5.701 & 7.951 & 2.283 \\ Sioux 2 & 0.062 & 0.585 & 1.838 & 1.642 & 2.109 & 2.333 \\ \hline Washington 1 & 0.112 & 0.828 & 1.427 & 1.165 & 1.715 & 3.762 \\ Washington 2 & 0.112 & 0.831 & 3.206 & 2.478 & 4.100 & 3.454 \\ \hline UK 1 & 0.096 & 0.429 & 1.473 & 1.042 & 2.011 & 2.358 \\ UK 2 & 0.094 & 0.419 & 1.021 & 1.132 & 1.018 & 2.271 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Variation in scores in each region with different training runs of each model. The evaluation of the scores is performed using the same dates and times in each region. For the RALSD, additional scores were averaged over the first and last 128 sample dates as well as all 256, the default used here.
If the ensembles produced by the cGAN model are indeed improved simply by using more training data, there are many rainfall radar networks around the world online now, in addition to the regions of the USA that were not considered here. However, the quality and characteristics of the gridded products are not uniform, and challenges remain. Africa, for example, has no available rainfall radar station data that we are aware of.
To further validate the methodology, the model using training data from all regions should be tested elsewhere where there is radar, for example Taiwan, where rainfall forecasts can be challenging. All four regions in this paper are in the extra-tropics, so it is not clear whether cGAN is limited to the large scale rainfall patterns of the extra-tropics. Training and testing cGAN in tropical regions is difficult due to the lack of ground based radar data. We are currently investigating this direction in the horn of Africa region using satellite measurements of rainfall, which are available less often and at a lower resolution compared to ground based radars. We are also investigating post-processing of forecasts into the medium range future and using information from the entire forecast ensemble rather than a single data set.
Finally, the UK model evaluation suggests that using a better dynamical forecast model as input results in improved cGAN predictions, in both training and evaluation.
## Appendix A Reduced Numerical Precision
The models were trained on an NVIDIA A100 accelerator, on which TensorFlow automatically employed the "TensorFloat-32" (TF-32) format for many internal calculations. This number format has the range of 32-bit numbers but the precision of 16-bit numbers and has the advantage of lower computational cost compared to full 32-bit computations. Negligible impact of this reduced precision in the UK model trained using IFS data was reported by Harris et al. (2022).
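For reproducibility, TF-32 can be queried and disabled explicitly; the following sketch uses TensorFlow's public API for this (available since TensorFlow 2.4).

```python
import tensorflow as tf

# On Ampere GPUs such as the A100, float32 matrix multiplications and
# convolutions use TensorFloat-32 internally by default.
print(tf.config.experimental.tensor_float_32_execution_enabled())  # True on A100

# Force full float32 precision instead, e.g. to check the impact on training.
tf.config.experimental.enable_tensor_float_32_execution(False)
```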
## Appendix B Fractions skill score (FSS) example
A perfect forecast has an FSS of 1, and a "no skill" forecast has an FSS of 0. Suppose we have a field, figure B1, that looks like A 50% of the time and B 50% of the time. If we forecast our field by randomly choosing A and B with equal probability, then we get a lower (worse) fractions skill score than if we randomly forecast C and D instead. For this example the variogram score (lower is better) with unit weights and \(p\,=\,0.5\) returns 75 for forecasting A and B and 100 for forecasting C and D.
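The FSS formula is not given in this example, so for completeness the sketch below follows the standard Roberts & Lean (2008) definition based on neighbourhood exceedance fractions; the threshold and window are illustrative parameters, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, obs, threshold=1.0, window=9):
    """Fractions skill score: 1 is a perfect forecast, 0 is no skill."""
    # Fraction of pixels exceeding the threshold within each window.
    f_frac = uniform_filter((forecast >= threshold).astype(float), size=window)
    o_frac = uniform_filter((obs >= threshold).astype(float), size=window)
    num = np.mean((f_frac - o_frac) ** 2)
    den = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - num / den if den > 0 else np.nan
```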
### Open Research Section
The code for the GAN and VAE-GAN models used in this paper is available at [https://doi.org/10.5281/zenodo.6922291](https://doi.org/10.5281/zenodo.6922291). A cleaned-up version of the code, with the same core functionality, is available at [https://github.com/ljharris23/public-downscaling-cgan](https://github.com/ljharris23/public-downscaling-cgan). We would recommend this for people looking to build on our work. Our code was adapted from Jussi Leinonen's GAN model, available at [https://github.com/jleinonen/dowscaling-rnn-gan](https://github.com/jleinonen/dowscaling-rnn-gan). All experiments in this paper were performed within TensorFlow 2.7.0. The ECMWF forecast archive can be obtained through MARS; more details are available at [https://www.ecmwf.int/en/forecasts/access-forecasts/access-archive-datasets](https://www.ecmwf.int/en/forecasts/access-forecasts/access-archive-datasets). MARS accounts for academic use are available for free, subject to certain conditions; see [https://www.ecmwf.int/en/forecasts/accessing-forecasts/licences-available](https://www.ecmwf.int/en/forecasts/accessing-forecasts/licences-available). The NIMROD radar data set can be obtained through CEDA; more details are available at [https://catalogue.ceda.ac.uk/uuid/27dd6ffba67f667a18c62de6c3456350](https://catalogue.ceda.ac.uk/uuid/27dd6ffba67f667a18c62de6c3456350). A CEDA Archive account is required in order to access this data. The Stage IV data may be obtained at [https://data.eol.ucar.edu/dataset/21.093](https://data.eol.ucar.edu/dataset/21.093).
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant No 741112, ITHACA). MC gratefully acknowledges funding from the MAELSTROM EuroHPC-JU project (JU) under No 955513. The JU receives support from the European Union's Horizon research and innovation programme and United Kingdom, Germany, Italy, Luxembourg, Switzerland, and Norway.
|
2309.14597 | Policy Optimization in a Noisy Neighborhood: On Return Landscapes in
Continuous Control | Deep reinforcement learning agents for continuous control are known to
exhibit significant instability in their performance over time. In this work,
we provide a fresh perspective on these behaviors by studying the return
landscape: the mapping between a policy and a return. We find that popular
algorithms traverse noisy neighborhoods of this landscape, in which a single
update to the policy parameters leads to a wide range of returns. By taking a
distributional view of these returns, we map the landscape, characterizing
failure-prone regions of policy space and revealing a hidden dimension of
policy quality. We show that the landscape exhibits surprising structure by
finding simple paths in parameter space which improve the stability of a
policy. To conclude, we develop a distribution-aware procedure which finds such
paths, navigating away from noisy neighborhoods in order to improve the
robustness of a policy. Taken together, our results provide new insight into
the optimization, evaluation, and design of agents. | Nate Rahn, Pierluca D'Oro, Harley Wiltzer, Pierre-Luc Bacon, Marc G. Bellemare | 2023-09-26T01:03:54Z | http://arxiv.org/abs/2309.14597v3 | # Policy Optimization in a Noisy Neighborhood:
###### Abstract
Deep reinforcement learning agents for continuous control are known to exhibit significant instability in their performance over time. In this work, we provide a fresh perspective on these behaviors by studying the return landscape: the mapping between a policy and a return. We find that popular algorithms traverse _noisy neighborhoods_ of this landscape, in which a single update to the policy parameters leads to a wide range of returns. By taking a distributional view of these returns, we map the landscape, characterizing failure-prone regions of policy space and revealing a hidden dimension of policy quality. We show that the landscape exhibits surprising structure by finding simple paths in parameter space which improve the stability of a policy. To conclude, we develop a distribution-aware procedure which finds such paths, navigating away from noisy neighborhoods in order to improve the robustness of a policy. Taken together, our results provide new insight into the optimization, evaluation, and design of agents.
## 1 Introduction
It is well-documented that agents trained with deep reinforcement learning can exhibit substantial variations in performance - as measured by their episodic return. The problem is particularly acute in continuous control, where these variations make it difficult to compare the end product of different algorithms or implementations of the same algorithm [11; 20] or even reliably measure an agent's progress from episode to episode [9]. A recurring finding is that simply averaging the return produced by a set of policies may be insufficient for rigorous evaluation.
In this paper, we demonstrate that high-frequency discontinuities in the mapping from policy parameters \(\mathbf{\theta}\) to the return \(R(\mathbf{\theta})\) are an important cause of return variation. As a consequence of these discontinuities, a single gradient step or perturbation to the policy parameters often causes important changes in the return, even in settings where both the policy and the dynamics are deterministic. Because an agent's parameters constantly change during training and should be robust to minute parametric perturbations, we argue that the _distribution_ of returns in the neighborhood of \(\mathbf{\theta}\) is in fact a better representative of its performance, both from an evaluation and an optimization perspective.
**Noisy neighborhoods in the return landscape.** We call the _return landscape_ the mapping from \(\mathbf{\theta}\) to \(R(\mathbf{\theta})\), our main object of study. We show that the return often varies substantially within the vicinity of any given \(\mathbf{\theta}\), forming what we call a _noisy neighborhood_ of \(\mathbf{\theta}\). Based on this observation, we demonstrate the usefulness of studying the landscape through the distribution of returns obtained from small perturbations of \(\mathbf{\theta}\). In the important case where these perturbations result from a single gradient step, we call the resulting object the _post-update return distribution_.
**Diversity in equally-performing policies.** We show that different neighborhoods correspond to different post-update return distributions and agent behaviors. We discover that at equal average returns, different policies obtained by the same deep RL algorithm may in fact have substantially different distributional profiles, as measured by statistics of the post-update return distribution. Moreover, we uncover that many of these distributions are long-tailed and we find the source of these tails to be sudden failures from an otherwise successful policy.
**Effect on learning dynamics.** We expose how the transition between noisy and smooth parts of the landscape happens. Surprisingly, while large valleys of low return are visible when linearly interpolating between similarly performing policies from different runs, we show no such valleys typically exist between policies from the same run. Based on this insight, we show that it is possible to find an improved set of parameters \(\mathbf{\theta}\) that achieves comparable return, but substantially lower post-update variation.
We believe the phenomenon we study is central to deep reinforcement learning in continuous control. Beyond its effect on learning dynamics (for example, through increased variance and implicit exploration [39]), it is also a potential driver of instability in sim2real settings, even in the face of seemingly small environmental changes. Additionally, it suggests that one should not simply deploy the policy obtained at the end of a training run, and that further post-training tuning may be beneficial.
## 2 Background
In reinforcement learning, an agent interfaces with an environment. In this paper, we are interested in continuous control environments modelled as a finite-horizon Markov Decision Process (MDP) \(\mathcal{M}=\langle\mathcal{S},\mathcal{A},r,f,T,\rho_{0}\rangle\), where \(\mathcal{S}\equiv\mathbb{R}^{n}\) is the state space, \(\mathcal{A}\equiv\mathbb{R}^{m}\) is the action space, \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is a reward function, \(f:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\) is a deterministic transition function, \(T\) is the horizon, and \(\rho_{0}=\mathcal{U}(s_{I}-\beta,s_{I}+\beta)\) is an initial state distribution with \(s_{I}\) being an initial reference state, and \(\beta\in\mathbb{R}^{n}\) an environment-dependent parameter. We assume that each agent produces a stationary Markovian deterministic policy \(\pi_{\mathbf{\theta}}:\mathcal{S}\rightarrow\mathcal{A}\) within a parametrized family \(\Pi_{\Theta}=\{\pi_{\mathbf{\theta}}:\mathbf{\theta}\in\Theta\subset\mathbb{R}^{d}\}\). In an episodic setting, the interaction of the agent with the environment with a given policy \(\pi_{\mathbf{\theta}}\) from some state \(s\in\mathcal{S}\) produces a trajectory in the environment and,
Figure 1: A visualization for two policies visited by SAC in the hopper environment. We show the return landscape in their proximity, their post-update return distributions, and the visual appearance of their learned gaits. We plot the mean of each return distribution as an orange line. Despite featuring a similar level of return, we observe that the policy in the noisy neighborhood performs an unstable curved gait which is faster but more prone to failure, as visible in the thick left tail of the post-update return distribution.
consequently, a _return_:
\[G_{\mathbf{\theta}}(s)=\sum_{t=1}^{T}r(s_{t},a_{t}) \tag{1}\] \[\text{s.t. }s_{t}=f(s_{t-1},a_{t-1}),a_{t}=\pi_{\mathbf{\theta}}(s_{t}),s_{ 1}=s.\]
We are interested in understanding how small changes to the policy parameter affect the associated return. To this end it is sufficient to study the return from the reference state \(s_{I}\) (in Appendix A.4 we show that similar effects occur across the state space). The _return landscape_ is our main object of study.
**Definition 2.1** (Return Landscape).: The return landscape is the mapping from policy parameters to return, starting from the initial reference state:
\[R(\mathbf{\theta})=G_{\mathbf{\theta}}(s_{I}). \tag{2}\]
Figure 1 (left) depicts small portions of the return landscape for a particular environment and policy parametrization (we describe the visualization procedure below).
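A single point of this landscape can be evaluated with a deterministic rollout, as in equation (1). A minimal sketch, where `policy` and `env_step` are placeholders for the parametrized policy and the deterministic transition/reward of the MDP:

```python
def rollout_return(policy, env_step, s_init, horizon):
    """Deterministic return G_theta(s_init); R(theta) uses s_init = s_I."""
    s, total = s_init, 0.0
    for _ in range(horizon):
        a = policy(s)          # a_t = pi_theta(s_t)
        s, r = env_step(s, a)  # s_{t+1} = f(s_t, a_t), r = r(s_t, a_t)
        total += r
    return total
```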
In this work, we will use the policies discovered by popular algorithms to characterize the topology of the return landscape. We focus on policy-based deep reinforcement learning algorithms for continuous control, such as Soft Actor-Critic (SAC) [19], Twin-Delayed DDPG (TD3) [16], and PPO [42] which use neural network function approximators to represent the policy. Such algorithms learn good behavior in the environment by maximizing the discounted return. In the process, they produce a sequence of policies
\[\mathbf{\theta}_{0},\mathbf{\theta}_{1},\dots,\mathbf{\theta}_{N},\text{ s.t. }\mathbf{\theta}_{t+1}=\text{u}(\mathbf{\theta}_{t},X_{t})\text{ for all }t, \tag{3}\]
where \(\text{u}:\Theta\times\mathbb{R}\rightarrow\Theta\) is the algorithmic policy update function, and \(X_{t}\) is some random variable abstracting the stochasticity inherent to the update. For example, SAC and TD3 construct parametric updates by sampling a small number of transitions (minibatches) from their replay buffer [29; 31].
## 3 A Distributional View on Return Landscapes
The return landscape arises from the interaction between an environment and a class of parameterized policies. We first consider how this landscape varies in the immediate vicinity (or neighborhood) of policies produced by deep reinforcement learning algorithms. Given a reference policy, a natural choice is to consider how the return is affected by single updates to the policy parameters. To this end, we view the collection of possible returns obtained by evaluating the updated policy as a distribution over returns; as we will see, this distribution widely varies across the return landscape.
**Definition 3.1** (Post-Update Return).: Let \(\Pi_{\Theta}\) be a parametric space of deterministic policies and \(\text{u}\) an update function. Given \(\pi_{\mathbf{\theta}}\in\Pi_{\Theta}\), its _post-update return_ is defined as:
\[\mathcal{R}(\mathbf{\theta})=R(\text{u}(\mathbf{\theta},X)),\ \ X\sim P, \tag{4}\]
where \(P\) is an algorithm-dependent source of stochasticity.
The post-update return inherits randomness from the underlying training algorithm and it is thus a random variable. Clearly, a post-update return will have an associated policy and trajectory, which are in turn random variables. In this work, we will leverage the distribution of post-update returns as a tool to investigate the properties of neighborhoods of the return landscape.
The different panels of Figure 1 illustrate how the return landscape in the neighborhood of \(\mathbf{\theta}\) translates into different post-update return distributions. Here, the return landscape is visualized along two update directions computed by the training algorithm based on two different batches sampled from its replay buffer, such that 1.0 on each axis corresponds to a single parameter update in that direction (details in Appendix A.2). The middle panel shows the corresponding post-update return distribution estimated using 10000 samples. We find that the distribution from the noisy neighborhood (top) exhibits a significant left tail, while the distribution from the quieter neighborhood (lower) is concentrated around its mean. On the right, we illustrate the gait produced by the reference policies (the origin in the left panel). We find qualitatively that the policy in the noisy neighborhood exhibits a curved gait which is sometimes faster, but unstable, whereas the policy in the smooth neighborhood produces an upright gait which can be slower, yet is very stable. We include similar evidence for other environments in Appendix A.11.
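A sketch of how such a post-update return distribution can be estimated, with `update_fn` and `sample_noise` standing in for the algorithm's update rule \(\text{u}\) and its source of stochasticity \(P\) (e.g. a TD3 update on a random replay minibatch); these names are our own:

```python
import numpy as np

def post_update_returns(theta, update_fn, sample_noise, evaluate_return, n=1000):
    """Draw n samples of the post-update return R(u(theta, X)), X ~ P."""
    return np.array([
        evaluate_return(update_fn(theta, sample_noise()))  # one update, one rollout
        for _ in range(n)
    ])
```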
### Post-Update Return Distributions as a Characterization of the Return Landscape
The mean of the post-update distribution naturally captures the average behavior represented by an algorithm as it traverses a given neighborhood. We further characterize this distribution by measuring its standard deviation (a measure of spread around the mean) and its skewness (a measure of asymmetry). In our context, a negative skewness describes a distribution with a heavy left tail, similar to the one shown in Figure 1. Such a tail is especially interesting to us as it indicates lower-than-expected returns. However, we find that skewness is not directly interpretable as a numerical quantity. To capture these tails interpretably, we introduce a metric we call _left-tail probability_. The left-tail probability of a random variable \(Y\) is defined as
\[\mathrm{LTP}_{\alpha}(Y)=\mathbb{P}[0\leq Y<\alpha\cdot\mathrm{mode}(Y)]. \tag{5}\]
This quantity satisfies some desirable properties within the context of our study. First, it uses the mode of the distribution as a reference value. This is by contrast with the mean of the distribution, which may not correspond to the "majority" behavior (as illustrated in the top half of Figure 1). It also allows us to more easily compare the tailedness of distributions generated from policies of widely varying returns. Second, it is an easily-interpretable quantity which measures the total probability mass falling in the left tail. For simplicity, here we assume that \(Y\) is positive, noting the idea can be naturally generalized to random variables bounded below. In our analyses we write \(\mathrm{LTP}\equiv\mathrm{LTP}_{1/2}\) to measure drops from the mode of the post-update return distribution of at least 50%. In practice, we estimate the LTP by leveraging the Chernoff estimator [10], computing the mode as the midpoint of the interval of the most populated bin in a 100-bin histogram.
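A minimal sketch of this LTP estimator from a sample of non-negative post-update returns, following the histogram-based mode described above:

```python
import numpy as np

def left_tail_probability(returns, alpha=0.5, bins=100):
    """Estimate LTP_alpha for non-negative return samples."""
    counts, edges = np.histogram(returns, bins=bins)
    k = np.argmax(counts)
    mode = 0.5 * (edges[k] + edges[k + 1])  # midpoint of most populated bin
    return np.mean((returns >= 0) & (returns < alpha * mode))
```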
Equipped with these metrics, we measure the mean and the other three statistics of the post-update return for a set of \(600\) policies produced, across trials and iterations, by three popular deep RL algorithms (TD3, SAC and PPO). We use \(20\) seeds per algorithm and \(10\) checkpoints per seed, for a total of \(200\) policies per algorithm. These checkpoints are equally-spaced in time in training runs of \(1\) million steps for TD3 and SAC and \(60\) million steps for PPO. Each of the \(600\) distributions is estimated by performing \(1000\) independent updates to the starting policy and then rolling the resulting deterministic policy out in the environment for \(1000\) time-steps to compute its return. Each update is different due to a different batch sampled from the replay buffer for TD3 and SAC, and to a different batch of data from the environment collected by a randomly-perturbed policy for PPO. This amounts to millions of policy evaluations for which, for computational reasons, we primarily use the easily parallelizable environments from the Brax simulator [15]. We also include similar results on the post-update return distributions of policies trained on DeepMind Control Suite [44] and on games from the
Figure 2: A scatter plot showing mean return and standard deviation, skewness or left-tail probability of the post-update return distribution of policies produced by three popular deep RL algorithms on the ant Brax task. Each point corresponds to a given policy’s post-update return distribution, with six selected policies highlighted by star markers showing a range of diverse distributions.
ALE [7] in Appendix A.5 and A.6. Additional experimental details, including the hyperparameters and implementations used for running these algorithms, can be found in Appendix A.1.
Figure 2 illustrates how different policies produced by deep RL algorithms correspond to a wide range of post-update return distributions, as measured by our chosen metrics 2. For each metric, we report the bootstrapped mean using 1000 resamples to account for sampling error in the post-update returns collected for a given policy, and omit the corresponding bootstrapped confidence intervals for visual clarity, as they are very small. In particular, this scatter plot shows that different policy parameters achieve similar levels of returns (as measured by the distribution mean) but a wide range of possible levels of variability, as measured by standard deviation, skewness and left-tail probability. This suggests, in a similar way to the example shown in Figure 1, that algorithms discover behaviors which can be qualitatively very different from one another, and that leveraging the post-update return distribution can offer a new lens to investigate different dimensions of policy quality.
Footnote 2: Note that the LTP is not properly defined for a small number of policies achieving negative return, that appear as points on the right border of the scatter plot.
These results suggest that simply optimizing the mean return of a policy might ignore its distributional aspect. In particular, a practitioner will likely prefer, for a given level of return, a policy featuring a post-update return distribution with lower levels of standard deviation or left-tail probability. Intuitively, such a policy may correspond to a safer behavior, both able to more robustly accommodate additional updates from its training algorithm and possibly to deal with other unexpected sources of perturbation during deployment.
### Analyzing Failures
One characteristic feature of the post-update distributions studied above is the existence of a significant lower tail for many policies visited by the three deep RL algorithms TD3, SAC and PPO. This is visible in their skewness, but especially in their left-tail probability, which demonstrates that many policies produce returns which are unexpectedly poor after up to roughly 10% of updates. We now take a closer look at the specific mechanism by which small changes in an agent's actions results in a wide range of returns in continuous control.
Our experimental procedure is as follows. For each environment, we randomly select 10 policies from the logged checkpoints of 20 independent runs of TD3, conditioned on the fact that the policy has a left-tail probability which is greater than zero. These are policies that we know are prone to poor returns following an update. For each policy, we compute the post-update return distribution by collecting trajectories in the environment after a single update to the original policy. According to this procedure, we identify two trajectories drawn from the neighborhood around the policy: a successful trajectory, characterized by a return within 10% of the mean of the post-update distribution, and a failing trajectory, characterized by a return of less than 50% of the mode of the post-update distribution, as in the left-tail probability.
Figure 3: A visualization of how failures occur in the halfcheetah and walker2d tasks. The left subplots compare the reward-per-timestep obtained by a successful and failing trajectory generated by two policies in the same noisy neighborhood. The right subplots show the simultaneous evolution of returns for 10 such trajectory pairs (that can be thought of as a race to collect the most rewards), with the trajectory pair from the left indicated by a matching star marker. The right subplots indicate that policies from the same neighborhood behave similarly (diagonal segments of the curve) until the failing policy makes a sudden misstep and collects low rewards (horizontal segments).
Our goal is to understand the differences between these successful and failing trajectories in order to explain how long-tail returns occur. To this end, Figure 3 depicts two views of the trajectory data. For each environment, the left subplot considers a single pair of successful/failing trajectories corresponding to one of the chosen policies, and plots the reward per timestep earned in these two trajectories. These results suggest that the failing policies which make up the tail of the post-update distribution are capable of collecting similar rewards to the successful policies, yet are prone to missteps which result in episode termination (as in walker2d) or transition to a low-reward, quasi-absorbing state (as in halfcheetah). Figure 4 shows an example of such a misstep in walker2d.
We present a broader view of these observations through the right subplots, per-environment, in Figure 3. Here, we plot each of the trajectory pairs as a parametric curve in time. For both the successful and failing trajectories, we compute the return up to time \(t\), \(R_{\leq t}=\sum_{i=1}^{t}r(s_{i},a_{i})\). Then, for each value of \(t\), we plot \(R_{\leq t}\) for the successful and failing trajectories as a point on the curve, allowing us to visualize the simultaneous evolution of both trajectories.
We assume that \(R_{\leq t+1}=R_{\leq t}\) when the length of one trajectory exceeds the other, that is, no additional reward is collected after the trajectory terminates. The resulting visualization reveals several notable findings. First, we show that nearly all trajectory pairs begin by following the line \(y=x\), indicating that the respective policies collect rewards at almost exactly the same rate. Next, we observe that many curves rapidly diverge from this line to horizontal, indicating that the failing trajectory suddenly starts collecting little to no reward, while the successful trajectory continues. In walker2d, these divergences reflect sudden terminations of the episode, represented by horizontal lines. In halfcheetah, which does not terminate, we see that instead the failing agent gets stuck in low-reward absorbing states, but is sometimes able to recover and go back to collecting reward at the same rate as the successful trajectory. We include similar visualizations for the hopper and ant environments in Appendix A.8, which support the same conclusions.
Taken together, these results demonstrate that some policies exist on the edge of failure, where a slight update can trigger the policy to take actions which push it out of its stable gait and into catastrophe. Indeed, when we compare the gaits learned by policies of high left-tail probability to those which are more well-behaved under updates, we observe that the behaviors of the former are qualitatively more unstable (Figure 1, with more examples in Appendix A.11).
## 4 Navigating Return Landscapes
In the previous section, we took a fine-grained look at the return landscape, using post-update return distributions to characterize the neighborhood of different policies learned by deep RL algorithms. We now consider this landscape on a more global scale, specifically how the agent's return changes as one interpolates between different policies.
### Connectivity in the Return Landscape
For our analysis, we use 200 policies generated by different runs of TD3. From these we select pairs of policies with different post-update return distributions, as measured by their standard deviation or left-tail probability, but similar mean. Consider two sets of policy parameters \(\mathbf{\theta}_{1}\) and \(\mathbf{\theta}_{2}\), for which the post-update return distribution implied by \(\mathbf{\theta}_{1}\) has higher LTP than that implied by \(\mathbf{\theta}_{2}\). We linearly interpolate between these two to form a family of parameters \(\mathbf{\theta}=\alpha\mathbf{\theta}_{1}+(1-\alpha)\mathbf{\theta}_{2},\alpha\in[0,1]\). For each such \(\mathbf{\theta}\), we then record the return \(R(\mathbf{\theta})\) obtained by a single simulation with the corresponding policy.
In Figure 5, we show the result of this interpolation for six pairs of policies in the hopper and walker2d environments, in two distinct cases. In the first case, the two policies have been produced by the same run of TD3 (i.e., starting from the same initialization and history of batches); in the second case, the two policies have been generated by independent repetitions of the algorithm. The plot shows interesting information about the global structure of the return landscape: the interpolation
Figure 4: The trajectory of a successful (top) and failing (bottom) policy, both coming from the same post-update distribution in walker2d. They exhibit a similar gait until right before the failure.
process traverses different parts of the landscape, highlighting a transition between a noisy part of the landscape to an inherently smoother one. Interestingly, the interpolations between policies from the same run and from different runs exhibit very different qualities. When interpolating between policies of different runs, the process traverses entire regions of the landscape of poor return, until the point in which it gets to the neighborhood of the second policy. By contrast, when interpolating between policies from the same run, the transition from a noisy to a smooth landscape happens without encountering any valley of low return - even when these policies are separated by hundreds of thousands of gradient steps in training. This is particularly surprising given that \(\mathbf{\theta}\) is a high-dimensional vector containing all of the weights of the neural network, and there is no a priori reason to believe that interpolated parameters should result in policies that are at all sensible.
To further quantify the phenomenon, we want to measure the proportion of return collapses encountered when interpolating between policies. We use the following experimental design. We sample for each environment a set of 500 pairs of policies from the same runs and a set of 500 pairs of policies from different runs. Then, we linearly interpolate between policies in the pairs, producing 100 intermediate policies, and randomly perturb them using Gaussian noise with standard deviation \(3\times 10^{-4}\) to obtain an estimate of the mean of their (random) post-update return distribution. Then, for each pair of policies, we measure how frequently the return collapses in between the two extremes, by counting how many times it becomes less than 10% of the minimum return of the two original policies. We then average this _Below-Threshold Proportion_ across pairs, and across environments using rliable [1].
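A sketch of this measurement for a single pair of policies; `evaluate_return` is assumed to roll out the deterministic policy with the given flat parameter vector, and the function name is our own:

```python
import numpy as np

def below_threshold_proportion(theta1, theta2, evaluate_return,
                               n_interp=100, sigma=3e-4, seed=0):
    """Fraction of (perturbed) interpolated policies whose return collapses
    below 10% of the lower of the two endpoint returns."""
    rng = np.random.default_rng(seed)
    r_min = min(evaluate_return(theta1), evaluate_return(theta2))
    collapses = 0
    for alpha in np.linspace(0.0, 1.0, n_interp):
        theta = alpha * theta1 + (1.0 - alpha) * theta2
        theta = theta + sigma * rng.standard_normal(theta.shape)  # small perturbation
        if evaluate_return(theta) < 0.1 * r_min:
            collapses += 1
    return collapses / n_interp
```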
Figure 6 shows that there is on average almost no drop in return when interpolating among policies from the same run. We additionally report similar results on four ALE games in Appendix A.6.
We hypothesize this might be interpreted as each individual run of the algorithm specializing on a different family of behaviors, for which, due to the geometry of the return landscape, interpolation between policy parameters does not have any disrupting effect. This result can be interpreted as being related to linear mode connectivity [17; 18], a phenomenon observed in supervised learning, for which different points in the loss landscape of neural networks can be connected by near-constant-loss paths. In other words, it appears there is typically no barrier of low average return separating policies generated from the same run, even when those policies feature very different levels of stability. The existence of such a phenomenon in the RL setting is far from certain: the optimization objective is non-stationary and the evaluation metric (the return instead of the loss) depends on an environment and multiple forward passes from a neural network.
Figure 5: Return of the policies obtained by linear interpolation of the parameters of policies of approximately the same level of return in the hopper and walker2d environments. The neighborhoods traversed transition from being noisy to being smooth; policies from the same run are connected by paths with no valleys of low performance in the return landscape, even if separated by hundreds of thousands of updates (i.e., at least \(1\times 10^{5}\) steps for all pairs of policies from the same run).
Figure 6: Proportion of return collapses when interpolating between randomly-sampled policies produced by either the same or different runs in Brax. Far fewer return collapses are observed when interpolating between policies produced by the same run. Results are aggregated over all environments with 95% bootstrapped C.I. and 500 pairs of policies.
### Stabilizing Policies by Navigating the Landscape
Overall, Figure 5 demonstrates the existence of paths in the return landscape which are able to increase the level of stability of a given policy, but are not necessarily followed in a spontaneous way by typical policy optimization algorithms. In the absence of a desirable end policy to interpolate towards, we would like to understand if it is possible to find similar stabilizing paths (as measured by the LTP), given a starting policy inhabiting a noisy neighborhood of the return landscape. We conjecture that this is feasible by filtering the policy updates produced by an algorithm: In particular, we propose to reject gradient updates that lead to policies with less favorable post-update return distributions.
In our procedure, which is outlined in Algorithm 1, we use the CVaR as a heuristic to compare the stability of post-update return distributions3, as it is effectively a measure of the left tail mean [38]. Our procedure works as follows: starting with a given policy, we use TD3 to interact with the environment, maintain a replay buffer, and compute updates to the policy and critic parameters. However, before applying a proposed update, we first estimate the post-update return distributions of the _post-update_ policies by sampling TD3 updates from random minibatches of the replay buffer and evaluating the returns of the corresponding policies. If the estimate of the post-update return CVaR is not sufficiently high relative to that of the post-update return distribution of the current policy, the update is _rejected_, so that the networks and the replay buffer are reverted to the state that they were in before the update was computed. In our experiments, we study the effect that such a rejection mechanism has on the evolution of the LTP by comparing the trajectories induced by this procedure without the ability to reject (i.e., regular TD3) and with the ability to reject.
Footnote 3: See Appendix A.9 for further justification.
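A simplified sketch of the rejection step at the heart of Algorithm 1. The real procedure also reverts the critic parameters and replay buffer on rejection; here `post_update_returns` samples the post-update return distribution of a parameter vector, and the CVaR level `alpha` is an assumed choice:

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Mean of the worst alpha-fraction of returns (left-tail CVaR)."""
    cutoff = np.quantile(returns, alpha)
    return returns[returns <= cutoff].mean()

def filtered_update(theta, proposed_theta, post_update_returns, alpha=0.1):
    """Accept the proposed policy update only if it does not worsen the
    CVaR of the post-update return distribution; otherwise reject it."""
    if cvar(post_update_returns(proposed_theta), alpha) >= \
       cvar(post_update_returns(theta), alpha):
        return proposed_theta  # accept the update
    return theta               # reject: revert to the pre-update parameters
```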
In Figure 7, we show the improvement in LTP that this algorithm induces when applied to the same policy, aggregated across Brax tasks, using at least 10 policies per environment, after only forty gradient steps. We additionally present scatter plots demonstrating the effect of applying Algorithm 1 to individual policies in Appendix A.10. Our results demonstrate that this rejection procedure can be an effective tool for reducing the LTP of an existing policy.
## 5 Related Work
**Reliability of deep RL.** The goal to avoid catastrophic drops in performance was at the core of the development of foundational methods in deep RL based on conservative updates [41, 42]. Previous work also studied the development of safer algorithms for learning and exploration, both from the theoretical and the empirical standpoints [25, 30, 32, 37, 48]. Our work focuses on understanding the landscape visited by commonly employed policy optimization algorithms and shows that it is possible to relatively easily move from parts of the landscape that induce dangerous behaviors to safer policy parameter vectors. At a higher level, the sensitivity of deep RL algorithms to stochasticity and hyperparameters, and the extreme variability of results across seeds, has been the object of previous studies [2; 11; 20], which mostly focused on proposing more reliable evaluation metrics. Previous work [9] also explicitly advocated for measuring the stability of deep RL algorithms over different axes and using a diverse set of metrics. Our paper proposes a complementary perspective, based on return landscapes and on a distributional view of them. Our procedure, which leverages the directions proposed by a policy optimization algorithm to improve the LTP of a policy, is related to previous work based on rejection/backtracking strategies [25; 40].

Figure 7: LTP reduction over 40 gradient steps without rejections (TD3) and with rejections (Algorithm 1). Data is aggregated over starting policies, environments, and five independent runs for each starting policy. We see that Algorithm 1 is strictly superior to TD3 with respect to LTP reduction. Results are aggregated over environments with 95% bootstrapped confidence interval.
**Return and loss landscapes.** Return landscapes have been previously investigated at a coarser level under the name of reward surfaces/landscapes. In particular, they have been employed for studying the alignment of the gradient directions suggested by policy optimization algorithms to directions of improvement in the actual environment [23] and investigating performance degradation as a long-term optimization danger in such algorithms [43]. Our study of return landscapes with a distributional view in an otherwise fully deterministic setting sheds new light both on the landscape itself and on how it can be leveraged to characterize individual policies. More generally, the investigation of the return that policies collect in an environment is related to the study of the loss landscape of neural networks in supervised learning, for which different techniques have been proposed [28]. Those techniques, together with RL-specific tools, have been employed to explore the loss landscapes of RL algorithms, by visualizing them [5], probing their interaction with entropy regularization [3] or larger neural networks [35]. Our discovery of how policies from the same run are connected by simple paths in parameter space is related to (linear) mode connectivity, which shows a similar behavior in the landscapes of neural networks trained in supervised learning tasks [12; 13; 14; 17; 18]. Finally, our work is related to _distributional RL_[6], but we specifically focus on the post-update return distribution as opposed to the distribution of returns under a given policy.
## 6 Discussion and Future Work
In this paper, we have investigated return landscapes in continuous control tasks, as traversed by deep RL algorithms. We demonstrated the existence of noisy neighborhoods of the landscape, where a single update to the policy parameters produces a wide breadth of post-update returns. By taking a distributional view of these neighborhoods, we revealed the existence of neighborhoods of similar mean return, yet different statistics, which correspond to qualitatively different agent behaviors. We studied the characteristics of failing policies and trajectories in such neighborhoods and attributed their subpar performance to sudden collapses in trajectory reward, rather than overall degradation in the policy. By focusing on linear paths through the global policy landscape, we showed that the landscape exhibits macro-scale variations which extend beyond specific local neighborhoods, and that policies from the same run can be surprisingly connected by linear paths with no valleys of low return. Finally, we demonstrated a simple procedure which discovers paths towards smoother regions of the landscape, starting from a trained policy.
Our results suggest that some of the previously-observed reliability issues in deep reinforcement learning agents for continuous control may be due to the fundamental structure of the return landscape for neural network policies. In particular, while the return of a policy in a given neighborhood may be adequate, the distributional structure of the neighborhood characterizes additional dimensions of policy quality: How stable is this policy? What kind of behavior has the agent learned? Is it safe to perform further optimization of this policy? These nuances indicate the potential utility of a landscape-inspired approach to the design of reliable deep RL algorithms.
In addition, our study of parameter interpolation on the return landscape reveals new curiosities surrounding the training behavior of deep reinforcement learning agents. It appears that many policies from the same run fall within a single basin of the return landscape; we conjecture that this may correspond to the algorithm "specializing" on one particular behavior. Our demonstration of regions of lower and higher variability in returns along such paths further supports the possibility of robustifying existing policies, yet also raises the question of whether there are significantly different behaviors separated by barriers of low return, and whether our algorithms can find them. As they are beyond the scope of this paper, we reserve such questions for future work.
## Acknowledgements
The authors thank Jesse Farebrother, Georg Ostrovski, David Meger, Rishabh Agarwal, Josh Greaves, Max Schwarzer and Pablo Castro for insightful discussions and useful suggestions on the early draft, the Mila community for creating a stimulating research environment, and the Digital Research Alliance of Canada for computational resources. This work was partially supported by CIFAR, Fonds de recherche du Quebec (FRQNT) and Gruppo Ermenegildo Zegna.
|
2309.06193 | Space-time structured plasma waves | Electrostatic waves play a critical role in nearly every branch of plasma
physics from fusion to advanced accelerators, to astro, solar, and ionospheric
physics. The properties of planar electrostatic waves are fully determined by
the plasma conditions, such as density, temperature, ionization state, or
details of the distribution functions. Here we demonstrate that electrostatic
wavepackets structured with space-time correlations can have properties that
are independent of the plasma conditions. For instance, an appropriately
structured electrostatic wavepacket can travel at any group velocity, even
backward with respect to its phase fronts, while maintaining a localized energy
density. These linear, propagation-invariant wavepackets can be constructed
with or without orbital angular momentum by superposing natural modes of the
plasma and can be ponderomotively excited by space-time structured laser pulses
like the flying focus. | J. P. Palastro, K. G. Miller, R. K. Follett, D. Ramsey, K. Weichman, A. V. Arefiev, D. H. Froula | 2023-09-12T13:03:16Z | http://arxiv.org/abs/2309.06193v2 | # Space-time structured plasma waves
###### Abstract
Electrostatic waves play a critical role in nearly every branch of plasma physics from fusion to advanced accelerators, to astro, solar, and ionospheric physics. The properties of planar electrostatic waves are fully determined by the plasma conditions, such as density, temperature, ionization state, or details of the distribution functions. Here we demonstrate that electrostatic wave packets structured with space-time correlations can have properties that are independent of the plasma conditions. For instance, an appropriately structured electrostatic wave packet can travel at any group velocity, even backward with respect to its phase fronts, while maintaining a localized energy density. These linear, propagation-invariant wave packets can be constructed with or without orbital angular momentum by superposing natural modes of the plasma and can be ponderomotively excited by space-time structured laser pulses like the flying focus.
A defining characteristic of plasma is its ability to exhibit collective motion. This motion often manifests as coordinated oscillations of the constituent particles, mediated by their mutual electrostatic attraction or repulsion. The oscillations, or electrostatic waves, play a critical role in nearly every branch of plasma physics. In fusion, electrostatic waves can be both a feature, providing a means to measure plasma conditions [1; 2; 3; 4; 5; 6], and an impediment, growing unstably to the point of disrupting plasma confinement and heating [7; 8; 9; 10; 11; 12; 13; 14]. Advanced accelerators harness electrostatic waves to accelerate electrons to relativistic energies over short distances, with the ultimate goal of miniaturizing radiation sources and particle colliders [15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. As a final, naturally occurring example, the mode conversion of electrostatic waves driven by fast electrons can explain the emission of type III radio bursts from the solar wind [25; 26; 27].
In each of these systems, the evolution of electrostatic waves impacts performance, dynamics, or observations. The evolution of planar electrostatic waves, i.e., waves having a single frequency \(\omega\) and wavevector \(\mathbf{k}\), is fully determined by the plasma conditions through the dispersion relation \(\varepsilon(\omega,\mathbf{k})=0\). More specifically, the phase velocity \(\mathbf{v}_{p}=[\omega(\mathbf{k})/k]\mathbf{e_{k}}\) can depend on the density, temperature, ionization states, or details of the distribution functions. Physically occurring electrostatic waves exist as superpositions of plane waves with amplitudes and phases imposed by a driver, such as an intense laser pulse or charged particle beam. A typical driver excites the wave packets without introducing correlations in \((\omega,\mathbf{k})\) space. As a result, the wave packets retain properties similar to those of a plane wave. However, electrostatic wave packets can also be driven so that they feature correlations in \((\omega,\mathbf{k})\) space. With appropriate structuring, these correlations can produce emergent properties that are independent of the plasma conditions.
The structuring of _electromagnetic_ waves with space-time correlations has provided new opportunities for laser-based applications and basic science [28; 29; 30; 31; 32; 33; 34; 35]. This has motivated the development of optical techniques for creating structured light, such as propagation-invariant [36; 37; 38; 39; 40; 41; 42], flying focus [43; 44; 45; 46; 47], and arbitrarily-structured-laser (ASTRL) pulses [48]. While these techniques cannot be directly applied to electrostatic waves, much of the mathematical formalism carries over: at a fundamental level, all waves evolve according to a wave equation. Thus, by using an appropriate driver, one can construct electrostatic analogs to propagation-invariant, flying focus, or ASTRL pulses.
This manuscript introduces the concept of space-time structured plasma waves. A space-time structured plasma wave (STP) can be constructed, with or without orbital angular momentum, by superposing natural electrostatic modes of a plasma with a particular correlation in \((\omega,\mathbf{k})\) space.
As an example, we focus on the special case of a linear, propagation-invariant electrostatic wave packet with a group velocity that is independent of the plasma conditions. The excitation of such an STP can be achieved experimentally by using the ponderomotive force of a structured laser pulse like a flying focus. STPs offer a new class of collective excitations that may provide additional control over dynamics such as wave-particle interactions, particularly in situations where the driver can be structured.
Figure 1 contrasts a conventional, localized plasma wave with an STP. The conventional plasma wave propagates with a group velocity determined by the plasma conditions. As the wave propagates, diffraction causes a rapid drop in the peak energy density. The STP travels at a velocity that is independent of the plasma conditions and maintains its profile, and peak energy density, over an extended distance. In this example, the peak energy density travels in the opposite direction as the phase fronts and the nominal group velocity.
The formulation of STPs will be presented for pure electrostatic waves in the absence of external fields. Pure electrostatic plane waves have a wavevector that is parallel to their electric field \(\mathbf{E}\) and have no magnetic field, i.e., \(\mathbf{k}\times\mathbf{E}=0\). These waves are completely described by their electrostatic potential. The electrostatic potential \(\phi\) of a plasma wave packet can be expressed as a superposition of plane waves constrained by the dispersion relation:
\[\phi(\mathbf{x},t)=\int\phi_{0}(\omega,\mathbf{k})e^{i(\mathbf{k}\cdot \mathbf{x}-\omega t)}\delta[\varepsilon(\omega,\mathbf{k})]d\mathbf{k}d\omega, \tag{1}\]
where \(\delta\) is the Dirac delta function and the conditions \(\phi_{0}(\omega,\mathbf{k})=\phi_{0}^{*}(-\omega,-\mathbf{k})\) and \(\varepsilon(\omega,\mathbf{k})=\varepsilon^{*}(-\omega,-\mathbf{k})\) ensure that \(\phi\) is real. The constraint imposed by the dispersion relation collapses one of the integrals in Eq. (1) and is typically used to write the frequency in terms of the wavevector, i.e., \(\omega=\omega(\mathbf{k})\) with \(\varepsilon[\omega(\mathbf{k}),\mathbf{k}]=0\) implied.
Aside from the dispersion relation, an additional constraint \(C(\omega,\mathbf{k})\) can be applied by writing
\[\phi_{0}(\omega,\mathbf{k})=\bar{\phi}_{0}(\omega,\mathbf{k})\delta[C(\omega, \mathbf{k})]. \tag{2}\]
The most general form of an STP uses \(C(\omega,\mathbf{k})\) to introduce correlations in \((\omega,\mathbf{k})\) space. Motivated by propagation invariant and flying focus laser pulses [33; 36; 37], the constraint is chosen here to allow for an arbitrary, specified group velocity \(v_{g}\):
\[C(\omega,\mathbf{k})=\frac{v_{g}}{(\omega_{0}-v_{g}k_{0})}[(\omega-v_{g}k_{z} )^{2}-(\omega_{0}-v_{g}k_{0})^{2}]. \tag{3}\]
Upon setting Eq. (3) equal to zero in accordance with the delta function, one can verify that
\[\frac{\partial\omega}{\partial k_{z}}=v_{g}. \tag{4}\]
Figure 1: Evolution of the cycle-averaged energy density \(\varepsilon_{0}\langle k_{0}^{2}\phi^{2}\rangle\) for a conventional and space-time structured plasma wave (STP). The conventional plasma wave (left) diffracts as it propagates from left to right at a nominal group velocity \(v_{n}\) determined by the plasma conditions. The peak energy density of the STP (right) travels in the opposite direction as the nominal group velocity and phase velocity while maintaining a constant spatiotemporal profile. In both cases, \(k_{0}w_{0}=20\). The STP has \(v_{g}=-v_{n}\) and \(\ell=1\). For the conventional plasma wave, \(Z_{0}=4.5w_{0}\) and \(\ell=0\). Space is normalized by \(w_{0}\) and time by \(\tau=\omega_{0}w_{0}^{2}/2u^{2}\) [see Table 1 and Eqs. (29) and (30)]. The contours have the same normalization, while each projection is normalized to its maximum.
Substituting Eq. (2) into Eq. (1) and applying the constraint provides the electrostatic potential of the STP:
\[\phi(\mathbf{x}_{\perp},\eta,\xi)=\frac{1}{2}e^{ik_{0}\eta}\Phi(\mathbf{x}_{\perp },\xi)+\text{c.c.}, \tag{5}\]
where \(\eta=z-v_{0}t\), \(v_{0}=\omega_{0}/k_{0}\), \(\xi=z-v_{g}t\),
\[\Phi(\mathbf{x}_{\perp},\xi)=\int\bar{\Phi}(\Omega,\mathbf{k}_{\perp})e^{i \mathbf{k}_{\perp}\cdot\mathbf{x}_{\perp}+i\Omega\xi/v_{g}}\delta(\varepsilon)d \mathbf{k}_{\perp}d\Omega, \tag{6}\]
\(\Omega=\omega-\omega_{0}\), and \(\bar{\Phi}(\Omega,\mathbf{k}_{\perp})=\bar{\phi}_{0}(\omega_{0}+\Omega, \mathbf{k}_{\perp},k_{0}+\Omega/v_{g})\). Equation (5) demonstrates that an electrostatic potential constructed with the correlation \(C(\omega,\mathbf{k})\) has phase fronts that travel at the velocity \(v_{0}\) and an envelope \(\Phi\) that travels at the group velocity \(v_{g}\).
Thus far, the formulation has been relatively abstract. To make the concept more tangible, examples will be presented for a non-relativistic, non-flowing plasma composed of electrons and a single ion species. The dispersion relation for such a plasma can be derived using the Vlasov-Poisson system of equations and is given by
\[\varepsilon(\omega,\mathbf{k})=1+\chi_{e}(\omega,\mathbf{k})+\chi_{i}(\omega, \mathbf{k})=0, \tag{7}\]
where
\[\chi_{s}(\omega,\mathbf{k})=\frac{\omega_{ps}^{2}}{k^{2}}\int\frac{\mathbf{k }\cdot\nabla_{\mathbf{v}}f_{s}}{\omega-\mathbf{k}\cdot\mathbf{v}}d\mathbf{v} \tag{8}\]
is the susceptibility for species \(s\), \(\omega_{ps}=(q_{s}^{2}n_{s}/\varepsilon_{0}m_{s})^{1/2}\) is the plasma frequency, \(n_{s}\) is the density, and \(f_{s}=f_{s}(\mathbf{v})\) is the velocity distribution function. Equation (7) predicts the existence of two elementary plasma waves: a high-frequency electron plasma wave and a low-frequency ion-acoustic wave.
The dispersion relation for electron plasma waves can be found in the limit that the phase velocity is much greater than the electron thermal velocity, i.e., \(v_{p}\gg v_{Te}\), where \(v_{Ts}=[\int v_{k}^{2}f_{s}d\mathbf{v}]^{1/2}\) and \(v_{k}=\mathbf{e}_{k}\cdot\mathbf{v}\). In this limit, Eq. (7) reduces to
\[\varepsilon(\omega,\mathbf{k})\approx 1-\frac{\omega_{pe}^{2}}{\omega^{2}}-3k^ {2}\lambda_{De}^{2} \tag{9}\]
where \(\lambda_{De}=v_{Te}/\omega_{pe}\) is the electron Debye length. The dispersion relation for ion-acoustic waves can be found in the opposite limit where the phase velocity is much smaller than the electron thermal velocity, i.e., \(v_{p}\ll v_{Te}\). Here, Eq. (7) reduces to
\[\varepsilon(\omega,\mathbf{k})\approx 1+\frac{1}{k^{2}\lambda_{De}^{2}}-\frac{ \omega_{pi}^{2}}{\omega^{2}}. \tag{10}\]
In both cases, Landau damping has been neglected. This is a good approximation when \(k\lambda_{De}\lesssim 0.2\) or \(v_{p}/v_{Ti}\gg 1\) for electron plasma and ion-acoustic waves, respectively.
Without the constraint \(C(\omega,\mathbf{k})\), the plasma waves travel at a group velocity determined by the plasma conditions. Solving for the frequency in Eqs. (9) and (10) yields \(\omega(k)=(\varpi^{2}+u^{2}k^{2})^{1/2}\) and the group velocity
\[\frac{\partial\omega}{\partial k_{z}}=\frac{u^{2}k_{z}}{\omega}, \tag{11}\]
where the values of \(\varpi\) and \(u\) for each wave are defined in Table 1. With the constraint \(C(\omega,\mathbf{k})\) [Eq. (3)], the relation \(\omega(k)=(\varpi^{2}+u^{2}k^{2})^{1/2}\) still holds, but now the transverse wavenumber is a function of the longitudinal wavenumber (or frequency), such that
\[\frac{\partial\omega}{\partial k_{z}}=\frac{u^{2}}{\omega}\left(k_{z}+\frac{1 }{2}\frac{\partial k_{\perp}^{2}}{\partial k_{z}}\right)=v_{g}, \tag{12}\]
where \(k_{z}=k_{0}+\Omega/v_{g}\) has been used. Thus the velocity \(v_{g}\) is completely independent of the plasma conditions.
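As a quick numerical illustration (a sketch in normalized units, not a production plasma code), one can verify that modes satisfying both the dispersion relation \(\omega^{2}=\varpi^{2}+u^{2}k^{2}\) and the correlation of Eq. (3) have \(\partial\omega/\partial k_{z}=v_{g}\) regardless of \(\varpi\) and \(u\); the numerical values below are assumptions chosen only for illustration:

```python
import numpy as np

# Normalized units; varpi and u as in Table 1 (e.g., electron plasma wave).
varpi, u, k0 = 1.0, 0.1, 2.0
omega0 = np.sqrt(varpi**2 + u**2 * k0**2)
v0, vn = omega0 / k0, u**2 * k0 / omega0
vg = -vn                                   # arbitrary choice, e.g. backward propagation

Omega = np.linspace(0.0, 5e-4, 200)        # frequency shift about omega_0
omega = omega0 + Omega
kz = k0 + Omega / vg                       # the space-time correlation of Eq. (3)
kperp2 = (omega**2 - varpi**2) / u**2 - kz**2   # k_perp fixed by epsilon(omega, k) = 0
assert np.all(kperp2 >= -1e-12)            # the constrained modes exist

# Along the constrained curve the group velocity equals v_g, independent of varpi, u:
print(np.allclose(np.gradient(omega, kz), vg))  # True
```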
Using the explicit expressions for \(\varepsilon\) from Eqs. (9) and (10), the delta function enforcing the dispersion relation can be written in the general form
\[\delta(\varepsilon)=\left|\frac{\partial\Omega}{\partial\varepsilon}\right| _{\Omega=\Omega_{n}}\delta[\Omega-\Omega_{n}(k_{\perp})], \tag{13}\]
where
\[\begin{split}\Omega_{n}(k_{\perp})&=-k_{0}v_{g}v_{0}\left(\frac{v_{g}-v_{n}}{v_{g}^{2}-u^{2}}\right)\\ &\quad+\left[(k_{0}v_{g}v_{0})^{2}\left(\frac{v_{g}-v_{n}}{v_{g}^{2}-u^{2}}\right)^{2}+\left(\frac{v_{g}^{2}u^{2}k_{\perp}^{2}}{v_{g}^{2}-u^{2}}\right)\right]^{1/2}\end{split} \tag{14}\]
and \(v_{n}=u^{2}/v_{0}\). In arriving at Eq. (14), the choice was made to set \(\omega_{0}=(\varpi^{2}+u^{2}k_{0}^{2})^{1/2}\). With this choice, \(v_{0}=\omega_{0}/k_{0}\) equals the phase velocity of the plasma wave in the plane-wave limit \(v_{g}\to 0\)
and \(v_{n}\) equals the nominal group velocity in the absence of space-time structuring. Applying Eq. (13) in Eq. (6) collapses the integral over \(\Omega\), leaving only the integral over \(\mathbf{k}_{\perp}\):
\[\Phi(\mathbf{x}_{\perp},\xi)=\int\tilde{\Phi}(\mathbf{k}_{\perp})e^{i\mathbf{k }_{\perp}\cdot\mathbf{x}_{\perp}+i\Omega_{n}(k_{\perp})\xi/v_{g}}d\mathbf{k}_{ \perp}, \tag{15}\]
where \(\tilde{\Phi}(\mathbf{k}_{\perp})=|\partial_{\varepsilon}\Omega|_{\Omega= \Omega_{n}}\tilde{\Phi}[\Omega_{n}(k_{\perp}),\mathbf{k}_{\perp}]\). The function \(\tilde{\Phi}(\mathbf{k}_{\perp})\) determines the spatiotemporal profile of the arbitrary group velocity plasma wave.
Analytic expressions for the spatiotemporal profile can be found in the "paraxial" approximation, i.e., when the condition
\[k_{\perp}^{2}\ll\frac{v_{0}(v_{g}-v_{n})^{2}}{v_{n}(v_{g}^{2}-u^{2})}k_{0}^{2} \tag{16}\]
is satisfied. Upon using this condition, \(\Omega_{n}\) simplifies to
\[\Omega_{n}(k_{\perp})\approx\left(\frac{v_{g}v_{n}}{v_{g}-v_{n}}\right)\frac{ k_{\perp}^{2}}{2k_{0}}. \tag{17}\]
With the quadratic dependence of \(\Omega_{n}\) on \(k_{\perp}\), a natural choice for \(\tilde{\Phi}(\mathbf{k}_{\perp})\) is a superposition of Laguerre-Gaussian modes, i.e.,
\[\tilde{\Phi}(\mathbf{k}_{\perp})=\sum_{p,\ell}\tilde{\Phi}_{p\ell}\kappa^{ \ell}L_{p}^{\ell}(\kappa)\mathrm{exp}(-\tfrac{1}{2}\kappa^{2})e^{i\ell\theta_ {k}}, \tag{18}\]
where \(\kappa=k_{\perp}w_{0}/\sqrt{2}\), \(w_{0}\) characterizes the transverse width, \(L_{p}^{\ell}\) is a generalized Laguerre polynomial, and \(\theta_{k}\) is the azimuth in transverse wavenumber space. The spatiotemporal profile of the STP is then given by
\[\begin{split}\Phi(\mathbf{x}_{\perp},\xi)=\sum_{p,\ell}\Phi_{p\ell}\frac{w_{0}}{w}\Big{(}\frac{\sqrt{2}r}{w}\Big{)}^{\ell}L_{p}^{\ell}\Big{(}\frac{2r^{2}}{w^{2}}\Big{)}e^{i\ell\theta}\\ \times\exp\left[-\Big{(}1-i\frac{\xi}{\xi_{0}}\Big{)}\frac{r^{2}}{w^{2}}-i(2p+\ell+1)\arctan\frac{\xi}{\xi_{0}}\right],\end{split} \tag{19}\]
where \(w(\xi)=w_{0}[1+(\xi/\xi_{0})^{2}]^{1/2}\),
\[\xi_{0}=\frac{(v_{g}-v_{n})k_{0}w_{0}^{2}}{2v_{n}}, \tag{20}\]
\(r=(x^{2}+y^{2})^{1/2}\), \(\theta=\arctan(y/x)\) is the azimuth in configuration space, and constant factors have been absorbed into the amplitudes \(\Phi_{p\ell}\). The profile of the STP advects at the group velocity \(v_{g}\), has a characteristic duration \(\xi_{0}/v_{g}\), and can have any orbital angular momentum value \(\ell\).
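A minimal numerical sketch of the single-mode envelope in Eq. (19), assuming SciPy for the generalized Laguerre polynomial; multiplying by \(e^{ik_{0}\eta}\) and taking the real part recovers the potential of Eq. (5):

```python
import numpy as np
from scipy.special import genlaguerre

def stp_envelope(r, theta, xi, w0, xi0, p=0, ell=1, amp=1.0):
    """Single (p, ell) Laguerre-Gaussian STP envelope, Eq. (19)."""
    w = w0 * np.sqrt(1.0 + (xi / xi0) ** 2)              # transverse width w(xi)
    gouy = (2 * p + ell + 1) * np.arctan(xi / xi0)       # Gouy-like phase
    radial = (np.sqrt(2.0) * r / w) ** ell * genlaguerre(p, ell)(2.0 * r**2 / w**2)
    return (amp * (w0 / w) * radial * np.exp(1j * ell * theta)
            * np.exp(-(1.0 - 1j * xi / xi0) * r**2 / w**2 - 1j * gouy))

# Example: the ell = 1 mode of Fig. 1 on a transverse grid at xi = 0 (units of w0).
x = y = np.linspace(-3.0, 3.0, 201)
X, Y = np.meshgrid(x, y)
phi = stp_envelope(np.hypot(X, Y), np.arctan2(Y, X), xi=0.0, w0=1.0, xi0=4.5)
```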
The analysis so far has demonstrated that an arbitrary group velocity STP can be constructed theoretically, but has not provided a prescription for how to do so in practice. Plasma waves can either exist as thermal fluctuations or be driven by external forces. Thermal fluctuations have no
correlations in \((\omega,{\bf k})\) space, and other than having to satisfy \(\varepsilon(\omega,{\bf k})=0\), \(k_{\perp}\) and \(k_{z}\) are completely independent, i.e., \(\partial k_{\perp}/\partial k_{z}=0\). As a result, an STP must be driven by external forces, such as those exerted by particle beams or electromagnetic waves. In the presence of an external force \({\bf F}({\bf x},t)\), the potential of a generic electrostatic wave is given by
\[\phi({\bf x},t)=\int\frac{i}{ek}\frac{\chi_{e}(\omega,{\bf k})}{\varepsilon( \omega,{\bf k})}[{\bf e}_{k}\cdot\hat{{\bf F}}(\omega,{\bf k})]e^{i({\bf k} \cdot{\bf x}-\omega t)}d{\bf k}d\omega, \tag{21}\]
where \(e\) is the elementary charge. Resonant excitation of an STP requires that the force \({\bf F}({\bf x},t)\) be a function of space and time in the combinations \(\eta=z-v_{0}t\) and \(\xi=z-v_{g}t\).
Electromagnetic waves provide a flexible option for driving STPs. Laser pulses, in particular, can exhibit correlations between two or more degrees of freedom, including polarization, orbital angular momentum, and spatio-spectral content, and can interact in geometries ranging from co- to counter-propagating. When the frequencies of the electromagnetic waves are much greater than \(\omega_{0}\), the disparity of time scales allows for a cycle-averaging over their periods. The end result is a "ponderomotive guiding center" equation of motion with the effective force
\[{\bf F}({\bf x},t)=-\frac{1}{2}m_{e}c^{2}\nabla\langle{\bf a}\cdot{\bf a}\rangle \tag{22}\]
where \({\bf a}({\bf x},t)=e{\bf A}({\bf x},t)/m_{e}c\) is the total normalized vector potential of the electromagnetic waves, satisfying \(|{\bf a}|\ll 1\), and \(\langle\rangle\) represents a cycle-average.
An STP can be resonantly excited using a superposition of two flying focus pulses. Flying focus pulses feature an intensity peak that can travel at any velocity \(v_{f}\), while maintaining a near-constant spatiotemporal profile. The interference of two flying focus pulses with \(v_{f}=v_{g}\) and distinct frequencies and wavenumbers satisfying \(\omega_{1}-\omega_{2}=\omega_{0}\) and \({\bf e}_{z}\cdot({\bf k}_{1}-{\bf k}_{2})=k_{0}\) produces the ponderomotive force necessary to resonantly drive an STP. Specifically, the superposition
\[{\bf a}({\bf x},t)=\tfrac{1}{2}\sum_{j\in(1,2)}{\mathbf{a}}_{j}({\bf x}_{\perp}, \xi)e^{i(k_{j}z-\omega_{j}t)}+{\rm c.c.}, \tag{23}\]
where \({\mathbf{a}}_{j}({\bf x}_{\perp},\xi)\) is the envelope of each pulse, results in a ponderomotive force term
\[{\bf F}_{d}({\bf x}_{\perp},\eta,\xi)=-\frac{i}{8}m_{e}c^{2}k_{0}({\mathbf{a}}_{1} \cdot{\mathbf{a}}_{2}^{*})e^{ik_{0}\eta}{\bf e}_{z}+{\rm c.c.}. \tag{24}\]
In writing Eq. (23), it has been assumed that the durations of the flying focus pulses are much longer than their periods \(2\pi/\omega_{j}\).
Without further specification of the \({\mathbf{a}}_{j}\), the electrostatic potential of the driven STP is given by \(\phi({\bf x}_{\perp},\eta,\xi)=\frac{1}{2}e^{ik_{0}\eta}\Phi({\bf x}_{\perp}, \xi)+{\rm c.c.}\), with
\[\Phi({\bf x}_{\perp},\xi)=\int S(\Omega,{\bf k}_{\perp})e^{i{\bf k}_{\perp} \cdot{\bf x}_{\perp}+i\Omega\xi/v_{g}}d{\bf k}_{\perp}d\Omega \tag{25}\]
\[\begin{split} S(\Omega,\mathbf{k}_{\perp})=\frac{m_{e}c^{2}}{32\pi^{3} e}\frac{\mathrm{s}(v_{0}-v_{g})}{|v_{g}|}\frac{k_{0}\chi_{e}(\omega,\mathbf{k})}{k \varepsilon(\omega,\mathbf{k})}\\ \int(\mathbf{a}_{1}\cdot\mathbf{a}_{2}^{*})e^{-i\mathbf{k}_{\perp}\cdot \mathbf{x}_{\perp}-i\Omega\xi/v_{g}}d\mathbf{x}_{\perp}d\xi,\end{split} \tag{26}\]
where s is the sign function, and \(\chi_{e}\), \(\varepsilon\), and \(k\), are evaluated at \(\omega=\Omega+\omega_{0}\) and \(k_{z}=k_{0}+\Omega/v_{g}\). Thus, the ponderomotive force of the two flying focus pulses drives an electrostatic potential with phase fronts that travel at \(v_{0}\) and an envelope \(\Phi(\mathbf{x}_{\perp},\xi)\) that travels at \(v_{g}\). Note that while the frequency and wavenumber matching conditions, i.e., \(\omega_{1}-\omega_{2}=\omega_{0}\) and \(\mathbf{e}_{z}\cdot(\mathbf{k}_{1}-\mathbf{k}_{2})=k_{0}\), are identical to those required for stimulated Raman or Brillouin scattering (electron and ion-acoustic waves, respectively), excitation of an STP _does not_ require instability.
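As a rough numerical illustration of these matching conditions — a sketch with assumed values, not parameters from any particular experiment — one can compute the second pulse wavelength needed to beat at \(\omega_{0}\approx\omega_{pe}\):

```python
import numpy as np

# SI constants and an assumed electron density of 1e19 cm^-3.
e, me, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8
n_e = 1e19 * 1e6                                 # m^-3 (assumed)
omega_pe = np.sqrt(e**2 * n_e / (eps0 * me))     # plasma frequency (definition below Eq. (8))

lam1 = 1.054e-6                                  # first pulse wavelength (assumed)
omega1 = 2 * np.pi * c / lam1
omega2 = omega1 - omega_pe                       # omega_1 - omega_2 = omega_0 ~ omega_pe
lam2 = 2 * np.pi * c / omega2
print(f"omega_pe/omega_1 = {omega_pe/omega1:.3f}, lambda_2 = {lam2*1e6:.3f} um")
```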
Equations (24) and (26) provide an exact, linear solution for a driven STP in the spectral domain. While these solutions demonstrate the salient physics, they are "monochromatic," that is, they oscillate in \(\eta\) with a single period \(2\pi/k_{0}\). More generally, the potential will be a superposition of these solutions, such that
\[\phi(\mathbf{x}_{\perp},\eta,\xi)=\tfrac{1}{2}e^{ik_{0}\eta}\int\dot{\Phi}( \mathbf{x}_{\perp},k^{\prime},\xi)e^{ik^{\prime}\eta}dk^{\prime}+\mathrm{c.c.}, \tag{27}\]
where \(k^{\prime}\) represents a wavenumber shift about the central wavenumber \(k_{0}\) and the envelope of the potential \(\Phi(\mathbf{x}_{\perp},\eta,\xi)=\int\dot{\Phi}(\mathbf{x}_{\perp},k^{\prime },\xi)e^{ik^{\prime}\eta}dk^{\prime}\) now depends on \(\eta\).
Direct evaluation of Eq. (21) can be challenging. As an alternative, when \(\omega_{0}\) is close to the natural mode frequency of the plasma wave, Eq. (21) can be recast as the configuration-space wave equation
\[\left(\partial_{t}^{2}+\varpi^{2}-u^{2}\nabla^{2}\right)\phi(\mathbf{x},t)= \pm\frac{1}{8}\omega_{0}^{2}(\mathbf{a}_{1}\cdot\mathbf{a}_{2}^{*})e^{ik_{0}\eta}+ \mathrm{c.c.}, \tag{28}\]
where \(\phi\) has been normalized by \(m_{e}c^{2}/e\), and the top and bottom signs are taken for electron plasma and ion-acoustic waves, respectively. If the \(\mathbf{a}_{j}\) are independent of \(\eta\) (or approximately so), Eq. (28) reduces to
\[\left[2i\kappa\frac{\partial}{\partial\xi}+\frac{u^{2}-v_{g}^{2}}{u^{2}}\frac {\partial^{2}}{\partial\xi^{2}}+\nabla_{\perp}^{2}\right]\Phi_{s}(\mathbf{x}_ {\perp},\xi)=\mp\frac{\omega_{0}^{2}(\mathbf{a}_{1}\cdot\mathbf{a}_{2}^{*})}{8u^{2}}, \tag{29}\]
where \(\kappa=k_{0}(v_{n}-v_{g})/v_{n}\) and the subscript \(s\) refers to the STP. The homogeneous dispersion relation for Eq. (28) is given by Eq. (14) and the homogeneous, paraxial solutions by Eq. (19). The evolution of \(\Phi_{s}(\mathbf{x}_{\perp},\xi)\) contrasts that of a conventional plasma wave for which Eq. (28) is often simplified as
\[\left[2i\frac{\omega_{0}}{u^{2}}\frac{\partial}{\partial t}+\frac{\partial^{2}}{\partial\zeta^{2}}+\nabla_{\perp}^{2}\right]\Phi_{c}(\mathbf{x}_{\perp},\zeta,t)=\mp\frac{\omega_{0}^{2}(\mathbf{a}_{1}\cdot\mathbf{a}_{2}^{*})}{8u^{2}}, \tag{30}\]
where \(\zeta=z-v_{n}t\), the subscript \(c\) refers to a conventional plasma wave, and \(|\partial_{t}\Phi_{c}|\ll|\omega_{0}\Phi_{c}|\) has been assumed. In Fig. 1 the homogeneous solutions to Eqs. (29) and (30) are compared for the initial conditions \(\Phi_{s}(\mathbf{x}_{\perp},0)=\Phi_{0}(\sqrt{2}r/w_{0})\exp{(-r^{2}/w_{0}^{2} )}e^{i\theta}\) and \(\Phi_{c}(\mathbf{x}_{\perp},\zeta,0)=\Phi_{0}\exp{(-r^{2}/w_{0}^{2}-\zeta^{2}/ Z_{0}^{2})}\), respectively. The \(\ell=1\) mode was chosen for the STP to illustrate its ability to carry orbital angular momentum.
As a final note, the delta function that enforces the dispersion relation in Eq. (6) can also be written in terms of the perpendicular wavenumber \(k_{\perp}\):
\[\delta(\varepsilon)=\left|\frac{\partial k_{\perp}}{\partial\varepsilon} \right|_{k_{\perp}=k_{\perp,n}}\delta[k_{\perp}-k_{\perp,n}(\Omega)], \tag{31}\]
where
\[k_{\perp,n}(\Omega)=\frac{1}{u}\left[\Omega^{2}\left(1-\frac{u^{2}}{v_{g}^{2} }\right)+2\omega_{0}\Omega\left(1-\frac{u^{2}}{v_{0}v_{g}}\right)\right]^{1/2}. \tag{32}\]
This allows one to write \(\tilde{\Phi}\) as a function of \(\Omega\) instead of \(\mathbf{k}_{\perp}\) when evaluating \(\Phi(\mathbf{x}_{\perp},\xi)\) in Eq. (15). With this convention, the paraxial approximation is given by \(k_{\perp,n}(\Omega)\approx\frac{1}{u}[2\omega_{0}\Omega(1-\frac{u^{2}}{v_{0}v _{g}})]^{1/2}\).
Space-time structured plasma waves (STPs) exhibit properties that are independent of the plasma in which they exist. Unlike conventional plasma waves, which are devoid of correlations in \((\omega,\mathbf{k})\) space and are therefore constrained by the plasma conditions, STPs are constructed with correlations that provide control over their evolution. An example of arbitrary-group-velocity STPs was presented, which was motivated by the subfield of structured light dedicated to controlling the trajectory of peak laser intensity, i.e., spatiotemporal pulse shaping. While much of the analysis from spatiotemporal pulse shaping carries over [38; 49; 50], unstructured plasma waves are distinct in that their nominal group velocity can be significantly different from their phase velocity. STPs can be realized experimentally, with or without orbital angular momentum, by using the ponderomotive force exerted by two space-time structured laser pulses. More-advanced correlations may allow for STPs with more-exotic structures, such as spatiotemporal optical vortices [51; 52]. Further work will generalize STPs to magnetized plasma waves, consider STPs driven by charged particle beams, and explore whether STPs can provide control over wave-particle interactions, including linear and nonlinear Landau damping, trapped particle instabilities, or kinetic inflation [53; 54; 55; 56; 57; 58; 59; 60; 61].
###### Acknowledgements.
The authors would like to thank A. Raymond, K.L. Nguyen, and T.T. Simpson for discussions. |
2309.10929 | Specializing Small Language Models towards Complex Style Transfer via
Latent Attribute Pre-Training | In this work, we introduce the concept of complex text style transfer tasks,
and constructed complex text datasets based on two widely applicable scenarios.
Our dataset is the first large-scale data set of its kind, with 700 rephrased
sentences and 1,000 sentences from the game Genshin Impact. While large
language models (LLM) have shown promise in complex text style transfer, they
have drawbacks such as data privacy concerns, network instability, and high
deployment costs. To address these issues, we explore the effectiveness of
small models (less than T5-3B) with implicit style pre-training through
contrastive learning. We also propose a method for automated evaluation of text
generation quality based on alignment with human evaluations using ChatGPT.
Finally, we compare our approach with existing methods and show that our model
achieves state-of-art performances of few-shot text style transfer models. | Ruiqi Xu, Yongfeng Huang, Xin Chen, Lin Zhang | 2023-09-19T21:01:40Z | http://arxiv.org/abs/2309.10929v1 | # Specializing Small Language Models towards Complex Style Transfer via Latent Attribute Pre-Training
###### Abstract
In this work, we introduce the concept of complex text style transfer tasks and construct complex text datasets based on two widely applicable scenarios. Our dataset is the first large-scale dataset of its kind, with 700 rephrased sentences and 1,000 sentences from the game Genshin Impact. While large language models (LLMs) have shown promise in complex text style transfer, they have drawbacks such as data privacy concerns, network instability, and high deployment costs. To address these issues, we explore the effectiveness of small models (less than T5-3B) with implicit style pre-training through contrastive learning. We also propose a method for automated evaluation of text generation quality based on alignment with human evaluations using ChatGPT. Finally, we compare our approach with existing methods and show that our model achieves state-of-the-art performance among few-shot text style transfer models.
+
Footnote †: Corresponding Author. Email: [email protected].
## 1 Introduction
Text style transfer is a task in natural language generation that involves modifying the style of a given text while preserving its content. It has a wide range of applications, including conversational assistants with customized personas [8], writing assistants [21], automatic text simplification [6], text debiasing [14], and censoring offensive language [12]. However, traditional approaches to text style transfer often rely on parallel corpora, which may be unavailable or require significant manual effort to collect and annotate [5]. Recent work has shown promising results with unsupervised methods, which offer an alternative solution by leveraging large amounts of unpaired text data without the need for explicit parallel annotation [7, 9]. However, unsupervised methods often suffer from a lack of explicit control over the generated output and may produce text that does not adhere to the desired style, making them less suitable for specific style transfer tasks. Previous research has also concentrated on transferring text across simple styles like sentiment and politeness, while there have been few studies on more complex text style transfers like personality, creativity, and conciseness.
In this work, we first define complex text styles as styles that are hardly distinguishable from each other except by professionals working in the relevant fields. For example, the lines from two similar characters in a video game are complex text styles, as only the designers of the characters may discern the subtle differences between the personalities of the two figures. The high standard required for labeling texts of complex styles makes it infeasible to generate high-quality datasets of complex text styles via crowdsourcing, even in non-parallel settings. To tackle this problem and facilitate the study of text style transfer models, we picked two complex styles of interest, authorship and creativity, and constructed two large-scale datasets for benchmarking the complex style transfer power of language models.
While large language models (LLMs) have shown promise in complex text style transfer, they have drawbacks such as data privacy concerns, network instability, and high deployment costs. To address these issues, we explore the effectiveness of small models (less than T5-3B) with implicit style pre-training through contrastive learning. By introducing the concept of specialization, we develop a high-efficiency text style generator that can be deployed offline at low cost.
Automatic evaluation of generation results is a challenging task in complex text style transfer because it requires an objective and reliable way to measure the quality of generated text. To address this challenge, we proposed a novel evaluation method based on ChatGPT, which involves generating a response from ChatGPT given a prompt that asks ChatGPT to classify the generated text, and then comparing the response with human evaluations of the same texts.
To validate the effectiveness of our evaluation method, we conducted experiments on both simple and complex datasets, and found that the alignment between ChatGPT and human evaluation reached 98\(\%\) and 93\(\%\) respectively. This result indicates that our proposed method is reliable and can provide a useful tool for automated evaluation of complex text style transfer models. We also measured the accuracy of our evaluation method using traditional metrics (Sacre-BLEU), and discuss the alignment of the accuracy metrics in the experiments on both simple and complex datasets.
The contributions of this paper can be summarized as follows: (1) We introduced the concept of complex text style transfer, and constructed two benchmark datasets for evaluating complex text style transfer models; (2) We proposed an implicit style pre-training method for small-scale models, which achieved state-of-the-art performance among few-shot approaches on complex text style transfer tasks and reached performance comparable to large language models; and (3) We introduced an automatic evaluation method for complex text style transfer based on ChatGPT, which provides a more objective and efficient way than previous metrics to evaluate complex text style transfer models, as validated by human evaluation experiment results. Our work can be accessed at the following repository: code.
## 2 Methods
### Preliminaries
#### 2.1.1 Problem Definition
Text style transfer is the task of automatically transforming the style of a given text into a different target style while preserving its content and meaning. Some text styles are well-defined and characterized by specific attributes that are easily recognizable or distinguishable from other styles, which we term simple text styles. For example, the happy text style is characterized by a cheerful tone, a lively sentence structure, and the use of positive words and expressions such as "happy", "joyful", "exciting", and "amazing". On the other hand, the sad text style is characterized by the use of negative words and expressions such as "gloomy", "depressed", "heartbroken", "lonely", etc. These text styles are relatively straightforward and commonly recognized by most people. As a result, researchers can easily obtain labeled datasets for simple text style transfers by using crowdsourcing methods.
While simple text styles are common in the real world and are easily recognizable, more complex styles can pose a greater challenge and are often of greater application value. For example, in the legal domain, converting verbose legal documents into plain language versions could increase accessibility for non-experts, while in the medical field, transferring complex medical jargon into layman's terms could improve patient understanding and compliance.
In this work, we give a preliminary definition of complex text style: the style of a given text that is difficult for non-experts to discern and categorize. Such complex styles may include personality, domain-specific jargon, or other highly specialized terminology. The nature of these styles makes it difficult to rely on crowdsourcing to label the texts, as only experts in the relevant field may be able to accurately distinguish between different styles. Therefore, the challenge of complex text style transfer lies in developing effective models that can capture and transfer these complex stylistic nuances without relying on extensive labeled data.
#### 2.1.2 Dataset Descriptions
To study complex style transfer tasks, we construct two large-scale datasets based on two domains, personality and creativity, for benchmarking the complex style transfer power of language models. The description of each dataset is given below, and samples and specifications are shown in Table 1.
**Genshin** is a collection of dialogues spoken by characters in the video game Genshin Impact. The dataset includes lines spoken by over 48 characters, each with a distinct personality and speaking style. Each character has 50-80 lines of non-parallel dialogue, which means certain characters have unique lines that do not have corresponding lines from other characters.
**Rephrase** consists of a parallel corpus of 200 English sentences in seven different styles (Standard, Fluency, Formal, Simple, Creative, Expand, Shorten). To produce the dataset, we first collect 200 sentences with uncorrelated content from the Internet. Then we paraphrase the sentences with QuillBot, a powerful online paraphrasing tool. For each style paraphrasing process, QuillBot is instructed to prioritize preserving semantic content over style transformation.
To validate the effectiveness of our model and make the experiments directly comparable to previous approaches, we also consider two simple groups of styles. The first is the Amazon sentiment dataset [11], which consists of reviews on Amazon that are labeled either positive or negative. The second dataset we use is the Grammarly's Yahoo Answers Formality Corpus (GYAFC) [16] dataset, containing a total of 110K informal / formal sentence pairs.
### Latent Style Space Pre-Training
Fig. 2 describes our overall model architecture. Our work is closely related to [18], which uses a large pretrained language model to learn style representations. Our work differs in that we include a contrastive loss that measures the similarity between the input embeddings and extractor embeddings, which allows the style extractor to capture more precise text style representations [1, 22]. We describe the model architecture in detail below.
Our model is designed around two simple observations: (1) pre-trained large language models likely already contain powerful style representations; (2) style tends to remain unchanged within a given piece of corpus. Our model includes an encoder-decoder module based on the Text-to-Text Transformer (T5) [15]. During training, we use corrupted versions of inputs and instruct the model to restore the original sentences, resulting in a reconstruction task. We expect the corruption strategies to eliminate the style from the original sentences and hence force our model to discover the style representation during reconstruction. Such a reconstruction task also aligns with how the original T5 model is trained.
Inspired by [18], our model includes an additional style extractor module. During our experiments, the architecture of the extractor is the same as the one used in the encoder-decoder module (either T5-base or T5-large), and its input is an uncorrupted sentence preceding the target. The output of the extractor is added to that of the encoder, which is then fed into the decoder to produce the final output. The weights of our model are initialized with those of a pretrained model, but the weights are not fixed during training.
To improve the learning power of our model, we introduce the Barlow Twins loss [25], which measures the similarity of the learned representations of context and target sentences. The Barlow Twins loss penalizes the deviation of the empirical cross-correlation matrix \(\mathcal{C}=z_{A}^{T}z_{B}\in\mathbb{R}^{D\times D}\) of the embeddings from the identity matrix. Given two sets of inputs \(x_{A}\) and \(x_{B}\in\mathbb{R}^{N\times C\times H\times W}\) within the same data class and matrices of their corresponding embedding vectors \(z_{A}\) and \(z_{B}\in\mathbb{R}^{N\times D}\) given by the encoder, the loss is defined as:
\[\text{BT}(z_{A},z_{B})=\sum_{0\leq i<D}(1-\mathcal{C}_{ii})^{2}+\delta\sum_{0 \leq i<D}\sum_{0\leq j<D,\,j\neq i}\mathcal{C}_{ij}^{2} \tag{1}\]
where \(D\) is the dimension of the embedding space, and \(\mathcal{C}_{ij}\) stands for the elements in the empirical cross-correlation matrix between \(z_{A}\) and \(z_{B}\). \(\delta\) is a non-negative hyperparameter, whose optimal value we find to be around \(1\times 10^{-4}\).
The Barlow Twins loss evaluates the similarity between the embedding vectors and encourages different features of the embedding vectors to be less correlated. In this paper, we apply Barlow Twins to the embeddings of the extractor modules to force the extractor to learn an expressive latent space that captures the implicit attributes within complex styles. We apply Barlow Twins on two levels. At the sentence level, we use the sentence directly preceding the input sentence as the context sentence, run the extractor module on the context sentence and the input sentence, and minimize the Barlow Twins loss between the two. At the paragraph level, we use all the sentences in the corpus except for the input sentence itself as context sentences, then average the Barlow Twins loss over the input-context pairs.
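A minimal PyTorch sketch of Eq. (1); following the original Barlow Twins formulation, we assume the embeddings are standardized along the batch dimension before computing \(\mathcal{C}\) (Eq. (1) leaves this normalization implicit):

```python
import torch

def barlow_twins_loss(z_a, z_b, delta=1e-4, eps=1e-12):
    """Eq. (1) on two batches of extractor embeddings of shape (N, D)."""
    # Standardize each embedding dimension so C is an empirical cross-correlation matrix.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + eps)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + eps)
    n, d = z_a.shape
    c = (z_a.T @ z_b) / n                                  # D x D cross-correlation
    on_diag = (1.0 - torch.diagonal(c)).pow(2).sum()       # pull C_ii toward 1
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()  # push C_ij (i != j) to 0
    return on_diag + delta * off_diag
```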
#### 2.2.1 Training

Denoting the corruption function by \(f\) and the encoder, decoder, and style extractor by \(\text{enc}\), \(\text{dec}\), and \(\text{ext}\), the reconstructed sentence is given by
\[s_{\text{recon}}=\text{dec}(\text{enc}(f(s_{\text{target}}))+\text{ext}(s_{\text {context}}))\]
The training objective is given by
\[L=\text{CE}(s_{\text{recon}},s_{\text{target}})+\lambda\cdot\text{BT}(\text{ext}( s_{\text{context}}),\text{ext}(s_{\text{recon}})) \tag{2}\]
where \(\lambda\) is a hyperparameter.
#### 2.2.2 Inference
Our model applies a few-shot approach at inference time. To transfer a sentence \(i\) from source style attribute \(a_{\text{src}}\) to target style attribute \(a_{\text{target}}\), we assume that a small number of sentences are available for each style. We derive the style representation by running the extractor on each style's set of sentences and averaging the extractor outputs, giving style attributes \(a_{\text{src}}\) and \(a_{\text{target}}\). We also infer the basis style attribute vector \(a_{i}\) by running the extractor on the sentence \(i\) itself and perform a linear transformation in the style vector space, producing the style difference vector \(a_{\text{diff}}\) as
\[a_{\text{diff}}=a_{i}+\beta\cdot(a_{\text{target}}-a_{\text{src}})\]
where \(\beta\) is a hyperparameter. The style difference vector is then added back to the encoder-decoder module, resembling the architecture of the training phase and producing the final transferred sentence.
In practice, we find that too large a \(\beta\) forces the transferred sentence to lose the semantic meaning of the original sentence, while too small a \(\beta\) deactivates the style transfer. We find the optimal \(\beta\) to be around \([1,20]\), depending on the specific attributes of interest.
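A sketch of this inference procedure is given below; `model.extract`, `model.encode`, and `model.decode` are hypothetical handles to the extractor, encoder, and decoder modules described above:

```python
import torch

def transfer_style(model, sentence, src_examples, tgt_examples, beta=4.0):
    """Few-shot inference sketch: shift the extractor embedding along a style direction."""
    # Average extractor outputs over each style's exemplar set.
    a_src = torch.stack([model.extract(s) for s in src_examples]).mean(0)
    a_tgt = torch.stack([model.extract(s) for s in tgt_examples]).mean(0)
    a_i = model.extract(sentence)                 # basis style vector of the input
    a_diff = a_i + beta * (a_tgt - a_src)         # linear move in the latent style space
    # Add the shifted style vector back to the encoder output, as during training.
    return model.decode(model.encode(sentence) + a_diff)
```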
### Automatic Evaluation Procedure for Complex Style Transfer Tasks
**Limitations of Previous Approaches** Automatic evaluation of text style transfer tasks usually involves measuring the transferred style strength and semantic preservation of the outputs. The evaluation of complex text style tasks requires experts and cannot be achieved through crowd-sourcing, making automated evaluation a challenge. Previous research efforts in this area have relied on measuring the transferred style strength with a separately trained style classifier based on BERT [2] or its variants such as RoBERTa-Large [10] and DeBERTa-v3-Large [3, 4].
While this approach has shown promise in some contexts, it has been observed that complex text style transfer often involves multiple categories and low-data scenarios, resulting in a dearth of available samples. This creates difficulties for the training of BERT classifiers, which require large labeled style datasets. As a result, the accuracy of BERT-based automated evaluation methods on complex style transfer tasks is low and can be comparable to random guessing, as shown in Table 1.
**ChatGPT-based Accuracy Evaluation** Recent research on large language models (LLMs) such as ChatGPT has highlighted their potential for breakthroughs in NLP understanding and generation capabilities, even under conditions of few-shot or zero-shot scenarios. Building on these developments, we propose a set of LLM-based automatic evaluations for complex text style tasks, as illustrated in Table 1. The proposed methodology involves constructing dataset-dependent prompts to guide the evaluation process.
To test the effectiveness of this approach, we conducted experiments on both complex and simple datasets. The results, as presented in Table 1, demonstrate that the automatic evaluation capabilities of the LLM-based models are significantly stronger than those of simpler language models (SLMs). The high accuracy achieved by the LLM-based models attests to the efficacy of this approach for automated evaluation of complex text style tasks.
Given the strength of ChatGPT among LLMs, it was selected as the preferred model for evaluating accuracy in our study. We used the gpt-3.5-turbo model as the default classifier because it is one of the most powerful and affordable language models with publicly available APIs to date. The current price for gpt-3.5-turbo is \$0.002 per one thousand tokens, and the whole evaluation process costs less than ten dollars. We also compared the classification accuracy of gpt-3.5-turbo with two other OpenAI models, text-davinci-003 and code-davinci-002, and the results are reported in Table 1. The results show that all three models achieve nearly perfect accuracy on the simple datasets, while more than doubling the performance of DeBERTa-v3-large, the best-performing BERT model, on the complex datasets. Overall, the findings of this study highlight the potential of LLM-based automated evaluations for complex text style tasks, especially in contexts where sample sizes are limited. We encourage future research in this area to explore the applicability of these approaches in various NLP domains and investigate the potential of other LLMs for automated evaluation tasks.
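A minimal sketch of this evaluation loop, using the OpenAI chat-completion API available at the time of writing (the `openai` 0.27-era interface); the prompt shown is illustrative and not the exact dataset-dependent prompt used in our experiments:

```python
import openai  # pip install openai; assumes openai.api_key has been set

def classify_style(sentence, styles):
    """Ask gpt-3.5-turbo to pick the style label of a generated sentence."""
    prompt = (f"Classify the style of the following sentence as one of: "
              f"{', '.join(styles)}. Answer with the label only.\n\nSentence: {sentence}")
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic labeling
    )
    return resp["choices"][0]["message"]["content"].strip()

def style_accuracy(sentences, target_style, styles):
    """Fraction of generated sentences that the LLM labels with the target style."""
    preds = [classify_style(s, styles) for s in sentences]
    return sum(p.lower() == target_style.lower() for p in preds) / len(preds)
```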
**Other Metrics** Finally, we utilize SacreBLEU [13] to measure the content preservation between the output and the input, and also report the geometric mean ("G-score") of accuracy and SacreBLEU as an overall evaluation of the model performance following [23].
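A minimal sketch of these two metrics, assuming the `sacrebleu` package:

```python
import math
import sacrebleu  # pip install sacrebleu

def content_preservation(outputs, inputs):
    """SacreBLEU between the transferred outputs and their source sentences."""
    return sacrebleu.corpus_bleu(outputs, [inputs]).score

def g_score(accuracy, bleu):
    """Geometric mean of style accuracy and content preservation."""
    return math.sqrt(accuracy * bleu)
```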
## 3 Experiments
In this section, we examine BTTS on several datasets. We present the main quantitative results in Sec 3.1, analysis on the effects of contrastive loss in Sec 3.2, ablation studies in Sec 3.3, and qualitative results in Sec 3.4.
### Main Quantitative Results
Table 2 compares the performance of our model against other state-of-the-art models. Our BTTS achieves the best performance in both classification accuracy and content preservation metrics among few-shot models, including the CP-G and CP-B models of [24] and TextSETTR [18]. Our model also exceeds by a small margin the performance of the models that utilize labeled data, including B-GST [20], DeleteAndRetrieve [9], and CrossAligned [19].
Table 2b shows the models' performance on _Genshin_ and _Rephrase_. Both metrics dropped significantly across all tested models, potentially due to the complex nature of the hybrid text style transfer on the two datasets. Our model again achieves the best performance among all the state-of-the-art models.

\begin{table}
\begin{tabular}{c c c c} \hline \hline & **Name** & **Simple** & **Complex** \\ \hline \multirow{3}{*}{**BERT**} & BERT & 86.7 & 20.2 \\ & RoBERTa & 89.7 & 26.7 \\ & DeBERTa & 92.6 & 31.2 \\ \hline \multirow{3}{*}{**LLM**} & gpt-3.5-turbo & 99.3 & 75.6 \\ & text-davinci-003 & 99.5 & 73.3 \\ & code-davinci-002 & 94.4 & 65.4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification accuracy on simple and complex datasets by BERT-based models and OpenAI's GPT models. The simple accuracy is reported as the average of a model's classification accuracy on the formality and sentiment datasets, while the complex accuracy is the average of a model's classification accuracy on the Genshin and Rephrase datasets.
### Analysis on Contrastive Loss
We built BTTS upon the architecture of TextSETTR [18], with careful attention paid to ensure that the model architecture and parameters remained the same except for the addition of the contrastive loss module. As such, the experiment results presented in Sec 3.1 naturally suffice for a comparative study on the effectiveness of the contrastive loss module. In this section, we analyze the contrastive loss module further to provide additional insight into its workings and its potential for improving text style transfer performance.
#### 3.2.1 Hidden Embedding Visualization
In order to show that our style extractor is capable of encoding various elements of textual style, we generated style vectors for 15,000 lines of text from three different review categories sourced from the Amazon data of [11]. We selected 2,500 positive (4 or 5 star) and 2,500 negative (1 or 2 star) samples, while removing examples where our BERT classifier disagreed with the label. In addition, we also selected 2,500 formal and 2,500 informal samples from the GYAFC dataset [16] to evaluate the ability of our style extractor from another perspective. The resulting 2D UMAP dimensionality reduction plot (Figure 3, bottom) clearly displays distinctions between sentiments. For comparison, we also ran UMAP on style vectors from TextSETTR (Figure 3, top). The noticeable contrast between the two plots suggests that our training process helps to produce a representation space that distinguishes between the different attributes.
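A sketch of this visualization, assuming the `umap-learn` package:

```python
import numpy as np
import umap  # pip install umap-learn
import matplotlib.pyplot as plt

def plot_style_space(style_vectors, labels):
    """2D UMAP projection of extractor style vectors, as in Figure 3."""
    xy = umap.UMAP(n_components=2).fit_transform(np.asarray(style_vectors))
    for lab in sorted(set(labels)):
        idx = [i for i, l in enumerate(labels) if l == lab]
        plt.scatter(xy[idx, 0], xy[idx, 1], s=2, label=lab)
    plt.legend()
    plt.show()
```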
#### 3.2.2 Parameter Sensitivity Analysis
In this section, we perform a sensitivity analysis of our model's performance on the Amazon sentiment dataset with respect to two hyperparameters in the loss function, namely the \(\lambda\) and \(\delta\) terms. The heatmap, displayed in Fig. 4, shows the model's accuracy on the Amazon sentiment dataset, as measured by ChatGPT, for 25 combinations of the two hyperparameters. The \(\lambda\) term controls the relative weight between the Barlow Twins loss and the cross-entropy loss in the overall training objective in Eq. 2, while the \(\delta\) term controls the relative importance of the off-diagonal loss compared to the diagonal loss in Eq. 1. The heatmap demonstrates that the model's performance is highly sensitive to both hyperparameters, and that certain combinations of the two can lead to a significant improvement in accuracy, with the best setting at \(\lambda=1\times 10^{-2}\), \(\delta=1\times 10^{-4}\).
### Ablation Studies
**Model Size** In our experiments, we study the impact of the language model size on the performance of BTTS by training the model with both T5-base and T5-large. T5-base consists of 220 million parameters while T5-large consists of 770 million parameters, and the results are shown in Table 2. They show that using a larger pretrained language model significantly increases the performance of our model, achieving state-of-the-art performance in all the style transfer tasks conducted in the experiment.
**Exemplar Size** Our model sets up the inference procedure in a few-shot setting. In the main experiments, we use 30 sentences for each of the input and target text styles. In this section, we scrutinize the effect of the exemplar size on the performance of BTTS. We changed the size of the exemplars to 16, 8, 4, 2, 1, and 0 (representing zero-shot), and re-evaluated the model on the Sentiment and Genshin datasets; the results are reported in Table 3.
\begin{table}
\end{table}
Table 2: Automatic evaluation metrics on simple and hybrid text style transfer tasks. The reported models include our BTTS model with two different pretrained language model sizes (T5-base and T5-large), and previous work.
The results show that the accuracy metric decreases only slightly until the shot size drops below two. The experiments reveal BTTS's limitations under zero-shot or extremely few-shot settings. However, they also show that the model remains competitive with a fairly small number of shots.
### Qualitative Results
#### 3.4.1 Human Evaluation
We conduct a human evaluation as a complement to the automatic metrics. We sample 50 examples from each of the four datasets (Sentiment, Formal, Genshin, Rephrase) and ask 30 experiment participants to rank the four models on (1) style transfer strength, (2) semantic preservation, and (3) sentence fluency. The model with the best performance is given rank 1, whereas that with the worst performance is given rank 4.
Table 4 shows the average ranking of the four models based on the responses from the experiment participants. It supports our claim that BTTS has the best performance on both single and hybrid style transfer tasks.
#### 3.4.2 Results Analysis
Table 5 shows a few examples of the transferred sentences on the four datasets in both transfer directions. We present the inputs for each transfer case and the respective results of BTTS trained with T5-base and T5-large. We also include the outputs of TextSETTR for comparison. For _Genshin_, we consider the style transfer between two specific characters, Hu Tao and Noelle. Hu Tao is regarded as a character with a more active and outgoing personality, while Noelle
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Method** & **Style** & **Content** & **Fluency** \\ \hline BTTS (T5-base) & 2.3 & 2.9 & 3.2 \\ BTTS (T5-large) & 1.1 & 1.6 & 1.5 \\ \hline TextSETTR & 2.7 & 2.6 & 2.8 \\ GST & 3.9 & 3.1 & 2.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Human Evaluation Metrics
Figure 4: Heatmap of BTTS’s performance on the Amazon sentiment dataset with varying hyperparameter values in the Barlow Twins loss.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & **Shot Size** & **Acc.** & **Content** & **G** \\ \hline
**Baseline** & 30 & 70.4 & 53.5 & 61.4 \\ \hline & 16 & 69.6 & 52.7 & 60.6 \\
**Few-Shot** & 8 & 69.3 & 53.1 & 60.7 \\ & 4 & 68.0 & 49.6 & 58.1 \\ & 2 & 61.8 & 51.8 & 56.6 \\ & 1 & 56.5 & 48.3 & 52.2 \\ \hline
**Zero-shot** & 0 & 54.2 & 51.3 & 52.7 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experiments on shot size. The model is based on BTTS (T5-large) and the \(\lambda\) value is set to 4. The experiments are conducted during the inference stage, and hence no model training is involved.
Figure 3: 2D UMAP embeddings of the style vectors learned by the TextSETTR model (Top) and the BTTS model (Bottom) on two datasets: Amazon review sentiment (Left) and Grammarly’s Yahoo Answers Formality Corpus (Right). The BTTS model shows better clustering and separation of the style vectors than the TextSETTR model. However, perfect separation should not be expected as the dimensions have been compressed.
is considered calmer and more conservative. For _Rephrase_, we consider the transfer task between simplicity and creativity. The same model is used for each case without extra training.
## 4 Related Works
Most text style transfer methods follow the line of unsupervised learning, where models are trained on non-parallel text with labeled attributes. Various architectures have been utilized to learn the text style representation, including RNNs [9] and Transformers [20]. While these methods have shown great success, their applicability is limited by the style labels required for training, which are not readily available for many desired attributes in the real world.
Recently, [24] developed a few-shot approach that trains on unlabeled data and only takes a few samples during inference for style transfer. This line of work removes the need for labeled training samples, but generally does not perform well on complicated style transfer tasks involving transferring multiple attributes simultaneously. Our work is most closely related to [18], which includes an extractor module consisting of a large language model to learn style representations. Our work differs in that we add a Barlow Twins loss to improve the robustness of the model in complex style transfer tasks, and also achieves state-of-the-art performance on one-to-one attribute transfer.
Recent advances in large language models (LLMs) have shown remarkable capabilities in generating natural language texts across various domains and tasks. Some works have explored the use of LLMs for zero-shot text style transfer, where no model fine-tuning or exemplars in the target style are required. Instead, a natural language instruction is given to the LLM as a prompt, and the LLM is expected to rewrite the input sentence in the desired style. One such work is [17], who propose an augmented zero-shot learning method for arbitrary text style transfer with LLMs. They frame style transfer as a sentence rewriting task and use a natural language instruction that specifies both the input and output styles, as well as some additional information such as synonyms or antonyms. They show that their method can perform well on standard style transfer tasks such as sentiment and formality, as well as on more creative transformations such as "make this melodramatic" or "insert a metaphor". They also demonstrate that their method can handle multiple styles simultaneously and generate diverse outputs.
## 5 Conclusion
We formally defined the concept of complex text style transfer and constructed a large-scale dataset for the task. We have explored the use of small models with implicit style pre-training, which achieved state-of-the-art performance among few-shot approaches on complex text style transfer tasks. We have introduced a novel evaluation metric based on ChatGPT that has better alignment with human judgement.
\begin{table}
\begin{tabular}{l|l|l} \hline \hline & **Formal\(\implies\)Informal** & **Informal\(\implies\)Formal** \\ \hline Input & I do not understand why people like films of that sort. & I think pretty much they’re all kind of humorous. \\ \hline TextSETTR & I no understand why people like movie like that. & I’m thinking they all funny, kind of. \\ \hline BTTS & I don’t get why people like those kinds of movies. & I believe they are all quite humorous. \\ \hline \hline & **Positive\(\implies\)Negative** & **Negative\(\implies\)Positive** \\ \hline Input & I had it a long time now and I still love it. & I will never buy this product again. \\ \hline TextSETTR & I had it a short time, and I never loved it. & I not buy product again, but different product. \\ \hline BTTS & I had it for a long time now but I no longer love it. & I will buy this product again. \\ \hline \hline & **Hu Tao\(\implies\)Noelle** & **Noelle\(\implies\)Hu Tao** \\ \hline Input & Lemme show you some fire tricks. First... Fire! And then... Whoosh! Fire butterfly! Be free! & I cannot rest on my laurels. After all, I’m not even formally a knight yet. Until that day, I must work even harder. \\ \hline TextSETTR & I’m gonna show you fire tricks. Fire! Whoosh! Butterfly! Free! & Can’t relax on my laurels yet, need to work harder. After all, I am not even a knight yet. \\ \hline BTTS & Allow me to demonstrate some techniques with flames. & Oh, I simply cannot rest on my laurels! One day, I will officially become a knight. Until that day comes, I will dedicate myself to mastering my skills. I cannot wait to show the world just how magical I truly am! \\ \hline \hline & **Simple\(\implies\)Creative** & **Creative\(\implies\)Simple** \\ \hline Input & The American antelope, also called the pronghorn, is a typical grassland animal on the continent. & As a region grows in population, more infrastructure, such as roads, garbage dumps, and water treatment plants, will be required to support its residents. \\ \hline TextSETTR & The antelope from America, also called pronghorn, live on grassland like others. & When there’s more people in a region, it means it needs more infrastructure, like roads, water treatment plants, and garbage dumps to keep everyone happy. \\ \hline BTTS & With its sleek frame and nimble hooves, the American antelope - also known as the pronghorn - is a quintessential denizen of the vast grasslands that sprawl & Growing population would mean more infrastructure like roads are required in a region. \\ \hline \hline \end{tabular}
\end{table}
Table 5: Examples of transferred sentences by BTTS and TextSETTR. Attributes are colored for formality and sentiment transfers on both directions. |
2308.00185 | Sierpiński fractals and the dimension of their Laplacian spectrum | We establish rigorous estimates for the Hausdorff dimension of the spectra of
Laplacians associated to Sierpiński lattices and infinite Sierpiński
gaskets and other post-critically finite self-similar sets. | Mark Pollicott, Julia Slipantschuk | 2023-07-31T22:43:11Z | http://arxiv.org/abs/2308.00185v1 | # Sierpinski fractals and the dimension of their Laplacian spectrum
###### Abstract
We establish rigorous estimates for the Hausdorff dimension of the spectra of Laplacians associated to Sierpinski lattices and infinite Sierpinski gaskets and other post-critically finite self-similar sets.
Dedicated to Karoly Simon on the occasion of his 60+1st birthday
+
Footnote †: The authors were partly supported by ERC-Advanced Grant 833802-Resonances.
## 1 Introduction
The study of the Laplacian on manifolds has been a very successful area of mathematical analysis for over a century, combining ideas from topology, geometry, probability theory and harmonic analysis. A comparatively new development is the theory of a Laplacian for certain types of naturally occurring fractals, see [29, 26, 31, 9, 28, 12, 21], to name but a few. A particularly well-known example is the following famous set.
**Definition 1.1**.: The _Sierpinski triangle_\(\mathcal{T}\subset\mathbb{R}^{2}\) (see Figure 1(a)) is the smallest non-empty compact set1 such that \(\bigcup_{i=1}^{3}T_{i}(\mathcal{T})=\mathcal{T}\) where \(T_{1},T_{2},T_{3}\colon\mathbb{R}^{2}\to\mathbb{R}^{2}\) are the affine maps
Footnote 1: In the literature, this set is also often referred to as the _Sierpinski gasket_, and denoted \(SG_{2}\).
\[T_{1}(x,y) =\left(\frac{x}{2},\frac{y}{2}\right)\qquad T_{2}(x,y)=\left( \frac{x}{2}+\frac{1}{2},\frac{y}{2}\right)\] \[T_{3}(x,y) =\left(\frac{x}{2}+\frac{1}{4},\frac{y}{2}+\frac{\sqrt{3}}{4} \right).\]
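As an aside, the attractor \(\mathcal{T}\) can be approximated numerically by iterating these three maps; the following minimal Python sketch (ours, not from the source) uses the standard "chaos game":

```python
# Minimal sketch: approximate the attractor T by randomly iterating T_1, T_2, T_3.
import random

T = [
    lambda x, y: (x / 2, y / 2),
    lambda x, y: (x / 2 + 1 / 2, y / 2),
    lambda x, y: (x / 2 + 1 / 4, y / 2 + 3 ** 0.5 / 4),
]

def sierpinski_points(n: int = 100_000):
    """Return n points (after a short burn-in) approximating the attractor T."""
    x, y = 0.0, 0.0
    pts = []
    for i in range(n + 100):
        x, y = random.choice(T)(x, y)
        if i >= 100:  # discard burn-in iterates
            pts.append((x, y))
    return pts
```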
A second object which will play a role is the following infinite graph.
**Definition 1.2**.: Let \(V_{0}=\{(0,0),(1,0),(\frac{1}{2},\frac{\sqrt{3}}{2})\}\) be the vertices of \(\mathcal{T}\) and define \(V_{n}=\bigcup_{i=1}^{3}T_{i}(V_{n-1})\). Further, fix a sequence \(\omega=(\omega_{n})_{n\in\mathbb{N}}\subset\{1,2,3\}^{\mathbb{N}}\), and let
\[V^{\infty}=\bigcup_{n=1}^{\infty}V^{n}\text{ with }V^{n}=T_{\omega_{1}}^{-1} \circ\cdots\circ T_{\omega_{n}}^{-1}(V_{n}),\]
where we use the inverses
\[T_{1}^{-1}(x,y) =(2x,2y)\qquad T_{2}^{-1}(x,y)=(2x-1,2y)\] \[T_{3}^{-1}(x,y) =\left(2x-\frac{1}{2},2y-\frac{\sqrt{3}}{2}\right).\]
The definition of \(V^{\infty}\) depends on the choice of \(\omega\), however as will be explained below, the relevant results do not, allowing us to omit the dependence in our notation. The points in \(V^{\infty}\) correspond to the vertices of an infinite graph \(\mathcal{L}\) called a _Sierpinski lattice_ for which the edges correspond to pairs of vertices \((v,v^{\prime})\), with \(v,v^{\prime}\in V^{\infty}\) such that \(\|v-v^{\prime}\|_{2}=1\) (see Figure 1(b)). Equivalently, \(\mathcal{L}\) has an edge \((v,v^{\prime})\) if and only if
\[v,v^{\prime}\in T_{\omega_{1}}^{-1}\circ\cdots\circ T_{\omega_{n}}^{-1}\circ T _{i_{n}}\circ\cdots\circ T_{i_{1}}(V_{0})\]
for some \(i_{1},\ldots,i_{n}\in\{1,2,3\}\), \(n\geqslant 0\).
Finally, we will also be interested in infinite Sierpinski gaskets, which can be defined similarly to Sierpinski lattices as follows.
**Definition 1.3**.: For a fixed sequence \(\omega=(\omega_{n})_{n\in\mathbb{N}}\), we define an _infinite Sierpinski gasket_ to be the unbounded set \(\mathcal{T}^{\infty}\) given by
\[\mathcal{T}^{\infty}=\bigcup_{n=0}^{\infty}\mathcal{T}^{n},\text{ with }\mathcal{T}^{n}=T_{\omega_{1}}^{-1}\circ\cdots\circ T_{\omega_{n}}^{-1}( \mathcal{T}),\]
which is a countable union of copies of the standard Sierpinski triangle \(\mathcal{T}\) (see Figure 1(c)). As for Sierpinski lattices, the definition of \(\mathcal{T}^{\infty}\) depends on the choice of \(\omega\), but we omit this dependence in our notation as the cited results hold independently of it.
The maps \(T_{1}\), \(T_{2}\) and \(T_{3}\) are similarities on \(\mathbb{R}^{2}\) with respect to the Euclidean norm, and more precisely
\[\|T_{i}(x_{1},y_{1})-T_{i}(x_{2},y_{2})\|_{2}=\frac{1}{2}\|(x_{1},y_{1})-(x_{2}, y_{2})\|_{2}\]
for \((x_{1},y_{1}),(x_{2},y_{2})\in\mathbb{R}^{2}\) and \(i=1,2,3\), and thus by Moran's theorem the Hausdorff dimension of \(\mathcal{T}\) has the explicit value \(\dim_{H}(\mathcal{T})=\frac{\log 3}{\log 2}\)[8]. We can easily give the Hausdorff dimensions of the other spaces. It is clear that \(\dim_{H}(\mathcal{L})=1\) and since an infinite Sierpinski gasket \(\mathcal{T}^{\infty}\) consists of countably many copies of \(\mathcal{T}\) it follows that we also have \(\dim_{H}(\mathcal{T}^{\infty})=\frac{\log 3}{\log 2}\).
In this note we are concerned with other fractal sets closely associated to the infinite Sierpinski gasket \(\mathcal{T}^{\infty}\) and the Sierpinski lattice \(\mathcal{L}\), for which the Hausdorff dimensions are significantly more difficult to compute.
In SS2 we will describe how to associate to \(\mathcal{T}\) a Laplacian \(\Delta_{\mathcal{T}}\) which is a linear operator defined on suitable functions \(f\colon\mathcal{T}\to\mathbb{R}\). An eigenvalue \(\lambda\geqslant 0\) for \(-\Delta_{\mathcal{T}}\) on the Sierpinski triangle is then a solution to the basic identity
\[\Delta_{\mathcal{T}}f+\lambda f=0.\]
The spectrum \(\sigma(-\Delta_{\mathcal{T}})\subset\mathbb{R}^{+}\) of \(-\Delta_{\mathcal{T}}\) is a countable set of eigenvalues. In particular, its Hausdorff dimension satisfies \(\dim_{H}(\sigma(-\Delta_{\mathcal{T}}))=0\). A nice account of this theory appears in the survey note of Strichartz [29] and his book [30].
By contrast, in the case of the infinite Sierpinski gasket and the Sierpinski lattice there are associated Laplacians, denoted \(\Delta_{\mathcal{T}^{\infty}}\) and \(\Delta_{\mathcal{L}}\), respectively, with spectra \(\sigma(-\Delta_{\mathcal{T}^{\infty}})\subset\mathbb{R}^{+}\) and \(\sigma(-\Delta_{\mathcal{L}})\subset\mathbb{R}^{+}\) which are significantly more complicated. In particular, their Hausdorff dimensions are non-zero and therefore their numerical values are of potential interest. However, unlike the case of the dimensions of the original sets \(\mathcal{T}^{\infty}\) and \(\mathcal{L}\), there is no clear explicit form for this quantity. Fortunately, using thermodynamic methods we can estimate the Hausdorff dimension2 numerically to very high precision.
Footnote 2: In this case the Hausdorff dimension equals the Box counting dimension, as will become apparent in the proof.
**Theorem 1.4**.: _The Hausdorff dimension of \(\sigma(-\Delta_{\mathcal{T}^{\infty}})\) and \(\sigma(-\Delta_{\mathcal{L}})\) satisfy_
\[\dim_{H}(\sigma(-\Delta_{\mathcal{T}^{\infty}}))=\dim_{H}\left(\sigma(-\Delta _{\mathcal{L}})\right)=0.55161856837246\ldots\]
A key point in our approach is that we have rigorous bounds, and the value in the above theorem is accurate to the number of decimal places presented. We can actually estimate this Hausdorff dimension to far more decimal places. To illustrate this, in the final section we give an approximation to \(100\) decimal places.
It may not be immediately obvious what practical information the numerical value of the Hausdorff dimension gives about the sets \(\mathcal{T}^{\infty}\) and \(\mathcal{L}\) but it may have the potential to give an interesting numerical characteristic of the spectra. Beyond pure fractal geometry, the spectra of Laplacians on fractals are also of practical interest, for instance in the study of vibrations in heterogeneous and random media, or the design of so-called fractal antennas [6, 10].
We briefly summarize the contents of this note. In SS2 we describe some of the background for the Laplacian on the Sierpinski graph. In particular, in SS2.3 we recall the basic approach
of _decimation_ which allows \(\sigma(\Delta_{\mathcal{T}})\) to be expressed in terms of a polynomial \(R_{\mathcal{T}}(x)\). Although we are not directly interested in the zero-dimensional set \(\sigma(-\Delta_{\mathcal{T}})\), the spectra of \(\sigma(-\Delta_{\mathcal{T}^{\infty}})\) and \(\sigma(-\Delta_{\mathcal{L}})\) actually contain a Cantor set \(\mathcal{J}_{\mathcal{T}}\subset[0,5]\), the so-called Julia set associated to the polynomial \(R_{\mathcal{T}}(x)\).
As one would expect, other related constructions of fractal sets have similar spectral properties and their dimension can be similarly studied. In SS3 we consider higher-dimensional Sierpinski simplices, post-critically finite fractals, and an analogous problem where we consider the spectrum of the Laplacian on infinite graphs (e.g., the Sierpinski graph and the Pascal graph). In SS4 we recall the algorithm we used to estimate the dimension and describe its application. This serves to both justify our estimates and also to use them as a way to illustrate a method with wider applications.
## 2 Spectra of the Laplacians
### Energy forms
There are various approaches to defining the Laplacian \(\Delta_{\mathcal{T}}\) on \(\mathcal{T}\). We will use one of the simplest ones, using energy forms.
Following Kigami [12] the definition of the spectrum of the Laplacian for the Sierpinski gasket \(\mathcal{T}\) involves a natural sequence of finite graphs \(X_{n}\) with
\[X_{0}\subset X_{1}\subset X_{2}\subset\cdots\subset\bigcup_{n}X_{n}\subset \overline{\bigcup_{n}X_{n}}=:\mathcal{T},\]
the first three of which are illustrated in Figure 2. To this end, let
\[V_{0}=\left\{(0,0),(1,0),\left(\frac{1}{2},\frac{\sqrt{3}}{2}\right)\right\}\]
be the three vertices of \(X_{0}\). The vertices of \(X_{n}\) can be defined iteratively to be the set of points satisfying
\[V_{n}=T_{1}(V_{n-1})\cup T_{2}(V_{n-1})\cup T_{3}(V_{n-1})\quad\text{ for }n\geqslant 1.\]
We denote by \(\ell^{2}(V_{n})\) (for \(n\geqslant 0\)) the real valued functions \(f\colon V_{n}\to\mathbb{R}\) (where the \(\ell^{2}\) notation is used for consistency with the infinite-dimensional case despite having no special significance for finite sets).
Figure 2: The first three graphs for the Sierpinski triangle.
**Definition 2.1**.: To each of the finite graphs \(X_{n}\) (\(n\geqslant 0\)) we can associate bilinear forms \(\mathcal{E}_{n}\colon\ell^{2}(V_{n})\times\ell^{2}(V_{n})\to\mathbb{R}\) called _self-similar energy forms_ given by
\[\mathcal{E}_{n}(f,g)=c_{n}\sum_{x\sim_{n}y}(f(x)-f(y))(g(x)-g(y)), \tag{2.1}\]
where \(x,y\in V_{n}\) are vertices of \(X_{n}\), and \(x\sim_{n}y\) denotes neighbouring edges in \(X_{n}\). In particular, \(x\sim_{n}y\) precisely when there exists \(x^{\prime},y^{\prime}\in V_{n-1}\) and \(i\in\{1,2,3\}\) such that \(x=T_{i}(x^{\prime})\) and \(y=T_{i}(y^{\prime})\). The value \(c_{n}>0\) denotes a suitable scaling constant. With a slight abuse of notation, we also write \(\mathcal{E}_{n}(f):=\mathcal{E}_{n}(f,f)\) for the corresponding quadratic form \(\ell^{2}(V_{n})\to\mathbb{R}\).
To choose the values \(c_{n}>0\) (for \(n\geqslant 0\)) we want the sequence of bilinear forms \((\mathcal{E}_{n})_{n=0}^{\infty}\) to be consistent by asking that for any \(f_{n-1}\colon V_{n-1}\to\mathbb{R}\) (for \(n\geqslant 1\)) we have
\[\mathcal{E}_{n-1}(f_{n-1})=\mathcal{E}_{n}(f_{n}),\]
where \(f_{n}\colon V_{n}\to\mathbb{R}\) denotes an extension which satisfies:
1. \(f_{n}(x)=f_{n-1}(x)\) for \(x\in V_{n-1}\); and
2. \(f_{n}\) satisfying (a) minimizes \(\mathcal{E}_{n}(f_{n})\) (i.e., \(\mathcal{E}_{n}(f_{n})=\min_{f\in\ell^{2}(V_{n})}\mathcal{E}_{n}(f)\)).
The following is shown in [30], for example.
**Lemma 2.2**.: _The family \((\mathcal{E}_{n})_{n=0}^{\infty}\) is consistent if we choose \(c_{n}=\left(\frac{5}{3}\right)^{n}\) in (2.1)._
The proof of this lemma is based on solving families of simultaneous equations arising from (a) and (b). We can now define a bilinear form for functions on \(\mathcal{T}\) using the consistent family of bilinear forms \((\mathcal{E}_{n})_{n=0}^{\infty}\).
**Definition 2.3**.: For any continuous function \(f\colon\mathcal{T}\to\mathbb{R}\) we can associate the limit
\[\mathcal{E}(f):=\lim_{n\to+\infty}\mathcal{E}_{n}(f)\in[0,+\infty]\]
and let \(\text{dom}(\mathcal{E})=\{f\in C(\mathcal{T}):\mathcal{E}(f)<+\infty\}\).
**Remark 2.4**.: We can consider eigenfunctions \(f\in\text{dom}(\mathcal{E})\) which satisfy Dirichlet boundary conditions (i.e., \(f|V_{0}=0\)).
### Laplacian for \(\mathcal{T}\)
To define the Laplacian \(\Delta_{\mathcal{T}}\) the last ingredient is to consider an inner product defined using the natural measure \(\mu\) on the Sierpinski triangle \(\mathcal{T}\).
**Definition 2.5**.: Let \(\mu\) be the natural measure on \(\mathcal{T}\) such that
\[\mu\left(T_{i_{1}}\circ\cdots\circ T_{i_{n}}\left(\text{co}(V_{0})\right)\right)=\frac{1}{3^{n}}\quad\text{ for }i_{1},\ldots,i_{n}\in\{1,2,3\},\]
where \(\text{co}(V_{0})\) is the convex hull of \(V_{0}\), i.e., the filled-in triangle.
In particular, \(\mu\) is the Hausdorff measure for \(\mathcal{T}\), and the unique measure on \(\mathcal{T}\) for which
\[T_{i}^{*}\mu=\frac{1}{3}\mu\quad\text{ for }i=1,2,3.\]
The subspace \(\text{dom}(\mathcal{E})\subset L^{2}(\mathcal{T},\mu)\) is a Hilbert space. Using the measure \(\mu\) and the bilinear form \(\mathcal{E}\) we recall the definition of the Laplacian \(\Delta_{\mathcal{T}}\).
**Definition 2.6**.: For \(u\in\text{dom}(\mathcal{E})\) which vanishes on \(V_{0}\) we can define the Laplacian to be a continuous function \(\Delta_{\mathcal{T}}u\colon\mathcal{T}\to\mathbb{R}\) such that
\[\mathcal{E}(u,v)=-\int(\Delta_{\mathcal{T}}u)vd\mu\]
for any \(v\in\text{dom}(\mathcal{E})\).
**Remark 2.7**.: For each finite graph \(X_{n}\), the spectrum \(\sigma(-\Delta_{X_{n}})\) for the graph Laplacian \(\Delta_{X_{n}}\) will consist of a finite number of solutions of the eigenvalue equation
\[\Delta_{X_{n}}f+\lambda f=0. \tag{2.2}\]
This is easy to see because \(V_{n}\) is finite and thus the space \(\ell^{2}(V_{n})\) is finite-dimensional and so the graph Laplacian can be represented as a matrix. There is then an alternative pointwise formulation of the Laplacian of the form
\[\Delta_{\mathcal{T}}u(x)=\frac{3}{2}\lim_{n\to+\infty}5^{n}\Delta_{X_{n}}u(x) \tag{2.3}\]
where \(x\in\bigcup_{n=1}^{\infty}V_{n}\setminus V_{0}\). The eigenvalue equation \(\Delta_{\mathcal{T}}u+\lambda u=0\) then has admissible solutions provided \(u,\Delta_{\mathcal{T}}u\in C(\mathcal{T})\). A result of Kigami is that \(u\in\text{dom}(\mathcal{E})\) if and only if the convergence in (2.3) is uniform [13].
### Spectral decimation for \(\sigma(-\Delta_{\mathcal{T}})\)
We begin by briefly recalling the fundamental notion of spectral decimation introduced by [21, 22, 2], which describes the spectrum \(\sigma(-\Delta_{\mathcal{T}})\).
**Definition 2.8**.: Given the polynomial \(R_{\mathcal{T}}\colon[0,5]\to\mathbb{R}\) defined by
\[R_{\mathcal{T}}(x)=x(5-x),\]
we can associate local inverses (see Figure 3) \(S_{-1,\mathcal{T}},S_{+1,\mathcal{T}}\colon[0,5]\to[0,5]\) of the form
\[S_{\epsilon,\mathcal{T}}(x)=\frac{5}{2}+\frac{\epsilon}{2}\sqrt{25-4x}\quad \text{ for }\epsilon=\pm 1. \tag{2.4}\]
The process of spectral decimation (see [30, SS3.2] or [9]) describes the eigenvalues of \(-\Delta_{\mathcal{T}}\) as renormalized limits of (certain) eigenvalue sequences of \(-\Delta_{X_{n}},n\in\mathbb{N}\). These eigenvalues, essentially, follow the recursive equality \(\lambda_{n+1}=S_{\pm 1,\mathcal{T}}(\lambda_{n})\), while the corresponding eigenfunctions of \(-\Delta_{X_{n+1}}\) are such that their restrictions to \(V_{n}\) are eigenfunctions for \(-\Delta_{X_{n}}\). Thus, the eigenvalue problem can be solved inductively, constructing solutions \(f\) to the eigenvalue equation (2.2) at level \(n+1\) from solutions at level \(n\in\mathbb{N}\). The values of \(f\) at vertices in \(V_{n+1}\setminus V_{n}\) are obtained from solving the additional linear equations that arise from the eigenvalue equation \(\Delta_{X_{n+1}}f+\lambda f=0\), which allows for exactly two solutions. The exact limiting process giving rise to eigenvalues of \(-\Delta_{\mathcal{T}}\) is described by the following result.
**Proposition 2.9** ([9, 21, 4]).: _Every solution \(\lambda\in\mathbb{R}\) to the eigenvalue equation_
\[\Delta_{\mathcal{T}}u+\lambda u=0 \tag{2.5}\]
_can be written as_
\[\lambda=\frac{3}{2}\lim_{m\to+\infty}5^{m+c}\lambda_{m}, \tag{2.6}\]
_for a sequence \((\lambda_{m})_{m\geqslant m_{0}}\) and a positive integer \(c\in\mathbb{N}_{0}\) satisfying_
1. \(\lambda_{m_{0}}=2\) _and_ \(c=0\)_, or_ \(\lambda_{m_{0}}=5\) _and_ \(c\geqslant 1\)_, or_ \(\lambda_{m_{0}}=3\) _and_ \(c\geqslant 2\)_;_
2. \(\lambda_{m}=\lambda_{m+1}(5-\lambda_{m+1})=R_{\mathcal{T}}(\lambda_{m+1})\) _for all_ \(m\geqslant m_{0}\)_; and_
3. _the limit (_2.6_) is finite._
_Conversely, the limit of every such sequence gives rise to a solution of (2.5)._
We remark that equivalently, the sequence \((\lambda_{m})_{m\geqslant m_{0}}\) could be described recursively as \(\lambda_{m+1}=S_{\epsilon_{m},\mathcal{T}}(\lambda_{m})\) where \(\epsilon_{m}\in\{\pm 1\}\) for \(m\geqslant m_{0}\). The finiteness of the limit (2.6) is equivalent to there being an \(m^{\prime}\geqslant m_{0}\) such that \(\epsilon_{m}=-1\) for all \(m\geqslant m^{\prime}\).
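Numerically, the limit (2.6) can be approximated directly. The following sketch (our illustration, not from the source) starts from \(\lambda_{m_{0}}=2\), \(c=0\), with \(\epsilon_{m}=-1\) throughout, rewriting \(S_{-1,\mathcal{T}}\) algebraically to avoid catastrophic cancellation for small arguments:

```python
# Illustration of Proposition 2.9: lambda_{m_0} = 2, c = 0 and epsilon_m = -1
# for all m, giving one admissible eigenvalue of -Delta_T.
def S_minus(x: float) -> float:
    # Equal to (5 - sqrt(25 - 4x))/2, rewritten to avoid cancellation for small x.
    return 2.0 * x / (5.0 + (25.0 - 4.0 * x) ** 0.5)

lam, scale = 2.0, 1.0
for _ in range(60):
    lam = S_minus(lam)
    scale *= 5.0          # running factor 5^m from (2.6)
print(1.5 * scale * lam)  # the renormalized limit (3/2) * 5^m * lambda_m
```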
### Spectrum of the Laplacian for Sierpinski lattices
For a Sierpinski lattice, we define the Laplacian \(\Delta_{\mathcal{L}}\) by
\[(\Delta_{\mathcal{L}}f)(x)=s_{x}\sum_{y\sim x}(f(y)-f(x))\]
with
\[s_{x}=\begin{cases}2&\text{ if }x\text{ is a boundary point},\\ 1&\text{ if }x\text{ is not a boundary point},\end{cases}\]
which is a well-defined and bounded operator from \(\ell^{2}(V^{\infty})\) to itself (this follows from the fact that each vertex of \(\mathcal{L}\) has at most \(4\) neighbours).
**Remark 2.10**.: We note that our definition of \(V^{\infty}\) and \(\mathcal{L}\) depended on the choice of a sequence \((\omega_{n})_{n\in\mathbb{N}}\), and graphs resulting from different sequences are typically not isometric [31, Lemma 2.3(ii)]. On the other hand, the spectrum \(\sigma(-\Delta_{\mathcal{L}})\) turns out to be independent of this choice (see [31, Remark 4.2] or [26, Proposition 1]).
The operator \(-\Delta_{\mathcal{L}}\colon\ell^{2}(V^{\infty})\to\ell^{2}(V^{\infty})\) has a more complicated spectrum which depends on the following definition.
**Definition 2.11** (cf. [8]).: We define the Julia set associated to \(R_{\mathcal{T}}\) to be the smallest non-empty closed set \(\mathcal{J}_{\mathcal{T}}\subset[0,5]\) such that
\[\mathcal{J}_{\mathcal{T}}=S_{-1,\mathcal{T}}(\mathcal{J}_{\mathcal{T}})\cup S _{+1,\mathcal{T}}(\mathcal{J}_{\mathcal{T}}).\]
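Since \(\mathcal{J}_{\mathcal{T}}\) is the attractor of the two contractions \(S_{\pm 1,\mathcal{T}}\), it can be approximated by backward iteration. A minimal sketch (ours) follows; note that \(0=S_{-1,\mathcal{T}}(0)\) is a fixed point, so all iterates below lie in \(\mathcal{J}_{\mathcal{T}}\) itself:

```python
def S(eps: int, x: float) -> float:
    # The local inverses S_{+-1,T} of (2.4).
    return 2.5 + eps * 0.5 * (25.0 - 4.0 * x) ** 0.5

def julia_cover(depth: int = 12) -> list[float]:
    """Images of 0 under all length-`depth` compositions of S_{-1}, S_{+1};
    these 2^depth points all lie in (and accumulate densely on) J_T."""
    pts = [0.0]
    for _ in range(depth):
        pts = [S(eps, x) for x in pts for eps in (-1, 1)]
    return sorted(pts)
```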
This leads to the following description of the spectrum \(\sigma(-\Delta_{\mathcal{L}})\).
**Proposition 2.12** ([31, Theorem 2]).: _The operator \(-\Delta_{\mathcal{L}}\) on \(\ell^{2}(V^{\infty})\) is bounded, non-negative and self-adjoint and has spectrum_
\[\sigma(-\Delta_{\mathcal{L}})=\mathcal{J}_{\mathcal{T}}\cup\left(\{6\}\cup \bigcup_{n=0}^{\infty}R^{-n}(\{3\})\right).\]
This immediately leads to the following.
**Corollary 2.13**.: _We have that \(\dim_{H}(\sigma(-\Delta_{\mathcal{L}}))=\dim_{H}(\mathcal{J}_{\mathcal{T}})\)._
Thus estimating the Hausdorff dimension of the spectrum \(\sigma(-\Delta_{\mathcal{L}})\) is equivalent to estimating that of the Julia set \(\mathcal{J}_{\mathcal{T}}\). The following provides a related application.
**Example 2.14** (Pascal graph).: Consider the Pascal graph \(\mathcal{P}\)[18], which is an infinite \(3\)-regular graph, see Figure 4. Its edge graph is the Sierpinski lattice \(\mathcal{L}\), and as was shown by Quint [18], the spectrum \(\sigma(-\Delta_{\mathcal{P}})\) of its Laplacian \(-\Delta_{\mathcal{P}}\) is the union of a countable set and the Julia set of a certain polynomial (affinely) conjugated to \(R_{\mathcal{T}}\). From this we deduce that
\[\dim_{H}(\sigma(-\Delta_{\mathcal{P}}))=\dim_{H}(\mathcal{J}_{\mathcal{P}})=\dim_{H}(\mathcal{J}_{\mathcal{T}})=\dim_{H}(\sigma(-\Delta_{\mathcal{L}})),\]
which we estimate in Theorem 1.4.
Figure 4: The Pascal graph.
### Spectrum of the Laplacian for infinite Sierpinski gaskets
We finally turn to the case of an infinite Sierpinski gasket \(\mathcal{T}^{\infty}\). The Laplacian \(\Delta_{\mathcal{T}^{\infty}}\) is an operator with a domain in \(L^{2}(\mathcal{T}^{\infty},\mu^{\infty})\). Here \(\mu^{\infty}\) is the natural measure on \(\mathcal{T}^{\infty}\), whose restriction to \(\mathcal{T}\) equals \(\mu\), and such that any two isometric sets are of equal measure (see [31]).
Remark 2.10 applies almost identically also to the Sierpinski gasket case: \(\mathcal{T}^{\infty}\) depends non-trivially on the choice of a sequence \(\omega\) in its definition, and different sequences typically give rise to non-isometric gaskets, with the boundary of \(\mathcal{T}^{\infty}\) empty if and only if \(\omega\) is eventually constant [31, Lemma 5.1]. The spectrum \(\sigma(-\Delta_{\mathcal{T}^{\infty}})\), however, is independent of \(\omega\) (even if the spectral decomposition is not, see [31, Remarks 5.4] or [26, Proposition 1]). Using the notation
\[\mathcal{R}(z)=\lim_{n\to\infty}5^{n}(S_{-1,\mathcal{T}})^{n}(z),\]
we have the following result on the spectrum \(\sigma(-\Delta_{\mathcal{T}^{\infty}})\).
**Proposition 2.15** ([31, Theorem 4]).: _The operator \(-\Delta_{\mathcal{T}^{\infty}}\) is an unbounded self-adjoint operator from a dense domain in \(L^{2}(\mathcal{T}^{\infty},\mu^{\infty})\) to \(L^{2}(\mathcal{T}^{\infty},\mu^{\infty})\). Its spectrum is \(\sigma(-\Delta_{\mathcal{T}^{\infty}})=\mathcal{J}^{\infty}\cup\Sigma_{3}^{\infty}\) with_
\[\mathcal{J}^{\infty}=\bigcup_{n=-\infty}^{\infty}5^{n}\mathcal{R}(\mathcal{J} )\quad\text{ and }\quad\Sigma_{3}^{\infty}=\bigcup_{n=-\infty}^{\infty}5^{n} \mathcal{R}(\Sigma_{3}),\]
_where \(\Sigma_{3}=\bigcup_{n=0}^{\infty}R^{-n}(\{3\})\)._
A number of generalizations of this result for other unbounded nested fractals have been proved, see e.g. [25, 27]. The proposition immediately yields the following corollary.
**Corollary 2.16**.: _We have that \(\dim_{H}(\sigma(-\Delta_{\mathcal{T}^{\infty}}))=\dim_{H}(\mathcal{J}_{ \mathcal{T}})\)._
Thus estimating the Hausdorff dimension of the spectrum \(\sigma(-\Delta_{\mathcal{T}^{\infty}})\) is again equivalent to estimating the Hausdorff dimension of the Julia set \(\mathcal{J}_{\mathcal{T}}\).
## 3 Related results for other gaskets and lattices
In this section we describe other examples of fractal sets to which the same approach can be applied. In practice the computations may be more complicated, but the same basic method still applies.
### Higher-dimensional infinite Sierpinski gaskets
Let \(d\geqslant 2\) and \(T_{i}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) be the \(d+1\) contractions defined by
\[T_{i}(x_{1},\ldots,x_{d})=\Big{(}\frac{x_{1}}{2},\ldots,\frac{x_{d}}{2}\Big{)}+\frac{1}{2}e_{i},\quad\text{ for }i=0,1,\ldots,d,\]
where \(e_{i}\) is the \(i\)th coordinate vector for \(1\leqslant i\leqslant d\) and \(e_{0}:=0\). The \(d\)-dimensional Sierpinski gasket \(\mathcal{T}^{d}\subset\mathbb{R}^{d}\) is the smallest non-empty closed set such that
\[\mathcal{T}^{d}=\bigcup_{i=0}^{d}T_{i}(\mathcal{T}^{d}).\]
In [21], the analogous results are presented for the spectrum of the Laplacian \(\Delta_{\mathcal{T}^{d}}\) associated to the corresponding Sierpinski gasket \(\mathcal{T}^{d}\subset\mathbb{R}^{d}\) in \(d\) dimensions \((d\geqslant 3)\).
**Definition 3.1**.: For a sequence \((\omega_{n})_{n\in\mathbb{N}}\subset\{0,\ldots,d\}^{\mathbb{N}}\) we can define an _infinite Sierpinski gasket in \(d\) dimensions_ as
\[\mathcal{T}^{d,\infty}=\bigcup_{n=1}^{\infty}T_{\omega_{1}}^{-1}\circ\cdots \circ T_{\omega_{n}}^{-1}(\mathcal{T}^{d}).\]
As before, we can associate a Julia set \(\mathcal{J}_{\mathcal{T}^{d}}\) and consider its Hausdorff dimension \(\dim_{H}(\mathcal{J}_{\mathcal{T}^{d}})\). More precisely, in each case, we can consider the decimation polynomial \(R_{\mathcal{T}^{d}}\colon[0,3+d]\to\mathbb{R}\) defined by
\[R_{\mathcal{T}^{d}}(x)=x((3+d)-x),\]
with two local inverses \(S_{\pm 1,\mathcal{T}^{d}}\colon[0,3+d]\to[0,3+d]\) given by
\[S_{\epsilon,\mathcal{T}^{d}}(x)=\frac{1}{2}\left(3+d+\epsilon\sqrt{9+6d+d^{2} -4x}\right)\quad\text{ with }\epsilon=\pm 1.\]
Let \(\mathcal{J}_{\mathcal{T}^{d}}\subset[0,3+d]\) be the limit set of these two contractions, i.e., the smallest non-empty closed set such that
\[\mathcal{J}_{\mathcal{T}^{d}}=S_{-1,\mathcal{T}^{d}}(\mathcal{J}_{\mathcal{T }^{d}})\cup S_{+1,\mathcal{T}^{d}}(\mathcal{J}_{\mathcal{T}^{d}}).\]
**Theorem 3.2**.: _The Hausdorff dimension \(\dim_{H}(\mathcal{J}_{\mathcal{T}^{d}})\) of the Julia set \(\mathcal{J}_{\mathcal{T}^{d}}\) for \(d\in\{2,\ldots,10\}\) associated to the Sierpinski gasket in \(d\) dimensions is given by the values in Table 1, accurate to the number of decimals stated._
The proof uses the same algorithmic method as that of Theorem 1.4, see Section 4.
**Remark 3.3**.: By arguments developed in [9] and [26], one can deduce that similarly to Proposition 2.15 and Corollary 2.16, the Hausdorff dimensions of the spectrum of the appropriately defined Laplacian on \(\mathcal{T}^{d,\infty}\) and the Julia set \(\dim_{H}(\mathcal{J}_{\mathcal{T}^{d}})\) coincide.
We can observe empirically from the table that the dimension decreases as \(d\to+\infty\). The following simple lemma confirms that \(\lim_{d\to+\infty}\dim_{H}(\mathcal{J}_{\mathcal{T}^{d}})=0\) with explicit bounds.
**Lemma 3.4**.: _As \(d\to+\infty\) we can bound_
\[\frac{\log 2}{\log(d+3)}\leqslant\dim_{H}(\mathcal{J}_{\mathcal{T}^{d}})\leqslant\frac{2\log 2}{\log(d+3)+\log(d-1)}.\]
Proof.: We can write
\[I_{1}:=R_{\mathcal{T}^{d}}^{-1}([0,3+d])\cap\left[0,\frac{3+d}{2}\right]=\left[0,\frac{3+d}{2}\left(1-\sqrt{1-\frac{4}{3+d}}\right)\right].\]
Thus for \(x\in I_{1}\) we have the bounds
\[\sqrt{(3+d)(d-1)}\leqslant|R_{\mathcal{T}^{d}}^{\prime}(x)|\leqslant 3+d.\]
Similarly, we can define \(I_{2}:=R_{\mathcal{T}^{d}}^{-1}([0,3+d])\cap\left[\frac{3+d}{2},3+d\right]\) and obtain the same bounds on \(|R_{\mathcal{T}^{d}}^{\prime}(x)|\) for \(x\in I_{2}\). In particular, we can then bound
\[\frac{\log 2}{\log(3+d)}\leqslant\dim_{H}(\mathcal{J}_{\mathcal{T}^{d}})\leqslant\frac{2\log 2}{\log(3+d)+\log(d-1)}.\qed\]
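These bounds are easy to tabulate; a quick numerical check (our sketch) for \(d=2,\ldots,10\):

```python
from math import log

# Bounds of Lemma 3.4; for d = 2 they give [0.4307..., 0.8614...], which
# indeed brackets the value 0.5516... of Theorem 1.4.
for d in range(2, 11):
    lower = log(2) / log(d + 3)
    upper = 2 * log(2) / (log(d + 3) + log(d - 1))
    print(d, f"{lower:.4f}", f"{upper:.4f}")
```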
### Post-critically finite self-similar sets
The method of spectral decimation used for the Sierpinski gasket by Fukushima and Shima [9], was extended by Shima [28] to post-critically finite self-similar sets and thus provided a method for analyzing the spectra of their Laplacians.
**Definition 3.5**.: Let \(\Sigma=\{1,\ldots,k\}^{\mathbb{Z}_{+}}\) be the space of (one-sided) infinite sequences with the Tychonoff product topology, and \(\sigma\) the usual left-shift map on \(\Sigma\).
Let \(T_{1},\ldots,T_{k}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) be contracting similarities and let \(\mathcal{X}\) be the limit set, i.e., the smallest closed subset with \(\mathcal{X}=\bigcup_{i=1}^{k}T_{i}(\mathcal{X})\). Let \(\pi\colon\Sigma\to\mathcal{X}\) be the natural continuous map defined by
\[\pi\left((w_{n})_{n=0}^{\infty}\right)=\lim_{n\to+\infty}T_{w_{0}}T_{w_{1}} \cdots T_{w_{n}}(0).\]
We say that \(\mathcal{X}\) is post-critically finite if
\[\#\left(\bigcup_{n=0}^{\infty}\sigma^{n}\left\{w\in\Sigma:\pi(w)\in K\right\}\right)<+\infty\]
where \(K=\bigcup_{i\neq j}T_{i}\mathcal{X}\cap T_{j}\mathcal{X}\).
The original Sierpinski triangle \(\mathcal{T}\) is an example of a limit set which is post-critically finite. So is the following variant on the Sierpinski triangle.
**Example 3.6** (\(SG_{3}\) gasket).: We can consider the Sierpinski gasket \(SG_{3}\) (see Figure 5) which is the smallest non-empty closed set \(\mathcal{X}_{SG_{3}}\) such that \(\mathcal{X}_{SG_{3}}=\bigcup_{i=1}^{6}T_{i}\mathcal{X}_{SG_{3}}\) where
\[T_{j}(x,y)=p_{j}+\left(\frac{x}{3},\frac{y}{3}\right)\quad\text{ for }j=1,\ldots,6,\]
with
\[p_{1}=(0,0),p_{2}=\left(\frac{1}{3},0\right),p_{3}=\left(\frac{2}{3},0\right),p_{4 }=\left(\frac{1}{6},\frac{1}{2\sqrt{3}}\right),p_{5}=\left(\frac{1}{2},\frac{1} {2\sqrt{3}}\right),p_{6}=\left(\frac{1}{3},\frac{1}{\sqrt{3}}\right).\]
In this case we can associate the decimation rational function \(R_{SG_{3}}\colon[0,6]\to[0,6]\) given by
\[R_{SG_{3}}(x)=\frac{3x(5-x)(4-x)(3-x)}{14-2x},\]
for which there are four local inverses \(S_{j,SG_{3}}\) (for \(j=1,2,3,4\)) [7], see Figure 6. The associated Julia set \(\mathcal{J}_{SG_{3}}\), which is the smallest non-empty closed set such that \(\mathcal{J}_{SG_{3}}=\bigcup_{j=1}^{4}S_{j,SG_{3}}(\mathcal{J}_{SG_{3}})\), has Hausdorff dimension \(\dim_{H}(\mathcal{J}_{SG_{3}})\).
Using Mathematica with a sufficiently high precision setting (see Example 4.4 for more details) we can numerically compute the Hausdorff dimension of the Julia set \(\mathcal{J}_{SG_{3}}\) associated to the Sierpinski gasket \(SG_{3}\) to be
\[\dim_{H}(\mathcal{J}_{SG_{3}})=0.617506301862352229042494874316407096341976\ldots\]
**Example 3.7** (Vicsek graph).: The Vicsek set \(X_{\mathcal{V}}\) is the smallest non-empty closed set such that \(X_{\mathcal{V}}=\bigcup_{j=1}^{5}T_{j}(X_{\mathcal{V}})\) where
\[T_{j}(x,y)=p_{j}+\left(\frac{x}{3},\frac{y}{3}\right)\quad\text{ for }j=1,\ldots,5,\]
with
\[p_{1}=(0,0),\:p_{2}=\left(\frac{2}{3},0\right),\:p_{3}=\left(\frac{2}{3}, \frac{2}{3}\right),\:p_{4}=\left(0,\frac{2}{3}\right),\:p_{5}=\left(\frac{1}{ 3},\frac{1}{3}\right).\]
Figure 5: The first two graphs for \(SG_{3}\) (left, centre) and the \(SG_{3}\) gasket (right).
Figure 6: The function \(R_{SG_{3}}(x)\) and the four contracting inverse branches for the \(SG_{3}\) gasket.
In this case, studied in [20, Example 6.3], one has that \(R_{\mathcal{V}}\colon[-1,0]\to\mathbb{R}\) is given by
\[R_{\mathcal{V}}(z)=z(6z+3)(6z+5),\]
with three inverse branches \(S_{1},S_{2},S_{3}\colon[-1,0]\to[-1,0]\) given by
\[S_{1}(x) =\frac{1}{36}\left(i(\sqrt{3}+i)t(x)-\frac{19(1+i\sqrt{3})}{t(x)} -16\right),\] \[S_{2}(x) =\frac{1}{36}\left(-i(\sqrt{3}-i)t(x)-\frac{19(1-i\sqrt{3})}{t(x) }-16\right),\] \[S_{3}(x) =\frac{1}{18}\left(t(x)+\frac{19}{t(x)}-8\right),\]
where \(t(x)=\left(9\cdot(81x^{2}+56x-75)^{1/2}+81x+28\right)^{1/3}\). The associated Julia set \(\mathcal{J}_{\mathcal{V}}\) is the smallest non-empty closed set such that \(\mathcal{J}_{\mathcal{V}}=\bigcup_{j=1}^{3}S_{j,\mathcal{V}}(\mathcal{J}_{ \mathcal{V}})\). The following theorem is proved similarly to Theorem 1.4, as described in Section 4.
**Theorem 3.8**.: _The Hausdorff dimension of the Julia set \(\mathcal{J}_{\mathcal{V}}\) is_
\[\dim_{H}(\mathcal{J}_{\mathcal{V}})=0.49195457005266\ldots,\]
_accurate to the number of decimals stated._
**Remark 3.9**.: Analogously to the case of the Sierpinski lattice \(\mathcal{L}\), we can define lattices \(\mathcal{L}_{SG_{3}}\) and \(\mathcal{L}_{\mathcal{V}}\) for the \(SG_{3}\) and Vicsek sets from the previous two examples, as well as corresponding graph Laplacians \(\Delta_{\mathcal{L}_{SG_{3}}}\) and \(\Delta_{\mathcal{L}_{\mathcal{V}}}\). The Hausdorff dimensions of their spectra can again be directly related to those of the respective Julia sets \(\mathcal{J}_{SG_{3}}\) and \(\mathcal{J}_{\mathcal{V}}\). By [20, Theorem 5.8], one has that \(\mathcal{J}_{SG_{3}}\subseteq\sigma(-\Delta_{\mathcal{L}_{SG_{3}}})\subseteq \mathcal{J}_{SG_{3}}\cup\mathcal{D}_{SG_{3}}\) and \(\mathcal{J}_{\mathcal{V}}\subseteq\sigma(-\Delta_{\mathcal{L}_{\mathcal{V}}}) \subseteq\mathcal{J}_{\mathcal{V}}\cup\mathcal{D}_{\mathcal{V}}\), where \(\mathcal{D}_{SG_{3}}\) and \(\mathcal{D}_{\mathcal{V}}\) are countable sets. It follows, analogously to Corollary 2.13, that \(\dim_{H}(\sigma(-\Delta_{\mathcal{L}_{SG_{3}}}))=\dim_{H}(\mathcal{J}_{SG_{3}})\) and \(\dim_{H}(\sigma(-\Delta_{\mathcal{L}_{\mathcal{V}}}))=\dim_{H}(\mathcal{J}_{ \mathcal{V}})\).
**Remark 3.10**.: Other examples to which the same method could be applied include the modified Koch curve (see [19, 15]) for which the associated rational function is \(R(x)=9x(x-1)(x-\frac{4}{3})(x-\frac{5}{3})/(x-\frac{3}{2}).\) More families of such examples can be found in [32].
**Remark 3.11**.: The spectral decimation method can also apply to some non-post-critically finite examples, such as the diamond fractal [14], for which the associated polynomial is \(R(x)=2x(2+x)\). On the other hand, there are symmetric fractal sets which do not admit spectral decimation, such as the pentagasket, as studied in [1].
## 4 Dimension estimate algorithm for Theorem 1.4
This section is dedicated to finishing the proof of Theorem 1.4, by describing an algorithm yielding estimates (with rigorous error bounds) for the values of the Hausdorff dimension.
By the above discussion we have reduced the estimation of the Hausdorff dimensions of \(\sigma(-\Delta_{\mathcal{L}})\) and \(\sigma(-\Delta_{\mathcal{T}^{\infty}})\) to that of \(\dim_{H}(\mathcal{J}_{\mathcal{T}})\) for the limit set \(\mathcal{J}_{\mathcal{T}}\) associated to \(S_{\pm 1,\mathcal{T}}\) from (2.4)
(and similarly for the other examples). Unfortunately, since the maps \(S_{\pm 1,\mathcal{T}}\) are non-linear it is not possible to give an explicit closed form for the value \(\dim_{H}(\sigma(-\Delta_{\mathcal{L}}))=\dim_{H}(\mathcal{J}_{\mathcal{T}})\). Recently developed simple methods make the numerical estimation of this value relatively easy to implement, which we summarize in the following subsections.
### A functional characterization of dimension
Let \(\mathcal{B}=C(I)\) be the Banach space of continuous functions on the interval \(I=[0,5]\) with the norm \(\|f\|_{\infty}=\sup_{x\in I}|f(x)|\).
**Definition 4.1**.: Let \(\mathcal{L}_{t}\) (for \(t\geqslant 0\)) be the _transfer operator_ defined by
\[\mathcal{L}_{t}f(x)=|S^{\prime}_{-1,\mathcal{T}}(x)|^{t}f(S_{-1,\mathcal{T}}( x))+|S^{\prime}_{+1,\mathcal{T}}(x)|^{t}f(S_{+1,\mathcal{T}}(x))\]
where \(f\in\mathcal{B}\) and \(x\in I\), and \(S_{\pm 1,\mathcal{T}}\) are as in (2.4).
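In code, evaluating \(\mathcal{L}_{t}f\) pointwise is straightforward, since \(|S^{\prime}_{\pm 1,\mathcal{T}}(x)|=1/\sqrt{25-4x}\) for both branches (sketch ours):

```python
def transfer_op(t: float, f, x: float) -> float:
    """(L_t f)(x) from Definition 4.1, using the two branches of (2.4)."""
    s = (25.0 - 4.0 * x) ** 0.5
    weight = s ** (-t)  # |S'_{eps,T}(x)|^t, identical for eps = +1 and -1
    return weight * (f(2.5 - 0.5 * s) + f(2.5 + 0.5 * s))
```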
It is well known that the transfer operator \(\mathcal{L}_{t}\) (for \(t\geqslant 0\)) is a well-defined positive bounded operator from \(\mathcal{B}\) to itself. To make use of the results in the previous sections, we employ the following "min-max method" result:
**Lemma 4.2** ([17]).: _Given choices of \(0<t_{0}<t_{1}<1\) and strictly positive continuous functions \(f,g\colon I\to\mathbb{R}^{+}\) with_
\[\inf_{x\in I}\frac{\mathcal{L}_{t_{0}}f(x)}{f(x)}>1\quad\text{ and }\quad\sup_{x\in I}\frac{\mathcal{L}_{t_{1}}g(x)}{g(x)}<1, \tag{4.1}\]
_then \(t_{0}<\dim_{H}(\mathcal{J}_{\mathcal{T}})<t_{1}\)._
Proof.: We briefly recall the proof. We require the following standard properties.
1. For any \(t\geqslant 0\) the operator \(\mathcal{L}_{t}\) has a maximal positive simple eigenvalue \(e^{P(t)}\) (with positive eigenfunction), where \(P\) is the pressure function [23, 16].
2. \(P\colon\mathbb{R}^{+}\to\mathbb{R}\) is real analytic and convex [23].
3. The value \(t=\dim_{H}(\mathcal{J}_{\mathcal{T}})\) is the unique solution to \(P(t)=0\), see [5, 24].
By property 1. and the first inequality in (4.1) we can deduce that
\[P(t_{0})=\lim_{n\to+\infty}\frac{1}{n}\log\|\mathcal{L}_{t_{0}}^{n}f\|_{\infty }>0. \tag{4.2}\]
By property 1. and the second inequality in (4.1) we can deduce that
\[P(t_{1})=\lim_{n\to+\infty}\frac{1}{n}\log\|\mathcal{L}_{t_{1}}^{n}g\|_{\infty }<0. \tag{4.3}\]
Comparing properties 2. and 3. with (4.2) and (4.3), the result follows.
### Rigorous verification of minmax inequalities
Next, we explain how we rigorously verify the conditions of Lemma 4.2 for a function \(f\colon I\to\mathbb{R}^{+}\), that is,
1. \(f>0\),
2. \(\inf_{x\in I}h(x)>1\) or \(\sup_{x\in I}h(x)<1\) for \(h(x):=(\mathcal{L}_{t}f)(x)/f(x)\).
In order to obtain rigorous results we make use of the arbitrary precision ball arithmetic library Arb [11], which for a given interval \([c-r,c+r]\) and function \(f\) outputs an interval \([c^{\prime}-r^{\prime},c^{\prime}+r^{\prime}]\) such that \(f([c-r,c+r])\subseteq[c^{\prime}-r^{\prime},c^{\prime}+r^{\prime}]\) is guaranteed. Clearly, the smaller the size of the input interval, the tighter the bounds on its image. Thus, in order to verify the above conditions we partition the interval \(I\) adaptively using a bisection method up to depth \(k\in\mathbb{N}_{0}\) into at most \(2^{k}\) subintervals, and check these conditions on each subinterval. While the first condition is often immediately satisfied for chosen test functions \(f\) on the whole interval \(I\), the second condition is much harder to check as \(h\) is very close to \(1\) and would require very large depth \(k\).
To counteract the exponential growth of the number of required subintervals, we use tighter bounds on the image of \(h\). Clearly for \(x\in[c-r,c+r]\) with \(c\in\mathbb{R}\) and \(r>0\), we have that \(|h(x)-h(c)|\leqslant\sup_{y\in[c-r,c+r]}|h^{\prime}(y)|r\) by the mean value theorem. More generally, we obtain for \(p\in\mathbb{N}\) that
\[|h(x)-h(c)|\leqslant\sum_{i=1}^{p-1}|h^{(i)}(c)|r^{i}+\sup_{y\in[c-r,c+r]}|h^ {(p)}(y)|r^{p}.\]
This allows us to achieve substantially tighter bounds on \(h([c-r,c+r])\) while using a moderate number of subintervals, at the cost of additionally computing the first \(p\) derivatives of \(h\).
### Choice of \(f\) and \(g\) via an interpolation method
Here we explain how to choose suitable functions \(f\) and \(g\) for use in Lemma 4.2, so that given candidate values \(t_{0}<t_{1}\) we can confirm that \(t_{0}<\dim_{H}(\mathcal{J}_{\mathcal{T}})<t_{1}\). Clearly, if \(f\) and \(g\) are eigenfunctions of \(\mathcal{L}_{t_{0}}\) and \(\mathcal{L}_{t_{1}}\) for the eigenvalues \(\lambda_{t_{0}}\) and \(\lambda_{t_{1}}\), respectively, then condition (4.1) is easy to check. As these eigenfunctions are not known explicitly, we will use the Lagrange-Chebyshev interpolation method to approximate the respective transfer operators by finite-rank operators of rank \(m\), and thus obtain approximations \(f^{(m)}\) and \(g^{(m)}\) of \(f\) and \(g\). As the maps \(S_{\pm 1,\mathcal{T}}\) involved in the definition of the transfer operator (Definition 4.1) extend to holomorphic functions on suitable ellipses, Theorem 3.3 and Corollary 3 of [3] guarantee that the (generalized) eigenfunctions of the finite-rank operator converge (in supremum norm) exponentially fast in \(m\) to those of the transfer operator. In particular, for large enough \(m\) the functions \(f^{(m)}\) and \(g^{(m)}\) are positive on the interval \(I\) and are good candidates for Lemma 4.2.
**Initial choice of \(m\).** We first make an initial choice of \(m\geqslant 1\). Let \(\ell_{n}\colon I\to\mathbb{R}\) (for \(n=0,\ldots,m-1\)) denote the Lagrange polynomials scaled to \([0,5]\) and let \(x_{n}\in[0,5]\) (for \(n=0,\ldots,m-1\)) denote the associated Chebyshev points.
**Initial construction of test functions.** Let \(v^{t}=(v^{t}_{i})_{i=0}^{m-1}\) be the left eigenvector for the maximal eigenvalue of the \(m\times m\) matrix \(M_{t}(i,j)=(\mathcal{L}_{t}\ell_{i})(x_{j})\) (for \(0\leqslant i,j\leqslant m-1\)) and
set
\[f^{(m)}:=\sum_{i=0}^{m-1}v_{i}^{t_{0}}\ell_{i}\quad\text{ and }\quad g^{(m)}:=\sum_{i=0}^{m-1}v_{i}^{t_{1}}\ell_{i}.\]
If the choices \(f=f^{(m)}\) and \(g=g^{(m)}\) satisfy the hypotheses of Lemma 4.2 (which can be checked rigorously with the method in the previous section) then we proceed to the next step. If they do not, we increase \(m\) and try again.
**Conclusion.** When the hypothesis of Lemma 4.2 holds then its assertion confirms that \(t_{0}<\dim_{H}(\mathcal{J}_{\mathcal{T}})<t_{1}\).
It remains to iteratively make the best possible choices of \(t_{0}<t_{1}\) using the following approach.
### The bisection method
Fix \(\epsilon>0\). We can combine the above method of choosing \(f\) and \(g\) with a bisection method to improve given lower and upper bounds \(t_{0}\) and \(t_{1}\) until the latter are \(\epsilon\)-close:
**Initial choice.** First we can set \(t_{0}^{(1)}=0\) and \(t_{1}^{(1)}=1\), for which \(t_{0}^{(1)}<\dim_{H}(\mathcal{J}_{\mathcal{T}})<t_{1}^{(1)}\) is trivially true.
**Iterative step.** Given \(n\geqslant 0\) we assume that we have chosen \(t_{0}^{(n)}<t_{1}^{(n)}\). We can then set \(T=(t_{0}^{(n)}+t_{1}^{(n)})/2\) and proceed as follows.
1. If \(t_{0}^{(n)}<\dim_{H}(\mathcal{J}_{\mathcal{T}})<T\) then set \(t_{0}^{(n+1)}=t_{0}^{(n)}\) and \(t_{1}^{(n+1)}=T\).
2. If \(T<\dim_{H}(\mathcal{J}_{\mathcal{T}})<t_{1}^{(n)}\) then set \(t_{0}^{(n+1)}=T\) and \(t_{1}^{(n+1)}=t_{1}^{(n)}\).
3. If \(\dim_{H}(\mathcal{J}_{\mathcal{T}})=T\) then we have the final value.4 Footnote 4: In practical implementation, the case (iii) is of no relevance, and the only meaningful termination condition is given by \(t_{1}-t_{0}<\epsilon\).
**Final choice.** Once we arrive at \(t_{1}^{(n)}-t_{0}^{(n)}<\epsilon\) then we can set \(t_{0}=t_{0}^{(n)}\) and \(t_{1}=t_{1}^{(n)}\) as the resulting upper and lower bounds for the true value of \(\dim_{H}(\mathcal{J}_{\mathcal{T}})\).
Applying this algorithm yields the proof of Theorem 1.4 (and with the obvious modifications also those of Theorems 3.2 and 3.8). Specifically, we computed the value of \(\dim_{H}(\mathcal{J}_{\mathcal{T}})\) efficiently to the \(14\) decimal places as stated with the above method, by setting \(\epsilon=10^{-15}\), using finite-rank approximation up to rank \(m=30\), running interval bisections for rigorous minmax inequality verification up to depth \(k=18\), i.e. using up to \(2^{18}\) subintervals, and using \(p=2\) derivatives. There are of course many ways to improve accuracy further, e.g., with more computation or the use of higher derivatives.
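For illustration only, the following is a compact floating-point sketch of the pipeline (Lagrange-Chebyshev collocation combined with bisection); it omits the rigorous ball-arithmetic verification of the min-max inequalities, and all names below are ours:

```python
# Non-rigorous sketch of the algorithm of Section 4 in ordinary floating point;
# a rigorous implementation replaces floats by ball arithmetic (e.g. Arb).
import numpy as np

m = 20
# Chebyshev points scaled to I = [0, 5]
x = 2.5 * (1.0 + np.cos(np.pi * (2.0 * np.arange(m) + 1.0) / (2.0 * m)))

def lagrange(i: int, y: float) -> float:
    """Lagrange basis polynomial ell_i at the Chebyshev nodes, evaluated at y."""
    num = np.prod([y - x[k] for k in range(m) if k != i])
    den = np.prod([x[i] - x[k] for k in range(m) if k != i])
    return num / den

def leading_eigenvalue(t: float) -> float:
    """Largest eigenvalue in modulus of M_t(i, j) = (L_t ell_i)(x_j),
    approximating exp(P(t))."""
    s = np.sqrt(25.0 - 4.0 * x)
    w = s ** (-t)
    M = np.array([[w[j] * (lagrange(i, 2.5 - 0.5 * s[j]) + lagrange(i, 2.5 + 0.5 * s[j]))
                   for j in range(m)] for i in range(m)])
    return np.max(np.abs(np.linalg.eigvals(M)))

# Bisection on t for the zero of the pressure, i.e. leading_eigenvalue(t) = 1.
t0, t1 = 0.0, 1.0
while t1 - t0 > 1e-8:
    mid = 0.5 * (t0 + t1)
    if leading_eigenvalue(mid) > 1.0:  # P(mid) > 0, so mid < dim_H(J_T)
        t0 = mid
    else:
        t1 = mid
print(t0, t1)  # should bracket 0.5516...
```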
**Example 4.3** (Sierpinski triangle).: To cheaply obtain a more accurate estimate (albeit without the rigorous guarantee resulting from the use of ball arithmetic), we use the MaxValue routine from Mathematica. To get an estimate on \(\dim_{H}(\mathcal{J}_{\mathcal{T}})\) to \(60\) decimal places, we work with \(100\) decimal places as Mathematica's precision setting. Taking \(m=60\) we use the
bisection method and, starting from the interval \([0.2,0.8]\), after \(199\) iterations we have upper and lower bounds \(t_{0}\leqslant\dim_{H}(\mathcal{J}_{\mathcal{T}})\leqslant t_{1}\), where
\[t_{0}= 0.5516185683724609316975708723135206545360797417440422\] \[082662966000504800341581203344828264869391054705\]
and
\[t_{1}= 0.5516185683724609316975708723135206545360797417440422\] \[082662980935741467208321300490581993941689232122.\]
With a little more computational effort (200 decimals of precision, \(m=100\), \(329\) iterations) we can improve the estimate to \(100\) decimal places:
\[t_{0}= 0.55161856837246093169757087608456543417211766450713\] \[88681168316991686668142241904865834395086581396924\] \[80473399364569014861603996382396316337795734913712\] \[92389795501216939500532891268573684698907908711334\]
and
\[t_{1}= 0.55161856837246093169757087608456543417211766450713\] \[88681168316991686668142241904865834395086581396926\] \[63351381969733012016129364111250869850101334085360\] \[70969237514708581622707399079704491867257671463809,\]
which yields the estimate:
\[\dim_{H}(\mathcal{J}_{\mathcal{T}})=0.5516185683724609316975708760845654341721176\] \[6450718868116831699168666814224190486583439508658139692\dots.\]
We next consider a second example, \(SG_{3}\); see Example 3.6.
**Example 4.4** (\(SG_{3}\) gasket).: With the same method as in the previous example, we estimate bounds on \(\dim_{H}(\mathcal{J}_{SG_{3}})\) to \(60\) decimal places:
\[t_{0}= 0.6175063018623522290424948743164070963419768663609616\] \[039516140619156598666691050499356772905041875773\]
and
\[t_{1}= 0.6175063018623522290424948743164070963419768663609616\] \[039516151934758805391761943498290334758478481658,\]
which yields the estimate:
\[\dim_{H}(\mathcal{J}_{SG_{3}})=0.617506301862352229042494874316\] \[40709634197686636096160395161\dots\]
**Remark 4.5**.: A significant contribution to the time complexity of the algorithm is that of estimating the top eigenvalue and corresponding eigenvector of an \(m\times m\) matrix which is \(O(n\cdot m^{2})\) with \(n\) denoting the number of steps of the power iteration method. Moreover, by perturbation theory one might expect that in order to get an error in the eigenvector of \(\epsilon>0\) one needs to choose \(m=O(\log(1/\epsilon))\) and \(n=O(\log(1/\epsilon))\).
## 5 Conclusion
In this note, we have leveraged the existing theory on Laplacians associated to Sierpinski lattices, infinite Sierpinski gaskets and other post-critically finite self-similar sets, in order to establish the Hausdorff dimensions of their respective spectra. We used the insight that, by virtue of the iterative description of these spectra, these dimensions coincide with those of the Julia sets of certain rational functions. Since the contractive local inverse branches of these functions are non-linear, the values of the Hausdorff dimensions are not available in an explicit closed form, in contrast to the dimensions of the (infinite) Sierpinski gaskets themselves, or other self-similar fractals constructed using contracting similarities and satisfying an open set condition. Therefore we use the fact that the Hausdorff dimension can be expressed implicitly as the unique zero of a so-called pressure function, which itself corresponds to the maximal positive simple eigenvalue of a family of positive transfer operators. Using a min-max method combined with the Lagrange-Chebyshev interpolation scheme we can rigorously estimate the leading eigenvalues for every operator in this family. Combined with a bisection method we then accurately and efficiently estimate the zeros of the respective pressure functions, yielding rigorous and effective bounds on the Hausdorff dimensions of the spectra of the relevant Laplacians.
|
2309.06805 | FedDIP: Federated Learning with Extreme Dynamic Pruning and Incremental
Regularization | Federated Learning (FL) has been successfully adopted for distributed
training and inference of large-scale Deep Neural Networks (DNNs). However,
DNNs are characterized by an extremely large number of parameters, thus,
yielding significant challenges in exchanging these parameters among
distributed nodes and managing the memory. Although recent DNN compression
methods (e.g., sparsification, pruning) tackle such challenges, they do not
holistically consider an adaptively controlled reduction of parameter exchange
while maintaining high accuracy levels. We, therefore, contribute with a novel
FL framework (coined FedDIP), which combines (i) dynamic model pruning with
error feedback to eliminate redundant information exchange, which contributes
to significant performance improvement, with (ii) incremental regularization
that can achieve \textit{extreme} sparsity of models. We provide convergence
analysis of FedDIP and report on a comprehensive performance and comparative
assessment against state-of-the-art methods using benchmark data sets and DNN
models. Our results showcase that FedDIP not only controls the model sparsity
but efficiently achieves similar or better performance compared to other model
pruning methods adopting incremental regularization during distributed model
training. The code is available at: https://github.com/EricLoong/feddip. | Qianyu Long, Christos Anagnostopoulos, Shameem Puthiya Parambath, Daning Bi | 2023-09-13T08:51:19Z | http://arxiv.org/abs/2309.06805v1 | # FedDIP: Federated Learning with Extreme Dynamic Pruning and Incremental Regularization
###### Abstract
Federated Learning (FL) has been successfully adopted for distributed training and inference of large-scale Deep Neural Networks (DNNs). However, DNNs are characterized by an extremely large number of parameters, thus, yielding significant challenges in exchanging these parameters among distributed nodes and managing the memory. Although recent DNN compression methods (e.g., sparsification, pruning) tackle such challenges, they do not holistically consider an adaptively controlled reduction of parameter exchange while maintaining high accuracy levels. We, therefore, contribute with a novel FL framework (coined FedDIP), which combines (i) dynamic model pruning with error feedback to eliminate redundant information exchange, which contributes to significant performance improvement, with (ii) incremental regularization that can achieve _extreme_ sparsity of models. We provide convergence analysis of FedDIP and report on a comprehensive performance and comparative assessment against state-of-the-art methods using benchmark data sets and DNN models. Our results showcase that FedDIP not only controls the model sparsity but efficiently achieves similar or better performance compared to other model pruning methods adopting incremental regularization during distributed model training. The code is available at : [https://github.com/EricLoong/feddip](https://github.com/EricLoong/feddip).
Federated Learning, dynamic pruning, extreme sparsification, incremental regularization.
## I Introduction
Federated Learning (FL) [1] is a prevalent _distributed learning_ paradigm due to its ability to tackle learning at scale. FL plays a significant role in large-scale predictive analytics by enabling the decentralization of knowledge discovery. FL contributes towards privacy preservation, which overcomes fundamental issues of data governance and ownership [2]. Distributed training and deploying large-scale Machine Learning (ML) models, i.e., Deep Neural Networks (DNNs), impose significant challenges due to the huge volumes of training data, _large_ models, and diversity in data distributions.
Distributed computing nodes, mainly located at the network edge and as close to the data sources as possible, collaboratively engineer ML models rather than depending on collecting all the data at a centralized location (data center or Cloud) for training [3]. This computing paradigm, coined Edge Computing, has been successfully applied to various predictive modeling, mining, and analytics applications, e.g., in finance [4], healthcare [5], and wireless sensor networks [6].
DNNs are characterized by an extremely large number of parameters. For instance, the Convolutional Neural Networks (CNN) ResNet50 [7] and VGG16 [8] consist of 27 and 140 million parameters, respectively, while generative AI models, like GPT-2 have more than 1.5 billion parameters [9]. Evidently, this places a great burden on distributed computing nodes when exchanging model parameters during training, tuning, and inference.
Model size reduction (pruning) methods, e.g., [10, 11, 12], aim to retain the prediction accuracy while reducing the communication overhead by decreasing the number of model parameters exchanged among nodes. However, most pruning methods focus on the compression of model gradients. Even though they yield high compression rates, they do not achieve significantly compact models for exchange. In general, methods that exploit the significant redundancy in the number of DNN weights to produce compact models by sophisticated weight pruning are deemed appropriate [13]. In contrast to model gradient compression, model weight compression significantly shrinks the model size by setting most of the weights to zero. This is desirable for eliminating redundancy in model exchange during distributed knowledge extraction, but such models often suffer from performance degradation. Therefore, the question we are addressing is: _How to effectively introduce model pruning mechanisms in a decentralized learning setting that are capable of achieving extremely high compression rates while preserving optimal predictive performance?_ We contribute with an efficient method based on _dynamic pruning with error feedback_ and _incremental regularization_, coined FedDIP. FedDIP's novelty lies in the principle of adapting dynamic pruning in a decentralized way by pushing unimportant weights to zero (extreme pruning) whilst maintaining high accuracy through incremental regularization. To the best of our knowledge, FedDIP is the first approach that combines incremental regularization and extreme dynamic pruning in FL.
The paper is organized as follows: Section II reports on related work and our contribution. Section III provides preliminaries in FL and model pruning methods. Section IV elaborates on the FedDIP framework, while Section V reports on the theoretical properties of FedDIP and convergence analysis. Our experimental results in Section VI showcase the efficiency of FedDIP in distributed learning. Section VII concludes the paper with future research directions.
## II Related Work & Contribution
### _Model Gradient & Model Weight Sparsification_
Expensive and redundant sharing of model weights is a significant obstacle in distributed learning [14]. The size of the models exchanged among nodes can be reduced by compression and sparsification. The work in [11] adopts magnitude-based selection on model gradients to achieve sparsification when using Stochastic Gradient Descent (SGD). Instead of dense weight updates, [10] proposed a distributed SGD that keeps 1% of the gradients by comparing their magnitude values. The method in [15] scales up SGD training of DNNs by controlling the rate of weight updates per individual weight. [16] develops an encoding of SGD-based gradient vectors that reduces communication overhead. [17] proposed the periodic quantized averaging SGD strategy, which attains similar model predictive performance while the size of the shared model gradients is reduced by \(95\%\). In [18], the authors argued that 99% of gradients are redundant and introduced a deep gradient compression method, which achieves compression rates in the range 270-600 without sacrificing accuracy. The _gTop-k_ gradient sparsification method in [19] reduces communication cost based on the _Top-k_ method in [18]. [20] develops a method based on [21] that adaptively compresses the size of exchanged model gradients via quantization.
In contrast to gradient sparsification, the shrinkage of the _entire_ model size is of paramount importance in distributed learning. It not only eliminates communication redundancy during training but also reduces storage and inference time, which makes FL attractive for distributed knowledge systems. However, so far, only centralized learning adopts model compression via, e.g., weight pruning, quantization, low-rank factorization, transferred convolutional filters, and knowledge distillation [22], with pruning being our focus in this work. SNIP [23] introduces a method that prunes a DNN model once (i.e., prior to training) based on the identification of important connections in the model. [24] proposes a centralized two-step method that prunes each layer of a DNN via regression-based channel selection and least-squares reconstruction. The method in [25] prunes CNNs centrally using the Alternating Direction Method of Multipliers (ADMM). Following [25], the PruneTrain method [26] uses structured group-LASSO regularization to accelerate CNN training in a centralized location only. The DPF method [27] allows dynamic management of the model sparsity with a feedback mechanism that re-activates pruned weights.
### _Contribution_
Most of the approaches in FL take into account only the communication overhead and thus adopt gradient sparsification. Nonetheless, weight sparsification is equally important and can lead to _accurate_ distributed sparse models. Such sparse models are lightweight and, thus, suitable for storage, transfer, training, and fast inference. As shown in [28], model weight and gradient averaging policies are _equivalent_ only when the local number of model training epochs equals one. FedDIP bridges the gap of weight-averaging-based pruning in FL by obtaining highly accurate sparse models through incremental regularization and by reducing communication during training through dynamic pruning.
To the best of our knowledge, in distributed learning, the PruneFL [12], FedDST [29], and LotteryFL [30] methods attempt model pruning. However, LotteryFL focuses on a completely different problem from ours: it tries to _discover sparse local_ sub-networks (a.k.a. Lottery Ticket Networks) of a base DNN model. In contrast, FedDIP searches for a _sparse global_ DNN model with mask readjustments on a central server, as we will elaborate on later. PruneFL starts with a pre-selected node to train a global shared mask function, while FedDIP generates the mask function with weights following the Erdos-Renyi-Kernel (ERK) distribution [31], as we will discuss in later sections. FedDST, as proposed by Bibikar et al., initially derives a pruning mask based on the ERK distribution. Subsequent stages involve layerwise pruning on the global model. The method ensures efficient training through a prune-regrow procedure, which maintains a local sparse mask, particularly under non-i.i.d. data distributions. Our technical contributions are:
* An innovative federated learning paradigm, coined FedDIP, combines extreme sparsity-driven model pruning with incremental regularization.
* FedDIP incurs negligible overhead while keeping accuracy at the same or even higher levels for extremely pruned models.
* Theoretical convergence and analysis of FedDIP.
* A comprehensive performance evaluation and comparative assessment of FedDIP with benchmark i.i.d. and non-i.i.d. datasets and various DNN models. Our experimental results reveal that FedDIP, in the context of high model compression rates, delivers superior prediction performance compared to the baseline methods and other approaches found in the literature, specifically, FedAvg [1], PruneFL [12], PruneTrain [26], FedDST [29], DPF [27], and SNIP [23].
| Notation | Definition |
| --- | --- |
| \(N, K\) | total number of nodes; \(K<N\) nodes participate in each training round |
| \(n, z\) | \(n\) indexes a node and \(z\) indexes a DNN layer; \(n\in[N]\), \(z\in[Z]\), where \([N]\) abbreviates the integer sequence \(1,2,\ldots,N\) |
| \(\mathcal{D}_{n}, D_{n}\) | dataset and its size on node \(n\) |
| \((\mathbf{x},y)\in\mathcal{D}_{n}\) | \(\mathbf{x}, y\) are features and labels in node \(n\)'s dataset |
| \(f(\cdot), \nabla f(\cdot)\) | loss function and its derivative |
| \(\rho_{n}, \eta\) | weight percentage and learning rate |
| \(\mathbf{\omega}_{G}, \mathbf{\omega}_{n}, \mathbf{\omega}^{\prime}_{n}\) | global, local, and pruned local model parameters |
| \(T, E_{l}, \tau, l\) | global and local rounds, global and local epochs |
| \(\lambda\) | regularization hyperparameter |
| \(\odot\) | element-wise (Hadamard) product |
| \(s_{0}, s_{t}, s_{p}\) | initial sparsity, sparsity at round \(t\), final sparsity |

TABLE I: Table of Notations
## III Preliminaries
### _Federated Learning_
For the general notations and definitions, please refer to Table I. Consider a distributed learning system involving a set of \(N\) nodes (clients) \(\mathcal{N}=\{1,2,\ldots,N\}\). Let \(\mathcal{D}_{n}=\{(\mathbf{x},y)\}\) be the local dataset associated with a node \(n\in\mathcal{N}\) such that \(\mathbf{x}\in\mathcal{X}\subset\mathbb{R}^{d}\), \(y\in\mathcal{Y}\subset\mathbb{R}\), and \(D_{n}=|\mathcal{D}_{n}|\). In the standard FL setting, given a subset of \(K<N\) nodes \(\mathcal{N}_{c}\subset\mathcal{N}\), the local loss is given by:
\[f_{n}(\mathbf{\omega})=\frac{1}{D_{n}}\sum_{(\mathbf{x},y)\in\mathcal{D}_{n}} \mathcal{L}(\mathcal{G}(\mathbf{\omega},\mathbf{x}),y) \tag{1}\]
where \(\mathbf{\omega}\) is the model parameter, \(\mathcal{G}\) is the discriminant function that maps the input space to output space and \(\mathcal{L}\) is a loss function that measures the quality of the prediction, e.g., mean-squared-error, maximum likelihood, cross-entropy loss. The global loss function for all the selected nodes \(n\in\mathcal{N}_{c}\) is:
\[f(\mathbf{\omega})=\sum_{n\in\mathcal{N}_{c}}\rho_{n}f_{n}(\mathbf{\omega}),\text{ where }\rho_{n}=\frac{D_{n}}{\sum_{j\in\mathcal{N}_{c}}D_{j}}. \tag{2}\]
The model training process spans \(T\) global rounds, each with \(L\) local rounds. Let \(t\in\{0,1,\ldots,T-1\}\) be a discrete-time instance during the training process. Then, \(\tau=\lfloor\frac{t}{L}\rfloor L\) is the start time of the current global epoch. At \(\tau\), the nodes (clients) receive the updated aggregated weights \(\bar{\mathbf{\omega}}^{\tau}\) from the node responsible for aggregating the nodes' model parameters, a.k.a. the server node. The local training at client \(n\) at local epoch \(l=1,\ldots,L\) proceeds as:
\[\mathbf{\omega}_{n}^{\tau+l+1}=\mathbf{\omega}_{n}^{\tau+l}-\eta_{\tau+l}\nabla f_{n}(\mathbf{\omega}_{n}^{\tau+l}), \tag{3}\]
where \(\eta\in(0,1)\) is the learning rate. The weight averaging policy on the server node can be written as:
\[\bar{\mathbf{\omega}}^{\tau} = \sum_{n\in\mathcal{N}}\rho_{n}\mathbf{\omega}_{n}^{\tau}. \tag{4}\]
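To ground these preliminaries, the following is a minimal sketch of the update rules in Eqs. (2)-(4); the function names, the toy quadratic losses, and the NumPy representation of flattened weights are illustrative assumptions of ours, not part of the paper's released code.

```
# Minimal sketch of local SGD (Eq. (3)) and weighted averaging (Eqs. (2), (4)).
import numpy as np

def local_sgd(w, grad_fn, eta, local_epochs):
    # Eq. (3): plain SGD steps on the local loss f_n.
    w = w.copy()
    for _ in range(local_epochs):
        w -= eta * grad_fn(w)
    return w

def fedavg_aggregate(local_weights, dataset_sizes):
    # Eqs. (2) and (4): weighted average with rho_n = D_n / sum_j D_j.
    rho = np.asarray(dataset_sizes, dtype=float)
    rho /= rho.sum()
    return sum(r * w for r, w in zip(rho, local_weights))

# Toy usage: two clients, each minimizing f_n(w) = 0.5 * ||w - c_n||^2.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
locals_ = [local_sgd(np.zeros(2), lambda w, c=c: w - c, 0.1, 5) for c in targets]
print(fedavg_aggregate(locals_, dataset_sizes=[100, 300]))
```

Note that the aggregation weights \(\rho_{n}\) make clients with larger datasets contribute proportionally more to the global model.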
### _Model Pruning_
In centralized learning systems (e.g., in the Cloud), where all data are centrally stored and available, model pruning [32] aims to sparsify the various connection matrices that represent the weights of the DNN models. Notably, _sparsity_, hereinafter denoted by \(s\in[0,1]\), indicates the proportion of zero (pruned) weights among the overall weights. A 100% sparse (\(s=1\)) model indicates that all the weights are negligible (their values are close to \(0\)), while a 0% sparse (\(s=0\)) model stands for the full model with original weight values. Typically, the reduction of the number of nonzero weights (pruning) of a DNN model is achieved using _mask functions_. A mask function \(\mathbf{m}\) acts like an indicator function that decides whether the parameter/weight at a certain position in a layer of a DNN model is zero or not. Model pruning based on mask functions requires a criterion to select the parameters to prune. The most common pruning criterion considers the absolute value of the weight of each parameter in a layer. Generally, a parameter is removed from the training process if the absolute value of its weight is less than a predefined threshold.
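As a hedged illustration of the mask-based magnitude criterion just described (not the paper's implementation), the sketch below builds an indicator mask that zeroes the fraction \(s\) of weights with the smallest magnitudes; the helper name is our own.

```
# Illustrative magnitude-based masking: zero out the s*100% of weights
# with the smallest absolute values.
import numpy as np

def magnitude_mask(weights, sparsity):
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to prune
    if k == 0:
        return np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

w = np.random.randn(4, 4)
m = magnitude_mask(w, sparsity=0.75)       # m acts as the indicator function
w_pruned = w * m                           # elementwise (Hadamard) product
print(f"achieved sparsity: {1 - m.mean():.2f}")
```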
On the other hand, model pruning in FL is vital in light of reducing communication cost in _each_ training round. Moreover, the global number of rounds should be reduced as this significantly contributes to the overall communication overhead. Hence, in FL, pruning aims at _extreme_ model compression rates, i.e., \(s\geq 0.8\) with a relatively small compromise in prediction accuracy. It is then deemed appropriate to introduce a distributed and adaptive pruning method with relatively high and controlled DNN model sparsity, which reduces communication costs per round along with ensuring convergence under high sparsity with only a marginal decrease in prediction accuracy.
The pruning techniques are typically categorized into three classes: _pruning before training_ (e.g., SNIP [23]), _pruning during training_ (e.g., PruneTrain [26], FedDST [29], DPF [27], and PruneFL [12]), and _pruning after training_. In this work, we concentrate on the two former techniques, which deal with efficient model training; the _pruning after training_ approach offers limited utility in the context of distributed learning. The two commonly employed techniques for pruning are: (i) Regularization-based Pruning (RP) and (ii) Importance-based Pruning (IP) [33]. The interested reader may refer to [24, 25, 33] and the references therein for a comprehensive survey of RP and IP techniques. RP uses the intrinsic sparsity-inducing properties of the \(L_{1}\) (Manhattan distance) and \(L_{2}\) (Euclidean distance) norms to limit the _importance_ of different model parameters. The sparsity-inducing norms constrain the weights of the unimportant parameters to small absolute values during training. Moreover, RP can effectively constrain the weights into a sparse model space via tuning the regularization hyperparameter \(\lambda\). Whereas in IP, parameters are pruned purely based on predefined formulae defined in terms of the weights of the parameters or the sum of the weights. IP techniques were originally proposed in unstructured pruning settings, which can result in sparse models not capable of speeding up the computation. Even though RP techniques are considered superior to IP techniques, they struggle with two fundamental challenges: (**C1**) The first challenge pertains to controlling the sparsity value \(s\) during pruning. For example, in PruneTrain [26], employing a pruning threshold value of \(10^{-4}\) to eliminate model parameters does not guarantee the delivery of a sparse model. (**C2**) The second challenge is dynamically tuning the regularization parameter \(\lambda\). A large \(\lambda\) leads to model divergence during training, as the model may excessively lean towards penalty patterns. Adding regularization terms to DNN training traditionally targets overfitting; however, additional regularization for the prunable layers is required for RP, which is the core difference between traditional training and RP-based training.

Fig. 1: **Illustration of the FedDIP framework:**
(1) During the downlink phase, the pruned global model \(\omega_{G}^{\prime}\) is broadcast to participating clients.
(2) In the uplink phase, each selected client communicates its local dense model \(\omega_{n}\) back to the server for aggregation.
(3) The global mask \(\mathbf{m}_{G}\) is derived from the global model, directing the sparse training (DPF) across clients.
## IV The FedDIP Framework
The proposed FedDIP framework integrates extreme dynamic pruning _with_ error feedback and incremental regularization in distributed learning environments. Figure 1 illustrates a schematic representation of the FedDIP, which will be elaborated on in this section. FedDIP attempts to effectively train pruned DNN models across collaborative clients ensuring convergence by addressing the two challenges **C1** and **C2** prevalent in RP-based methods discussed in Section III-B.
The dynamic pruning method (DPF) in [27] demonstrates improved performance in comparison with other baselines under high sparsity. Given the SGD update scheme, the model gradient in DPF is computed on the pruned model as:
\[\mathbf{\omega}_{t+1}=\mathbf{\omega}_{t}-\eta_{t}\nabla f(\mathbf{\omega}^{\prime}_{t})= \mathbf{\omega}_{t}-\eta_{t}\nabla f(\mathbf{\omega}_{t}\odot\mathbf{m}_{t}), \tag{5}\]
taking into account the error feedback (analytically):
\[\mathbf{\omega}_{t+1}=\mathbf{\omega}_{t}-\eta_{t}\nabla f(\mathbf{\omega}_{t}+\mathbf{e} _{t}), \tag{6}\]
where \(\mathbf{e}_{t}=\mathbf{\omega}^{\prime}_{t}-\mathbf{\omega}_{t}\). In (5), \(\odot\) represents the Hadamard (element-wise) product, \(\mathbf{\omega}_{t}\) represents the entire model parameters, \(\mathbf{\omega}^{\prime}_{t}\) represents the pruned model parameters, and \(\mathbf{m}\) is the mask function used for pruning, as in, e.g., [12, 26, 27]. The mask is applied to the model parameters \(\mathbf{\omega}_{t}\) to eliminate weights according to the magnitude of each weight, thus producing the pruned \(\mathbf{\omega}^{\prime}_{t}\). Applying the gradient in this way allows recovering from errors due to premature masking-out of important weights, i.e., the rule in (5) takes a step that best suits the pruned model (our target). In contrast, the pruning methods previously adopted in FL, e.g., [12], led to sub-optimal decisions by adopting the rule:
\[\mathbf{\omega}_{t+1}=\mathbf{\omega}^{\prime}_{t}-\eta_{t}\nabla f(\mathbf{\omega}^{\prime }_{t}). \tag{7}\]
One can observe that the update rule in (5) retains more information than the rule in (7): only the gradient, not the iterate itself, is computed on the pruned model, so the masked-out weights keep receiving updates. This is expected to yield superior performance under high sparsity.
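To make the contrast explicit, here is a small sketch of the two update rules under stated assumptions; `grad_fn`, `mask`, and the function names are placeholders of ours, not DPF's API.

```
# Sketch contrasting the DPF-style rule of Eq. (5) with the rule of Eq. (7).
def dpf_step(w, mask, grad_fn, eta):
    # Eq. (5): the dense iterate w is kept, but the gradient is evaluated
    # at the pruned model w * mask (implicit error feedback, Eq. (6)).
    return w - eta * grad_fn(w * mask)

def naive_pruned_step(w, mask, grad_fn, eta):
    # Eq. (7): both the iterate and the gradient live on the pruned model,
    # so information in the masked-out coordinates is discarded.
    w_pruned = w * mask
    return w_pruned - eta * grad_fn(w_pruned)
```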
Moreover, it is known that the multi-collinearity challenge (i.e., when two or more independent variables are highly correlated in a regression model, violating the _independence_ assumption) is alleviated by the Least Absolute Shrinkage and Selection Operator (LASSO). LASSO performs simultaneous variable selection and regularization [34] by adding an \(L_{1}\) regularization term to the regression loss function, providing a solution to cases where the number of model parameters is significantly larger than the number of available observations. This is clearly the case in DNNs, which typically involve millions of parameters with only tens of thousands of observations. The two challenges reported in Section III-B call for appropriate dynamic policies for sparsity control and for the regularization hyperparameter \(\lambda\). To address challenge **C1**, we dynamically drop the bottom \(s\cdot 100\%\) of weights by magnitude. Challenge **C2** is addressed by incrementally increasing the regularization parameter, departing from the principles of LASSO regression; it is also evidenced in [33] that growing regularization benefits pruning. Based on these observations, we establish the FedDIP algorithm to maintain the predictive model performance under extreme sparsity with incremental regularization and dynamic pruning. To clarify terminology, we refer to our algorithm that directly applies dynamic pruning as 'FedDP' (addressing challenge **C1**), while 'FedDIP' represents the variant that also adds incremental regularization (addressing both challenges **C1** and **C2**). Collectively, we refer to these variants as 'FedD(I)P'. Each node \(n\in\mathcal{N}\) first trains a local sparse DNN model, which contains weights with relatively small magnitudes (see also Fig. 1). Then, node \(n\) optimizes the proposed _local incrementally regularized loss function_ at round \(t\) as:
\[f_{n}(\mathbf{\omega}_{t})=\frac{1}{D_{n}}\sum_{(\mathbf{x},y)\in\mathcal{D}_{n}}\mathcal{L}(\mathcal{G}(\mathbf{\omega}_{t},\mathbf{x}),y)+\lambda_{t}\sum_{z=1}^{Z}\|\mathbf{\omega}_{t}^{(z)}\|_{2}, \tag{8}\]
where the step \(t\) dependent regularization parameter \(\lambda_{t}\) controls the degree of model shrinkage, i.e., the _sparsity_, and \(Z\) is the number of the DNN layers (this, of course, depends on the DNN architecture; in our experiments, it is the sum of convolutional and fully connected layers). The norm \(\|\mathbf{\omega}^{(z)}\|_{2}=(\sum_{k}|\omega_{k}^{(z)}|^{2})^{1/2}\) is the \(L_{2}\) norm of the pruned \(z^{th}\) layer of model weights \(\mathbf{\omega}^{(z)}\). We then introduce the incremental regularization over \(\lambda_{t}\) based on the schedule:
\[\lambda_{t}=\begin{cases}0&\text{if }0\leq t<\frac{T}{Q}\\ \vdots&\vdots\\ \frac{\lambda_{\max}\cdot(i-1)}{Q}&\text{if }\frac{(i-1)T}{Q}\leq t<\frac{iT}{Q}\\ \vdots&\vdots\\ \frac{\lambda_{\max}(Q-1)}{Q}&\text{if }\frac{(Q-1)T}{Q}\leq t\leq T \end{cases} \tag{9}\]
with quantization step size \(Q>0\). The influence of \(Q\) on regularization is controlled by adapting \(\lambda_{\max}\). This step size quantizes the regularization parameter space so that \(\lambda_{t}\) increases gradually from \(0\) up to \(\frac{\lambda_{\max}(Q-1)}{Q}\), in steps of \(\frac{\lambda_{\max}}{Q}\) every \(\frac{T}{Q}\) rounds.
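A short sketch of the staircase schedule in Eq. (9) follows; the function name and the printed example values are illustrative assumptions of ours.

```
# Eq. (9): lambda_t grows from 0 to lambda_max * (Q - 1) / Q in Q equal
# steps over T global rounds.
def incremental_lambda(t, T, Q, lam_max):
    i = min(Q * t // T, Q - 1)   # index of the current segment, 0 .. Q-1
    return lam_max * i / Q

# With T = 1000, Q = 10, lam_max = 1e-3, the penalty steps up every 100 rounds:
print([incremental_lambda(t, 1000, 10, 1e-3) for t in (0, 99, 100, 500, 950)])
# -> [0.0, 0.0, 0.0001, 0.0005, 0.0009]
```

With this schedule in place, each node \(n\) adopts dynamic pruning to progressively update its local model weights and optimize (8) as: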
\[\mathbf{\omega}_{n}^{\tau,l+1}=\mathbf{\omega}_{n}^{\tau,l}-\eta_{\tau}\nabla f_{n}(\mathbf{\omega}_{n}^{\prime(\tau,l)}), \tag{10}\]
where \(\mathbf{\omega}_{n}^{\prime(\tau,l)}\) is obtained through pruning based on a global mask function \(\mathbf{m}_{\tau}\) generated by the server node. Moreover, our gradual pruning policy modifies the per-round sparsity update policy of [35] by incrementally updating the sparsity as:
\[s_{t}=s_{p}+(s_{0}-s_{p})\Big{(}1-\frac{t}{T}\Big{)}^{3}, \tag{11}\]
where \(s_{t}\) represents the sparsity applied to the model pruning at round \(t\), \(s_{0}\) is the initial sparsity, and \(s_{p}\) is the desired/target sparsity. Notably, in our approach \(s_{0}\) is strictly non-zero; this can be a moderate sparsity of \(s_{0}=0.5\). Such adaptation differentiates our method from [35], where \(s_{0}=0\). In essence, we permit the sparsity to increment from moderate to extreme levels throughout the process. If considering \(s_{0}>0\), the layer-wise sparsity of the initial mask follows the ERK distribution introduced in [31]. At the end of a local epoch \(l\), the server node collects \(K<N\) model weights \(\mathbf{\omega}_{n}^{\tau+l}\) from the selected nodes \(n\in\mathcal{N}_{c}\), and calculates the global weights average as:
\[\bar{\mathbf{\omega}}_{G}^{\tau+l}=\sum_{n\in\mathcal{N}}\rho_{n}\mathbf{\omega}_{n}^ {\tau+l}. \tag{12}\]
In addition, the mask function \(\mathbf{m}_{\tau}\) is generated by pruning \(\bar{\mathbf{\omega}}_{G}^{\tau+l}\) with the current sparsity \(s_{\tau}\). The FedDIP process is summarized in Algorithm 1, where _only_ pruned models are sent from the server to the nodes, while pruning is achieved _locally_ at the clients. **Note:** FedDIP achieves data-free initialization and generalizes the dynamic pruning process of DPF [27]. When we set the initial sparsity \(s_{0}=0\) and use no incremental regularization, i.e., \(\lambda_{t}=0\), \(\forall t\), FedDIP reduces to DPF. Moreover, we obtain our variant FedDP if we set \(\lambda_{t}=0\), \(\forall t\), with \(s_{0}>0\) w.r.t. the ERK distribution.
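Putting the pieces together, the following hedged sketch shows one server-side round (cf. Algorithm 1); it reuses the illustrative `fedavg_aggregate` and `magnitude_mask` helpers sketched earlier, and all names are ours rather than the released implementation's.

```
# One FedDIP server round under our stated assumptions.
def sparsity_at(t, T, s0, sp):
    # Eq. (11): cubic ramp from the initial sparsity s0 to the target sp.
    return sp + (s0 - sp) * (1.0 - t / T) ** 3

def server_round(t, T, R, s0, sp, local_weights, dataset_sizes, mask):
    w_global = fedavg_aggregate(local_weights, dataset_sizes)    # Eq. (12)
    if t % R == 0:                                # reconfigure, Alg. 1, l. 20
        mask = magnitude_mask(w_global, sparsity_at(t, T, s0, sp))
    return w_global * mask, mask       # pruned global model, cf. Eq. (15)
```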
**Remark 1**.: _Trade-off between Pruning and Fine-tuning: The FedDIP approach introduces a reconfiguration horizon, denoted as \(R\), during the model training phase to periodically update the mask function. Specifically, the mask function \(\mathbf{m}_{\tau}\) is updated every \(R\) global rounds, i.e., when \(\tau\bmod R=0\), to ensure a consistent and smooth accuracy learning curve. The value of this horizon is determined empirically. Consequences of insufficient pruning: if the mask function remains unchanged throughout the horizon \(T\), there is a risk that the model converges to a local optimum. Consequences of insufficient fine-tuning: conversely, if the mask function undergoes frequent updates, the changes in the model weights might not keep up with the alterations in the sparse model structure._
**Remark 2**.: _Integration of Incremental Regularization and DPF:_ _Differing from the approach in [33], which centralizes increasing penalty factors on pre-trained models, FedDIP initiates this from the outset within a distributed learning context. The integration of incremental regularization with DPF offers advantages, primarily because DPF obviates the need for post-pruning fine-tuning, making it preferable to one-shot pruning methods like SNIP._
```
0:\(N\) nodes; \(T\) global rounds; \(E_{l}\) local rounds; initial and target sparsity \(s_{0}\) and \(s_{p}\); maximum regularization \(\lambda_{\max}\); quantization step \(Q\); reconfiguration horizon \(R\)
0: Global pruned DNN model weights \(\mathbf{\omega}_{G}^{\prime}\)
1://Server initialization
2:if\(s_{0}>0\)then
3: Server initializes global mask \(\mathbf{m}_{0}\) (ERK distribution)
4:endif
5://Node update & pruning
6:for global round \(\tau=1,\dots,T\)do
7: Server randomly selects \(K\) nodes \(\mathcal{N}_{c}\subset\mathcal{N}\)
8:for selected node \(n\in\mathcal{N}_{c}\) in parallel do
9: Receive pruned weights \(\mathbf{\omega}_{G}^{\prime(\tau-1)}\) from server node
10: Obtain mask \(\mathbf{m}_{\tau-1}\) from \(\mathbf{\omega}_{G}^{\prime(\tau-1)}\)
11: Train \(\mathbf{\omega}_{n}^{\tau}\) over \(E_{l}\) rounds on data \(\mathcal{D}_{n}\) using (10)
12:if incremental regularization is chosen then
13: Optimize (8) with incremental \(\lambda_{\tau}\) in (9)
14:else
15: Optimize (1)
16:endif
17:endfor
18://Server update, aggregation & reconfiguration
19: Server receives models and aggregates \(\mathbf{\omega}_{G}^{\tau}\) in (12)
20:if\(\tau\mod R==0\)then
21: Reconfigure global mask \(\mathbf{m}_{\tau}\) based on pruning \(\mathbf{\omega}_{G}^{\tau}\)
22:endif
23: Server prunes global model with \(\mathbf{m}_{\tau}\) and obtains \(\mathbf{\omega}_{G}^{\prime(\tau)}\)
24: Server node returns \(\mathbf{\omega}_{G}^{\prime(\tau)}\) to all nodes.
25:endfor
```
**Algorithm 1** The FedDIP Algorithm
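Complementing Algorithm 1, here is a hedged client-side sketch of the masked-gradient update in Eq. (10) combined with the \(L_{2}\) penalty of Eq. (8); for simplicity it treats the whole weight vector as a single prunable layer, which is a simplification of ours, and `data_grad_fn` is a placeholder for the gradient of the data loss.

```
# Client-side sketch of Eqs. (8) and (10); all names are illustrative.
import numpy as np

def client_update(w, mask, data_grad_fn, lam_t, eta, local_epochs):
    for _ in range(local_epochs):
        w_pruned = w * mask                  # omega'_n = omega_n (Hadamard) m
        grad = data_grad_fn(w_pruned)        # gradient of the data loss
        norm = np.linalg.norm(w_pruned)
        if norm > 0.0:                       # gradient of lam_t * ||w'||_2
            grad = grad + lam_t * w_pruned / norm
        w = w - eta * grad                   # Eq. (10): dense iterate kept
    return w                                 # dense weights sent back (uplink)
```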
## V Theoretical & Convergence Analysis
In this section, we provide a theoretical analysis of FedDIP, including the convergence result (Theorem 1), which ensures stable model training w.r.t. incremental regularization and dynamic extreme pruning. **Note for Proofs:** _The proofs of Theorem 1 and the supporting lemmas are given in Appendix A._
At each global round \(t\in\{1,\dots,T\}\), \(K\) out of \(N\) nodes participate, each one selected with probability \(\rho_{n}\) aligned with [36, 37] and \(\sum_{n=1}^{N}\rho_{n}=1\). Let \(\mathbf{\omega}_{n}^{t}\) and \(\mathbf{\omega}_{n}^{\prime(t)}\) be the weights and pruned ones at round \(t\) on node \(n\), respectively, with
\[\mathbf{\omega}_{n}^{\prime(t)}=\mathbf{\omega}_{n}^{t}\odot\mathbf{m}^{t}. \tag{13}\]
Let also \(\mathbf{v}_{n}^{t}\) and \(\tilde{\mathbf{v}}_{n}^{t}\) be the expected and estimated gradients at \(t\), respectively, on node \(n\). Based on \(\mathbf{\omega}_{n}^{\prime(t)}\), we obtain: \(\mathbf{v}_{n}^{\prime(t)}=\nabla f(\mathbf{\omega}_{n}^{\prime(t)})\) while \(\tilde{\mathbf{v}}_{n}^{\prime(t)}\) is the estimated one. The global aggregated model for FedAvg is:
\[\bar{\mathbf{\omega}}^{t}=\frac{1}{K}\sum_{n\in\mathcal{N}_{c}}\mathbf{\omega}_{n}^{t}, \tag{14}\]
while before the server sends the model, it is pruned as
\[\bar{\mathbf{\omega}}^{\prime(t)}=\frac{1}{K}\sum_{n\in\mathcal{N}_{c}}\mathbf{\omega}_{ n}^{t}\odot\mathbf{m}^{t}. \tag{15}\]
The global estimated aggregated gradient and expected global gradient, respectively, are:
\[\tilde{\mathbf{v}}^{t}=\frac{1}{K}\sum_{n\in\mathcal{N}_{c}}\tilde{\mathbf{v}}_{n}^{t}\quad\text{and}\quad\mathbf{v}^{t}=\frac{1}{K}\sum_{n\in\mathcal{N}_{c}}\mathbf{v}_{n}^{t}. \tag{16}\]
Similarly, for DPF, we have that:
\[\tilde{\mathbf{v}}^{\prime(t)}=\frac{1}{K}\sum_{n\in\mathcal{N}_{c}}\tilde{\mathbf{v}}_{n}^{\prime(t)}\quad\text{and}\quad\mathbf{v}^{\prime(t)}=\frac{1}{K}\sum_{n\in\mathcal{N}_{c}}\mathbf{v}_{n}^{\prime(t)}. \tag{17}\]
In FedAvg, \(\bar{\mathbf{\omega}}^{t}\) is updated as \(\bar{\mathbf{\omega}}^{t+1}=\bar{\mathbf{\omega}}^{t}-\eta_{t}\tilde{\mathbf{v}}^{t}\), while the update rule based on DPF at node \(n\) is:
\[\mathbf{\omega}_{n}^{t+1}=\mathbf{\omega}_{n}^{t}-\eta_{t}\mathbf{\tilde{v}}_{ n}^{\prime(t)}, \tag{18}\]
where \(\mathbf{\omega}_{n}^{t}=\bar{\mathbf{\omega}}^{\prime(t)}\). Similarly, \(\bar{\mathbf{\omega}}^{t+1}\) is updated as:
\[\bar{\mathbf{\omega}}^{t+1}=\bar{\mathbf{\omega}}^{\prime(t)}-\eta_{t}\tilde{\mathbf{v}}^{\prime(t)}. \tag{19}\]
**Definition 1**.: _According to [27], the quality of pruning is defined by the parameter \(\delta_{t}\in[0,1]\) as:_
\[\delta_{t}:=\frac{\|\mathbf{\omega}^{t}-\mathbf{\omega}^{\prime(t)}\|_{F}^{2} }{\|\mathbf{\omega}^{t}\|_{F}^{2}} \tag{20}\]
where \(\|\cdot\|_{F}\) denotes the Frobenius matrix norm. \(\delta_{t}\) indicates the degree of information lost by pruning in terms of magnitude; a smaller \(\delta_{t}\) stands for less information loss.
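For instance, under the stated definition, \(\delta_{t}\) can be computed as in this small sketch; the function name is ours.

```
# Eq. (20): relative squared Frobenius distance between dense and
# pruned weights (np.linalg.norm on a matrix defaults to Frobenius).
import numpy as np

def pruning_quality(w, w_pruned):
    return np.linalg.norm(w - w_pruned) ** 2 / np.linalg.norm(w) ** 2
```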
**Definition 2**.: _Following the Definition \(1\) in [38], a measurement \(\gamma\) of non-i.i.d. (non-independent and identically distributed) data is defined as follows:_
\[\gamma=\frac{\sum_{n=1}^{N}p_{n}\|\nabla f_{n}(\mathbf{\omega})\|^{2}}{\| \sum_{n=1}^{N}p_{n}\nabla f_{n}(\mathbf{\omega})\|^{2}}, \tag{21}\]
_with \(\gamma\geq 1\); \(\gamma=1\) holds in i.i.d case._
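Similarly, a hedged sketch of computing \(\gamma\) from per-node gradients (rows of `grads`) and probabilities \(p_{n}\); the names are illustrative assumptions of ours.

```
# Eq. (21): gamma >= 1, with equality in the i.i.d. case.
import numpy as np

def non_iid_gamma(grads, p):
    p = np.asarray(p, dtype=float)
    num = float(np.sum(p * np.sum(grads ** 2, axis=1)))   # sum_n p_n ||g_n||^2
    avg = (p[:, None] * grads).sum(axis=0)                # sum_n p_n g_n
    return num / float(np.sum(avg ** 2))
```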
We list our assumptions for proving the convergence of FedDIP in the learning phase.
**Assumption 1**.: \(L\)-_Smoothness_. \(\forall\mathbf{\omega}^{t_{1}},\mathbf{\omega}^{t_{2}}\in\mathbb{R}^{d},\ L\in\mathbb{R}\):
\[f(\mathbf{\omega}^{t_{1}})\leq f(\mathbf{\omega}^{t_{2}})+(\mathbf{\omega}^{ t_{1}}-\mathbf{\omega}^{t_{2}})^{\top}\nabla f(\mathbf{\omega}^{t_{2}})+\frac{L}{2} \|\mathbf{\omega}^{t_{1}}-\mathbf{\omega}^{t_{2}}\|^{2}\]
**Assumption 2**.: \(\mu\)-_Lipschitzness_. \(\forall\mathbf{\omega}^{t_{1}},\mathbf{\omega}^{t_{2}}\in\mathbb{R}^{d}\) _and \(\mu\in\mathbb{R}\):_
\[\|f(\mathbf{\omega}^{t_{1}})-f(\mathbf{\omega}^{t_{2}})\|\leq\mu\|\mathbf{ \omega}^{t_{1}}-\mathbf{\omega}^{t_{2}}\| \tag{22}\]
**Assumption 3**.: _Bounded variance for gradients_. _Following Assumption 3 in [37], the local model gradients on each node \(n\) are self-bounded in variance:_
\[\mathbb{E}[\|\mathbf{\tilde{v}}_{n}^{t}-\mathbf{v}_{n}^{t}\|^{2}]\leq\sigma_ {n}^{2}. \tag{23}\]
**Assumption 4**.: _Bounded weighted aggregation of gradients_. Following Assumption 4 in [38], the weighted aggregation of local gradients at time \(t\) is bounded as:
\[\|\sum_{n=1}^{N}\rho_{n}\mathbf{v}_{n}^{t}\|^{2}\leq G^{2}, \tag{24}\]
_where \(\sum_{n=1}^{N}\rho_{n}=1\) and \(\sum_{n=1}^{N}\rho_{n}\mathbf{v}_{n}^{t}\) stands for the weighted aggregation of local gradients; \(G\in\mathbb{R}\)._
**Theorem 1** (FedDIP Convergence).: _Consider Assumptions 1, 2 and 3 and Lemmas 1, 2, 3, and let \(\eta_{t}=\frac{1}{tL}\), \(L>0\). Then, the convergence rate of the FedDIP process is bounded by:_
\[\frac{1}{T}\sum_{t=1}^{T}\|\nabla f(\bar{\mathbf{\omega}}^{\prime(t)})\|^{2}\leq 2L\,\mathbb{E}[f(\mathbf{\omega}_{1})-f^{*}]+2L\sum_{t=1}^{T}\mu\,\mathbb{E}\big[\sqrt{\delta_{t+1}}\,\|\bar{\mathbf{\omega}}^{t+1}\|\big]+\frac{\pi^{2}}{3L^{2}}\chi, \tag{25}\]
_where \(f(\mathbf{\omega}_{1})\) and \(f^{*}\) stand for the initial loss and the final convergent stable loss, with \(\chi=\frac{(\gamma-1)L^{2}+L}{2K}\sum_{n=1}^{N}\rho_{n}\sigma_{n}^{2}+\frac{(\gamma-1)\gamma E_{l}^{2}L^{2}G^{2}}{2}\), and \(\gamma\) defined in Definition 2._
Proof.: Refer to 'Note for Proofs' at the beginning of this section.
In Theorem 1, the first term on the right-hand side of inequality (25) denotes the gap between the initial and the final loss, while \(\chi\) vanishes when \(K\gg 1\) and the i.i.d. assumption holds; conversely, the non-i.i.d. case results in a looser bound. The quantity \(\frac{1}{T}\sum_{t=1}^{T}\|\nabla f(\bar{\mathbf{\omega}}^{\prime(t)})\|^{2}\) is bounded by the loss induced by pruning. Overall, the convergence result shows that the \(L_{2}\) norms of the pruned gradients vanish over time, which indicates that a stable model is obtained in the end (recall that a stable gradient vector implies only a small change of the model under SGD).
## VI Experimental Evaluation
### _Experimental Setup_
**Datasets and Models:** We experiment with the datasets _Fashion-MNIST_ [39], _CIFAR10_, and _CIFAR100_ [40]. _Fashion-MNIST_ consists of \(60,000\) training and \(10,000\) test 28x28 grayscale images labeled from 10 classes. Both _CIFAR_ datasets consist of \(50,000\) training and \(10,000\) test 32x32 color images; _CIFAR10_ has 10 classes (6,000 images per class) and _CIFAR100_ has 100 classes (600 images per class). We consider the i.i.d. (independent and identically distributed) case to compare all the algorithms, and we extend FedDIP to the non-i.i.d. case. To test and compare the efficiency of FedDIP, we use different well-known CNN architectures, _LeNet-5_ [41], _AlexNet_ [42], and _ResNet-18_ [7], as backbone (dense or unpruned) models, with the baseline FedAvg [1] and the pruning baselines PruneFL [12], PruneTrain [26], FedDST [29], DPF [27] (equivalent to FedDP, as discussed above), and SNIP [23]. For the non-i.i.d. case, we adopt the pathological data partition method in [1], which assigns only two classes to each node. We merge FedDIP with FedProx [43], a generalization and re-parametrization of FedAvg that addresses data heterogeneity (coined FedDIP+Prox), and compare with the baselines FedAvg and FedProx. Our target is to evaluate FedDIP's accuracy, storage, and communication efficiency in FL environments under extreme sparsity.
**Configurations:** Table II details our configurations. For PruneFL, FedDST, and PruneTrain, we experimentally determined the optimal reconfiguration intervals \(R\) to be \(20\), \(20\), and
\(1\), respectively, to ensure the _best_ possible model performance; the same holds for the step size \(Q\) across all models. In particular, the annealing factor for FedDST is set to \(0.5\). As SNIP prunes the model before training, the global mask is obtained via one-shot pruning at the target sparsity \(s_{p}\). We used grid search to fix the penalty factor for PruneTrain, ranging from \(10^{-1}\) to \(10^{-5}\) across the different experiments. When necessary, other hyperparameters were set to match ours. In the non-i.i.d. case, the penalty for the proximal term in FedProx is determined via grid search over the range \(10^{-1}\) to \(10^{-5}\). FedDIP+Prox adopts the optimal combination of penalty values for FedDIP and FedProx.
**Hardware:** Our FedDIP framework and experiments are implemented and conducted on _GeForce RTX 3090s_ GPUs in the institution's HPC environment.
### _Performance Under Extreme Sparsity_
To demonstrate the performance of FedDIP and other baseline methods under extreme sparsity, we set target \(s_{p}=0.9\) for both _Fashion-MNIST_ and _CIFAR10_ tasks and \(s_{p}=0.8\) for the _CIFAR100_ task. Notably, as \(s_{p}=0.9\) causes divergence during the training of _AlexNet_ with SNIP, we adjust \(s_{p}\) to \(0.8\) for SNIP in this particular case.
#### Vi-B1 Accuracy
Figures 2(a), 3(a), and 4(a) demonstrate that FedDIP surpasses the other baselines, achieving the highest _top-1_ accuracy (ratio of correctly classified images) while maintaining the same extreme target sparsity. As indicated in Table III, to attain target sparsities of \(s_{p}=0.9\) and \(s_{p}=0.8\), FedDIP compromises the _LeNet-5_ and _ResNet-18_ model accuracy by only \(1.24\%\) and \(1.25\%\), respectively. For _AlexNet_, FedDIP even improves model performance by \(0.7\%\) compared with FedAvg at \(s_{p}=0.9\).
#### Vi-B2 Cumulative Communication & Training Cost
To make a fair comparison of the cumulative communication cost during training (amount of information exchanged, in MB) w.r.t. a fixed budget, we showcase the relationship between communication cost and accuracy. Figures 2(b), 3(b), and 4(b), and specifically Table IV, present a comprehensive overview, emphasizing that FedDIP, when provided with an adequate communication budget, effectively prunes the model across all experiments, outperforming the other models. This indicates the trade-off between model performance and communication/training cost. FedDIP demonstrates communication efficiency comparable to the other baselines, principally due to the minimal decrement in model performance. Our experiments evidence that FedDIP achieves optimally pruned models under conditions of extreme sparsity, while incurring less or equivalent communication cost compared to FedAvg. Even in the early stages (i.e., in restricted-budget cases), FedDIP manages to match the communication efficiency of the other pruning methods in the _CIFAR_ experiments. This underscores the capacity of our approach to effectively balance model performance and communication expenditure. All in all, FedDIP introduces _only_ minor computational overhead due to the incremental regularization, while achieving high accuracy compared to the baselines. This computational requirement is on par with that of PruneTrain, PruneFL, and SNIP, given the same sparsity at each epoch. The slight increase in computational cost is justified by the improvements achieved in the final model performance under extremely high sparsity. The size of the pruned CNN models (Table III) is significantly reduced (\(\sim 1\) order of magnitude) compared to the unpruned models in FedAvg.
#### Vi-B3 Experiments with non-i.i.d. data
As shown in Table V, our methodology exhibits strong adaptability to FedProx (non-pruning), yielding commendable results on non-i.i.d. data. Compared with FedAvg, our approach maintains comparable results even after pruning \(90\%\) of the model parameters, albeit at a slight trade-off of 1-2% in model accuracy in the experiments with _LeNet-5_ and _AlexNet_. Across a span of \(T=1000\) rounds, FedDIP emerges as the superior performer in terms of _top-1_ accuracy, particularly at sparsity \(s_{p}=0.8\) with _ResNet-18_. This comprehensive suite of results underscores the adaptability of FedDIP in effectively managing non-i.i.d. cases, even under extreme sparsity.
### _FedDIP Sparsity Analysis_
#### Vi-C1 Layerwise sparsity
Figure 5 shows the sparsity _per_ layer of ResNet-18 (\(s_{p}=0.8\)), LeNet-5 (\(s_{p}=0.9\)), and AlexNet (\(s_{p}=0.9\)). Notably, the first layers of all models are the least pruned (\(0.3\leq s\leq 0.4\)), which is attributed to their significant role in general feature extraction. Furthermore, there is a correlation between the number of weights per layer and the corresponding sparsity level. This stems from the initial ERK distribution, which allocates a higher degree of sparsity to layers containing more weights, although we adopt global magnitude pruning later in the process. Such correlation is remarkable in both the convolutional and fully-connected layers of the models. In convolutional layers, the correlation is perfectly linear for _LeNet-5_ with a correlation coefficient \(\varrho\simeq 1\); for _AlexNet_ we obtain \(\varrho=0.86\), while for _ResNet-18_ \(\varrho=0.8\). For fully-connected layers (only one exists in _ResNet-18_), we obtain \(\varrho=0.91\) and \(\varrho=0.82\) for _LeNet-5_ and _AlexNet_, respectively. These findings highlight the dependency between layerwise sparsity and the number of weights per layer, reflecting the influence of the ERK distribution in FedDIP's initialization.

Fig. 2: Fashion-MNIST experiment with LeNet-5.
#### Vi-C2 FedDIP in extreme sparsity
We examine the efficiency of FedDIP under varying conditions of extreme sparsity. For the _Fashion-MNIST_ and _CIFAR10_ experiments, we investigate two additional extreme sparsity levels, \(s_{p}=0.95\) and \(s_{p}=0.99\), and for the _CIFAR100_ experiments, we investigate \(s_{p}=0.9\) and \(s_{p}=0.95\). These conditions provide a robust assessment of FedDIP's performance across a range of extreme sparsity levels. As shown in Figure 6, under extreme sparsity such as \(0.95\) and \(0.99\), the largest drops \(\Delta\) in classification accuracy are only \(\Delta=6.97\%\), \(\Delta=5.03\%\), and \(\Delta=8.08\%\), respectively. This also comes with a further 90%, 89%, and 74% reduction in the LeNet-5, AlexNet, and ResNet-18 model sizes, respectively. This indicates (i) FedDIP's efficiency in storing and managing trained and pruned models, as well as (ii) its efficiency in inference tasks (after training) due to the relatively small models. All in all, the pruned DNN models' performance is relatively high, with small accuracy drops and high model compression (92%) across the different tasks.

| Datasets | Fashion-MNIST | CIFAR10 | CIFAR100 |
| --- | --- | --- | --- |
| DNN/CNN model | LeNet-5 | AlexNet | ResNet-18 |
| Number of pruning layers (\(Z\)) | 5 | 8 | 18 |
| Initial learning rate (\(\eta_{0}\)) | \(0.01\) | \(0.1\) | \(0.1\) |
| Number of clients per round (\(K\)) | 5 (out of \(50\)) | 5 (out of \(50\)) | 5 (out of \(50\)) |
| Batch size in SGD | 64 | 128 | 128 |
| Initial sparsity (\(s_{0}\)) | \(0.5\) | \(0.5\) | \(0.05\) |
| Global rounds (\(T\)) | \(1,000\) | \(1,000\) | \(1,000\) |
| Reconfiguration interval (\(R\)) | \(5\) | \(5\) | \(5\) |
| Regularization step size (\(Q\)) | \(10\) | \(10\) | \(10\) |
| Local rounds (\(E_{l}\)) | \(5\) | \(5\) | \(5\) |
| Maximum penalty (\(\lambda_{\max}\)) | \(10^{-3}\) | \(10^{-3}\) | \(5\cdot 10^{-3}\) |

TABLE II: Configuration Table

Fig. 3: CIFAR10 experiment with AlexNet.

Fig. 4: CIFAR100 experiment with ResNet-18.
## VII Conclusions
We propose FedDIP, a novel FL framework with dynamic pruning and incremental regularization that achieves highly accurate and extremely sparse DNN models. FedDIP gradually regularizes sparse DNN models, obtaining extremely compressed models that maintain baseline accuracy and ensure controllable communication overhead. FedDIP uses a data-free initialization based on the ERK distribution. We provide a theoretical convergence analysis of FedDIP and evaluate it across different DNN structures. Under extreme sparsity on benchmark datasets (i.i.d. and non-i.i.d. cases), FedDIP achieves accuracy comparable to the FL baselines and higher than state-of-the-art FL-based model pruning approaches. Our agenda includes addressing heterogeneity in personalized FL environments.
## Acknowledgement
The authors would like to express their sincere gratitude to Dr. Fani Deligianni for her invaluable insights and discussions during peer communications.
This work is partially funded by the EU Horizon Grant 'Integration and Harmonization of Logistics Operations' TRACE (#101104278) and 'National Natural Science Foundation of China' (NSFC) under Grant #72201093.